Controllability of cascade connection of two systems












I have two linear control systems that are represented by their state space models



$$\left(
\begin{array}{c|c}
A_1 & B_1 \\
\hline
C_1 & D_1 \\
\end{array}
\right),\qquad
\left(
\begin{array}{c|c}
A_2 & B_2 \\
\hline
C_2 & D_2 \\
\end{array}
\right)$$



where $A_i$ is the state matrix, $B_i$ is the input matrix, $C_i$ is the output matrix and $D_i$ is the feed-forward matrix, and where $\sigma(A_1)\cap\sigma(A_2)=\emptyset$.



The output of the first system is a vector signal with the same dimension as the input of the second system, so the cascade (series) connection is well defined.



I have found that the state space representation of the cascade is given by:
$$\Sigma_G: \begin{cases}
x'(t) = Ax(t)+Bu(t)\\
y(t)=Cx(t)+Du(t)
\end{cases}$$

where $A=\left(
\begin{array}{cc}
A_1 & 0 \\
B_2C_1 & A_2 \\
\end{array}
\right)$, $B=\begin{bmatrix} B_1\\ B_2D_1 \end{bmatrix}$, $C=\begin{bmatrix} D_2C_1 & C_2 \end{bmatrix}$ and $D=D_2D_1$.
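For reference, this comes from feeding the first system's output $y_1=C_1x_1+D_1u$ into the second system as its input:
$$x_2'=A_2x_2+B_2y_1=B_2C_1x_1+A_2x_2+B_2D_1u,\qquad y=C_2x_2+D_2y_1=D_2C_1x_1+C_2x_2+D_2D_1u.$$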



From all of this, I would like to show that $(A,B)$ is controllable if and only if $(A_1,B_1)$ is controllable and $\operatorname{rank}\begin{bmatrix}A_2-\lambda I & B_2T_1(\lambda)\end{bmatrix}=n_2$ for all $\lambda \in \sigma(A_2)$, where $n_2$ is the dimension of the second original state space and $T_1(s)$ is the transfer matrix of the first original state space.
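As a quick numerical sanity check of this claim (not a proof), here is a sketch in NumPy; the dimensions, random seed and helper names below are arbitrary choices of mine, and the rank tests rely on default numerical tolerances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: m inputs to system 1, p = dim(y1) = dim(u2).
n1, n2, m, p = 3, 2, 1, 1

A1 = rng.standard_normal((n1, n1)); B1 = rng.standard_normal((n1, m))
C1 = rng.standard_normal((p, n1));  D1 = rng.standard_normal((p, m))
A2 = rng.standard_normal((n2, n2)); B2 = rng.standard_normal((n2, p))
# Random matrices generically satisfy sigma(A1) ∩ sigma(A2) = ∅.

# Cascade realization from above.
A = np.block([[A1, np.zeros((n1, n2))], [B2 @ C1, A2]])
B = np.vstack([B1, B2 @ D1])

def ctrb_rank(F, G):
    """Rank of the Kalman controllability matrix [G, FG, ..., F^(n-1) G]."""
    n = F.shape[0]
    blocks, col = [G], G
    for _ in range(n - 1):
        col = F @ col
        blocks.append(col)
    return np.linalg.matrix_rank(np.hstack(blocks))

def T1(lam):
    """Transfer matrix of the first system evaluated at lam (lam may be complex)."""
    return C1 @ np.linalg.solve(lam * np.eye(n1) - A1, B1) + D1

lhs = ctrb_rank(A, B) == n1 + n2                       # (A, B) controllable
rhs = ctrb_rank(A1, B1) == n1 and all(
    np.linalg.matrix_rank(np.hstack([A2 - lam * np.eye(n2), B2 @ T1(lam)])) == n2
    for lam in np.linalg.eigvals(A2)
)
print(lhs, rhs)  # the claimed equivalence predicts these two booleans agree
```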



My attempts/progress:



It is well documented that this transfer matrix, $T_1(\lambda)$, is given by $T_1(\lambda)=C_1 (\lambda I-A_1)^{-1}B_1+D_1$.



For $(A,B)$ to be controllable we could first form the controllability matrix $\mathcal{C}=\begin{bmatrix}B & AB & A^2B & \cdots & A^{n-1}B\end{bmatrix}$, though I'm not sure what value its rank should equal in order to show controllability (presumably $n=n_1+n_2$, the state dimension of the cascade). In computing $\mathcal{C}$ I had the following workings:



$$AB=\begin{bmatrix} A_1B_1\\ B_2C_1B_1+A_2B_2D_1 \end{bmatrix}$$ and $$A^2B=\begin{bmatrix} A_1^2B_1\\ B_2C_1A_1B_1+A_2(B_2C_1B_1+A_2B_2D_1) \end{bmatrix}.$$ Pretty soon I was able to see that the top block rows of $\mathcal{C}$ follow the pattern $$B_1,\; A_1B_1,\; A_1^2B_1,\ldots$$ which resembles the condition that $(A_1,B_1)$ is controllable if and only if $$\operatorname{rank}\begin{bmatrix}B_1 & A_1B_1 & A_1^2B_1 & \cdots & A_1^{n_1-1}B_1\end{bmatrix}=n_1.$$



As for the bottom block rows of $\mathcal{C}$, I see no pattern that I can exploit, and so I have no idea how to approach the second part of the equivalence.
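(For what it's worth, writing out the recursion, the top block of $A^kB$ is $A_1^kB_1$ and the bottom block appears, by induction, to be
$$A_2^{k}B_2D_1+\sum_{j=0}^{k-1}A_2^{\,j}B_2C_1\,A_1^{\,k-1-j}B_1,$$
but I don't see how to turn this into the stated rank condition.)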



Is there a connection between solving this question and my observations?



After many wasted hours of failed attempts to solve this, I appreciate any help offered to me.










      dynamical-systems control-theory linear-control systems-theory






asked Dec 23 '18 at 23:14 by piece_and_love
1 Answer

This is not a complete answer, but I think it can give some insight into the problem.



You cannot separate the top and bottom block rows of the controllability matrix in general. You have to use the assumption that the spectra of $A_1$ and $A_2$ are disjoint to do so.



We can prove the statement more easily with the Hautus (PBH) test: $(A,B)$ is controllable if and only if
$$\begin{bmatrix}w_1^T & w_2^T\end{bmatrix}\begin{bmatrix}A_1-\lambda I & 0 & B_1 \\ B_2 C_1 & A_2-\lambda I & B_2 D_1\end{bmatrix} = 0 \iff \begin{bmatrix}w_1^T & w_2^T\end{bmatrix} = 0, \quad \forall \lambda \in \sigma(A). \tag{1}$$
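As a side note, the rank form of this test ($\operatorname{rank}\begin{bmatrix}A-\lambda I & B\end{bmatrix}=n$ for every $\lambda\in\sigma(A)$, which is equivalent to the left-eigenvector statement above) is easy to check numerically; a minimal NumPy sketch, with an arbitrary tolerance:

```python
import numpy as np

def pbh_controllable(A, B, tol=1e-9):
    """Hautus/PBH test: (A, B) is controllable iff
    rank [A - lam*I  B] = n for every eigenvalue lam of A,
    i.e. no nonzero left eigenvector of A is orthogonal to the columns of B."""
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.hstack([A - lam * np.eye(n), B]), tol=tol) == n
        for lam in np.linalg.eigvals(A)
    )
```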



First, assume that $(A,B)$ is controllable, so $(1)$ holds. Since the spectra of $A_1$ and $A_2$ are disjoint and $\sigma(A)=\sigma(A_1)\cup\sigma(A_2)$, we can first check the case $\lambda \in \sigma(A_1)$. Since $\lambda \notin \sigma(A_2)$, we have $w_2^T (A_2-\lambda I) \neq 0$ for any nonzero $w_2$, which means $(1)$ is satisfied there. Now select $w_2=0$, which gives
$$w_1^T \begin{bmatrix}A_1-\lambda I & B_1\end{bmatrix} = 0 \iff w_1^T = 0, \quad \forall \lambda \in \sigma(A_1),$$
so $(A_1,B_1)$ is controllable.



Similarly, for $\lambda\in\sigma(A_2)$ and any nonzero $w_1$, $w_1^T(A_1-\lambda I)\neq0$, hence $(1)$ is satisfied. For $w_1=0$ we need
$$w_2^T \begin{bmatrix}A_2-\lambda I & B_2 C_1 & B_2 D_1\end{bmatrix} = 0 \iff w_2^T = 0, \quad \forall \lambda \in \sigma(A_2).$$



Now, we need to show somehow that this is equivalent to
$$\operatorname{rank}\begin{bmatrix}A_2-\lambda I & B_2 T_1(\lambda)\end{bmatrix}=n_2, \quad \forall \lambda \in \sigma(A_2).$$
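A possible sketch for this last step, exploiting $\lambda\in\sigma(A_2)\Rightarrow\lambda\notin\sigma(A_1)$: if $w_2^T(A_2-\lambda I)=0$ and $w_2^TB_2T_1(\lambda)=0$, set $w_1^T=w_2^TB_2C_1(\lambda I-A_1)^{-1}$; then $w_1^T(A_1-\lambda I)+w_2^TB_2C_1=0$ and $w_1^TB_1+w_2^TB_2D_1=w_2^TB_2T_1(\lambda)=0$, so $\begin{bmatrix}w_1^T & w_2^T\end{bmatrix}$ annihilates the matrix in $(1)$ and controllability of $(A,B)$ forces $w_2=0$, which gives the rank condition. Conversely, $w_2^TB_2C_1=0$ and $w_2^TB_2D_1=0$ imply $w_2^TB_2T_1(\lambda)=0$, which connects the displayed condition back to the rank condition.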






answered Dec 26 '18 at 7:05 by obareey
• One follow up question: in your third sentence you state that to split up the matrix $\mathcal{C}$ we require that the spectra of $A_1$ and $A_2$ are disjoint. Is this not equivalent to what is stated near the beginning of the question, specifically the part written as $\sigma(A_1)\cap\sigma(A_2)=\emptyset$? – piece_and_love, Dec 26 '18 at 20:33

• Yes, but you need to mention it in your proof, like "because of this assumption we can do this", etc., so the reader can understand that it only applies under this specific assumption and is not true in general. If your assumption does not make any difference then you should write "without loss of generality, we can assume that...". – obareey, Dec 27 '18 at 14:50

• Sorry to bother you after such a long time; is it required that $w_1$ and $w_2$ are eigenvectors of $A_1$ and $A_2$ respectively? – piece_and_love, Jan 2 at 23:25

• It should be true for all vectors, but checking only the eigenvectors is enough, as $w^T(A-\lambda I)=0$ if and only if $w$ is a left eigenvector of $A$ for the eigenvalue $\lambda$, by definition. – obareey, Jan 3 at 7:49

• And so to be absolutely clear: $w_1^T\begin{bmatrix}A_1-\lambda I & B_1\end{bmatrix}=0 \iff w_1^T=0$ is equivalent to $\operatorname{rank}\begin{bmatrix}A_1-\lambda I & B_1\end{bmatrix}=n_1$? I understand that it's a necessary condition for full rank, but is it alone sufficient? – piece_and_love, Jan 3 at 17:08











