An intuitive approach to the Jordan Normal form


























I want to understand the meaning behind the Jordan Normal form, as I think this is crucial for a mathematician.



As far as I understand it, the idea is to find the representation of an arbitrary endomorphism that comes as close as possible to diagonal form. Since diagonalization is only possible if there are sufficiently many eigenvectors, we instead represent the endomorphism with respect to its generalized eigenspaces, as their sum always gives us the whole space. Therefore bringing an endomorphism to its Jordan normal form is always possible.



How often an eigenvalue appears on the diagonal in the JNF is determined by its algebraic multiplicity. The number of blocks is determined by its geometric multiplicity. Here I am not sure whether I have got the idea right; I have trouble interpreting this statement.




What is the meaning behind a Jordan normal block and why is the number of these blocks equal to the number of linearly independent eigenvectors?




I do not want to see a rigorous proof, but maybe someone could answer the following sub-questions for me.




(a) Why do we have to start a new block for each new linearly independent eigenvector that we can find?



(b) Why do we not have one block for each generalized eigenspace?



(c) What is the intuition behind the fact that the number of Jordan blocks containing at least $k+1$ entries of the eigenvalue $\lambda$ is given by the following? $$\dim(\ker(A-\lambda I)^{k+1}) - \dim(\ker(A-\lambda I)^k)$$
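To make (c) concrete, here is a small self-contained check (a sketch with a made-up $4\times4$ matrix that is already in Jordan normal form: for $\lambda=2$ one block of size $2$ and one of size $1$, plus a size-$1$ block for $\lambda=3$; this loses nothing, since similar matrices have the same kernel dimensions):

```python
from fractions import Fraction

def rank(rows):
    """Rank over the rationals, via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# For lambda = 2: one Jordan block of size 2, one of size 1.
A = [[2, 1, 0, 0],
     [0, 2, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3]]
n = len(A)
N = [[A[i][j] - (2 if i == j else 0) for j in range(n)] for i in range(n)]  # A - 2I

dims, P = [], [[int(i == j) for j in range(n)] for i in range(n)]  # P = N^0
for k in range(4):
    dims.append(n - rank(P))   # dim ker N^k
    P = matmul(P, N)

print(dims)                    # [0, 2, 3, 3]
assert dims[1] == 2            # geometric multiplicity = number of blocks
assert dims[2] - dims[1] == 1  # one block of size >= 2
assert dims[3] - dims[2] == 0  # no block of size >= 3
```

Each successive difference counts the blocks for $\lambda=2$ that are at least that tall, exactly as in the formula of (c).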































  • 6




    IMO you are asking for the intuition of the wrong things. As I always do, I would suggest you postpone your quest for intuition until a later time.
    – Mariano Suárez-Álvarez
    Jan 12 '14 at 11:15








  • 1




    At least part of this question coincides with Why does the largest Jordan block determine the degree for that factor in the minimal polynomial? and Why is the geometric multiplicity of an eigen value equal to number of jordan blocks corresponding to it?
    – Marc van Leeuwen
    Jan 12 '14 at 13:44








  • 1




We don't always have a JNF; only when the underlying field $F$ is algebraically closed are we guaranteed a JNF.
    – mez
    Jan 13 '14 at 3:33






  • 2




    @mezhang, when the field is not algebraically closed we have a very similar normal form. In fact, Jordan himself proved his theorem for finite fields, which are never algebraically closed!
    – Mariano Suárez-Álvarez
    Jan 13 '14 at 5:39






  • 2




The JNF over a non-algebraically-closed field is different, so no, you do not need that when working in general.
    – Mariano Suárez-Álvarez
    Jan 13 '14 at 22:21
















linear-algebra matrices intuition jordan-normal-form






edited May 18 '18 at 10:48 by Rodrigo de Azevedo
asked Jun 5 '13 at 10:08 by user66906



4 Answers

Let me sketch a proof of existence of the Jordan canonical form which, I believe, makes it somewhat natural.





Let us say that a linear endomorphism $f:V\to V$ of a nonzero finite dimensional vector space is decomposable if there exist proper subspaces $U_1$, $U_2$ of $V$ such that $V=U_1\oplus U_2$, $f(U_1)\subseteq U_1$ and $f(U_2)\subseteq U_2$, and let us say that $f$ is indecomposable if it is not decomposable. In terms of bases and matrices, it is easy to see that the map $f$ is decomposable iff there exists a basis of $V$ with respect to which the matrix of $f$ has a non-trivial diagonal block decomposition (that is, it is block diagonal with two blocks).



Now it is not hard to prove the following:




Lemma 1. If $f:V\to V$ is an endomorphism of a nonzero finite dimensional vector space, then there exist $n\geq1$ and nonzero subspaces $U_1$, $\dots$, $U_n$ of $V$ such that $V=\bigoplus_{i=1}^nU_i$, $f(U_i)\subseteq U_i$ for all $i\in\{1,\dots,n\}$ and for each such $i$ the restriction $f|_{U_i}:U_i\to U_i$ is indecomposable.




Indeed, you can more or less imitate the usual argument that shows that every natural number larger than one is a product of prime numbers.



This lemma allows us to reduce the study of linear maps to the study of indecomposable linear maps. So we should start by trying to see what an indecomposable endomorphism looks like.



There is a general fact that comes in useful at times:




Lemma. If $h:V\to V$ is an endomorphism of a finite dimensional vector space, then there exists an $m\geq1$ such that $V=\ker h^m\oplus\operatorname{im} h^m$.




I'll leave its proof as a pleasant exercise.



So let us fix an indecomposable endomorphism $f:V\to V$ of a nonzero finite dimensional vector space. As the ground field $k$ is algebraically closed, there is a nonzero $v\in V$ and a scalar $\lambda\in k$ such that $f(v)=\lambda v$. Consider the map $h=f-\lambda\,\mathrm{Id}:V\to V$: we can apply the lemma to $h$, and we conclude that $V=\ker h^m\oplus\operatorname{im} h^m$ for some $m\geq1$. Moreover, it is very easy to check that $f(\ker h^m)\subseteq\ker h^m$ and that $f(\operatorname{im} h^m)\subseteq\operatorname{im} h^m$. Since we are supposing that $f$ is indecomposable, one of $\ker h^m$ or $\operatorname{im} h^m$ must be the whole of $V$. As $v$ is in the kernel of $h$, it is also in the kernel of $h^m$, hence not in $\operatorname{im} h^m$, and we see that $\ker h^m=V$.



This means, precisely, that $h^m:V\to V$ is the zero map, and we see that $h$ is nilpotent. Suppose its nilpotency index is $k\geq1$, and let $w\in V$ be a vector such that $h^{k-1}(w)\neq0=h^k(w)$.




Lemma. The set $\mathcal B=\{w,h(w),h^2(w),\dots,h^{k-1}(w)\}$ is a basis of $V$.




This is again a nice exercise.



Now you should be able to check easily that the matrix of $f$ with respect to the basis $\mathcal B$ of $V$ is a Jordan block.



In this way we conclude that every indecomposable endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a Jordan block as a matrix.
According to Lemma 1, then, every endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a block diagonal matrix with Jordan blocks.
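As a numerical sanity check on the steps above (not part of the original answer): take the hypothetical $3\times3$ matrix $A$ below, with single eigenvalue $2$, so that $h=A-2\,\mathrm{Id}$ is nilpotent of index $3$. Writing the chain in the reverse order $(h^2(w),h(w),w)$ as the columns of a matrix $P$ yields the usual upper-triangular Jordan block $J$, and the change-of-basis statement takes the inversion-free form $AP=PJ$:

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matvec(a, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in a]

A = [[2, 1, 1],          # made-up 3x3 example, single eigenvalue 2
     [0, 2, 1],
     [0, 0, 2]]
h = [[A[i][j] - (2 if i == j else 0) for j in range(3)] for i in range(3)]

w = [0, 0, 1]            # a vector with h^2(w) != 0 = h^3(w)
hw = matvec(h, w)
h2w = matvec(h, hw)
assert any(h2w) and not any(matvec(h, h2w))

chain = [h2w, hw, w]     # the basis, highest power of h first
P = [[chain[j][i] for j in range(3)] for i in range(3)]  # columns = chain

J = [[2, 1, 0],          # the 3x3 Jordan block for eigenvalue 2
     [0, 2, 1],
     [0, 0, 2]]

# A P = P J says exactly: the matrix of A in the basis given by the
# columns of P is J.
assert matmul(A, P) == matmul(P, J)
```

With the chain ordered as in the lemma, $\{w,h(w),h^2(w)\}$, the $1$'s land below the diagonal instead; both conventions are called a Jordan block.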

























  • 3




    This argument is purely existential. But as soon as one knows the JNF exists, then one can use it to first prove uniqueness and then to relate it to invariants like the minimal polynomial and the characteristic polynomial in order to come closer to effectively finding it.
    – Mariano Suárez-Álvarez
    Jan 13 '14 at 8:09






  • 2




    Nice proof. I got a question. Jordan normal form, and Rational canonical form are equivalent solutions to the same problem. Yet they both live on. Clearly they have different utility. Which one is better where?
    – Charlie Frohman
    Jan 13 '14 at 18:13










  • @CharlieFrohman: I suppose that could be a stand-alone question here (if it isn't already).
    – Shaun
    Jan 15 '14 at 10:28










  • I do not understand why the chain of vectors of length $k$ in the last lemma is a basis of $V$. Sure it's linearly independent, but how do I know that $\dim V = k$?
    – me10240
    Jan 3 '16 at 21:44










  • Hi Mariano, could you give a hint for showing that the set $\mathcal B$ in the last lemma spans $V$? I'm assuming it doesn't and going for a contradiction to the indecomposability of $T$ by constructing further chains. But it seems messy and not at all a "nice exercise". :)
    – David
    Jan 30 '17 at 3:47

































The true meaning of the Jordan canonical form is explained in the context of representation theory, namely, of finite dimensional representations of the algebra $k[t]$ (where $k$ is your algebraically closed ground field):




  • Uniqueness of the normal form is the Krull-Schmidt theorem, and

  • existence is the description of the indecomposable modules of $k[t]$.


Moreover, the description of the indecomposable modules follows more or less easily (in a strong sense: if you did not know about the Jordan canonical form, you could guess it by looking at the following) from two facts: the simple modules are very easy to describe (this is where algebraic closedness comes in), and the extensions between them (in the sense of homological algebra) are also easy to describe (because $k[t]$ is a hereditary ring). Putting these things together (plus the Jordan-Hölder theorem) one gets existence.

























  • 2




    If you are just learning linear algebra, this answer is probably not very satisfying, as it involves things you do not know about. But you can look at it as an enticement to study further and eventually understand it!
    – Mariano Suárez-Álvarez
    Jan 12 '14 at 11:12












  • This approach would probably satisfy Terry Tao's unsatisfied need of knowing why the theorem works :-)
    – Mariano Suárez-Álvarez
    Jan 12 '14 at 11:13






  • 1




    Not really: googling for each of the terms I mentioned, and/or looking at a basic textbook on representation theory (this is surely the best course of action) should satisfy you. The definitions and the statements of the theorems I mentioned comprise the first few chapters of every introductory textbook on representation theory.
    – Mariano Suárez-Álvarez
    Jan 12 '14 at 11:17








  • 9




    Please, do not delete comments to which I have responded: it makes the comment thread incomprehensible.
    – Mariano Suárez-Álvarez
    Jan 12 '14 at 11:23






  • 1




    Looking up Remak's contribution, as in Krull-Remak-Schmidt (in some order), it turns out that he has an interesting and tragic biography. en.wikipedia.org/wiki/Robert_Remak_%28mathematician%29
    – zyx
    Jan 14 '14 at 20:47

































There is no real meaning behind the Jordan normal form; this form is just as good as it gets in general (and then only over a field where the characteristic polynomial splits). That is, as good as it gets in our attempts to understand the action of a linear operator$~\phi$ on a finite dimensional vector space by decomposing the space as a direct sum of $\phi$-stable subspaces, so that we can study the action of$~\phi$ on each of the components separately, and reconstruct the whole action from the action on the components. (This is not the only possible approach to understanding$~\phi$, but one may say that whenever such a decomposition is possible, it does simplify our understanding.) Direct sum decompositions into $\phi$-stable subspaces correspond to reducing the matrix to a block diagonal form (the $\phi$-stability means that the images of basis vectors in each summand only involve basis vectors in the same summand, whence the diagonal blocks), and the finer the decomposition is, the smaller the diagonal blocks. If one can decompose into a sum of $1$-dimensional $\phi$-stable subspaces then one obtains a diagonal matrix, but this is not always possible. Jordan blocks correspond to $\phi$-stable subspaces that cannot be decomposed in any way as a direct sum of smaller such subspaces, so they are the end of the line of our decompositions.



Your concrete questions are easier to answer. Since (subspaces corresponding to) Jordan blocks for$~\lambda$ are obtained from a (non-unique) direct sum decomposition of the generalised eigenspace for $\lambda$, one can study the generalised eigenspace along that decomposition; in particular the (true) eigenspace is the direct sum of the eigenspaces for each Jordan block, and each of them is of dimension$~1$, whence the dimension of the eigenspace for$~\lambda$ equals the number of Jordan blocks for$~\lambda$. See this answer.



This also answers question (a), although I should note that one does not start with eigenvectors to find a decomposition into Jordan blocks. It is the other way around: each Jordan block one can decompose into comes with (up to a scalar) a single eigenvector, and (since the decomposition is a direct sum) these vectors for different blocks are linearly independent. One cannot in general just take any basis of the eigenspace for$~\lambda$ and construct a Jordan block around each basis vector. To see why, consider the situation where the Jordan blocks are to be of sizes $2$ and $1$. Then the eigenvector coming from the larger Jordan block must be not only in the kernel, but also in the image of $\phi-\lambda I$, and not all eigenvectors for$~\lambda$ have that property; therefore only bases where one basis vector is such a special eigenvector can correspond to a decomposition into Jordan blocks. (Actually giving an algorithm for decomposing into Jordan blocks is not easy, although the possibility to do so is an important theoretic fact.)



The answer to question (b) is implied by this: since a Jordan block by nature only contributes $1$ to the geometric multiplicity of$~\lambda$, one must have multiple Jordan blocks inside the generalised eigenspace whenever the geometric multiplicity of$~\lambda$ is more than one. Just think of the simple case of a diagonalisable matrix with a (generalised) eigenspace of dimension $d>1$: a diagonal matrix with $d$ diagonal entries$~\lambda$ is not a Jordan block, and this can only be seen as $d$ Jordan blocks of size $1$ strung together. In fact one should not wish that there were only one Jordan block: this finer decomposition is actually much better (when it is possible). Note that in the diagonalisable case any decomposition of the eigenspace into $1$-dimensional subspaces will do, exemplifying the highly non-unique nature of decompositions.



Finally for question (c) note that inside a single Jordan block, the dimensions of the kernels of the powers of $A-\lambda I$ in your formula increase with the exponent by unit steps until reaching the size of the Jordan block (after which they remain constant), so that the Jordan block contributes at most$~1$ to the difference of dimensions, and it does so if and only if its size is at least $k+1$. Again by the nice nature of direct sums, you can just add up these contributions from each of the Jordan blocks, so the difference of dimensions is equal to the number of Jordan blocks of size at least $k+1$. (And this is a way to see that this number cannot depend on the choices involved in decomposing the space into Jordan blocks.)
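A minimal sketch of that unit-step growth, for a single $n\times n$ Jordan block: with $N=J-\lambda I$ the shift matrix, $\dim\ker N^k$ rises by exactly $1$ per step until it reaches the block size, then stays flat. (The rank shortcut below counts nonzero rows, which is valid only because every nonzero row of a power of the shift matrix is a distinct standard basis vector.)

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

n = 4
# N = J - lambda*I for a single n x n Jordan block: ones just above the diagonal.
N = [[int(j == i + 1) for j in range(n)] for i in range(n)]

def nnz_rows(m):
    # rank, for this special shape only: each nonzero row of N^k has a
    # single 1, and different rows hit different columns
    return sum(1 for row in m if any(row))

dims, P = [], [[int(i == j) for j in range(n)] for i in range(n)]  # P = N^0
for k in range(n + 2):
    dims.append(n - nnz_rows(P))   # dim ker N^k
    P = matmul(P, N)

print(dims)  # [0, 1, 2, 3, 4, 4]: unit steps up to the block size, then flat
```

Summing such staircases over all blocks for$~\lambda$ reproduces the counting formula in the question.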



















































    In these notes I give a "middlebrow" approach to invariant subspaces and canonical forms. Middlebrow means here that it is a bit more sophisticated than what you would encounter in a first linear algebra course -- in particular I work over an arbitrary field and then specialize to the algebraically closed case -- but that it stays in the setting of linear algebra rather than module theory: especially, the idea of abstract isomorphism (of modules) is never used but only similarity (of matrices). Nevertheless this approach would generalize to give the structure theorem for finitely generated modules over a PID with little trouble.



    My perspective is that of understanding invariant subspaces more generally and finding all of them, if possible (I pursue this problem a bit more doggedly than in most of the standard treatments I know). The key result is the Cyclic Decomposition Theorem in Section 5, which says that given any endomorphism $T$ on a finite dimensional vector space $V$, one can write $V$ as a direct sum of subspaces stabilized by $T$ and on which the minimal polynomial of $T$ is primary, i.e., a power of an irreducible polynomial. This is the appropriate generalization of "generalized eigenspace" to the non-algebraically closed case. The Jordan canonical form follows easily and is discussed in Section 6. In my terminology, JCF exists if and only if the minimal polynomial is split, i.e., is a product of linear factors over the ground field.



    Before I wrote these notes it had been many years since I had had to think about JCF, so for me at least they are meant to give a simple(st) conceptual explanation of JCF.



    There are certainly other approaches. Just to briefly point at one more: JCF is a nice application of the Chinese Remainder Theorem for modules: see e.g. Section 4.3 of my commutative algebra notes. From this perspective the natural generalization would be the concept of primary decomposition of a module (which unfortunately I do not discuss in my commutative algebra notes, but most of the standard references do): what this is for becomes more clear when one studies algebraic geometry.





























      Your Answer





      StackExchange.ifUsing("editor", function () {
      return StackExchange.using("mathjaxEditing", function () {
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
      });
      });
      }, "mathjax-editing");

      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "69"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: true,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f411845%2fan-intuitive-approach-to-the-jordan-normal-form%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown
























      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      31





      +50









      Let me sketch a proof of existence of the Jordan canonical form which, I believe, makes it somewhat natural.





      Let us say that a linear endomorphism $f:Vto V$ of a nonzero finite dimensional vector space is decomposable if there exist proper subspaces $U_1$, $U_2$ of $V$ such that $V=U_1oplus U_2$, $f(U_1)subseteq U_1$ and $f(U_2)subseteq U_2$, and let us say that $f$ is indecomposable if it is not decomposable. In terms of bases and matrices, it is easy to see that the map $f$ is decomposable iff there exists a basis of $V$ such that the matrix of $f$ with respect to which has a non-trivial diagonal block decomposition (that it, it is block diagonal two blocks)



      Now it is not hard to prove the following:




      Lemma 1. If $f:Vto V$ is an endomorphism of a nonzero finite dimensional vector space, then there exist $ngeq1$ and nonzero subspaces $U_1$, $dots$, $U_n$ of $V$ such that $V=bigoplus_{i=1}^nU_i$, $f(U_i)subseteq U_i$ for all $iin{1,dots,n}$ and for each such $i$ the restriction $f|_{U_i}:U_ito U_i$ is indecomposable.




      Indeed, you can more or less imitate the usual argument that shows that every natural number larger than one is a product of prime numbers.



      This lemma allows us to reduce the study of linear maps to the study of indecomposable linear maps. So we should start by trying to see how an indecomposable endomorphism looks like.



      There is a general fact that comes useful at times:




      Lemma. If $h:Vto V$ is an endomorphism of a finite dimensional vector space, then there exists an $mgeq1$ such that $V=ker h^moplusdefim{operatorname{im}}im h^m$.




      I'll leave its proof as a pleasant exercise.



      So let us fix an indecomposable endomorphism $f:Vto V$ of a nonzero finite dimensional vector space. As $k$ is algebraically closed, there is a nonzero $vin V$ and a scalar $lambdain k$ such that $f(v)=lambda v$. Consider the map $h=f-lambdamathrm{Id}:Vto V$: we can apply the lemma to $h$, and we conclude that $V=ker h^moplusdefim{operatorname{im}}im h^m$ for some $mgeq1$. moreover, it is very easy to check that $f(ker h^m)subseteqker h^m$ and that $f(im h^m)subseteqim h^m$. Since we are supposing that $f$ is indecomposable, one of $ker h^m$ or $im h^m$ must be the whole of $V$. As $v$ is in the kernel of $h$, so it is also in the kernel of $h^m$, so it is not in $im h^m$, and we see that $ker h^m=V$.



This means, precisely, that $h^m:V\to V$ is the zero map, and we see that $h$ is nilpotent. Suppose its nilpotency index is $k\geq1$, and let $w\in V$ be a vector such that $h^{k-1}(w)\neq0=h^k(w)$.




Lemma. The set $\mathcal B=\{w,h(w),h^2(w),\dots,h^{k-1}(w)\}$ is a basis of $V$.




      This is again a nice exercise.
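As a sanity check (not a proof), one can build such a chain numerically for an illustrative nilpotent matrix whose nilpotency index equals $\dim V$:

```python
def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Illustrative nilpotent h with nilpotency index k = 3 = dim V.
h = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

# Start from a vector w with h^(k-1)(w) != 0 and collect w, h(w), h^2(w), ...
w = [0, 0, 1]
chain, v = [w], w
while True:
    v = apply(h, v)
    if all(c == 0 for c in v):
        break
    chain.append(v)

assert chain == [[0, 0, 1], [0, 1, 0], [1, 0, 0]]  # three independent vectors,
assert len(chain) == 3                             # a basis of V, as the lemma says
```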



Now you should be able to check easily that the matrix of $f$ with respect to the basis $\mathcal B$ of $V$ is a Jordan block.



      In this way we conclude that every indecomposable endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a Jordan block as a matrix.
      According to Lemma 1, then, every endomorphism of a nonzero finite dimensional vector space has, in an appropriate basis, a block diagonal matrix with Jordan blocks.
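For instance, here is an illustrative change-of-basis check. Ordering the chain as $(w, h(w), h^2(w))$ produces the lower triangular version of the Jordan block; the usual upper triangular block comes from reversing the ordering.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# f is already an (upper) Jordan block with eigenvalue 2 in the standard basis.
f = [[2, 1, 0],
     [0, 2, 1],
     [0, 0, 2]]

# With h = f - 2*Id and w = e3, the chain is w, h(w), h^2(w) = e3, e2, e1.
# P has these chain vectors as columns; this P is a permutation with P^-1 = P.
P = [[0, 0, 1],
     [0, 1, 0],
     [1, 0, 0]]

# The matrix of f in the chain basis, P^-1 f P, is again a Jordan block
# (lower triangular for this ordering of the chain).
assert matmul(P, matmul(f, P)) == [[2, 0, 0], [1, 2, 0], [0, 1, 2]]
```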






• 3 — This argument is purely existential. But as soon as one knows the JNF exists, then one can use it to first prove uniqueness and then to relate it to invariants like the minimal polynomial and the characteristic polynomial in order to come closer to effectively finding it. – Mariano Suárez-Álvarez, Jan 13 '14 at 8:09

• 2 — Nice proof. I got a question. Jordan normal form and rational canonical form are equivalent solutions to the same problem. Yet they both live on. Clearly they have different utility. Which one is better where? – Charlie Frohman, Jan 13 '14 at 18:13

• @CharlieFrohman: I suppose that could be a stand-alone question here (if it isn't already). – Shaun, Jan 15 '14 at 10:28

• I do not understand why the chain of vectors of length $k$ in the last lemma is a basis of $V$. Sure it's linearly independent, but how do I know that $\dim V = k$? – me10240, Jan 3 '16 at 21:44

• Hi Mariano, could you give a hint for showing that the set $\mathcal B$ in the last lemma spans $V$? I'm assuming it doesn't and going for a contradiction to the indecomposability of $T$ by constructing further chains. But it seems messy and not at all a "nice exercise". :) – David, Jan 30 '17 at 3:47






answered Jan 13 '14 at 4:55 by Mariano Suárez-Álvarez · edited Feb 1 '17 at 3:13 by David

The true meaning of the Jordan canonical form is explained in the context of representation theory, namely, of finite dimensional representations of the algebra $k[t]$ (where $k$ is your algebraically closed ground field):

• Uniqueness of the normal form is the Krull–Schmidt theorem, and

• existence is the description of the indecomposable modules of $k[t]$.

Moreover, the description of indecomposable modules follows more or less easily (in a strong sense: if you did not know about the Jordan canonical form, you could guess it by looking at the following) from two facts: the simple modules are very easy to describe (this is where algebraic closedness comes in), and the extensions between them (in the sense of homological algebra) are also easy to describe (because $k[t]$ is a hereditary ring). Putting these things together (plus the Jordan–Hölder theorem) one gets existence.






• 2 — If you are just learning linear algebra, this answer is probably not very satisfying, as it involves things you do not know about. But you can take it as an enticement to study further and eventually understand it! – Mariano Suárez-Álvarez, Jan 12 '14 at 11:12

• This approach would probably satisfy Terry Tao's unsatisfied need of knowing why the theorem works :-) – Mariano Suárez-Álvarez, Jan 12 '14 at 11:13

• 1 — Not really: googling for each of the terms I mentioned, and/or looking at a basic textbook on representation theory (this is surely the best course of action) should satisfy you. The definitions and the statements of the theorems I mentioned comprise the first few chapters of every introductory textbook on representation theory. – Mariano Suárez-Álvarez, Jan 12 '14 at 11:17

• 9 — Please, do not delete comments to which I have responded: it makes the comment thread incomprehensible. – Mariano Suárez-Álvarez, Jan 12 '14 at 11:23

• 1 — Looking up Remak's contribution, as in Krull–Remak–Schmidt (in some order), it turns out that he has an interesting and tragic biography. en.wikipedia.org/wiki/Robert_Remak_%28mathematician%29 – zyx, Jan 14 '14 at 20:47
















answered Jan 12 '14 at 11:08 by Mariano Suárez-Álvarez · edited Nov 29 '18 at 15:31 by Martin Sleziak


There is no real meaning behind the Jordan normal form; this form is just as good as it gets in general (and then only over a field where the characteristic polynomial splits). That is, as good as it gets in our attempts to understand the action of a linear operator $\phi$ on a finite dimensional vector space by decomposing the space as a direct sum of $\phi$-stable subspaces, so that we can study the action of $\phi$ on each of the components separately, and reconstruct the whole action from the action on the components. (This is not the only possible approach to understanding $\phi$, but one may say that whenever such a decomposition is possible, it does simplify our understanding.) Direct sum decompositions into $\phi$-stable subspaces correspond to reducing the matrix to a block diagonal form (the $\phi$-stability means that the images of basis vectors in each summand only involve basis vectors in the same summand, whence the diagonal blocks), and the finer the decomposition is, the smaller the diagonal blocks. If one can decompose into a sum of $1$-dimensional $\phi$-stable subspaces then one obtains a diagonal matrix, but this is not always possible. Jordan blocks correspond to $\phi$-stable subspaces that cannot be decomposed in any way as a direct sum of smaller such subspaces, so they are the end of the line of our decompositions.



      Your concrete questions are easier to answer. Since (subspaces corresponding to) Jordan blocks for $\lambda$ are obtained from a (non-unique) direct sum decomposition of the generalised eigenspace for $\lambda$, one can study the generalised eigenspace along that decomposition; in particular the (true) eigenspace is the direct sum of the eigenspaces for each Jordan block, and each of them is of dimension $1$, whence the dimension of the eigenspace for $\lambda$ equals the number of Jordan blocks for $\lambda$. See this answer.
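      As an illustrative sketch (not part of the original answer), this can be checked with sympy on a hypothetical $3\times3$ matrix made of Jordan blocks of sizes $2$ and $1$ for $\lambda = 5$: the dimension of the eigenspace comes out equal to the number of blocks.

      ```python
      from sympy import Matrix, eye

      # J_2(5) + J_1(5): two Jordan blocks for the eigenvalue 5
      A = Matrix([[5, 1, 0],
                  [0, 5, 0],
                  [0, 0, 5]])

      lam = 5
      # geometric multiplicity = dimension of ker(A - lam*I)
      geometric_multiplicity = len((A - lam * eye(3)).nullspace())
      print(geometric_multiplicity)  # 2, matching the two Jordan blocks
      ```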



      This also answers question (a), although I should note that one does not start with eigenvectors to find a decomposition into Jordan blocks. It is the other way around: each Jordan block one can decompose into comes with (up to a scalar) a single eigenvector, and (since the decomposition is a direct sum) these vectors for different blocks are linearly independent. One cannot in general just take any basis of the eigenspace for $\lambda$ and construct a Jordan block around each basis vector. To see why, consider the situation where the Jordan blocks are to be of sizes $2$ and $1$. Then the eigenvector coming from the larger Jordan block must be not only in the kernel, but also in the image of $\phi-\lambda I$, and not all eigenvectors for $\lambda$ have that property; therefore only bases where one basis vector is such a special eigenvector can correspond to a decomposition into Jordan blocks. (Actually giving an algorithm for decomposing into Jordan blocks is not easy, although the possibility to do so is an important theoretic fact.)
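      A small sympy sketch of the sizes-$2$-and-$1$ situation just described (the matrix is an assumed example, not from the answer): the eigenvector heading the size-$2$ block lies in the image of $A - \lambda I$, while the eigenvector from the size-$1$ block does not.

      ```python
      from sympy import Matrix, eye

      A = Matrix([[5, 1, 0],   # Jordan blocks of sizes 2 and 1 for lambda = 5
                  [0, 5, 0],
                  [0, 0, 5]])
      N = A - 5 * eye(3)

      image = N.columnspace()   # basis of the image of A - 5I
      e1 = Matrix([1, 0, 0])    # eigenvector from the size-2 block: N * e2 = e1
      e3 = Matrix([0, 0, 1])    # eigenvector from the size-1 block

      # The image is spanned by e1 alone, so e1 is the special eigenvector
      # that can head a size-2 Jordan chain; e3 lies outside the image.
      print(len(image))         # 1
      ```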



      The answer to question (b) is implied by this: since a Jordan block by nature only contributes $1$ to the geometric multiplicity of $\lambda$, one must have multiple Jordan blocks inside the generalised eigenspace whenever the geometric multiplicity of $\lambda$ is more than one. Just think of the simple case of a diagonalisable matrix with a (generalised) eigenspace of dimension $d>1$: a diagonal matrix with $d$ diagonal entries $\lambda$ is not a Jordan block, and this can only be seen as $d$ Jordan blocks of size $1$ strung together. In fact one should not wish that there were only one Jordan block: this finer decomposition is actually much better (when it is possible). Note that in the diagonalisable case any decomposition of the eigenspace into $1$-dimensional subspaces will do, exemplifying the highly non-unique nature of decompositions.



      Finally for question (c) note that inside a single Jordan block, the dimensions of the kernels of the powers of $A-\lambda I$ in your formula increase with the exponent by unit steps until reaching the size of the Jordan block (after which they remain constant), so that the Jordan block contributes at most $1$ to the difference of dimensions, and it does so if and only if its size is at least $k+1$. Again by the nice nature of direct sums, you can just add up these contributions from each of the Jordan blocks, so the difference of dimensions is equal to the number of Jordan blocks of size at least $k+1$. (And this is a way to see that this number cannot depend on the choices involved in decomposing the space into Jordan blocks.)
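      The counting formula from question (c) can be verified numerically; here is a hedged sympy sketch on an assumed $3\times3$ example with Jordan blocks of sizes $2$ and $1$ for $\lambda = 5$.

      ```python
      from sympy import Matrix, eye

      A = Matrix([[5, 1, 0],
                  [0, 5, 0],
                  [0, 0, 5]])
      N = A - 5 * eye(3)

      def nullity(M):
          """Dimension of the kernel of M."""
          return len(M.nullspace())

      # dim ker N^(k+1) - dim ker N^k counts Jordan blocks of size >= k+1
      blocks_ge = [nullity(N**(k + 1)) - nullity(N**k) for k in range(3)]
      print(blocks_ge)  # [2, 1, 0]: two blocks of size >= 1, one of size >= 2
      ```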


































          edited Apr 13 '17 at 12:21 by Community
          answered Jan 12 '14 at 14:39 by Marc van Leeuwen





































              In these notes I give a "middlebrow" approach to invariant subspaces and canonical forms. Middlebrow means here that it is a bit more sophisticated than what you would encounter in a first linear algebra course -- in particular I work over an arbitrary field and then specialize to the algebraically closed case -- but that it stays in the setting of linear algebra rather than module theory: especially, the idea of abstract isomorphism (of modules) is never used but only similarity (of matrices). Nevertheless this approach would generalize to give the structure theorem for finitely generated modules over a PID with little trouble.



              My perspective is that of understanding invariant subspaces more generally and finding all of them, if possible (I pursue this problem a bit more doggedly than in most of the standard treatments I know). The key result is the Cyclic Decomposition Theorem in Section 5, which says that given any endomorphism $T$ on a finite dimensional vector space $V$, one can write $V$ as a direct sum of subspaces stabilized by $T$ and on which the minimal polynomial of $T$ is primary, i.e., a power of an irreducible polynomial. This is the appropriate generalization of "generalized eigenspace" to the non-algebraically closed case. The Jordan canonical form follows easily and is discussed in Section 6. In my terminology, JCF exists if and only if the minimal polynomial is split, i.e., is a product of linear factors over the ground field.



              Before I wrote these notes it had been many years since I had had to think about JCF, so for me at least they are meant to give a simple(st) conceptual explanation of JCF.



              There are certainly other approaches. Just to briefly point at one more: JCF is a nice application of the Chinese Remainder Theorem for modules: see e.g. Section 4.3 of my commutative algebra notes. From this perspective the natural generalization would be the concept of primary decomposition of a module (which unfortunately I do not discuss in my commutative algebra notes, but most of the standard references do): what this is for becomes more clear when one studies algebraic geometry.






































                  edited Nov 29 '18 at 15:33 by Martin Sleziak
                  answered Jan 17 '14 at 8:43 by Pete L. Clark





























