Measuring “concentration” in an expansion


Suppose we have a vector $v \in \mathbb{R}^{n}$ which we expand in some orthonormal basis $\{g_{m}\}$ of $\mathbb{R}^{n}$:
\begin{align*}
v = \sum_{m=1}^{n} a_{m} g_{m}
\end{align*}

I want to measure how "concentrated" $v$ is in the basis $\{g_{m}\}$. To wit, I would like some measurement that is maximized when $a_{m} = 0$ for all but one $m$, and minimized when all the $a_{m}$ have the same magnitude. Is there a nice way to measure this "concentration" (preferably easily computed), and is there standard terminology for it?

vectors linear-transformations

asked Dec 4 '18 at 20:49, edited Dec 4 '18 at 20:56 – user14717
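Since the basis is orthonormal, the coefficients are just inner products, $a_{m} = \langle v, g_{m} \rangle$. As a concrete setup for the measures discussed below, here is a minimal NumPy sketch; the random-basis construction via QR and the variable names are illustrative only, not part of the question.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Columns of Q form a random orthonormal basis {g_m} of R^n.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

v = rng.standard_normal(n)
a = Q.T @ v                       # coefficients a_m = <v, g_m>
print(np.allclose(Q @ a, v))      # v = sum_m a_m g_m, so this prints True
```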












  • Fix $p<q$. Then $\|a\|_q/\|a\|_p$ has the extrema that you want. – Federico, Dec 4 '18 at 20:57

  • Alternatively, you could consider $p_i = a_i^2/\|a\|_2^2$ and take the Shannon entropy of the $p_i$'s. – Federico, Dec 4 '18 at 20:59

  • After asking this question, I found this paper: ieeexplore.ieee.org/document/5238742 – user14717, Dec 5 '18 at 0:27

  • @Federico: Your answer is basically summarized in the linked paper, so it's good. Do you wish to move your comments to an answer? – user14717, Dec 5 '18 at 0:44

  • Unfortunately I don't have access to the paper. I don't mind if you post your own answer and accept it. By the way, I'm curious to know why the Shannon entropy is an inferior choice, based on their "Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates, and Babies" properties. – Federico, Dec 5 '18 at 14:28
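Both suggestions in the comments are one-liners to compute. A minimal NumPy sketch follows (the choice $p=1$, $q=2$ and the helper names are illustrative only). Note that the norm ratio is maximized by a single nonzero coefficient, while the entropy is maximized by equal magnitudes, so the entropy measures spread rather than concentration.

```python
import numpy as np

def norm_ratio(a, p=1, q=2):
    """||a||_q / ||a||_p with p < q: equals 1 when only one coefficient is
    nonzero, and n**(1/q - 1/p) (its minimum) when all magnitudes are equal."""
    a = np.abs(np.asarray(a))
    return np.sum(a**q) ** (1.0 / q) / np.sum(a**p) ** (1.0 / p)

def coefficient_entropy(a):
    """Shannon entropy of p_i = a_i^2 / ||a||_2^2: equals 0 when only one
    coefficient is nonzero and log(n) when all magnitudes are equal."""
    a = np.abs(np.asarray(a))
    p = a**2 / np.sum(a**2)
    p = p[p > 0]                          # convention: 0 * log 0 = 0
    return -np.sum(p * np.log(p))

spike = np.array([0.0, 0.0, 3.0, 0.0])    # concentrated on one basis vector
flat = np.array([1.0, -1.0, 1.0, -1.0])   # equal magnitudes
print(norm_ratio(spike), norm_ratio(flat))                    # 1.0, 0.5
print(coefficient_entropy(spike), coefficient_entropy(flat))  # 0.0, log(4)
```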
















1 Answer

The Gini coefficient is an excellent way to measure sparsity. Here, we use the following definition of the Gini coefficient:
\begin{align*}
G(v) := \frac{1}{n-1}\left[ \frac{2}{\left\| v \right\|_{1}} \sum_{i=0}^{n-1} (i+1)|v_i^{(s)}| - n - 1 \right]
\end{align*}

where $v_{i}^{(s)}$ is the $i$th coefficient of the vector $v$ sorted by increasing magnitude. Note that this definition differs from (say) Wikipedia's, which applies only to wealth inequality, where each element is real and non-negative.



$G$ has some nice properties:


  • If $v_{j} \mapsto e^{i\theta_j}v_{j}$, we feel intuitively that the sparsity is unchanged, and indeed $G(v)$ is invariant under these phase changes.

  • If $0 \ne \lambda \in \mathbb{C}$, it is easy to see that $G(\lambda v) = G(v)$ (changing units doesn't change sparsity).

  • If $v_{j} = 0$ for all but one $j$, then $G(v) = 1$, which is the maximum of $G$. If all coefficients are of equal magnitude, then $G(v) = 0$, which is the minimum of $G$.

  • If $\Pi$ is a permutation of the elements of $v$, then $G(\Pi v) = G(v)$. (Permutations don't affect sparsity.) The Gini coefficient achieves this via sorting, which is not ideal since it requires $\mathcal{O}(n\log n)$ operations, but it does make the invariance manifest.

  • The property of the Gini coefficient not shared by the Hoyer measure (a normalized ratio of $\ell_1$ and $\ell_2$ norms) is its invariance under concatenation: $G(v \oplus v) = G(v)$. How much weight should be given to this property is unclear, especially given the sorting requirement.
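To make the definition concrete, here is a minimal NumPy sketch of $G$ exactly as written above, with spot-checks of the extreme values and of the scale and permutation invariances; the function name and the test vectors are illustrative only.

```python
import numpy as np

def gini(v):
    """Gini-type sparsity measure of a (possibly complex) coefficient vector:
    G(v) = (2/||v||_1 * sum_{i=0}^{n-1} (i+1)|v_(i)| - n - 1) / (n - 1),
    with magnitudes |v_(0)| <= ... <= |v_(n-1)| sorted in increasing order.
    Equals 0 when all magnitudes are equal, 1 when only one entry is nonzero."""
    a = np.sort(np.abs(np.asarray(v)))      # sorted magnitudes, O(n log n)
    n = a.size
    l1 = a.sum()                            # ||v||_1
    if n < 2 or l1 == 0:
        raise ValueError("need n >= 2 and a nonzero vector")
    ranks = np.arange(1, n + 1)             # the weights i + 1, i = 0..n-1
    return ((2.0 / l1) * np.dot(ranks, a) - n - 1) / (n - 1)

rng = np.random.default_rng(0)
v = rng.standard_normal(8)
print(gini(np.ones(8)))                               # 0.0 (flat spectrum)
print(gini(np.eye(8)[3]))                             # 1.0 (single nonzero)
print(np.isclose(gini(v), gini(5.0 * v)))             # True: scale invariance
print(np.isclose(gini(v), gini(rng.permutation(v))))  # True: permutation invariance
```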







answered Dec 7 '18 at 6:05 – user14717