Nonsingularity of Euclidean distance matrix

Let $x_1, \dots, x_k \in \mathbb{R}^n$ be distinct points and let $A$ be the matrix defined by $A_{ij} = d(x_i, x_j)$, where $d$ is the Euclidean distance. Is $A$ always nonsingular?



I have a feeling this should be well known (or, at least, that a reference should exist); on the other hand, the statement fails for general metrics (take, e.g., the path metric on the cycle $C_4$).



Edit: changed the number of points from $n$ to general $k$.
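
To make the contrast concrete, here is a minimal NumPy sketch (the random-point check is empirical, not a proof): the path-metric matrix of $C_4$ is singular, since rows $1+3$ and $2+4$ both sum to $(2,2,2,2)$, while the Euclidean distance matrix of random distinct points comes out nonsingular.

```python
import numpy as np

# Path metric on the cycle C_4: singular, since
# (row 1 + row 3) = (row 2 + row 4) = (2, 2, 2, 2).
C4 = np.array([[0., 1., 2., 1.],
               [1., 0., 1., 2.],
               [2., 1., 0., 1.],
               [1., 2., 1., 0.]])
print(np.linalg.matrix_rank(C4))  # 3 < 4, so singular

# Euclidean distance matrix of random distinct points: nonsingular in practice.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))                      # k = 6 points in R^3
A = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # A[i, j] = d(x_i, x_j)
print(np.linalg.det(A))                              # nonzero
```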










  • This clearly holds for $n=2$. Perhaps some induction argument can be made using minors?
    – Alex Becker
    Aug 25 '12 at 18:22










  • WLOG, let one of the points be zero and the others be located at $v_i$. Suppose $\Vert v_i \Vert = 1$. The distance matrix is $2 \cdot 1 - (v_i \cdot v_j)$, where $1$ is the matrix of all ones and $(v_i \cdot v_j) = V^\top V$, where $V$ has the $v_i$ as its rows. Both of these matrices can be very singular, so I don't think the matrix is necessarily invertible.
    – mike
    Aug 25 '12 at 19:28








  • Note that the term "Euclidean distance matrix" (sometimes abbreviated "EDM") is usually used for the matrix of squared Euclidean distances; see Wikipedia. An EDM can be singular, e.g. in the case of a square, whose EDM is the distance matrix of the path metric on the cycle $C_4$.
    – joriki
    Aug 25 '12 at 21:52








  • The matrix (your non-squared one) for a unit $k$-simplex (with $n = k+1$) has one positive eigenvalue $k$ and $k$ negative eigenvalues $-1$. Empirically, the matrix for a line of $n$ equidistant points (e.g. for $n = 10$) seems to also have one positive eigenvalue and $n-1$ negative eigenvalues. Since one might expect these two cases to be extreme in some sense, this suggests that the matrix might always have the same signature, in which case none of the eigenvalues would ever cross $0$ to make it singular. (A numerical check of this observation appears after these comments.)
    – joriki
    Aug 25 '12 at 22:23










  • We can write the distance matrix in terms of a matrix with pairwise displacement vectors as columns. Can we use some linear independence argument on this displacement matrix? E.g. there can be a maximum of 3 linearly independent vectors in $\mathbb{R}^3$. And it seems linked to the fact that a squared Euclidean distance matrix in $\mathbb{R}^3$ will have rank at most $3$.
    – Kartik Audhkhasi
    Aug 26 '12 at 0:19
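
A quick empirical check of joriki's signature observation (a NumPy sketch; the simplex computation is exact, the collinear case is numerical):

```python
import numpy as np

# Unit k-simplex: the distance matrix is J - I (all ones minus identity),
# with eigenvalues k (once) and -1 (k times).
k = 4
S = np.ones((k + 1, k + 1)) - np.eye(k + 1)
print(np.linalg.eigvalsh(S))             # [-1, -1, -1, -1, 4]

# n equidistant collinear points: A[i, j] = |i - j|.
n = 10
idx = np.arange(n)
A = np.abs(idx[:, None] - idx[None, :]).astype(float)
eig = np.linalg.eigvalsh(A)
print((eig > 0).sum(), (eig < 0).sum())  # 1 and n-1, matching the conjecture
```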















linear-algebra metric-spaces






asked Aug 25 '12 at 18:14 by Marcin Kotowski; edited Aug 26 '12 at 11:37












2 Answers

















Answer by joriki (answered Aug 26 '12 at 10:13; score 5)













I think it should be possible to show that your distance matrix is always nonsingular by showing that it is always a Euclidean distance matrix (in the usual sense of the term) for a non-degenerate set of points. I don't give a full proof, but I sketch some ideas that I think can be fleshed out into one.



Two relevant papers on Euclidean distance matrices are "Discussion of a Set of Points in Terms of Their Mutual Distances" by Young and Householder and "Metric Spaces and Positive Definite Functions" by Schoenberg. They show that an $n \times n$ matrix $A$ is a Euclidean distance matrix if and only if $x^\top A x \le 0$ for all $x$ with $e^\top x = 0$ (where $e$ is the vector with $1$ in each component), and that the affine dimension of the points is $n$ if and only if the inequality is strict.



It follows that a Euclidean distance matrix can only be singular if the affine dimension of the points is less than $n$: if the affine dimension is $n$, there cannot be an eigenvalue $0$, since there is a positive eigenvalue (as $e^\top A e > 0$), and the span of these two eigenspaces would non-trivially intersect the space $e^\top x = 0$, contradicting the negative definiteness of $A$ on that space.



To use all this for your case, one could try to show that a distance matrix in your sense is always a Euclidean distance matrix in the usual sense for points with affine dimension $n$. I think this could be done by continuously varying the exponent $\alpha$ in $A_{ij} = d(x_i, x_j)^\alpha$ from $1$ to $2$ and showing (a) that there is always a direction in which the points can move such that $A$ remains their distance matrix with the changing exponent, and (b) that this movement necessarily causes them to have affine dimension $n$.



To get a feel for how this might work, consider a square: the movement would bend the square into a tetrahedron. The proof would need to account for the fact that this seems to hold only for $\alpha < 2$; you can see from the example of three points on a line that they can be bent to accommodate $\alpha < 2$ but not $\alpha > 2$.
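
One can probe the exponent deformation numerically; the sketch below (NumPy, random points, purely empirical) suggests the signature of $A_{ij} = d(x_i, x_j)^\alpha$ stays at one positive and $k-1$ negative eigenvalues throughout $0 < \alpha < 2$, so the matrix never becomes singular along the deformation.

```python
import numpy as np

# Empirical check: for random points, D**alpha keeps one positive and
# k-1 negative eigenvalues for 0 < alpha < 2, hence stays nonsingular.
rng = np.random.default_rng(1)
X = rng.standard_normal((7, 3))                      # k = 7 points in R^3
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # plain distances

for alpha in (0.5, 1.0, 1.5, 1.99):
    eig = np.linalg.eigvalsh(D ** alpha)
    print(alpha, (eig > 0).sum(), (eig < 0).sum())   # expect 1 and 6
```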



























  • Thanks for the answer, although: what happens if the number of points is greater than $n$? Dimension arguments won't work anymore.
    – Marcin Kotowski
    Aug 26 '12 at 11:37






  • @Marcin: I didn't mention that because you'd originally asked about $n$ points in $\mathbb{R}^n$. It doesn't make a difference, though; just embed the points in $\mathbb{R}^k$ (e.g. by adding $0$s) so they're free to move.
    – joriki
    Aug 26 '12 at 11:51






  • I think Young and Householder study a different matrix, whose entries are (mostly) squares of distances rather than distances themselves. Am I misreading them?
    – darij grinberg
    Nov 23 at 3:47


















Answer by David E Speyer (answered Nov 23 at 18:40; score 1)













This is true. Over in the comments of a related question, darij grinberg has just shown that $A$ is negative definite when restricted to the codimension-one subspace where the coordinates sum to $0$. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $A$, and let $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_{n-1}$ be the eigenvalues of $A$ restricted to this subspace, so darij shows that $0 > \mu_1 \geq \cdots \geq \mu_{n-1}$. By the Eigenvalue Interlacing Theorem, we have $\mu_1 \geq \lambda_2 \geq \mu_2 \geq \lambda_3 \geq \cdots \geq \mu_{n-1} \geq \lambda_n$, so we know that $0 > \lambda_2, \dots, \lambda_n$. So it remains to show that $\lambda_1$ is not zero.



If $\lambda_1$ were $0$, then $A$ would be negative semidefinite. But, letting $\vec{j}$ denote the all-ones vector, we have $\vec{j}^T A \vec{j} > 0$, a contradiction.
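
Both ingredients of this argument are easy to check numerically; a minimal NumPy sketch (random points; the QR step just builds an orthonormal basis of the sum-zero subspace):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 4))                      # 6 points in R^4
A = np.linalg.norm(X[:, None] - X[None, :], axis=2)
n = A.shape[0]

# Orthonormal basis U of { x : sum(x) = 0 }: complete the all-ones vector
# to an orthogonal matrix and drop its first column.
j = np.ones((n, 1))
Q, _ = np.linalg.qr(j, mode='complete')  # first column is proportional to j
U = Q[:, 1:]

mu = np.linalg.eigvalsh(U.T @ A @ U)     # eigenvalues of A on the subspace
print(mu.max() < 0)                      # True: negative definite there
print((j.T @ A @ j).item() > 0)          # True: so lambda_1 must be positive
```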



























  • Very nice answer!
    – Capublanca
    Dec 3 at 1:44










