Is the Sum of Principal Minors Equal to the Pseudo-Determinant?


























I'd like to prove the following statement, and first check whether it's true at all.




Let $M$ be a diagonalizable $n \times n$ matrix. If the rank of $M$ equals $r\ (>0)$, then the pseudo-determinant $\operatorname{pdet} M$ equals the sum of all principal minors of order $r$.




The pseudo-determinant is the product of all nonzero eigenvalues of a square matrix. Eigenvalues are scaling factors, as far as I know, and the principal minors of order $r$ are also small-scale scaling factors (determinants) taken from $M$.



But does the pseudo-determinant really equal the sum of all principal minors of order $r$? To me it looks like the product of those minors should equal the pseudo-determinant.



Which one is correct?
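Here is a quick numerical sanity check (a minimal numpy sketch I put together; `pdet` and `principal_minors` are my own helper names):

    import numpy as np
    from itertools import combinations

    def pdet(M, tol=1e-9):
        # product of all nonzero eigenvalues of M
        eigvals = np.linalg.eigvals(M)
        return np.prod([ev for ev in eigvals if abs(ev) > tol])

    def principal_minors(M, r):
        # determinants of all r x r principal submatrices of M
        n = M.shape[0]
        return [np.linalg.det(M[np.ix_(S, S)]) for S in combinations(range(n), r)]

    rng = np.random.default_rng(0)
    # a diagonalizable 5x5 matrix of rank 3: P diag(2, 3, 5, 0, 0) P^{-1}
    P = rng.standard_normal((5, 5))
    M = P @ np.diag([2.0, 3.0, 5.0, 0.0, 0.0]) @ np.linalg.inv(P)

    minors = principal_minors(M, 3)
    print(np.real_if_close(pdet(M)))  # ~30.0 (= 2 * 3 * 5)
    print(sum(minors))                # ~30.0: the sum matches
    print(np.prod(minors))            # not 30: the product does not

On examples like this the sum of the order-$r$ principal minors matches the pseudo-determinant, while the product does not.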































  • This is true. The sum of all principal minors of order $r$ is the sum of all $r$-wise products of the eigenvalues. Now, if only $r$ of the eigenvalues are nonzero, then the latter sum will be the product of these $r$ nonzero eigenvalues. Hence, so is the former sum.
    – darij grinberg, Apr 2 '18 at 3:25










  • @darijgrinberg What do you mean by the sum of all $r$-wise products of the eigenvalues? The OP says just that the pseudo-determinant equals the sum of all principal minors, and the definition of the pseudo-determinant is the product of all nonzero eigenvalues.
    – delinco, Apr 2 '18 at 3:55










  • If $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$, then the sum of all principal minors of order $r$ of $A$ is $\sum\limits_{1 \leq i_1 < i_2 < \cdots < i_r \leq n} \lambda_{i_1} \lambda_{i_2} \cdots \lambda_{i_r}$. This is the well-known fact that I'm referring to. Of course, if $A$ has only $r$ nonzero eigenvalues, then this sum will have only one nonzero addend.
    – darij grinberg, Apr 2 '18 at 3:56












  • @darijgrinberg Could you give me a reference where I can follow the proof?
    – delinco, Apr 2 '18 at 3:57






  • Exterior powers. For a really elementary self-contained proof, see Corollary 5.163 in my Notes on the combinatorial fundamentals of algebra, version of 21 March 2018 (for the solution, see Exercise 5.48). But the gist of the argument is explained well in math.stackexchange.com/questions/336048/coefficient-of-detxia/… .
    – darij grinberg, Apr 2 '18 at 4:42
















asked Apr 2 '18 at 3:23 by delinco
















































1 Answer































In order not to leave this question unanswered, let me prove the claim along
the lines I've suggested in the comments.



Let us agree on a few notations:




  • Let $n$ and $m$ be two nonnegative integers. Let $A=\left(a_{i,j}\right)_{1\leq i\leq n,\ 1\leq j\leq m}$ be an $n\times m$-matrix (over some ring). Let $U=\left\{u_{1}<u_{2}<\cdots<u_{p}\right\}$ be a subset of $\left\{1,2,\ldots,n\right\}$, and let $V=\left\{v_{1}<v_{2}<\cdots<v_{q}\right\}$ be a subset of $\left\{1,2,\ldots,m\right\}$. Then, $A_{U,V}$ shall denote the submatrix $\left(a_{u_{i},v_{j}}\right)_{1\leq i\leq p,\ 1\leq j\leq q}$ of $A$. (This is the matrix obtained from $A$ by crossing out all rows except for the rows numbered $u_{1},u_{2},\ldots,u_{p}$ and crossing out all columns except for the columns numbered $v_{1},v_{2},\ldots,v_{q}$.) For example,
    \begin{equation}
    \begin{pmatrix}
    a_{1} & a_{2} & a_{3} & a_{4}\\
    b_{1} & b_{2} & b_{3} & b_{4}\\
    c_{1} & c_{2} & c_{3} & c_{4}\\
    d_{1} & d_{2} & d_{3} & d_{4}
    \end{pmatrix}_{\left\{1,3,4\right\},\left\{2,4\right\}}
    =
    \begin{pmatrix}
    a_{2} & a_{4}\\
    c_{2} & c_{4}\\
    d_{2} & d_{4}
    \end{pmatrix}.
    \end{equation}
    (A short numpy sketch of this indexing appears after this list.)

  • If $n$ is a nonnegative integer, then $I_n$ will denote the $n\times n$ identity matrix (over whatever ring we are working in).
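In numpy terms, the submatrix $A_{U,V}$ is just simultaneous fancy indexing; a minimal sketch of the example above (the concrete entries are only a stand-in):

    import numpy as np

    # The 4x4 example above, with U = {1, 3, 4} and V = {2, 4}.
    # Indices in the text are 1-based; numpy is 0-based, hence the shift.
    A = np.arange(1, 17).reshape(4, 4)  # a concrete 4x4 stand-in
    U = [u - 1 for u in (1, 3, 4)]
    V = [v - 1 for v in (2, 4)]
    print(A[np.ix_(U, V)])              # the 3x2 submatrix A_{U,V}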



Fix a nonnegative integer $n$ and a field $\mathbb{F}$.



We shall use the following known fact:




Theorem 1. Let $\mathbb{K}$ be a commutative ring. Let $A$ be an $n\times n$-matrix over $\mathbb{K}$. Let $x\in\mathbb{K}$. Then,
\begin{align}
\det\left(A+xI_n\right) & =\sum_{P\subseteq\left\{1,2,\ldots,n\right\}}\det\left(A_{P,P}\right)x^{n-\left\vert P\right\vert}
\label{darij.eq.t1.1}
\tag{1}
\\
& =\sum_{k=0}^{n}\left(\sum_{\substack{P\subseteq\left\{1,2,\ldots,n\right\};\\\left\vert P\right\vert =n-k}}\det\left(A_{P,P}\right)\right)x^{k}.
\label{darij.eq.t1.2}
\tag{2}
\end{align}




Theorem 1 appears, e.g., as Corollary 6.164 in my Notes on the combinatorial fundamentals of algebra, in the version of 10th January 2019 (where I use the more cumbersome notation $\operatorname*{sub}\nolimits_{w\left(P\right)}^{w\left(P\right)}A$ instead of $A_{P,P}$). $\blacksquare$
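Theorem 1 is also easy to sanity-check numerically; here is a minimal sketch (with a random matrix, and using numpy's convention that the determinant of the empty $0\times 0$ submatrix is $1$):

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    n = 4
    A = rng.integers(-3, 4, size=(n, n)).astype(float)
    x = 2.0

    lhs = np.linalg.det(A + x * np.eye(n))
    # right-hand side of (1); the P = {} term contributes det of a 0x0
    # matrix, which numpy evaluates to 1
    rhs = sum(np.linalg.det(A[np.ix_(P, P)]) * x ** (n - len(P))
              for k in range(n + 1)
              for P in combinations(range(n), k))
    print(np.isclose(lhs, rhs))  # True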




Corollary 2. Let $A$ be an $n\times n$-matrix over a field $\mathbb{F}$. Let $r\in\left\{0,1,\ldots,n\right\}$. Consider the $n\times n$-matrix $tI_n+A$ over the polynomial ring $\mathbb{F}\left[t\right]$. Its determinant $\det\left(tI_n+A\right)$ is a polynomial in $\mathbb{F}\left[t\right]$. Then,
\begin{align}
& \left(\text{the sum of all principal }r\times r\text{-minors of }A\right)\nonumber\\
& =\left(\text{the coefficient of }t^{n-r}\text{ in the polynomial }\det\left(tI_n+A\right)\right).
\end{align}




Proof of Corollary 2. We have $r\in\left\{0,1,\ldots,n\right\}$, thus $n-r\in\left\{0,1,\ldots,n\right\}$. Also, from $tI_n+A=A+tI_n$, we obtain
\begin{equation}
\det\left(tI_n+A\right)=\det\left(A+tI_n\right)=\sum_{k=0}^{n}\left(\sum_{\substack{P\subseteq\left\{1,2,\ldots,n\right\};\\\left\vert P\right\vert =n-k}}\det\left(A_{P,P}\right)\right)t^{k}
\end{equation}
(by \eqref{darij.eq.t1.2}, applied to $\mathbb{K}=\mathbb{F}\left[t\right]$ and $x=t$). Hence, for each $k\in\left\{0,1,\ldots,n\right\}$, we have
\begin{align*}
& \left(\text{the coefficient of }t^{k}\text{ in the polynomial }\det\left(tI_n+A\right)\right)\\
& =\sum_{\substack{P\subseteq\left\{1,2,\ldots,n\right\};\\\left\vert P\right\vert =n-k}}\det\left(A_{P,P}\right).
\end{align*}
We can apply this to $k=n-r$ (since $n-r\in\left\{0,1,\ldots,n\right\}$) and thus obtain
\begin{align*}
& \left(\text{the coefficient of }t^{n-r}\text{ in the polynomial }\det\left(tI_n+A\right)\right)\\
& =\sum_{\substack{P\subseteq\left\{1,2,\ldots,n\right\};\\\left\vert P\right\vert =n-\left(n-r\right)}}\det\left(A_{P,P}\right)=\sum_{\substack{P\subseteq\left\{1,2,\ldots,n\right\};\\\left\vert P\right\vert =r}}\det\left(A_{P,P}\right)\qquad\left(\text{since }n-\left(n-r\right)=r\right)\\
& =\left(\text{the sum of all principal }r\times r\text{-minors of }A\right)
\end{align*}
(by the definition of principal minors). This proves Corollary 2. $\blacksquare$
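Corollary 2 can be checked the same way; the sketch below uses the fact that `np.poly(-A)` returns the coefficients of $\det\left(tI_n+A\right)$, ordered from $t^n$ down to $t^0$:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    n, r = 5, 2
    A = rng.standard_normal((n, n))

    # np.poly(-A) gives the coefficients of det(tI - (-A)) = det(tI + A),
    # ordered t^n, t^(n-1), ..., t^0; the coefficient of t^(n-r) is entry r.
    coeff = np.poly(-A)[r]
    minor_sum = sum(np.linalg.det(A[np.ix_(P, P)])
                    for P in combinations(range(n), r))
    print(np.isclose(coeff, minor_sum))  # True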




Lemma 3. Let $A$ be an $n\times n$-matrix over a field $\mathbb{F}$. Let $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ be the eigenvalues of $A$. We assume that all $n$ of them lie in $\mathbb{F}$. Then, in the polynomial ring $\mathbb{F}\left[t\right]$, we have
\begin{equation}
\det\left(tI_n+A\right)=\left(t+\lambda_{1}\right)\left(t+\lambda_{2}\right)\cdots\left(t+\lambda_{n}\right).
\end{equation}




Proof of Lemma 3. The eigenvalues of $A$ are defined as the roots of the characteristic polynomial $\det\left(tI_n-A\right)$ of $A$. (You may be used to defining the characteristic polynomial of $A$ as $\det\left(A-tI_n\right)$ instead, but this makes no difference: The polynomials $\det\left(tI_n-A\right)$ and $\det\left(A-tI_n\right)$ differ only by a factor of $\left(-1\right)^{n}$ (in fact, we have $\det\left(A-tI_n\right)=\left(-1\right)^{n}\det\left(tI_n-A\right)$), and thus have the same roots.)

Also, the characteristic polynomial $\det\left(tI_n-A\right)$ of $A$ is a monic polynomial of degree $n$. And we know that its roots are the eigenvalues of $A$, which are exactly $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ (with multiplicities). Thus, $\det\left(tI_n-A\right)$ is a monic polynomial of degree $n$ and has roots $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$. Thus,
\begin{equation}
\det\left(tI_n-A\right)=\left(t-\lambda_{1}\right)\left(t-\lambda_{2}\right)\cdots\left(t-\lambda_{n}\right)
\end{equation}
(because the only monic polynomial of degree $n$ that has roots $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ is $\left(t-\lambda_{1}\right)\left(t-\lambda_{2}\right)\cdots\left(t-\lambda_{n}\right)$). Substituting $-t$ for $t$ in this equality, we obtain
\begin{align*}
\det\left(\left(-t\right)I_n-A\right) & =\left(-t-\lambda_{1}\right)\left(-t-\lambda_{2}\right)\cdots\left(-t-\lambda_{n}\right)\\
& =\prod_{i=1}^{n}\underbrace{\left(-t-\lambda_{i}\right)}_{=-\left(t+\lambda_{i}\right)}=\prod_{i=1}^{n}\left(-\left(t+\lambda_{i}\right)\right)\\
& =\left(-1\right)^{n}\underbrace{\prod_{i=1}^{n}\left(t+\lambda_{i}\right)}_{=\left(t+\lambda_{1}\right)\left(t+\lambda_{2}\right)\cdots\left(t+\lambda_{n}\right)}\\
& =\left(-1\right)^{n}\left(t+\lambda_{1}\right)\left(t+\lambda_{2}\right)\cdots\left(t+\lambda_{n}\right).
\end{align*}
Comparing this with
\begin{equation}
\det\left(\underbrace{\left(-t\right)I_n-A}_{=-\left(tI_n+A\right)}\right)=\det\left(-\left(tI_n+A\right)\right)=\left(-1\right)^{n}\det\left(tI_n+A\right),
\end{equation}
we obtain
\begin{equation}
\left(-1\right)^{n}\det\left(tI_n+A\right)=\left(-1\right)^{n}\left(t+\lambda_{1}\right)\left(t+\lambda_{2}\right)\cdots\left(t+\lambda_{n}\right).
\end{equation}
We can divide both sides of this equality by $\left(-1\right)^{n}$, and thus obtain $\det\left(tI_n+A\right)=\left(t+\lambda_{1}\right)\left(t+\lambda_{2}\right)\cdots\left(t+\lambda_{n}\right)$. This proves Lemma 3. $\blacksquare$
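Again, a quick numerical confirmation of Lemma 3 (a sketch over $\mathbb{R}$, with a symmetric matrix so that all eigenvalues lie in the base field):

    import numpy as np

    rng = np.random.default_rng(3)
    B = rng.standard_normal((4, 4))
    A = B + B.T                   # symmetric, so all eigenvalues are real
    lam = np.linalg.eigvalsh(A)

    lhs = np.poly(-A)             # coefficients of det(tI + A)
    rhs = np.poly(-lam)           # monic polynomial with roots -lambda_i,
                                  # i.e. (t + lambda_1) ... (t + lambda_n)
    print(np.allclose(lhs, rhs))  # True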



Let us also notice a completely trivial fact:




Lemma 4. Let $\mathbb{F}$ be a field. Let $m$ and $k$ be nonnegative integers. Let $p\in\mathbb{F}\left[t\right]$ be a polynomial. Then,
\begin{align*}
& \left(\text{the coefficient of }t^{m+k}\text{ in the polynomial }p\cdot t^{k}\right)\\
& =\left(\text{the coefficient of }t^{m}\text{ in the polynomial }p\right).
\end{align*}

Proof of Lemma 4. The coefficients of the polynomial $p\cdot t^{k}$ are precisely the coefficients of $p$, shifted to the right by $k$ slots. This yields Lemma 4. $\blacksquare$



Now we can prove your claim:




Theorem 5. Let $A$ be a diagonalizable $n\times n$-matrix over a field $\mathbb{F}$. Let $r=\operatorname*{rank}A$. Then,
\begin{align*}
& \left(\text{the product of all nonzero eigenvalues of }A\right)\\
& =\left(\text{the sum of all principal }r\times r\text{-minors of }A\right).
\end{align*}
(Here, the product of all nonzero eigenvalues takes the multiplicities of the eigenvalues into account.)




Proof of Theorem 5. First of all, all $n$ eigenvalues of $A$ belong to $\mathbb{F}$ (since $A$ is diagonalizable). Moreover, $r=\operatorname*{rank}A\in\left\{0,1,\ldots,n\right\}$ (since $A$ is an $n\times n$-matrix).

The matrix $A$ is diagonalizable; in other words, it is similar to a diagonal matrix $D\in\mathbb{F}^{n\times n}$. Consider this $D$. Of course, the diagonal entries of $D$ are the eigenvalues of $A$ (with multiplicities).

Since $A$ is similar to $D$, we have $\operatorname*{rank}A=\operatorname*{rank}D$. But $D$ is diagonal; thus, its rank $\operatorname*{rank}D$ equals the number of nonzero diagonal entries of $D$. In other words, $\operatorname*{rank}D$ equals the number of nonzero eigenvalues of $A$ (since the diagonal entries of $D$ are the eigenvalues of $A$). In other words, $r$ equals the number of nonzero eigenvalues of $A$ (since $r=\operatorname*{rank}A=\operatorname*{rank}D$). In other words, the matrix $A$ has exactly $r$ nonzero eigenvalues.

Label the eigenvalues of $A$ as $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ (with multiplicities) in such a way that the first $r$ eigenvalues $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are nonzero, while the remaining $n-r$ eigenvalues $\lambda_{r+1},\lambda_{r+2},\ldots,\lambda_{n}$ are zero. (This is clearly possible, since $A$ has exactly $r$ nonzero eigenvalues.) Thus, $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are exactly the nonzero eigenvalues of $A$.

Lemma 3 yields
\begin{align*}
\det\left(tI_n+A\right) & =\left(t+\lambda_{1}\right)\left(t+\lambda_{2}\right)\cdots\left(t+\lambda_{n}\right)=\prod_{i=1}^{n}\left(t+\lambda_{i}\right)\\
& =\left(\prod_{i=1}^{r}\left(t+\lambda_{i}\right)\right)\cdot\left(\prod_{i=r+1}^{n}\left(t+\underbrace{\lambda_{i}}_{\substack{=0\\\text{(since }\lambda_{r+1},\lambda_{r+2},\ldots,\lambda_{n}\text{ are zero)}}}\right)\right)\\
& =\left(\prod_{i=1}^{r}\left(t+\lambda_{i}\right)\right)\cdot\underbrace{\left(\prod_{i=r+1}^{n}t\right)}_{=t^{n-r}}=\left(\prod_{i=1}^{r}\left(t+\lambda_{i}\right)\right)\cdot t^{n-r}.
\end{align*}
Now, Corollary 2 yields
\begin{align*}
& \left(\text{the sum of all principal }r\times r\text{-minors of }A\right)\\
& =\left(\text{the coefficient of }t^{n-r}\text{ in the polynomial }\underbrace{\det\left(tI_n+A\right)}_{=\left(\prod_{i=1}^{r}\left(t+\lambda_{i}\right)\right)\cdot t^{n-r}}\right)\\
& =\left(\text{the coefficient of }t^{n-r}\text{ in the polynomial }\left(\prod_{i=1}^{r}\left(t+\lambda_{i}\right)\right)\cdot t^{n-r}\right)\\
& =\left(\text{the coefficient of }t^{0}\text{ in the polynomial }\prod_{i=1}^{r}\left(t+\lambda_{i}\right)\right)\\
& \qquad\left(\text{by Lemma 4, applied to }m=0\text{ and }k=n-r\text{ and }p=\prod_{i=1}^{r}\left(t+\lambda_{i}\right)\right)\\
& =\left(\text{the constant term of the polynomial }\prod_{i=1}^{r}\left(t+\lambda_{i}\right)\right)\\
& =\prod_{i=1}^{r}\lambda_{i}=\lambda_{1}\lambda_{2}\cdots\lambda_{r}\\
& =\left(\text{the product of all nonzero eigenvalues of }A\right)
\end{align*}
(since $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are exactly the nonzero eigenvalues of $A$). This proves Theorem 5. $\blacksquare$

Note that in the above proof of Theorem 5, the diagonalizability of $A$ was used only to guarantee that $A$ has exactly $r$ nonzero eigenvalues and that all $n$ eigenvalues of $A$ belong to $\mathbb{F}$.
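To see why this guarantee matters, here is a small illustration of my own (using the convention that an empty product is $1$): a nonzero nilpotent matrix has positive rank but no nonzero eigenvalues, and the claimed identity fails for it.

    import numpy as np

    # This nilpotent matrix has rank 1 but no nonzero eigenvalues, so the
    # sum of its principal 1x1 minors (its trace) is 0, while the empty
    # product of nonzero eigenvalues would conventionally be 1.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    print(np.linalg.matrix_rank(A))  # 1
    print(np.trace(A))               # 0.0
    print(np.linalg.eigvals(A))      # [0. 0.]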































        t^{k}right) \
        & =left( text{the coefficient of }t^{m}text{ in the polynomial }pright)
        .
        end{align*}




        Proof of Lemma 4. The coefficients of the polynomial $pcdot t^{k}$ are
        precisely the coefficients of $p$, shifted to the right by $k$ slots. This
        yields Lemma 4. $blacksquare$



        Now we can prove your claim:




        Theorem 5. Let $A$ be a diagonalizable $ntimes n$-matrix over a field
        $mathbb{F}$. Let $r=operatorname*{rank}A$. Then,
        begin{align*}
        & left( text{the product of all nonzero eigenvalues of }Aright) \
        & =left( text{the sum of all principal }rtimes rtext{-minors of
        }Aright) .
        end{align*}

        (Here, the product of all nonzero eigenvalues takes the multiplicities of the
        eigenvalues into account.)




        Proof of Theorem 5. First of all, all $n$ eigenvalues of $A$ belong to
        $mathbb{F}$ (since $A$ is diagonalizable). Moreover, $r=operatorname*{rank}
        Ainleft{ 0,1,ldots,nright} $
        (since $A$ is an $ntimes n$-matrix).



        The matrix $A$ is diagonalizable; in other words, it is similar to a diagonal
        matrix $Dinmathbb{F}^{ntimes n}$. Consider this $D$. Of course, the
        diagonal entries of $D$ are the eigenvalues of $A$ (with multiplicities).



        Since $A$ is similar to $D$, we have $operatorname*{rank}
        A=operatorname*{rank}D$
        . But $D$ is diagonal; thus, its rank
        $operatorname*{rank}D$ equals the number of nonzero diagonal entries of $D$.
        In other words, $operatorname*{rank}D$ equals the number of nonzero
        eigenvalues of $A$ (since the diagonal entries of $D$ are the eigenvalues of
        $A$). In other words, $r$ equals the number of nonzero eigenvalues of $A$
        (since $r=operatorname*{rank}A=operatorname*{rank}D$). In other words, the
        matrix $A$ has exactly $r$ nonzero eigenvalues.



        Label the eigenvalues of $A$ as $lambda_{1},lambda_{2},ldots,lambda_{n}$
        (with multiplicities) in such a way that the first $r$ eigenvalues
        $lambda_{1},lambda_{2},ldots,lambda_{r}$ are nonzero, while the remaining
        $n-r$ eigenvalues $lambda_{r+1},lambda_{r+2},ldots,lambda_{n}$ are zero.
        (This is clearly possible, since $A$ has exactly $r$ nonzero eigenvalues.)
        Thus, $lambda_{1},lambda_{2},ldots,lambda_{r}$ are exactly the nonzero
        eigenvalues of $A$.



        Lemma 3 yields
        begin{align*}
        detleft( tI_n +Aright) & =left( t+lambda_{1}right) left(
        t+lambda_{2}right) cdotsleft( t+lambda_{n}right) =prod_{i=1}
        ^{n}left( t+lambda_{i}right) \
        & =left( prod_{i=1}^{r}left( t+lambda_{i}right) right) cdotleft(
        prod_{i=r+1}^{n}left( t+underbrace{lambda_{i}}
        _{substack{=0\text{(since }lambda_{r+1},lambda_{r+2},ldots,lambda
        _{n}text{ are zero)}}}right) right) \
        & =left( prod_{i=1}^{r}left( t+lambda_{i}right) right)
        cdotunderbrace{left( prod_{i=r+1}^{n}tright) }_{=t^{n-r}}=left(
        prod_{i=1}^{r}left( t+lambda_{i}right) right) cdot t^{n-r}.
        end{align*}

        Now, Corollary 2 yields
        begin{align*}
        & left( text{the sum of all principal }rtimes rtext{-minors of
        }Aright) \
        & =left( text{the coefficient of }t^{n-r}text{ in the polynomial
        }underbrace{detleft( tI_n +Aright) }_{=left( prod_{i=1}^{r}left(
        t+lambda_{i}right) right) cdot t^{n-r}}right) \
        & =left( text{the coefficient of }t^{n-r}text{ in the polynomial }left(
        prod_{i=1}^{r}left( t+lambda_{i}right) right) cdot t^{n-r}right) \
        & =left( text{the coefficient of }t^{0}text{ in the polynomial }
        prod_{i=1}^{r}left( t+lambda_{i}right) right) \
        & qquadleft( text{by Lemma 4, applied to }m=0text{ and }k=n-rtext{ and
        }p=prod_{i=1}^{r}left( t+lambda_{i}right) right) \
        & =left( text{the constant term of the polynomial }prod_{i=1}^{r}left(
        t+lambda_{i}right) right) \
        & =prod_{i=1}^{r}lambda_{i}=lambda_{1}lambda_{2}cdotslambda_{r}\
        & =left( text{the product of all nonzero eigenvalues of }Aright)
        end{align*}

        (since $lambda_{1},lambda_{2},ldots,lambda_{r}$ are exactly the nonzero
        eigenvalues of $A$). This proves Theorem 5. $blacksquare$



        Note that in the above proof of Theorem 5,
        the diagonalizability of $A$ was used only to guarantee that $A$
        has exactly $r$ nonzero eigenvalues and that all $n$ eigenvalues of $A$
        belong to $mathbb{F}$.






        share|cite|improve this answer









        $endgroup$



In order not to leave this question unanswered, let me prove the claim along
the lines I've suggested in the comments.

Let us agree on a few notations:

• Let $n$ and $m$ be two nonnegative integers. Let $A=\left( a_{i,j}\right)_{1\leq i\leq n,\ 1\leq j\leq m}$ be an $n\times m$-matrix (over some ring). Let $U=\left\{ u_{1}<u_{2}<\cdots<u_{p}\right\}$ be a subset of $\left\{ 1,2,\ldots,n\right\}$, and let $V=\left\{ v_{1}<v_{2}<\cdots<v_{q}\right\}$ be a subset of $\left\{ 1,2,\ldots,m\right\}$. Then, $A_{U,V}$ shall denote the submatrix $\left( a_{u_{i},v_{j}}\right)_{1\leq i\leq p,\ 1\leq j\leq q}$ of $A$. (This is the matrix obtained from $A$ by crossing out all rows except for the rows numbered $u_{1},u_{2},\ldots,u_{p}$ and crossing out all columns except for the columns numbered $v_{1},v_{2},\ldots,v_{q}$.) For example,
\begin{equation}
\begin{pmatrix}
a_{1} & a_{2} & a_{3} & a_{4}\\
b_{1} & b_{2} & b_{3} & b_{4}\\
c_{1} & c_{2} & c_{3} & c_{4}\\
d_{1} & d_{2} & d_{3} & d_{4}
\end{pmatrix}_{\left\{ 1,3,4\right\} ,\left\{ 2,4\right\} }
=
\begin{pmatrix}
a_{2} & a_{4}\\
c_{2} & c_{4}\\
d_{2} & d_{4}
\end{pmatrix} .
\end{equation}
(A small code sketch of this submatrix extraction follows the list below.)

• If $n$ is a nonnegative integer, then $I_n$ will denote the $n\times n$ identity matrix (over whatever ring we are working in).
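To make the $A_{U,V}$ notation concrete, here is a minimal Python sketch (using NumPy, with 1-based index sets as in the text above); the helper name `submatrix` is mine, not standard:

    import numpy as np

    def submatrix(A, U, V):
        """Return A_{U,V}: keep only the rows indexed by U and the columns indexed by V.
        U and V are 1-based index sets, as in the notation above."""
        rows = [u - 1 for u in sorted(U)]  # convert to 0-based indices
        cols = [v - 1 for v in sorted(V)]
        return A[np.ix_(rows, cols)]

    A = np.arange(1, 17).reshape(4, 4)      # a sample 4x4 matrix
    print(submatrix(A, {1, 3, 4}, {2, 4}))  # the 3x2 submatrix A_{{1,3,4},{2,4}}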



Fix a nonnegative integer $n$ and a field $\mathbb{F}$.

We shall use the following known fact:

Theorem 1. Let $\mathbb{K}$ be a commutative ring. Let $A$ be an $n\times n$-matrix over $\mathbb{K}$. Let $x\in\mathbb{K}$. Then,
\begin{align}
\det\left( A+xI_n \right) & =\sum_{P\subseteq\left\{ 1,2,\ldots,n\right\} }\det\left( A_{P,P}\right) x^{n-\left\vert P\right\vert }
\label{darij.eq.t1.1}
\tag{1}
\\
& =\sum_{k=0}^{n}\left( \sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =n-k}}\det\left( A_{P,P}\right) \right) x^{k}.
\label{darij.eq.t1.2}
\tag{2}
\end{align}

Theorem 1 appears, e.g., as Corollary 6.164 in my Notes on the combinatorial fundamentals of algebra, in the version of 10th January 2019 (where I use the more cumbersome notation $\operatorname*{sub}\nolimits_{w\left( P\right) }^{w\left( P\right) }A$ instead of $A_{P,P}$). $\blacksquare$
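Theorem 1 can be checked numerically. The following Python sketch (my own, under the assumption of a random real matrix) computes the right-hand side of (1) by brute force over all subsets $P$ and compares it with $\det(A+xI_n)$:

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n, x = 4, 2.0
    A = rng.integers(-3, 4, size=(n, n)).astype(float)

    # Right-hand side of (1): sum over all subsets P of {1,...,n}
    # of det(A_{P,P}) * x^{n-|P|}  (the determinant of the empty matrix is 1).
    rhs = 0.0
    for size in range(n + 1):
        for P in itertools.combinations(range(n), size):
            minor = np.linalg.det(A[np.ix_(P, P)]) if size > 0 else 1.0
            rhs += minor * x ** (n - size)

    lhs = np.linalg.det(A + x * np.eye(n))
    print(np.isclose(lhs, rhs))  # True (up to floating-point error)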




Corollary 2. Let $A$ be an $n\times n$-matrix over a field $\mathbb{F}$. Let $r\in\left\{ 0,1,\ldots,n\right\} $. Consider the $n\times n$-matrix $tI_n +A$ over the polynomial ring $\mathbb{F}\left[ t\right] $. Its determinant $\det\left( tI_n +A\right) $ is a polynomial in $\mathbb{F}\left[ t\right] $. Then,
\begin{align}
& \left( \text{the sum of all principal }r\times r\text{-minors of }A\right) \nonumber\\
& =\left( \text{the coefficient of }t^{n-r}\text{ in the polynomial }\det\left( tI_n +A\right) \right) .
\end{align}

Proof of Corollary 2. We have $r\in\left\{ 0,1,\ldots,n\right\} $, thus $n-r\in\left\{ 0,1,\ldots,n\right\} $. Also, from $tI_n +A=A+tI_n $, we obtain
\begin{equation}
\det\left( tI_n +A\right) =\det\left( A+tI_n \right) =\sum_{k=0}^{n}\left( \sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =n-k}}\det\left( A_{P,P}\right) \right) t^{k}
\end{equation}
(by \eqref{darij.eq.t1.2}, applied to $\mathbb{K}=\mathbb{F}\left[ t\right] $ and $x=t$). Hence, for each $k\in\left\{ 0,1,\ldots,n\right\} $, we have
\begin{align*}
& \left( \text{the coefficient of }t^{k}\text{ in the polynomial }\det\left( tI_n +A\right) \right) \\
& =\sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =n-k}}\det\left( A_{P,P}\right) .
\end{align*}
We can apply this to $k=n-r$ (since $n-r\in\left\{ 0,1,\ldots,n\right\} $) and thus obtain
\begin{align*}
& \left( \text{the coefficient of }t^{n-r}\text{ in the polynomial }\det\left( tI_n +A\right) \right) \\
& =\sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =n-\left( n-r\right) }}\det\left( A_{P,P}\right) =\sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =r}}\det\left( A_{P,P}\right) \qquad\left( \text{since }n-\left( n-r\right) =r\right) \\
& =\left( \text{the sum of all principal }r\times r\text{-minors of }A\right)
\end{align*}
(by the definition of principal minors). This proves Corollary 2. $\blacksquare$
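Corollary 2 also admits a quick numerical sanity check (a sketch under the same assumptions as above). NumPy's `numpy.poly(M)` returns the coefficients of $\det(tI_n - M)$ in descending powers of $t$, so applying it to $-A$ yields the coefficients of $\det(tI_n + A)$:

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 4, 2
    A = rng.integers(-3, 4, size=(n, n)).astype(float)

    # Coefficients of det(t I_n + A) = det(t I_n - (-A)), descending in t:
    coeffs = np.poly(-A)
    coeff_t_nr = coeffs[r]  # coefficient of t^{n-r} is r slots below the top

    minor_sum = sum(np.linalg.det(A[np.ix_(P, P)])
                    for P in itertools.combinations(range(n), r))
    print(np.isclose(coeff_t_nr, minor_sum))  # True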




Lemma 3. Let $A$ be an $n\times n$-matrix over a field $\mathbb{F}$. Let $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ be the eigenvalues of $A$. We assume that all $n$ of them lie in $\mathbb{F}$. Then, in the polynomial ring $\mathbb{F}\left[ t\right] $, we have
\begin{equation}
\det\left( tI_n +A\right) =\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) .
\end{equation}

Proof of Lemma 3. The eigenvalues of $A$ are defined as the roots of the characteristic polynomial $\det\left( tI_n -A\right) $ of $A$. (You may be used to defining the characteristic polynomial of $A$ as $\det\left( A-tI_n \right) $ instead, but this makes no difference: The polynomials $\det\left( tI_n -A\right) $ and $\det\left( A-tI_n \right) $ differ only by a factor of $\left( -1\right) ^{n}$ (in fact, we have $\det\left( A-tI_n \right) =\left( -1\right) ^{n}\det\left( tI_n -A\right) $), and thus have the same roots.)

Also, the characteristic polynomial $\det\left( tI_n -A\right) $ of $A$ is a monic polynomial of degree $n$. And we know that its roots are the eigenvalues of $A$, which are exactly $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ (with multiplicities). Thus, $\det\left( tI_n -A\right) $ is a monic polynomial of degree $n$ and has roots $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$. Thus,
\begin{equation}
\det\left( tI_n -A\right) =\left( t-\lambda_{1}\right) \left( t-\lambda_{2}\right) \cdots\left( t-\lambda_{n}\right)
\end{equation}
(because the only monic polynomial of degree $n$ that has roots $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ is $\left( t-\lambda_{1}\right) \left( t-\lambda_{2}\right) \cdots\left( t-\lambda_{n}\right) $). Substituting $-t$ for $t$ in this equality, we obtain
\begin{align*}
\det\left( \left( -t\right) I_n -A\right) & =\left( -t-\lambda_{1}\right) \left( -t-\lambda_{2}\right) \cdots\left( -t-\lambda_{n}\right) \\
& =\prod_{i=1}^{n}\underbrace{\left( -t-\lambda_{i}\right) }_{=-\left( t+\lambda_{i}\right) }=\prod_{i=1}^{n}\left( -\left( t+\lambda_{i}\right) \right) \\
& =\left( -1\right) ^{n}\underbrace{\prod_{i=1}^{n}\left( t+\lambda_{i}\right) }_{=\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) } \\
& =\left( -1\right) ^{n}\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) .
\end{align*}
Comparing this with
\begin{equation}
\det\left( \underbrace{\left( -t\right) I_n -A}_{=-\left( tI_n +A\right) }\right) =\det\left( -\left( tI_n +A\right) \right) =\left( -1\right) ^{n}\det\left( tI_n +A\right) ,
\end{equation}
we obtain
\begin{equation}
\left( -1\right) ^{n}\det\left( tI_n +A\right) =\left( -1\right) ^{n}\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) .
\end{equation}
We can divide both sides of this equality by $\left( -1\right) ^{n}$, and thus obtain $\det\left( tI_n +A\right) =\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) $. This proves Lemma 3. $\blacksquare$
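Numerically, Lemma 3 says that the coefficient vector of $\det(tI_n+A)$ matches that of the polynomial with roots $-\lambda_1,\ldots,-\lambda_n$. A quick sketch (same caveats as the earlier checks):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    A = rng.standard_normal((n, n))

    lhs = np.poly(-A)             # coefficients of det(t I_n + A), descending in t
    eigs = np.linalg.eigvals(A)
    rhs = np.poly(-eigs)          # coefficients of (t+l_1)(t+l_2)...(t+l_n)
    print(np.allclose(lhs, rhs))  # True (up to floating-point error)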



Let us also notice a completely trivial fact:

Lemma 4. Let $\mathbb{F}$ be a field. Let $m$ and $k$ be nonnegative integers. Let $p\in\mathbb{F}\left[ t\right] $ be a polynomial. Then,
\begin{align*}
& \left( \text{the coefficient of }t^{m+k}\text{ in the polynomial }p\cdot t^{k}\right) \\
& =\left( \text{the coefficient of }t^{m}\text{ in the polynomial }p\right) .
\end{align*}

Proof of Lemma 4. The coefficients of the polynomial $p\cdot t^{k}$ are precisely the coefficients of $p$, shifted to the right by $k$ slots. This yields Lemma 4. $\blacksquare$
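For instance (a worked instance of Lemma 4 with $m=1$ and $k=2$): the coefficient of $t^{3}$ in $\left( 2+5t\right) \cdot t^{2}=2t^{2}+5t^{3}$ is $5$, which is indeed the coefficient of $t^{1}$ in $2+5t$.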



Now we can prove your claim:

Theorem 5. Let $A$ be a diagonalizable $n\times n$-matrix over a field $\mathbb{F}$. Let $r=\operatorname*{rank}A$. Then,
\begin{align*}
& \left( \text{the product of all nonzero eigenvalues of }A\right) \\
& =\left( \text{the sum of all principal }r\times r\text{-minors of }A\right) .
\end{align*}
(Here, the product of all nonzero eigenvalues takes the multiplicities of the eigenvalues into account.)

Proof of Theorem 5. First of all, all $n$ eigenvalues of $A$ belong to $\mathbb{F}$ (since $A$ is diagonalizable). Moreover, $r=\operatorname*{rank}A\in\left\{ 0,1,\ldots,n\right\} $ (since $A$ is an $n\times n$-matrix).

The matrix $A$ is diagonalizable; in other words, it is similar to a diagonal matrix $D\in\mathbb{F}^{n\times n}$. Consider this $D$. Of course, the diagonal entries of $D$ are the eigenvalues of $A$ (with multiplicities).

Since $A$ is similar to $D$, we have $\operatorname*{rank}A=\operatorname*{rank}D$. But $D$ is diagonal; thus, its rank $\operatorname*{rank}D$ equals the number of nonzero diagonal entries of $D$. In other words, $\operatorname*{rank}D$ equals the number of nonzero eigenvalues of $A$ (since the diagonal entries of $D$ are the eigenvalues of $A$). In other words, $r$ equals the number of nonzero eigenvalues of $A$ (since $r=\operatorname*{rank}A=\operatorname*{rank}D$). In other words, the matrix $A$ has exactly $r$ nonzero eigenvalues.

Label the eigenvalues of $A$ as $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ (with multiplicities) in such a way that the first $r$ eigenvalues $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are nonzero, while the remaining $n-r$ eigenvalues $\lambda_{r+1},\lambda_{r+2},\ldots,\lambda_{n}$ are zero. (This is clearly possible, since $A$ has exactly $r$ nonzero eigenvalues.) Thus, $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are exactly the nonzero eigenvalues of $A$.

Lemma 3 yields
\begin{align*}
\det\left( tI_n +A\right) & =\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) =\prod_{i=1}^{n}\left( t+\lambda_{i}\right) \\
& =\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot\left( \prod_{i=r+1}^{n}\left( t+\underbrace{\lambda_{i}}_{\substack{=0\\\text{(since }\lambda_{r+1},\lambda_{r+2},\ldots,\lambda_{n}\text{ are zero)}}}\right) \right) \\
& =\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot\underbrace{\left( \prod_{i=r+1}^{n}t\right) }_{=t^{n-r}}=\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot t^{n-r}.
\end{align*}
Now, Corollary 2 yields
\begin{align*}
& \left( \text{the sum of all principal }r\times r\text{-minors of }A\right) \\
& =\left( \text{the coefficient of }t^{n-r}\text{ in the polynomial }\underbrace{\det\left( tI_n +A\right) }_{=\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot t^{n-r}}\right) \\
& =\left( \text{the coefficient of }t^{n-r}\text{ in the polynomial }\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot t^{n-r}\right) \\
& =\left( \text{the coefficient of }t^{0}\text{ in the polynomial }\prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \\
& \qquad\left( \text{by Lemma 4, applied to }m=0\text{ and }k=n-r\text{ and }p=\prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \\
& =\left( \text{the constant term of the polynomial }\prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \\
& =\prod_{i=1}^{r}\lambda_{i}=\lambda_{1}\lambda_{2}\cdots\lambda_{r}\\
& =\left( \text{the product of all nonzero eigenvalues of }A\right)
\end{align*}
(since $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are exactly the nonzero eigenvalues of $A$). This proves Theorem 5. $\blacksquare$



Note that in the above proof of Theorem 5, the diagonalizability of $A$ was used only to guarantee that $A$ has exactly $r$ nonzero eigenvalues and that all $n$ eigenvalues of $A$ belong to $\mathbb{F}$.
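Finally, here is an end-to-end numerical check of Theorem 5 (a sketch illustrating the statement, not part of the proof): build a diagonalizable rank-deficient matrix $A=SDS^{-1}$, compute its pseudo-determinant as the product of its nonzero eigenvalues, and compare with the sum of all principal $r\times r$-minors.

    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    n = 5
    D = np.diag([2.0, -3.0, 5.0, 0.0, 0.0])  # rank 3; eigenvalues 2, -3, 5, 0, 0
    S = rng.standard_normal((n, n))           # generically invertible
    A = S @ D @ np.linalg.inv(S)              # diagonalizable, similar to D

    r = np.linalg.matrix_rank(A)
    eigs = np.linalg.eigvals(A)
    nonzero = eigs[np.abs(eigs) > 1e-8]
    pdet = np.prod(nonzero).real              # pseudo-determinant

    minor_sum = sum(np.linalg.det(A[np.ix_(P, P)])
                    for P in itertools.combinations(range(n), r))
    print(r, pdet, minor_sum)                 # 3, -30.0, -30.0 (up to rounding)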







answered Jan 10 at 19:09 by darij grinberg