Linear independence of $1, e^{it}, e^{2it}, \ldots, e^{nit}$


























Definition: Let $C[a,b]$ be the set of continuous $\mathbb{C}$-valued functions on an interval $[a,b] \subseteq \mathbb{R}$ with $a < b$.




Claim: In $C[-\pi, \pi]$, the vectors $1, e^{it}, e^{2it}, \ldots, e^{nit}$ are linearly independent for each $n = 1, 2, \ldots$




I'm having trouble understanding why this claim is true. I get that $C[-\pi, \pi]$ is a vector space, so the $e^{nit}$'s are vectors. But I don't see how to show these functions are linearly independent.



One approach I was thinking about was letting $x = e^{it}$. Then the list of vectors looks more like a list of polynomials: $1, x, x^2, \ldots, x^n$. I know these are linearly independent, but I'm not confident this is the correct way to think about it.
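(For what it's worth, the substitution idea can be sanity-checked numerically. A purported dependence $\sum_k c_k e^{ikt} = 0$, evaluated at $n+1$ points $t_j$ with distinct values $x_j = e^{it_j}$, becomes a Vandermonde system $Vc = 0$; the Vandermonde matrix in distinct nodes is nonsingular, so $c = 0$. A sketch, with arbitrarily chosen sample points:)

```python
import cmath
import math

def vandermonde_det(xs):
    # Product formula: det = prod_{j < l} (x_l - x_j); nonzero iff nodes are distinct.
    det = 1 + 0j
    for j in range(len(xs)):
        for l in range(j + 1, len(xs)):
            det *= xs[l] - xs[j]
    return det

n = 5
# n + 1 sample points in (-pi, pi); any distinct choice works.
ts = [-math.pi + (j + 1) * 2 * math.pi / (n + 2) for j in range(n + 1)]
nodes = [cmath.exp(1j * t) for t in ts]  # distinct points on the unit circle

# Nonsingular Vandermonde matrix => the only solution of V c = 0 is c = 0.
assert abs(vandermonde_det(nodes)) > 1e-9
```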



Reference: Garcia & Horn, Linear Algebra, Example 1.6.8.



















asked Jan 19 at 18:50 by T. Fo








  • Polynomials are a fine way to think about it, I would say. If some linear combination is identically $0$, then you have a polynomial with infinitely many roots. – saulspatz, Jan 19 at 18:54






  • Also note that each $e^{int}$ is an eigenvector of the differentiation operator $\frac{d}{dt}$, corresponding to the distinct eigenvalue $in$. – Song, Jan 19 at 19:06




















4 Answers
























For the proof, we will employ Euler's formula:

$$ e^{i\theta} = \cos(\theta) + i\sin(\theta). $$

We proceed by induction on $n$.

Base case:

The base case $n = 1$ follows easily, for if

$$ c_0 + c_1 e^{it} = 0 $$

for all $t \in [-\pi, \pi]$, then taking $t = 0$ and $t = \pi$ (where Euler's formula gives $e^{i\pi} = -1$), we have the following two equations:

$$ c_0 + c_1 = 0, $$
$$ c_0 - c_1 = 0, $$

which together imply that

$$ c_0 = c_1 = 0. $$

Inductive case:

For the inductive case, suppose there are scalars $c_0, c_1, \dots, c_n$ such that

$$ c_0 + c_1 e^{it} + \cdots + c_n e^{nit} = 0 $$

for all $t \in [-\pi, \pi]$. Integrating both sides over $[-\pi, \pi]$ and using $\int_{-\pi}^{\pi} e^{ikt}\,dt = 0$ for every integer $k \ne 0$, we get $2\pi c_0 = 0$, so

$$ c_0 = 0. $$

Thus,

$$ c_1 e^{it} + c_2 e^{2it} + \cdots + c_n e^{nit} = 0, $$

so we can factor out $e^{it}$ to get

$$ e^{it}\left(c_1 + c_2 e^{it} + \cdots + c_n e^{(n-1)it}\right) = 0, $$

and since $e^{it} \ne 0$ for all $t$, this implies

$$ c_1 + c_2 e^{it} + \cdots + c_n e^{(n-1)it} = 0, $$

in which case we employ the inductive hypothesis to get

$$ c_1 = c_2 = \cdots = c_n = 0. $$

Since $c_0 = 0$ as well, this completes the proof.

answered Jan 19 at 20:10 by Metric
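(The integral fact used in the inductive step, $\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{ikt}\,dt = 1$ for $k = 0$ and $0$ otherwise, can be checked numerically. A sketch using the midpoint rule, which is exact up to rounding for these integrands:)

```python
import cmath
import math

def mean_exp(k, m=4096):
    # Midpoint-rule approximation of (1 / 2pi) * integral_{-pi}^{pi} e^{ikt} dt.
    h = 2 * math.pi / m
    total = sum(cmath.exp(1j * k * (-math.pi + (j + 0.5) * h)) for j in range(m))
    return total * h / (2 * math.pi)

assert abs(mean_exp(0) - 1) < 1e-9   # k = 0: the average is 1
for k in range(1, 8):
    assert abs(mean_exp(k)) < 1e-9   # k != 0: the average vanishes
```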





















Here's a technique using the definition of linear independence and some easy integration: Suppose $$\sum_{k = 0}^n a_k e^{kit} = 0$$ on $[-\pi, \pi]$ for some $a_0, \ldots, a_n$. Integrating against $e^{-jit}$ for $j \in \{0, \ldots, n\}$ gives
$$0 = \int_{-\pi}^{\pi} \left(\sum_{k = 0}^n a_k e^{kit}\right) e^{-jit} \, dt = \sum_{k = 0}^n a_k \int_{-\pi}^{\pi} e^{(k - j)it} \, dt = 2\pi a_j,$$ so each $a_j$ is zero.

answered Jan 19 at 20:30 by Travis
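(A numerical illustration of this computation, with arbitrarily chosen coefficients: integrating a trigonometric polynomial against $e^{-jit}$ recovers $2\pi a_j$, so a combination that is identically zero must have every coefficient zero. A sketch:)

```python
import cmath
import math

# a_0 .. a_3, chosen arbitrarily for the illustration.
coeffs = [2.0, -1.0, 0.5, 3.0]

def f(t):
    return sum(a * cmath.exp(1j * k * t) for k, a in enumerate(coeffs))

def integrate_against(j, m=4096):
    # Midpoint-rule approximation of integral_{-pi}^{pi} f(t) e^{-ijt} dt.
    h = 2 * math.pi / m
    total = 0j
    for i in range(m):
        t = -math.pi + (i + 0.5) * h
        total += f(t) * cmath.exp(-1j * j * t)
    return total * h

# Each coefficient is recovered as (1 / 2pi) * <f, e^{ijt}>.
for j, a in enumerate(coeffs):
    assert abs(integrate_against(j) - 2 * math.pi * a) < 1e-6
```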





















Suppose the contrary. Then there are $a_0, a_1, \ldots, a_n$, not all zero, such that
$$\sum_{k=0}^n a_k e^{ikt} = 0.$$ Consider the polynomial
$$f(z) = \sum_{k=0}^n a_k z^k.$$ This analytic function vanishes on the entire unit circle. A nonzero polynomial has only finitely many roots, so $f$ must be the zero polynomial, i.e., every $a_k = 0$. Contradiction.

answered Jan 19 at 20:11 by ncmathsadist





















Suppose the functions

$e^{ikt}, \; 0 \le k \le n, \tag 1$

were linearly dependent over $\Bbb C$; then we would have

$a_k \in \Bbb C, \; 0 \le k \le n, \tag 2$

not all $0$, with

$\displaystyle \sum_0^n a_k e^{ikt} = 0; \tag 3$

we note that

$a_k \ne 0 \tag 4$

for at least one $k \ge 1$; otherwise (3) reduces to

$a_0 \cdot 1 = 0, \; a_0 \ne 0 \Longrightarrow 1 = 0, \tag 5$

an absurdity; we may thus assume further that

$a_n \ne 0; \tag 6$

also, we may write (3) as

$\displaystyle \sum_0^n a_k (e^{it})^k = 0; \tag 7$

but (7) is a polynomial of degree $n$ in $e^{it}$; as such (by the fundamental theorem of algebra), it has at most $n$ distinct zeroes

$\mu_i \in \Bbb C, \; 1 \le i \le n; \tag 8$

this further implies that

$\forall t \in [-\pi, \pi], \; e^{it} \in \{\mu_1, \mu_2, \ldots, \mu_n\}, \tag 9$

that is, $e^{it}$ may only take values in the finite set of zeroes of (7); but this assertion is patently false, since $e^{it}$ passes through every unimodular complex number as $t$ runs from $-\pi$ to $\pi$, i.e., the range of $e^{it}$ is uncountable. This contradiction implies that (3) cannot hold, and hence that the $e^{ikt}$ are linearly independent over $\Bbb C$ on $[-\pi, \pi]$.

answered Jan 19 at 20:03 (edited Jan 19 at 20:22) by Robert Lewis













          4 Answers
          4






          active

          oldest

          votes








          4 Answers
          4






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          2












          $begingroup$

          For the proof, we will employ Euler's formula:




          $$ e^{itheta} = cos{(theta)} + isin{(theta)}$$




          We proceed by induction.



          Base case:



          The base case where $n = 1$ follows easily, for if




          $$c_0 + c_1e^{it} = 0$$
          for all $t in [-pi,pi]$




          then for $ t = 0 $ and $t = pi$, we have the following two equations:




          $$ c_0 + c_1 = 0$$
          $$ c_0 - c_1 = 0$$




          which implies that




          $$ c_0 = c_1 = 0 $$




          Inductive case:



          For the inductive case, suppose there are scalars $c_0, c_1, dots, c_n$ such that




          $$ c_0 + c_1e^{it} + cdots c_ne^{nit} = 0$$
          for all $t in [-pi,pi]$.




          Using Euler's formula and setting $t = 0$, we have




          $$c_0 + c_1sin{(0)} + cdots + c_nsin{(0)} = 0$$
          so $$c_0 = 0$$




          Thus,




          $$ c_1e^{it} + c_2e^{2it} + cdots + c_ne^{nit} = 0 $$




          so we can factor out $e^{it}$ to get




          $$ e^{it}(c_1 + c_2e^{it} + cdots + c_ne^{(n-1)it}) = 0 $$




          and since $e^{it} ne 0$ for all $t$, this implies




          $$c_1 + c_2e^{it} + cdots + c_ne^{(n-1)it} = 0$$




          in which case we employ the inductive hypothesis to get




          $$ c_1 = c_2 = cdots = c_n = 0 $$




          and since $c_0 = 0$ as well, this ends the proof.






          share|cite|improve this answer









          $endgroup$


















            2












            $begingroup$

            For the proof, we will employ Euler's formula:




            $$ e^{itheta} = cos{(theta)} + isin{(theta)}$$




            We proceed by induction.



            Base case:



            The base case where $n = 1$ follows easily, for if




            $$c_0 + c_1e^{it} = 0$$
            for all $t in [-pi,pi]$




            then for $ t = 0 $ and $t = pi$, we have the following two equations:




            $$ c_0 + c_1 = 0$$
            $$ c_0 - c_1 = 0$$




            which implies that




            $$ c_0 = c_1 = 0 $$




            Inductive case:



            For the inductive case, suppose there are scalars $c_0, c_1, dots, c_n$ such that




            $$ c_0 + c_1e^{it} + cdots c_ne^{nit} = 0$$
            for all $t in [-pi,pi]$.




            Using Euler's formula and setting $t = 0$, we have




            $$c_0 + c_1sin{(0)} + cdots + c_nsin{(0)} = 0$$
            so $$c_0 = 0$$




            Thus,




            $$ c_1e^{it} + c_2e^{2it} + cdots + c_ne^{nit} = 0 $$




            so we can factor out $e^{it}$ to get




            $$ e^{it}(c_1 + c_2e^{it} + cdots + c_ne^{(n-1)it}) = 0 $$




            and since $e^{it} ne 0$ for all $t$, this implies




            $$c_1 + c_2e^{it} + cdots + c_ne^{(n-1)it} = 0$$




            in which case we employ the inductive hypothesis to get




            $$ c_1 = c_2 = cdots = c_n = 0 $$




            and since $c_0 = 0$ as well, this ends the proof.






            share|cite|improve this answer









            $endgroup$
















              2












              2








              2





              $begingroup$

              For the proof, we will employ Euler's formula:




              $$ e^{itheta} = cos{(theta)} + isin{(theta)}$$




              We proceed by induction.



              Base case:



              The base case where $n = 1$ follows easily, for if




              $$c_0 + c_1e^{it} = 0$$
              for all $t in [-pi,pi]$




              then for $ t = 0 $ and $t = pi$, we have the following two equations:




              $$ c_0 + c_1 = 0$$
              $$ c_0 - c_1 = 0$$




              which implies that




              $$ c_0 = c_1 = 0 $$




              Inductive case:



              For the inductive case, suppose there are scalars $c_0, c_1, dots, c_n$ such that




              $$ c_0 + c_1e^{it} + cdots c_ne^{nit} = 0$$
              for all $t in [-pi,pi]$.




              Using Euler's formula and setting $t = 0$, we have




              $$c_0 + c_1sin{(0)} + cdots + c_nsin{(0)} = 0$$
              so $$c_0 = 0$$




              Thus,




              $$ c_1e^{it} + c_2e^{2it} + cdots + c_ne^{nit} = 0 $$




              so we can factor out $e^{it}$ to get




              $$ e^{it}(c_1 + c_2e^{it} + cdots + c_ne^{(n-1)it}) = 0 $$




              and since $e^{it} ne 0$ for all $t$, this implies




              $$c_1 + c_2e^{it} + cdots + c_ne^{(n-1)it} = 0$$




              in which case we employ the inductive hypothesis to get




              $$ c_1 = c_2 = cdots = c_n = 0 $$




              and since $c_0 = 0$ as well, this ends the proof.






              share|cite|improve this answer









              $endgroup$



              For the proof, we will employ Euler's formula:




              $$ e^{itheta} = cos{(theta)} + isin{(theta)}$$




              We proceed by induction.



              Base case:



              The base case where $n = 1$ follows easily, for if




              $$c_0 + c_1e^{it} = 0$$
              for all $t in [-pi,pi]$




              then for $ t = 0 $ and $t = pi$, we have the following two equations:




              $$ c_0 + c_1 = 0$$
              $$ c_0 - c_1 = 0$$




              which implies that




              $$ c_0 = c_1 = 0 $$




              Inductive case:



              For the inductive case, suppose there are scalars $c_0, c_1, dots, c_n$ such that




              $$ c_0 + c_1e^{it} + cdots c_ne^{nit} = 0$$
              for all $t in [-pi,pi]$.




              Using Euler's formula and setting $t = 0$, we have




              $$c_0 + c_1sin{(0)} + cdots + c_nsin{(0)} = 0$$
              so $$c_0 = 0$$




              Thus,




              $$ c_1e^{it} + c_2e^{2it} + cdots + c_ne^{nit} = 0 $$




              so we can factor out $e^{it}$ to get




              $$ e^{it}(c_1 + c_2e^{it} + cdots + c_ne^{(n-1)it}) = 0 $$




              and since $e^{it} ne 0$ for all $t$, this implies




              $$c_1 + c_2e^{it} + cdots + c_ne^{(n-1)it} = 0$$




              in which case we employ the inductive hypothesis to get




              $$ c_1 = c_2 = cdots = c_n = 0 $$




              and since $c_0 = 0$ as well, this ends the proof.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered Jan 19 at 20:10









              MetricMetric

              1,23649




              1,23649























                  1












                  $begingroup$

                  Here's a technique using the definition of linear independence and some easy integration: Suppose $$sum_{k = 0}^n a_k e^{k i t} = 0$$ for some $a_0, ldots, a_n$. Integrating against $e^{-j i t}$ for $j in {0, ldots, n}$ gives
                  $$0 = int_0^{2 pi} left(sum_{k = 0}^n a_k e^{k i t}right) e^{-j i t} dt = sum_{k = 0}^n a_k int_0^{2 pi} e^{(k - j) i t} dt = 2 pi a_j,$$ so each $a_j$ is zero.






                  share|cite|improve this answer









                  $endgroup$


















                    1












                    $begingroup$

                    Here's a technique using the definition of linear independence and some easy integration: Suppose $$sum_{k = 0}^n a_k e^{k i t} = 0$$ for some $a_0, ldots, a_n$. Integrating against $e^{-j i t}$ for $j in {0, ldots, n}$ gives
                    $$0 = int_0^{2 pi} left(sum_{k = 0}^n a_k e^{k i t}right) e^{-j i t} dt = sum_{k = 0}^n a_k int_0^{2 pi} e^{(k - j) i t} dt = 2 pi a_j,$$ so each $a_j$ is zero.






                    share|cite|improve this answer









                    $endgroup$
















                      1












                      1








                      1





                      $begingroup$

                      Here's a technique using the definition of linear independence and some easy integration: Suppose $$sum_{k = 0}^n a_k e^{k i t} = 0$$ for some $a_0, ldots, a_n$. Integrating against $e^{-j i t}$ for $j in {0, ldots, n}$ gives
                      $$0 = int_0^{2 pi} left(sum_{k = 0}^n a_k e^{k i t}right) e^{-j i t} dt = sum_{k = 0}^n a_k int_0^{2 pi} e^{(k - j) i t} dt = 2 pi a_j,$$ so each $a_j$ is zero.






                      share|cite|improve this answer









                      $endgroup$



                      Here's a technique using the definition of linear independence and some easy integration: Suppose $$sum_{k = 0}^n a_k e^{k i t} = 0$$ for some $a_0, ldots, a_n$. Integrating against $e^{-j i t}$ for $j in {0, ldots, n}$ gives
                      $$0 = int_0^{2 pi} left(sum_{k = 0}^n a_k e^{k i t}right) e^{-j i t} dt = sum_{k = 0}^n a_k int_0^{2 pi} e^{(k - j) i t} dt = 2 pi a_j,$$ so each $a_j$ is zero.







                      share|cite|improve this answer












                      share|cite|improve this answer



                      share|cite|improve this answer










                      answered Jan 19 at 20:30









                      TravisTravis

                      60.3k767147




                      60.3k767147























                          0












                          $begingroup$

                          Suppose the contrary. Then there are $a_0, a_1, cdots a_n$ that are nonzero and
                          so that
                          $$sum_{k=0}^n a_k e^{ikt} = 0.$$ Consider the polynomial
                          $$f(z) = sum_{k=0}^n a_k z^k = 0.$$ This analytic function is mapping the unit circle to zero. Therefore, it must be the zero function. Contradiction.






                          share|cite|improve this answer









                          $endgroup$


















                            0












                            $begingroup$

                            Suppose the contrary. Then there are $a_0, a_1, cdots a_n$ that are nonzero and
                            so that
                            $$sum_{k=0}^n a_k e^{ikt} = 0.$$ Consider the polynomial
                            $$f(z) = sum_{k=0}^n a_k z^k = 0.$$ This analytic function is mapping the unit circle to zero. Therefore, it must be the zero function. Contradiction.






                            share|cite|improve this answer









                            $endgroup$
















                              0












                              0








                              0





                              $begingroup$

                              Suppose the contrary. Then there are $a_0, a_1, cdots a_n$ that are nonzero and
                              so that
                              $$sum_{k=0}^n a_k e^{ikt} = 0.$$ Consider the polynomial
                              $$f(z) = sum_{k=0}^n a_k z^k = 0.$$ This analytic function is mapping the unit circle to zero. Therefore, it must be the zero function. Contradiction.






                              share|cite|improve this answer









                              $endgroup$



                              Suppose the contrary. Then there are $a_0, a_1, cdots a_n$ that are nonzero and
                              so that
                              $$sum_{k=0}^n a_k e^{ikt} = 0.$$ Consider the polynomial
                              $$f(z) = sum_{k=0}^n a_k z^k = 0.$$ This analytic function is mapping the unit circle to zero. Therefore, it must be the zero function. Contradiction.







                              share|cite|improve this answer












                              share|cite|improve this answer



                              share|cite|improve this answer










                              answered Jan 19 at 20:11









                              ncmathsadistncmathsadist

                              42.8k260103




                              42.8k260103























                                  0












                                  $begingroup$

                                  Suppose the functions



                                  $e^{ikt}, ; 0 le k le n, tag 1$



                                  were linearly dependent over $Bbb C$; then we would have



                                  $a_k in Bbb C, ; 0 le k le n, tag 2$



                                  not all $0$, with



                                  $displaystyle sum_0^n a_k e^{ikt} = 0; tag 3$



                                  we note that



                                  $a_k ne 0 tag 4$



                                  for at least one $k ge 1$; otherwise (3) reduces to



                                  $a_0 cdot 1 = 0, ; a_0 ne 0 Longrightarrow 1 = 0, tag 5$



                                  an absurdity; we may thus assume further that



                                  $a_n ne 0; tag 6$



                                  also, we may write (3) as



                                  $displaystyle sum_0^n a_k (e^{it})^k = 0; tag 7$



                                  but (7) is a polynomial of degree $n$ in the $e^{it}$; as such (by the fundamental theorem of algebra), it has at most $n$ distinct zeroes



                                  $mu_i in Bbb C, 1 le i le n; tag 8$



                                  this further implies that



                                  $forall t in [-pi, pi], ; e^{it} in {mu_1, mu_2, ldots, mu_n }, tag 9$



                                  that is, $e^{it}$ may only take values in the finite set of zeroes of (7); but this assertion is patently false, since $e^{it}$ passes through every unimodular complex number as $-pi to t to pi$, i.e., the range of $e^{it}$ is uncountable. This contradiction implies that (3) cannot bind, and hence that the $e^{ikt}$ are linearly independent over $Bbb C$ on $[-pi, pi]$.






                                  share|cite|improve this answer











                                  $endgroup$


















                                    0












                                    $begingroup$

                                    Suppose the functions



                                    $e^{ikt}, ; 0 le k le n, tag 1$



                                    were linearly dependent over $Bbb C$; then we would have



                                    $a_k in Bbb C, ; 0 le k le n, tag 2$



                                    not all $0$, with



                                    $displaystyle sum_0^n a_k e^{ikt} = 0; tag 3$



                                    we note that



                                    $a_k ne 0 tag 4$



                                    for at least one $k ge 1$; otherwise (3) reduces to



                                    $a_0 cdot 1 = 0, ; a_0 ne 0 Longrightarrow 1 = 0, tag 5$



                                    an absurdity; we may thus assume further that



                                    $a_n ne 0; tag 6$



                                    also, we may write (3) as



                                    $displaystyle sum_0^n a_k (e^{it})^k = 0; tag 7$



                                    but (7) is a polynomial of degree $n$ in the $e^{it}$; as such (by the fundamental theorem of algebra), it has at most $n$ distinct zeroes



                                    $mu_i in Bbb C, 1 le i le n; tag 8$



                                    this further implies that



                                    $forall t in [-pi, pi], ; e^{it} in {mu_1, mu_2, ldots, mu_n }, tag 9$



                                    that is, $e^{it}$ may only take values in the finite set of zeroes of (7); but this assertion is patently false, since $e^{it}$ passes through every unimodular complex number as $-pi to t to pi$, i.e., the range of $e^{it}$ is uncountable. This contradiction implies that (3) cannot bind, and hence that the $e^{ikt}$ are linearly independent over $Bbb C$ on $[-pi, pi]$.






                                    share|cite|improve this answer











                                    $endgroup$
















                                      0












                                      0








                                      0





                                      $begingroup$

                                      Suppose the functions



                                      $e^{ikt}, ; 0 le k le n, tag 1$



                                      were linearly dependent over $Bbb C$; then we would have



                                      $a_k in Bbb C, ; 0 le k le n, tag 2$



                                      not all $0$, with



                                      $displaystyle sum_0^n a_k e^{ikt} = 0; tag 3$



                                      we note that



                                      $a_k ne 0 tag 4$



                                      for at least one $k ge 1$; otherwise (3) reduces to



                                      $a_0 cdot 1 = 0, ; a_0 ne 0 Longrightarrow 1 = 0, tag 5$



                                      an absurdity; we may thus assume further that



                                      $a_n ne 0; tag 6$



                                      also, we may write (3) as



                                      $displaystyle sum_0^n a_k (e^{it})^k = 0; tag 7$



                                      but (7) is a polynomial of degree $n$ in the $e^{it}$; as such (by the fundamental theorem of algebra), it has at most $n$ distinct zeroes



                                      $mu_i in Bbb C, 1 le i le n; tag 8$



                                      this further implies that



                                      $forall t in [-pi, pi], ; e^{it} in {mu_1, mu_2, ldots, mu_n }, tag 9$



                                      that is, $e^{it}$ may only take values in the finite set of zeroes of (7); but this assertion is patently false, since $e^{it}$ passes through every unimodular complex number as $-pi to t to pi$, i.e., the range of $e^{it}$ is uncountable. This contradiction implies that (3) cannot bind, and hence that the $e^{ikt}$ are linearly independent over $Bbb C$ on $[-pi, pi]$.






                                      share|cite|improve this answer











                                      $endgroup$



                                      Suppose the functions



                                      $e^{ikt}, ; 0 le k le n, tag 1$



                                      were linearly dependent over $Bbb C$; then we would have



                                      $a_k in Bbb C, ; 0 le k le n, tag 2$



                                      not all $0$, with



                                      $displaystyle sum_0^n a_k e^{ikt} = 0; tag 3$



                                      we note that



$a_k \ne 0 \tag 4$



                                      for at least one $k ge 1$; otherwise (3) reduces to



$a_0 \cdot 1 = 0, \; a_0 \ne 0 \Longrightarrow 1 = 0, \tag 5$



                                      an absurdity; we may thus assume further that



$a_n \ne 0; \tag 6$



                                      also, we may write (3) as



$\displaystyle \sum_0^n a_k (e^{it})^k = 0; \tag 7$



but the left-hand side of (7) is a polynomial of degree $n$ in $e^{it}$; as such, by the fundamental theorem of algebra, it has at most $n$ distinct zeroes



$\mu_i \in \Bbb C, \; 1 \le i \le n; \tag 8$



                                      this further implies that



$\forall t \in [-\pi, \pi], \; e^{it} \in \{\mu_1, \mu_2, \ldots, \mu_n\}, \tag 9$



that is, $e^{it}$ could only take values in the finite set of zeroes of (7); but this assertion is patently false, since $e^{it}$ passes through every unimodular complex number as $t$ runs from $-\pi$ to $\pi$, i.e., the range of $e^{it}$ is uncountable. This contradiction shows that (3) cannot hold, and hence that the $e^{ikt}$ are linearly independent over $\Bbb C$ on $[-\pi, \pi]$.
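As a quick numerical sanity check (not part of the proof, and assuming NumPy is available): sample the functions $e^{ikt}$, $k = 0, \ldots, n$, at $n+1$ distinct points $t_j \in [-\pi, \pi)$. The resulting matrix $M_{jk} = e^{ikt_j}$ is a Vandermonde matrix in the distinct nodes $e^{it_j}$, hence nonsingular, so any dependence relation forces all $a_k = 0$.

```python
import numpy as np

# Sample e^{ikt}, k = 0..n, at n+1 distinct points in [-pi, pi).
n = 5
t = np.linspace(-np.pi, np.pi, n + 1, endpoint=False)

# M[j, k] = e^{i k t_j}: a Vandermonde matrix in the nodes e^{i t_j},
# which are distinct points on the unit circle, so M has full rank.
M = np.exp(1j * np.outer(t, np.arange(n + 1)))

# Full rank means M a = 0 has only the trivial solution a = 0,
# ruling out any nontrivial dependence relation at these points.
print(np.linalg.matrix_rank(M))  # n + 1
```

Of course this only checks independence at finitely many sample points, but that already suffices: a relation valid on all of $[-\pi, \pi]$ would in particular hold at the $t_j$.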















                                      edited Jan 19 at 20:22

























                                      answered Jan 19 at 20:03









Robert Lewis
46.6k23067

















































































