How to prove this limit is $1/4$


























$$\lim_{n\to\infty}\int_{0}^{1}\int_{0}^{1}\cdots\int_{0}^{1}\left(\frac{x_1+x_2+\cdots+x_n}{n}\right)^2\,dx_1\,dx_2\cdots dx_n$$



My first thought was the Lebesgue monotone convergence theorem,
but that leads to a dead end.
Is there a shortcut for solving this problem?




































      Tags: real-analysis, integration, lebesgue-integral
















      asked Jan 13 at 19:51









      Ramez Hindi






















          3 Answers





































          The integral has an equivalent expression
          $$
          E\left[\left(\frac{1}{n}\sum_{i=1}^n U_i\right)^2\right]
          $$
          where $U_i$, $i\le n$, are independent random variables uniformly distributed on $[0,1]$. This gives
          $$\begin{eqnarray}
          E\left[\left(\frac{1}{n}\sum_{i=1}^n U_i\right)^2\right]&=&\frac{1}{n^2}\sum_{i,j=1}^n E\left[U_iU_j\right]\\
          &=&\frac{1}{n^2}\sum_{i=1}^n E\left[U_i^2\right]+\frac{1}{n^2}\sum_{i\ne j}E\left[U_iU_j\right]\\
          &=&\frac{n}{n^2}\cdot\frac{1}{3}+\frac{n(n-1)}{n^2}\cdot\frac{1}{4}\to\frac{1}{4}
          \end{eqnarray}$$

          since $E[U_i]=\int_0^1 x\,dx=\frac{1}{2}$, $E[U_i^2]=\int_0^1 x^2\,dx=\frac{1}{3}$, and $E[U_iU_j]=E[U_i]E[U_j]=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$ for all $i\ne j$.
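The closed form above, $\frac{1}{3n}+\frac{n-1}{4n}$, can be sanity-checked against a Monte Carlo estimate of $E\left[\left(\frac{1}{n}\sum_i U_i\right)^2\right]$. This is an editor's sketch, not part of the original answer; the function names and trial counts are arbitrary choices.

```python
import random

def exact_value(n):
    # Closed form from the answer above: n/(3n^2) + n(n-1)/(4n^2)
    return 1 / (3 * n) + (n - 1) / (4 * n)

def monte_carlo_estimate(n, trials=50_000, seed=0):
    # Average of ((U_1 + ... + U_n)/n)^2 over many independent draws
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xbar = sum(rng.random() for _ in range(n)) / n
        total += xbar * xbar
    return total / trials

for n in (1, 5, 50):
    print(n, exact_value(n), round(monte_carlo_estimate(n), 4))
```

For $n=1$ both agree with $E[U^2]=1/3$, and for larger $n$ the estimates drift toward $1/4$.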






          answered Jan 13 at 19:59 by Song











































            We have $$(x_1+x_2+\cdots+x_n)^2=\sum_i x_i^2+\sum_{i\ne j} x_ix_j.$$ First note that $$\int_{0}^{1}\int_{0}^{1}\cdots\int_{0}^{1} x_i^{2}\,dx_1\,dx_2\cdots dx_n=\int_0^1 x_i^2\,dx_i=\frac{1}{3}$$ and, for $i\ne j$, $$\int_{0}^{1}\int_{0}^{1}\cdots\int_{0}^{1} x_ix_j\,dx_1\,dx_2\cdots dx_n=\frac{1}{4}.$$ Therefore $$\int_{0}^{1}\int_{0}^{1}\cdots\int_{0}^{1}\left(x_1+x_2+\cdots+x_n\right)^2 dx_1\,dx_2\cdots dx_n=n\cdot\frac{1}{3}+(n^2-n)\cdot\frac{1}{4},$$ and dividing by $n^2$ we obtain $$\int_{0}^{1}\int_{0}^{1}\cdots\int_{0}^{1}\left(\frac{x_1+x_2+\cdots+x_n}{n}\right)^2 dx_1\,dx_2\cdots dx_n=\frac{1}{3n}+\frac{n-1}{4n},$$ which shows that the limit is $\frac{1}{4}$.
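The term count in this expansion ($n$ diagonal terms, $n^2-n$ cross terms) can be checked with exact rational arithmetic. A minimal sketch (added by the editor; the helper name is hypothetical):

```python
from fractions import Fraction

def integral_value(n):
    # n diagonal terms contribute E[x_i^2] = 1/3 each, and n^2 - n cross
    # terms contribute E[x_i x_j] = 1/4 each; the (.../n)^2 divides by n^2.
    return (n * Fraction(1, 3) + (n * n - n) * Fraction(1, 4)) / (n * n)

for n in (1, 2, 10, 1000):
    print(n, integral_value(n), float(integral_value(n)))
```

For example, $n=2$ gives $\frac{7}{24}=\frac{1}{6}+\frac{1}{8}$, matching $\frac{1}{3n}+\frac{n-1}{4n}$.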






            answered Jan 13 at 20:19 by Mostafa Ayaz






















            • Sorry, the first line is not clear to me. If we say $(x+y+z)^2=x^2+y^2+z^2+2(xy+xz+yz)$, then by induction you get the following result: ${\left(\sum_{i=1}^{n}x_i\right)}^{2}=\sum_{i=1}^{n}x_i^2+2\sum_{i\ne j}^{n}x_ix_j$
              – Ramez Hindi
              Jan 13 at 21:34












            • In fact you get $(\sum_i x_i)^2=x_1^2+\cdots+x_n^2+2\sum_{i<j}x_ix_j$. The multiplicity is accounted for in my equation, since $2x_ix_j=x_ix_j+x_jx_i$ whenever $i\ne j$.
              – Mostafa Ayaz
              Jan 13 at 21:38
































            Suppose $X_1,X_2,\ldots$ are independent random variables with the uniform distribution on $[0,1]$. Then the common expectation of these variables exists and equals $\mu=1/2$.

            Define $$\overline X_n=\frac{1}{n}\sum_{k=1}^n X_k.$$

            By Khintchine's weak law of large numbers, $$\overline X_n\stackrel{P}{\longrightarrow}\mu\quad\text{ as }\quad n\to\infty,$$

            and by the continuous mapping theorem, $$\overline X_n^2\stackrel{P}{\longrightarrow}\mu^2\quad\text{ as }\quad n\to\infty.\tag{1}$$

            Moreover, $$0\le X_1,\ldots,X_n\le 1\implies 0\le \overline X_n\le 1\implies 0\le \overline X_n^2\le 1.\tag{2}$$

            $(1)$ and $(2)$ together imply $$\int_{[0,1]^n}\left(\frac{x_1+\cdots+x_n}{n}\right)^2\mathrm{d}x_1\ldots\mathrm{d}x_n = E\left(\overline X_n^2\right)\stackrel{n\to\infty}{\longrightarrow}\frac{1}{4}.$$
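The convergence in probability in $(1)$ can be illustrated by simulation: the estimated probability that $\overline X_n^2$ strays from $1/4$ shrinks as $n$ grows. A rough editor's sketch; the tolerance and sample sizes are arbitrary:

```python
import random

def tail_probability(n, eps=0.05, trials=5_000, seed=1):
    # Estimate P(|Xbar_n^2 - 1/4| > eps), Xbar_n the mean of n Uniform[0,1] draws
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xbar = sum(rng.random() for _ in range(n)) / n
        if abs(xbar * xbar - 0.25) > eps:
            hits += 1
    return hits / trials

for n in (10, 100, 1000):
    print(n, tail_probability(n))
```

By $n=1000$ the estimated tail probability is essentially zero, consistent with $\overline X_n^2\stackrel{P}{\longrightarrow}1/4$.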






            answered Jan 13 at 20:00, edited Jan 15 at 7:08, by StubbornAtom
























            • It’s always nice to see an easygoing probability solution to what appears to be a monstrous analytic problem. +1
              – LoveTooNap29
              Jan 13 at 20:08










            • There must be some confusion. I upvoted both yours and Song’s answer for the use of probability.
              – LoveTooNap29
              Jan 13 at 20:33










            • @LoveTooNap29 I wasn't replying to you.
              – StubbornAtom
              Jan 13 at 20:34










            • $X_n\to X$ in probability isn't enough to conclude that $\mathbb{E}[X_n]\to\mathbb{E}[X]$. It would be better to use the strong law of large numbers plus dominated convergence.
              – carmichael561
              Jan 15 at 0:27










            • @carmichael561 Thank you. What if I add the fact that $|X_n|<1$ to my answer? I think that salvages the argument.
              – StubbornAtom
              Jan 15 at 4:59











            Your Answer





            StackExchange.ifUsing("editor", function () {
            return StackExchange.using("mathjaxEditing", function () {
            StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
            StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
            });
            });
            }, "mathjax-editing");

            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "69"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            noCode: true, onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3072446%2fhow-to-prove-this-limit-is-1-4%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            3 Answers
            3






            active

            oldest

            votes








            3 Answers
            3






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            4












            $begingroup$

            The integral has an equivalent expression
            $$
            Eleft[left(frac{1}{n}sum_{i=1}^n U_iright)^2right]
            $$
            where $U_i$, $ile n$ are independently and uniformly distributed random variables on $[0,1]$. This gives
            $$begin{eqnarray}
            Eleft[left(frac{1}{n}sum_{i=1}^n U_iright)^2right]&=&frac{1}{n^2}sum_{i,j=1}^nEleft[U_iU_jright]\
            &=&frac{1}{n^2}sum_{i=1}^nEleft[U_i^2right] +frac{1}{n^2}sum_{ine j}Eleft[U_iU_jright]\
            &=&frac{n}{n^2}frac{1}{3}+frac{n(n-1)}{n^2}frac{1}{4}to frac{1}{4}
            end{eqnarray}$$

            since $E[U_i]=int_0^1 xdx=frac{1}{2}$, $E[U_i^2]=int_0^1 x^2dx =frac{1}{3}$ and $E[U_iU_j]=E[U_i]E[U_j]=frac{1}{2}cdotfrac{1}{2}=frac{1}{4}$ for all $ine j$.






            share|cite|improve this answer









            $endgroup$


















              4












              $begingroup$

              The integral has an equivalent expression
              $$
              Eleft[left(frac{1}{n}sum_{i=1}^n U_iright)^2right]
              $$
              where $U_i$, $ile n$ are independently and uniformly distributed random variables on $[0,1]$. This gives
              $$begin{eqnarray}
              Eleft[left(frac{1}{n}sum_{i=1}^n U_iright)^2right]&=&frac{1}{n^2}sum_{i,j=1}^nEleft[U_iU_jright]\
              &=&frac{1}{n^2}sum_{i=1}^nEleft[U_i^2right] +frac{1}{n^2}sum_{ine j}Eleft[U_iU_jright]\
              &=&frac{n}{n^2}frac{1}{3}+frac{n(n-1)}{n^2}frac{1}{4}to frac{1}{4}
              end{eqnarray}$$

              since $E[U_i]=int_0^1 xdx=frac{1}{2}$, $E[U_i^2]=int_0^1 x^2dx =frac{1}{3}$ and $E[U_iU_j]=E[U_i]E[U_j]=frac{1}{2}cdotfrac{1}{2}=frac{1}{4}$ for all $ine j$.






              share|cite|improve this answer









              $endgroup$
















                4












                4








                4





                $begingroup$

                The integral has an equivalent expression
                $$
                Eleft[left(frac{1}{n}sum_{i=1}^n U_iright)^2right]
                $$
                where $U_i$, $ile n$ are independently and uniformly distributed random variables on $[0,1]$. This gives
                $$begin{eqnarray}
                Eleft[left(frac{1}{n}sum_{i=1}^n U_iright)^2right]&=&frac{1}{n^2}sum_{i,j=1}^nEleft[U_iU_jright]\
                &=&frac{1}{n^2}sum_{i=1}^nEleft[U_i^2right] +frac{1}{n^2}sum_{ine j}Eleft[U_iU_jright]\
                &=&frac{n}{n^2}frac{1}{3}+frac{n(n-1)}{n^2}frac{1}{4}to frac{1}{4}
                end{eqnarray}$$

                since $E[U_i]=int_0^1 xdx=frac{1}{2}$, $E[U_i^2]=int_0^1 x^2dx =frac{1}{3}$ and $E[U_iU_j]=E[U_i]E[U_j]=frac{1}{2}cdotfrac{1}{2}=frac{1}{4}$ for all $ine j$.






                share|cite|improve this answer









                $endgroup$



                The integral has an equivalent expression
                $$
                Eleft[left(frac{1}{n}sum_{i=1}^n U_iright)^2right]
                $$
                where $U_i$, $ile n$ are independently and uniformly distributed random variables on $[0,1]$. This gives
                $$begin{eqnarray}
                Eleft[left(frac{1}{n}sum_{i=1}^n U_iright)^2right]&=&frac{1}{n^2}sum_{i,j=1}^nEleft[U_iU_jright]\
                &=&frac{1}{n^2}sum_{i=1}^nEleft[U_i^2right] +frac{1}{n^2}sum_{ine j}Eleft[U_iU_jright]\
                &=&frac{n}{n^2}frac{1}{3}+frac{n(n-1)}{n^2}frac{1}{4}to frac{1}{4}
                end{eqnarray}$$

                since $E[U_i]=int_0^1 xdx=frac{1}{2}$, $E[U_i^2]=int_0^1 x^2dx =frac{1}{3}$ and $E[U_iU_j]=E[U_i]E[U_j]=frac{1}{2}cdotfrac{1}{2}=frac{1}{4}$ for all $ine j$.







                share|cite|improve this answer












                share|cite|improve this answer



                share|cite|improve this answer










                answered Jan 13 at 19:59









                SongSong

                11.2k628




                11.2k628























                    3












                    $begingroup$

                    We have $$(x_1+x_2+cdots +x_n)^2=sum_i x_i^2+sum_{i,jne i} x_ix_j$$first of all note that $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{x_i}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}=int_0^1 x_i^2dx_i={1over 3}$$and $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{x_ix_j}{}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}={1over 4}$$therefore$$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{left( {{{x}_{1}}+{{x}_{2}}+...+{{x}_{n}}}{} right)}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}=ncdot {1over 3}+(n^2-n){1over 4}$$and by substitution we obtain $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{left( frac{{{x}_{1}}+{{x}_{2}}+...+{{x}_{n}}}{n} right)}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}={1over 3n}+{n-1over 4n}$$which obviously shows that the limit is $1over 4$.






                    share|cite|improve this answer









                    $endgroup$













                    • $begingroup$
                      Sorry first line is not clear to if we say $(x+y+z)^2=x^2+y^2+z^2+2(xy+xz+yz)$ by induction you will get the following result , ${{left( sumnolimits_{i=1}^{n}{{{x}_{i}}} right)}^{2}}=sumnolimits_{i=1}^{n}{x_{i}^{2}+2sumnolimits_{ine j}^{n}{{{x}_{i}}{{x}_{j}}}}$
                      $endgroup$
                      – Ramez Hindi
                      Jan 13 at 21:34












                    • $begingroup$
                      In fact you get $(sum_i x_i)^2=x_1^2+cdots + x_n^2+2sum_{i<j}x_ix_j$. The multiplicity has been considered in my equation since $2x_ix_j=x_ix_j+x_jx_i$ whenever $ine j$
                      $endgroup$
                      – Mostafa Ayaz
                      Jan 13 at 21:38
















                    3












                    $begingroup$

                    We have $$(x_1+x_2+cdots +x_n)^2=sum_i x_i^2+sum_{i,jne i} x_ix_j$$first of all note that $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{x_i}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}=int_0^1 x_i^2dx_i={1over 3}$$and $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{x_ix_j}{}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}={1over 4}$$therefore$$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{left( {{{x}_{1}}+{{x}_{2}}+...+{{x}_{n}}}{} right)}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}=ncdot {1over 3}+(n^2-n){1over 4}$$and by substitution we obtain $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{left( frac{{{x}_{1}}+{{x}_{2}}+...+{{x}_{n}}}{n} right)}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}={1over 3n}+{n-1over 4n}$$which obviously shows that the limit is $1over 4$.






                    share|cite|improve this answer









                    $endgroup$













                    • $begingroup$
                      Sorry first line is not clear to if we say $(x+y+z)^2=x^2+y^2+z^2+2(xy+xz+yz)$ by induction you will get the following result , ${{left( sumnolimits_{i=1}^{n}{{{x}_{i}}} right)}^{2}}=sumnolimits_{i=1}^{n}{x_{i}^{2}+2sumnolimits_{ine j}^{n}{{{x}_{i}}{{x}_{j}}}}$
                      $endgroup$
                      – Ramez Hindi
                      Jan 13 at 21:34












                    • $begingroup$
                      In fact you get $(sum_i x_i)^2=x_1^2+cdots + x_n^2+2sum_{i<j}x_ix_j$. The multiplicity has been considered in my equation since $2x_ix_j=x_ix_j+x_jx_i$ whenever $ine j$
                      $endgroup$
                      – Mostafa Ayaz
                      Jan 13 at 21:38














                    3












                    3








                    3





                    $begingroup$

                    We have $$(x_1+x_2+cdots +x_n)^2=sum_i x_i^2+sum_{i,jne i} x_ix_j$$first of all note that $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{x_i}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}=int_0^1 x_i^2dx_i={1over 3}$$and $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{x_ix_j}{}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}={1over 4}$$therefore$$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{left( {{{x}_{1}}+{{x}_{2}}+...+{{x}_{n}}}{} right)}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}=ncdot {1over 3}+(n^2-n){1over 4}$$and by substitution we obtain $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{left( frac{{{x}_{1}}+{{x}_{2}}+...+{{x}_{n}}}{n} right)}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}={1over 3n}+{n-1over 4n}$$which obviously shows that the limit is $1over 4$.






                    share|cite|improve this answer









                    $endgroup$



                    We have $$(x_1+x_2+cdots +x_n)^2=sum_i x_i^2+sum_{i,jne i} x_ix_j$$first of all note that $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{x_i}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}=int_0^1 x_i^2dx_i={1over 3}$$and $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{x_ix_j}{}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}={1over 4}$$therefore$$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{left( {{{x}_{1}}+{{x}_{2}}+...+{{x}_{n}}}{} right)}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}=ncdot {1over 3}+(n^2-n){1over 4}$$and by substitution we obtain $$int_{0}^{1}{int_{0}^{1}{cdots int_{0}^{1}{{{left( frac{{{x}_{1}}+{{x}_{2}}+...+{{x}_{n}}}{n} right)}^{2}}d{{x}_{1}}d{{x}_{2}}cdots d{{x}_{n}}}}}={1over 3n}+{n-1over 4n}$$which obviously shows that the limit is $1over 4$.







                    share|cite|improve this answer












                    share|cite|improve this answer



                    share|cite|improve this answer










                    answered Jan 13 at 20:19









                    Mostafa AyazMostafa Ayaz

                    15.3k3939




                    15.3k3939












                    • $begingroup$
                      Sorry first line is not clear to if we say $(x+y+z)^2=x^2+y^2+z^2+2(xy+xz+yz)$ by induction you will get the following result , ${{left( sumnolimits_{i=1}^{n}{{{x}_{i}}} right)}^{2}}=sumnolimits_{i=1}^{n}{x_{i}^{2}+2sumnolimits_{ine j}^{n}{{{x}_{i}}{{x}_{j}}}}$
                      $endgroup$
                      – Ramez Hindi
                      Jan 13 at 21:34












                    • $begingroup$
                      In fact you get $(sum_i x_i)^2=x_1^2+cdots + x_n^2+2sum_{i<j}x_ix_j$. The multiplicity has been considered in my equation since $2x_ix_j=x_ix_j+x_jx_i$ whenever $ine j$
                      $endgroup$
                      – Mostafa Ayaz
                      Jan 13 at 21:38


















                    • $begingroup$
                      Sorry first line is not clear to if we say $(x+y+z)^2=x^2+y^2+z^2+2(xy+xz+yz)$ by induction you will get the following result , ${{left( sumnolimits_{i=1}^{n}{{{x}_{i}}} right)}^{2}}=sumnolimits_{i=1}^{n}{x_{i}^{2}+2sumnolimits_{ine j}^{n}{{{x}_{i}}{{x}_{j}}}}$
                      $endgroup$
                      – Ramez Hindi
                      Jan 13 at 21:34












                    • $begingroup$
                      In fact you get $(sum_i x_i)^2=x_1^2+cdots + x_n^2+2sum_{i<j}x_ix_j$. The multiplicity has been considered in my equation since $2x_ix_j=x_ix_j+x_jx_i$ whenever $ine j$
                      $endgroup$
                      – Mostafa Ayaz
                      Jan 13 at 21:38
















                    $begingroup$
                    Sorry first line is not clear to if we say $(x+y+z)^2=x^2+y^2+z^2+2(xy+xz+yz)$ by induction you will get the following result , ${{left( sumnolimits_{i=1}^{n}{{{x}_{i}}} right)}^{2}}=sumnolimits_{i=1}^{n}{x_{i}^{2}+2sumnolimits_{ine j}^{n}{{{x}_{i}}{{x}_{j}}}}$
                    $endgroup$
                    – Ramez Hindi
                    Jan 13 at 21:34






                    $begingroup$
                    Sorry first line is not clear to if we say $(x+y+z)^2=x^2+y^2+z^2+2(xy+xz+yz)$ by induction you will get the following result , ${{left( sumnolimits_{i=1}^{n}{{{x}_{i}}} right)}^{2}}=sumnolimits_{i=1}^{n}{x_{i}^{2}+2sumnolimits_{ine j}^{n}{{{x}_{i}}{{x}_{j}}}}$
                    $endgroup$
                    – Ramez Hindi
                    Jan 13 at 21:34














                    $begingroup$
                    In fact you get $(sum_i x_i)^2=x_1^2+cdots + x_n^2+2sum_{i<j}x_ix_j$. The multiplicity has been considered in my equation since $2x_ix_j=x_ix_j+x_jx_i$ whenever $ine j$
                    $endgroup$
                    – Mostafa Ayaz
                    Jan 13 at 21:38




                    $begingroup$
                    In fact you get $(sum_i x_i)^2=x_1^2+cdots + x_n^2+2sum_{i<j}x_ix_j$. The multiplicity has been considered in my equation since $2x_ix_j=x_ix_j+x_jx_i$ whenever $ine j$
                    $endgroup$
                    – Mostafa Ayaz
                    Jan 13 at 21:38











                    2












                    $begingroup$

                    Suppose $X_1,X_2,ldots$ are independent random variables having the uniform distribution on $[0,1]$. Then the common expectation of these variables exist and equals $mu=1/2$.



                    Define $$overline X_n=frac{1}{n}sum_{k=1}^n X_k$$



                    By Khintchine's weak law of large numbers, $$overline X_nstackrel{P}{longrightarrow}muquadtext{ as }quad ntoinfty$$



                    And by the continuous mapping theorem, $$overline X_n^2stackrel{P}{longrightarrow}mu^2quadtext{ as }quad ntoinftytag{1}$$



                    Moreover, $$0le X_1,ldots,X_nle 1implies 0le overline X_nle 1implies 0le overline X_n^2le 1tag{2}$$



                    $(1)$ and $(2)$ together imply $$int_{[0,1]^n}left(frac{x_1+cdots+x_n}{n}right)^2mathrm{d}x_1ldotsmathrm{d}x_n = Eleft(overline X_n^2right)stackrel{ntoinfty}{longrightarrow}frac{1}{4}$$






                    share|cite|improve this answer











                    $endgroup$













                    • $begingroup$
                      It’s always nice to see an easygoing probability solution to what appears to be a monstrous analytic problem. +1
                      $endgroup$
                      – LoveTooNap29
                      Jan 13 at 20:08










                    • $begingroup$
                      there must be some confusion. I upvoted both yours and Song’s answer for the use of probability.
                      $endgroup$
                      – LoveTooNap29
                      Jan 13 at 20:33










                    • $begingroup$
                      @LoveTooNap29 I wasn't replying to you.
                      $endgroup$
                      – StubbornAtom
                      Jan 13 at 20:34










                    • $begingroup$
                      $X_nto X$ in probability isn't enough to conclude that $mathbb{E}[X_n]tomathbb{E}[X]$. It would be better to use the strong law of large numbers, plus dominated convergence.
                      $endgroup$
                      – carmichael561
                      Jan 15 at 0:27










                    • $begingroup$
                      @carmichael561 Thank you. What if I add the fact that $|X_n|<1$ to my answer? I think that salvages the argument.
                      $endgroup$
                      – StubbornAtom
                      Jan 15 at 4:59
















                    2












                    $begingroup$

                    Suppose $X_1,X_2,ldots$ are independent random variables having the uniform distribution on $[0,1]$. Then the common expectation of these variables exist and equals $mu=1/2$.



                    Define $$overline X_n=frac{1}{n}sum_{k=1}^n X_k$$



                    By Khintchine's weak law of large numbers, $$overline X_nstackrel{P}{longrightarrow}muquadtext{ as }quad ntoinfty$$



                    And by the continuous mapping theorem, $$overline X_n^2stackrel{P}{longrightarrow}mu^2quadtext{ as }quad ntoinftytag{1}$$



                    Moreover, $$0le X_1,ldots,X_nle 1implies 0le overline X_nle 1implies 0le overline X_n^2le 1tag{2}$$



                    $(1)$ and $(2)$ together imply $$int_{[0,1]^n}left(frac{x_1+cdots+x_n}{n}right)^2mathrm{d}x_1ldotsmathrm{d}x_n = Eleft(overline X_n^2right)stackrel{ntoinfty}{longrightarrow}frac{1}{4}$$






                    share|cite|improve this answer











                    $endgroup$













                    • $begingroup$
                      It’s always nice to see an easygoing probability solution to what appears to be a monstrous analytic problem. +1
                      $endgroup$
                      – LoveTooNap29
                      Jan 13 at 20:08










                    • $begingroup$
                      there must be some confusion. I upvoted both yours and Song’s answer for the use of probability.
                      $endgroup$
                      – LoveTooNap29
                      Jan 13 at 20:33










                    • $begingroup$
                      @LoveTooNap29 I wasn't replying to you.
                      $endgroup$
                      – StubbornAtom
                      Jan 13 at 20:34










                    • $begingroup$
                      $X_nto X$ in probability isn't enough to conclude that $mathbb{E}[X_n]tomathbb{E}[X]$. It would be better to use the strong law of large numbers, plus dominated convergence.
                      $endgroup$
                      – carmichael561
                      Jan 15 at 0:27










                    • $begingroup$
                      @carmichael561 Thank you. What if I add the fact that $|X_n|<1$ to my answer? I think that salvages the argument.
                      $endgroup$
                      – StubbornAtom
                      Jan 15 at 4:59














                    2












                    2








                    2





                    $begingroup$

                    Suppose $X_1,X_2,ldots$ are independent random variables having the uniform distribution on $[0,1]$. Then the common expectation of these variables exist and equals $mu=1/2$.



                    Define $$overline X_n=frac{1}{n}sum_{k=1}^n X_k$$



                    By Khintchine's weak law of large numbers, $$overline X_nstackrel{P}{longrightarrow}muquadtext{ as }quad ntoinfty$$



                    And by the continuous mapping theorem, $$overline X_n^2stackrel{P}{longrightarrow}mu^2quadtext{ as }quad ntoinftytag{1}$$



                    Moreover, $$0le X_1,ldots,X_nle 1implies 0le overline X_nle 1implies 0le overline X_n^2le 1tag{2}$$



                    $(1)$ and $(2)$ together imply $$int_{[0,1]^n}left(frac{x_1+cdots+x_n}{n}right)^2mathrm{d}x_1ldotsmathrm{d}x_n = Eleft(overline X_n^2right)stackrel{ntoinfty}{longrightarrow}frac{1}{4}$$






                    share|cite|improve this answer











                    $endgroup$



                    Suppose $X_1,X_2,ldots$ are independent random variables having the uniform distribution on $[0,1]$. Then the common expectation of these variables exist and equals $mu=1/2$.



                    Define $$overline X_n=frac{1}{n}sum_{k=1}^n X_k$$



                    By Khintchine's weak law of large numbers, $$overline X_nstackrel{P}{longrightarrow}muquadtext{ as }quad ntoinfty$$



                    And by the continuous mapping theorem, $$overline X_n^2stackrel{P}{longrightarrow}mu^2quadtext{ as }quad ntoinftytag{1}$$



                    Moreover, $$0le X_1,ldots,X_nle 1implies 0le overline X_nle 1implies 0le overline X_n^2le 1tag{2}$$



                    $(1)$ and $(2)$ together imply $$int_{[0,1]^n}left(frac{x_1+cdots+x_n}{n}right)^2mathrm{d}x_1ldotsmathrm{d}x_n = Eleft(overline X_n^2right)stackrel{ntoinfty}{longrightarrow}frac{1}{4}$$







                    share|cite|improve this answer














                    share|cite|improve this answer



                    share|cite|improve this answer








                    edited Jan 15 at 7:08

























                    answered Jan 13 at 20:00









                    StubbornAtomStubbornAtom

                    5,75611138




                    5,75611138












                    • $begingroup$
                      It’s always nice to see an easygoing probability solution to what appears to be a monstrous analytic problem. +1
                      $endgroup$
                      – LoveTooNap29
                      Jan 13 at 20:08










                    • $begingroup$
                      there must be some confusion. I upvoted both yours and Song’s answer for the use of probability.
                      $endgroup$
                      – LoveTooNap29
                      Jan 13 at 20:33










                    • $begingroup$
                      @LoveTooNap29 I wasn't replying to you.
                      $endgroup$
                      – StubbornAtom
                      Jan 13 at 20:34










                    • $begingroup$
                      $X_nto X$ in probability isn't enough to conclude that $mathbb{E}[X_n]tomathbb{E}[X]$. It would be better to use the strong law of large numbers, plus dominated convergence.
                      $endgroup$
                      – carmichael561
                      Jan 15 at 0:27










                    • $begingroup$
                      @carmichael561 Thank you. What if I add the fact that $|X_n|<1$ to my answer? I think that salvages the argument.
                      $endgroup$
                      – StubbornAtom
                      Jan 15 at 4:59

















