An experiment of throwing two fair cubic dice














Two fair cubic dice are thrown repeatedly in an experiment. Let $X_i$ be the absolute difference of the values of the two dice in the $i$-th throw. The experiment is stopped when $X_i=0$. Let $Y=\sum X_i$. How can I evaluate the mean and variance of $Y$ if the experiment is stopped at the $n$-th throw? And if the experiment is stopped at the 4th throw and $Y=5$, what is the probability that $X_1=1$?










probability






asked Jan 22 at 8:09 by Weihao Huang

1 Answer






Let $E_n$ denote the event that the experiment is stopped at the $n$-th throw.



Then to be found is: $$\mathbb E[Y\mid E_n]=\sum_{i=1}^{n-1}\mathbb E[X_i\mid E_n]=(n-1)\,\mathbb E[X_1\mid E_n]=(n-1)\,\mathbb E[X_1\mid X_1>0]$$



For the last equality observe that $E_n=\{X_1>0\}\cap F_n$, where $F_n:=\{X_2>0,\dots,X_{n-1}>0,X_n=0\}$ is an event such that $\mathbf 1_{F_n}$ and $X_1$ are independent, so that: $$\mathbb E[X_1\mid E_n]=\frac{\mathbb E\,X_1\mathbf 1_{X_1>0}\mathbf 1_{F_n}}{\mathbb E\,\mathbf 1_{X_1>0}\mathbf 1_{F_n}}=\frac{\mathbb E\,X_1\mathbf 1_{X_1>0}\,\mathbb E\,\mathbf 1_{F_n}}{\mathbb E\,\mathbf 1_{X_1>0}\,\mathbb E\,\mathbf 1_{F_n}}=\frac{\mathbb E\,X_1\mathbf 1_{X_1>0}}{\mathbb E\,\mathbf 1_{X_1>0}}=\mathbb E[X_1\mid X_1>0]$$



To find the mean of $Y$ it is enough now to find $\mathbb E[X_1\mid X_1>0]$, and I leave that to you.
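As a quick numerical check, here is a minimal Python sketch (the helper names `diffs`, `mean_pos`, `second_pos` are only illustrative) that enumerates the $36$ equally likely dice pairs and computes the needed conditional moments:

    from fractions import Fraction
    from itertools import product

    # All 36 equally likely outcomes of one throw of two fair dice,
    # reduced to X = |D1 - D2|.
    diffs = [abs(d1 - d2) for d1, d2 in product(range(1, 7), repeat=2)]

    # Conditional moments given X > 0, i.e. given that the throw does not stop the experiment.
    nonzero = [x for x in diffs if x > 0]
    mean_pos = Fraction(sum(nonzero), len(nonzero))                   # E[X1 | X1 > 0] = 7/3
    second_pos = Fraction(sum(x * x for x in nonzero), len(nonzero))  # E[X1^2 | X1 > 0] = 7

    def mean_Y(n):
        """E[Y | E_n] = (n - 1) * E[X1 | X1 > 0]."""
        return (n - 1) * mean_pos

In particular $\mathbb E[X_1\mid X_1>0]=\tfrac73$, so the conditional mean is $\mathbb E[Y\mid E_n]=\tfrac73(n-1)$.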



Similarly we can find: $$\mathbb E[Y^2\mid E_n]=\sum_{i=1}^{n-1}\sum_{j=1}^{n-1}\mathbb E[X_iX_j\mid E_n]=(n-1)(n-2)\,\mathbb E[X_1X_2\mid X_1>0,X_2>0]+(n-1)\,\mathbb E[X_1^2\mid X_1>0]$$ and then the variance $\mathbb E[Y^2\mid E_n]-(\mathbb E[Y\mid E_n])^2$.
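Continuing the sketch above (reusing the illustrative `mean_pos`, `second_pos` and `mean_Y`), and using that $X_1$ and $X_2$ are independent, so $\mathbb E[X_1X_2\mid X_1>0,X_2>0]=\mathbb E[X_1\mid X_1>0]^2$:

    def var_Y(n):
        """Var(Y | E_n) via the second-moment decomposition above."""
        cross = mean_pos ** 2              # E[X1*X2 | X1>0, X2>0], by independence
        second_moment = (n - 1) * (n - 2) * cross + (n - 1) * second_pos
        return second_moment - mean_Y(n) ** 2

    print(var_Y(4))   # 14/3; in general this simplifies to 14*(n - 1)/9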





To be found is: $$P(X_1=1\mid X_1>0,X_2>0,X_3>0,X_4=0,X_1+X_2+X_3=5)=$$ $$\frac{P(X_1=1,X_2>0,X_3>0,X_4=0,X_1+X_2+X_3=5)}{P(X_1>0,X_2>0,X_3>0,X_4=0,X_1+X_2+X_3=5)}=$$ $$\frac{P(X_1=1,X_2>0,X_3>0,X_2+X_3=4)\,P(X_4=0)}{P(X_1>0,X_2>0,X_3>0,X_1+X_2+X_3=5)\,P(X_4=0)}=$$ $$\frac{P(X_1=1,X_2>0,X_3>0,X_2+X_3=4)}{P(X_1>0,X_2>0,X_3>0,X_1+X_2+X_3=5)}=$$ $$\frac{\sum_{s,t>0,\ s+t=4}P(X_1=1)P(X_2=s)P(X_3=t)}{\sum_{r,s,t>0,\ r+s+t=5}P(X_1=r)P(X_2=s)P(X_3=t)}$$
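This last ratio can be evaluated by direct enumeration. A short continuation of the same sketch (reusing `Fraction` and `diffs` from above) reads the single-throw pmf $P(X=k)$ off the $36$ outcomes and sums exactly the terms in the displayed fraction:

    from collections import Counter

    # Single-throw pmf P(X = k), k = 0..5, from the 36 outcomes enumerated earlier.
    p = {k: Fraction(v, 36) for k, v in Counter(diffs).items()}

    # Numerator: X1 = 1 and X2, X3 > 0 with X2 + X3 = 4 (the common factor P(X4 = 0) cancels).
    num = sum(p[1] * p[s] * p[4 - s] for s in range(1, 4))
    # Denominator: X1, X2, X3 > 0 with X1 + X2 + X3 = 5.
    den = sum(p[r] * p[s] * p[5 - r - s]
              for r in range(1, 5) for s in range(1, 5 - r))
    print(num / den)   # 46/93

Under this enumeration the ratio comes out to $46/93\approx 0.49$.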






answered Jan 22 at 9:58 by drhab













• Thanks for the reply. Since I am not good at probability, I hope you can confirm my work. The mean is $E(Y)=(n-1)E(X_i)$ and the variance is $Var(Y)=(n-1)[E(X_i^2)-(E(X_i))^2]$. I don't understand why you write $X_1>0$ as a condition, since the value of $X_i$ is already greater than $0$. – Weihao Huang, Jan 22 at 10:48










• Do you mean that $X_i>0$ (where $i>1$) already implies that $X_1>0$? I do that because I look at this as a process that does not stop (that does no harm, $Y$ can still be defined properly), and work with $X_i$ that are independent. If, e.g., conditions like $X_2>0\implies X_1>0$ are built in (I avoid that), then this (valuable) independence is lost. – drhab, Jan 22 at 11:26












• I see, it is not that complicated. $X_i$ is defined as $|D_1-D_2|$, so it must be greater than $0$. – Weihao Huang, Jan 22 at 11:33










• "...so it must be greater than $0$." What do you mean? For every index $i$ it is possible that $D_1^{(i)}=D_2^{(i)}$, or equivalently $X_i=0$. – drhab, Jan 22 at 11:38










• I see what you mean, thanks. – Weihao Huang, Jan 22 at 11:59




























