Likelihood for a model with two data sources


$\theta$ is the parameter of the model and $D_{1}, D_{2}$ are two i.i.d. data sources. The model parameters are jointly optimised by MLE. My goal is just to confirm that the likelihood for the model is correct.



$P(\theta \mid D_{1}, D_{2}) = \frac{P(D_{1}, D_{2}, \theta)}{P(D_{1}, D_{2})}$



$P(\theta \mid D_{1}, D_{2}) = \frac{P(D_{1}, D_{2} \mid \theta)\,P(\theta)}{P(D_{1})\,P(D_{2})}$



Since $D_{1}$ and $D_{2}$ are independent given $\theta$, can I write this directly as



$P(\theta \mid D_{1}, D_{2}) = \frac{P(D_{1} \mid \theta)\,P(D_{2} \mid \theta)\,P(\theta)}{P(D_{1})\,P(D_{2})}$ (1)



OR



$P(\theta \mid D_{1}, D_{2}) = \frac{P(D_{1} \mid D_{2}, \theta)\,P(D_{2} \mid \theta)\,P(\theta)}{P(D_{1})\,P(D_{2})}$

which, using $P(\theta \mid D_{2}) = \frac{P(D_{2} \mid \theta)\,P(\theta)}{P(D_{2})}$, becomes

$P(\theta \mid D_{1}, D_{2}) = \frac{P(D_{1} \mid D_{2}, \theta)\,P(\theta \mid D_{2})}{P(D_{1})}$ (2)



What would be the right way of writing this, and is everything above correct? I strongly feel that the first equation is wrong because, even though the data sources don't depend on each other, the optimisation of the parameter does. So the second equation seems more appropriate, which matches the intuition that the parameters are optimised with respect to, say, $D_{2}$ first, and then optimised further given $D_{1}$ and $\theta$.



So, would the overall likelihood term for the model be $P(D_{1} \mid D_{2}, \theta)\,P(D_{2} \mid \theta)$?
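For concreteness, here is a minimal sketch of what I mean by jointly optimising $\theta$ by MLE under the factorisation in (1). Everything in it is an illustrative assumption on my part (Gaussian sources with a shared mean $\theta$, known unit variance, synthetic data), not part of the model above:

```python
# Minimal sketch: joint MLE for a shared parameter theta over two data
# sources D1, D2 that are independent given theta, so the joint
# log-likelihood is log P(D1 | theta) + log P(D2 | theta).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
D1 = rng.normal(loc=2.0, scale=1.0, size=100)  # hypothetical source 1
D2 = rng.normal(loc=2.0, scale=1.0, size=200)  # hypothetical source 2

def neg_log_likelihood(theta):
    # Under conditional independence the two per-source
    # log-likelihood terms simply add.
    return -(norm.logpdf(D1, loc=theta, scale=1.0).sum()
             + norm.logpdf(D2, loc=theta, scale=1.0).sum())

result = minimize_scalar(neg_log_likelihood)
print(result.x)  # close to the pooled sample mean of D1 and D2
```

With the factorisation in (1), the joint log-likelihood is just the sum of the per-source log-likelihoods, which is what this sketch maximises.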
Tags: probability, proof-verification, maximum-likelihood