How to code an oscillator driven by Gaussian white noise? Edit: How to convert an ODE into a system of SDEs?

I have written some Python code to solve the following differential equation:
$$\ddot{x}+\omega_0^2x=\eta(t),$$
where $\eta(t)$ is Gaussian white noise with mean 0 and variance 1. The initial conditions are
$$x(0)=\dot{x}(0)=0.$$
The code is given here:



import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

class HarmonicOdeSolver:
    def __init__(self, dt, x0, xd0, omega_squared):
        "Initialise the solver with step size, initial conditions and omega^2."
        self.dt = dt
        self.dt_squared = dt ** 2
        self.t = dt
        self.omega_squared = omega_squared
        self.x0 = x0
        self.xd0 = xd0
        # Store the two most recent positions, (x_t, x_{t-1}).
        self.x = [xd0 * dt + x0, x0]

    def step(self):
        "Advance one step with the two-step update plus a noise kick."
        xt, xtm1 = self.x
        xtp1 = ((2 - self.omega_squared * self.dt_squared) * xt - xtm1
                + self.dt_squared * norm.rvs())
        self.x = (xtp1, xt)
        self.t += self.dt

    def step_until(self, tmax, snapshot_dt):
        "Step the solver until tmax, recording a snapshot every snapshot_dt."
        ts = [self.t]
        vals = [self.x[0]]
        niter = max(1, int(snapshot_dt // self.dt))
        while self.t < tmax:
            for _ in range(niter):
                self.step()
            vals.append(self.x[0])
            ts.append(self.t)
        return np.array(ts), np.array(vals)

solver = HarmonicOdeSolver(1e-2, 0, 0, 1)
snapshot_dt = 1.0
ts, vals = solver.step_until(1000, snapshot_dt)
plt.plot(ts, np.sqrt(vals ** 2))
plt.plot(ts, np.sqrt(ts / 2))
plt.show()


The code was taken from and explained here. I naively hoped that I could simply add the following term to the position update:



self.dt_squared * norm.rvs()


to simulate Gaussian white noise. One problem I have noticed is that the results appear to be highly dependent on the time step used. In a similar post we found that the root-mean-square displacement of the oscillator should grow as
$$\sqrt{\langle x(t)^2\rangle}\sim\sqrt{\frac{t}{2}}.$$
I would like to reproduce this result. Does anyone know of a simple way to simulate a harmonic oscillator driven by white noise?
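Note: one possible source of the time-step dependence is the scaling of the noise term. If $\eta(t)$ is interpreted as unit-intensity white noise (the formal derivative of a standard Wiener process), the noise accumulated over a step of length $dt$ is Gaussian with standard deviation $\sqrt{dt}$, so in the two-step update above the random kick should scale as $dt^{3/2}$ rather than $dt^2$. A minimal sketch of this change, reusing the class defined above (the subclass name is hypothetical):

from scipy.stats import norm

class WhiteNoiseHarmonicSolver(HarmonicOdeSolver):
    """Hypothetical variant of HarmonicOdeSolver: the random kick scales as
    dt**1.5, assuming the white-noise increment over one step is N(0, dt)."""

    def step(self):
        "Steps the solver with a dt**(3/2)-scaled noise kick."
        xt, xtm1 = self.x
        xtp1 = ((2 - self.omega_squared * self.dt_squared) * xt - xtm1
                + self.dt ** 1.5 * norm.rvs())
        self.x = (xtp1, xt)
        self.t += self.dt

With this scaling, the growth of $\sqrt{\langle x(t)^2\rangle}$ should become insensitive to the choice of $dt$.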



EDIT: Thanks for the help WoofDoggy; however, I am still confused. When you turned the ODE into a system of stochastic differential equations, should you not have done this:
$$dX_t=\dot{X}_t\,dt,$$
$$d\dot{X}_t=-\omega_0^2X_t\,dt+dW_t,$$
rather than this:
$$dX_t=\dot{X}_t\,dt+dW_t,$$
$$d\dot{X}_t=-\omega_0^2X_t\,dt?$$
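For concreteness, a minimal Euler-Maruyama sketch of the first of these two systems (noise entering the velocity equation), starting from $X_0=\dot{X}_0=0$; the step size, number of steps and $\omega_0$ below are arbitrary illustrative choices:

import numpy as np

def simulate(omega0=1.0, dt=1e-3, n_steps=100_000, seed=0):
    """Euler-Maruyama for dX = V dt, dV = -omega0**2 X dt + dW, with X(0) = V(0) = 0."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))  # Wiener increment, N(0, dt)
        x, v = x + v * dt, v - omega0**2 * x * dt + dw
        xs[i] = x
    return xs

xs = simulate()  # xs[k] approximates X at time (k + 1) * dt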










Tags: random, python, noise

asked Dec 30 '18 at 12:39 by Peanutlex; edited Jan 7 at 16:26

          1 Answer

What you are dealing with is called a stochastic differential equation. Go back to the differential form: write the state as
$$\mathbf{X}_t = \left[\begin{array}{c} X_t \\ \dot{X}_t \end{array}\right],$$
and write down the equation in matrix form,
$$d\mathbf{X}_t = \mathbf{M}\cdot\mathbf{X}_t\,dt + \left[\begin{array}{c}dW_t\\0\end{array}\right],$$
where $dW_t = \eta(t)\,dt$ and $\mathbf{M} = \left[\begin{array}{cc}0 & 1 \\ -\omega_0^2 & 0\end{array}\right]$. Now you can numerically simulate the process using the Euler-Maruyama method:
$$\mathbf{X}_{t+1} = \mathbf{X}_t + \mathbf{M}\cdot\mathbf{X}_t\,\Delta t + \left[\begin{array}{c}\Delta W_t\\0\end{array}\right],$$
keeping in mind that $\Delta W_t$ is a Gaussian random variable (with the parameters mentioned in the question). If your discretization step is small enough and you collect enough samples, you should see a plot similar to the one below. The blue line is the mean $\langle X_t\rangle$ and the orange line is $\sqrt{\langle X_t^2\rangle}$.



[Figure: simulated mean $\langle X_t\rangle$ (blue) and $\sqrt{\langle X_t^2\rangle}$ (orange) versus time]



          EDIT



A little bit of theoretical explanation. The solution can be written as
$$\mathbf{X}_t = e^{t\mathbf{M}}\mathbf{X}_0 + \int\limits_{0}^{t} e^{-(s-t)\mathbf{M}}\left[\begin{array}{c}\eta(s)\\0\end{array}\right]ds.$$
Since $\mathbf{X}_0 = \mathbf{0}$, we can write
$$X_t = \frac{1}{\omega_0}\int\limits_{0}^{t}\cos[\omega_0(s-t)]\,\eta(s)\,ds,$$
and you get (for $\omega_0 = 1$)
$$\langle X_t^2\rangle = \frac{1}{2}t + \frac{\sin(2t)}{4}.$$
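For completeness, the step between the last two formulas (for $\omega_0 = 1$) uses the white-noise correlation $\langle\eta(s)\eta(s')\rangle = \delta(s-s')$:
$$\langle X_t^2\rangle = \int\limits_{0}^{t}\int\limits_{0}^{t}\cos(s-t)\cos(s'-t)\,\langle\eta(s)\eta(s')\rangle\,ds\,ds' = \int\limits_{0}^{t}\cos^2(s-t)\,ds = \frac{1}{2}t + \frac{\sin(2t)}{4}.$$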



          SAMPLE Python code



# -*- coding: utf-8 -*-
import numpy as np

def run(x0=np.array([.0, .0]), n=40000, dt=1.0E-04, omega=1.0):
    "One trajectory: Euler-Maruyama for dX = M.X dt + [dW, 0]^T."
    sol = np.array([])

    M = np.array([[0., 1.], [-omega**2, 0.]])

    x = x0.copy()
    for i in range(0, n):
        sol = np.append(sol, x[0])
        x += M @ x * dt + np.array([1., 0.]) * np.random.normal(scale=np.sqrt(dt))

    return sol

sol = np.array([run() for i in range(0, 500)])

mean = np.mean(sol, axis=0)
sigma = np.sqrt(np.var(sol, axis=0) + mean**2)    # estimate of sqrt(<X_t^2>)

dt = 1.0E-04
x = np.arange(0, len(mean))
y = np.sqrt(x * dt/2. + np.sin(2. * x * dt)/4.)   # theoretical sqrt(t/2 + sin(2t)/4)

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(15, 10))

ax.plot(x, mean)
ax.plot(x, sigma)
ax.plot(x, y)
plt.show()
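As a side note, growing sol with np.append inside the loop copies the whole array at every step, which makes the 500-trajectory ensemble above slow. A sketch of an equivalent single run with a preallocated output (the name run_fast is made up; same scheme and same noise placement as run above):

import numpy as np

def run_fast(x0=np.array([0., 0.]), n=40000, dt=1.0E-04, omega=1.0):
    """Same Euler-Maruyama update as run(), writing into a preallocated array."""
    M = np.array([[0., 1.], [-omega**2, 0.]])
    sol = np.empty(n)
    x = x0.copy()
    kicks = np.random.normal(scale=np.sqrt(dt), size=n)  # all Wiener increments up front
    for i in range(n):
        sol[i] = x[0]
        x += M @ x * dt + np.array([1., 0.]) * kicks[i]
    return sol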





answered Dec 30 '18 at 15:10 by WoofDoggy; edited Dec 30 '18 at 21:31
• Thank you, do you mind posting the code you used? – Peanutlex, Dec 30 '18 at 18:14
• Thanks. Shouldn't the final value of $\sqrt{\langle X_t^2\rangle}$ be $1/\sqrt{2}$? Also, the following equation does not make sense to me: $$d\textbf{X}_t=\textbf{M}\cdot\textbf{X}_t\,dt+[dW_t,0],$$ as this would mean the second element of the vector would be $$d^2X_t=-\omega_0^2X_t\,dt,$$ but surely it should reduce to $$d^2X_t/dt=-\omega_0^2X_t\,dt+dW_t?$$ Finally, have you made a mistake in your code at line 13, should it be np.random.normal(scale=dt)? – Peanutlex, Dec 30 '18 at 20:35
• Sorry, should the code be np.random.normal(scale=np.sqrt(dt))? – Peanutlex, Dec 30 '18 at 20:45
• @Peanutlex yes, my bad. The plot will change then. – WoofDoggy, Dec 30 '18 at 20:46
• Thanks a lot. Do you mind showing how you derived this equation: $$d\textbf{X}_t=\textbf{M}\cdot\textbf{X}_t\,dt+[dW_t,0]?$$ – Peanutlex, Jan 3 at 11:35












