Individual gradients with torch.autograd.grad without sum over second variable

Thread starter: GigaByte123 (Guest)
I have a sampled path of a stochastic process starting from an initial point:

Code:
import torch
import torch.nn as nn
import torchsde

class SDE_ou_1d(nn.Module):
    def __init__(self):
        super().__init__()
        self.sde_type = "ito"        # torchsde expects "ito" or "stratonovich"
        self.noise_type = "diagonal"

    def f(self, t, y):  # drift
        return -y

    def g(self, t, y):  # vol
        return torch.ones_like(y)

t_vec = torch.linspace(0, 1, 100)  # time array
mySDE = SDE_ou_1d()

x0 = torch.zeros(1, 1, requires_grad=True).to(t_vec)  # initial condition, shape (1, 1)
X_t = torchsde.sdeint(mySDE, x0, t_vec, method='euler')  # shape (100, 1, 1): (time, batch, dim)

and I would like to measure the gradient with respect to the initial condition using torch.autograd.grad(), getting an output with the same shape as X_t, i.e. (100, 1, 1). This gives the change in the path at every time point:

Code:
X_grad = torch.autograd.grad(outputs=X_t, inputs=x0,
                             grad_outputs=torch.ones_like(X_t),
                             create_graph=False, retain_graph=True,
                             only_inputs=True, allow_unused=True)[0]

The issue is that this gradient is summed over all values of t: torch.autograd.grad() computes a vector-Jacobian product with grad_outputs, so passing all ones accumulates the per-time-step gradients into a single (1, 1) tensor.

I can do this with a for loop, but it is very slow and not practical:

Code:
X_grad_loop = torch.zeros_like(X_t)

for i in range(X_t.shape[0]):  # Loop over the first dimension of X_t which is time
    grad_i = torch.autograd.grad(outputs=X_t[i,...], inputs=x0,
                                    grad_outputs=torch.ones_like(X_t[i,...]),
                                    create_graph=False, retain_graph=True, only_inputs=True, allow_unused=True)[0]
    X_grad_loop[i,...] = grad_i
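For what it's worth, a quick consistency check (a sketch, assuming both snippets above have been run) that the single call really is the loop result summed over time:

Code:
# The all-ones VJP equals the sum over t of the per-time-step gradients,
# by linearity of the vector-Jacobian product.
print(torch.allclose(X_grad, X_grad_loop.sum(dim=0)))  # expected: True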

Is there a way to compute this gradient with torch.autograd.grad() and no loop? Thanks.
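One candidate, sketched here and not tested against torchsde: PyTorch 1.11+ supports is_grads_batched=True in torch.autograd.grad, which uses vmap to run one VJP per slice of grad_outputs along a leading batch dimension. Feeding it a one-hot basis over the time axis yields every per-time gradient in a single call (basis and X_grad_batched are illustrative names):

Code:
# Sketch: batched VJPs via is_grads_batched (requires PyTorch >= 1.11).
# basis[i] is all zeros except ones at time step i, so row i of the
# result is the gradient of X_t[i] with respect to x0.
T = X_t.shape[0]                                # number of time points (100)
basis = torch.eye(T).view(T, T, 1, 1).to(X_t)   # shape (T, *X_t.shape)

X_grad_batched = torch.autograd.grad(outputs=X_t, inputs=x0,
                                     grad_outputs=basis,
                                     retain_graph=True,
                                     is_grads_batched=True)[0]  # shape (T, 1, 1)

Note that is_grads_batched relies on vmap, which does not cover every operator; if the graph built by torchsde.sdeint contains an unsupported op, the call will raise an error, and the loop (or torch.func.jacrev applied to a function x0 -> sdeint(...)) remains the fallback.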