I've run into a problem and I don't understand why. I started by creating a tensor with torch.tensor(); my goal was to compute the gradient of y = 2*x. That worked: I set requires_grad=True at the very beginning, ran y.backward(), and got the gradient.
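A minimal version of that first working step (reconstructed from memory; the value 3.0 is just an example I'm assuming):

import torch

x = torch.tensor(3.0, requires_grad=True)  # scalar leaf tensor
y = 2 * x
y.backward()   # d(2*x)/dx = 2, stored in x.grad
print(x.grad)  # tensor(2.)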
I took the steps above as a pattern and wanted to check whether it works for each element of the vector x, so I wrote the for-loop below. But the new version returns None instead of tensor(2.).
I then tried pulling a single element out of the loop and building it as its own tensor (the a/b snippet at the end of the code below), and that worked.
I'm confused. Please tell me why. Thank you very much!
import torch
x = torch.tensor([1.0, 2.0, 3.0, 7.0], requires_grad=True)  # vector
y = 2 * x  # vector
# backward() with no arguments needs a scalar output, so one option is to sum first:
#y.sum().backward()
#print(x.grad)
#x.requires_grad_(True)
for i in x:
    i.requires_grad_(True)  # i comes from iterating over x, not created directly
    print(i)
    z = 2 * i
    z.backward()
    print(i.grad)  # prints None every time
a = torch.tensor(1.0, requires_grad=True)  # the same kind of element, built directly outside the loop
b = 2 * a
b.backward()
print(a)
print(a.grad)  # tensor(2.), as expected
The output is:
tensor(1., grad_fn=<UnbindBackward0>)
None
tensor(2., grad_fn=<UnbindBackward0>)
None
tensor(3., grad_fn=<UnbindBackward0>)
None
tensor(7., grad_fn=<UnbindBackward0>)
None
tensor(1., requires_grad=True)
tensor(2.)
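For what it's worth, the commented-out y.sum().backward() route does populate x.grad when I run it on its own (a sketch with the same x as above):

import torch

x = torch.tensor([1.0, 2.0, 3.0, 7.0], requires_grad=True)
y = 2 * x
y.sum().backward()  # reduce to a scalar so backward() needs no gradient argument
print(x.grad)       # tensor([2., 2., 2., 2.])

So the scalar pattern works, and the summed-vector pattern works, but the elements yielded by the for-loop never get a .grad.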