Your gradient is not destroyed: .grad returns None because the gradient was never saved to that attribute in the first place. Non-leaf tensors do not have their gradients stored during backpropagation, hence the warning message you received when running your first snippet:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward().
This is the case for your z tensor when it is defined as:
>>> z = torch.tensor(np.array([1., 1.]), requires_grad=True).float()
>>> z.is_leaf
False
Compared to:
>>> z = torch.tensor([1., 1.], requires_grad=True).float()
>>> z.is_leaf
True
which means the latter will have its gradient value in z.grad.
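To make this concrete, here is what a backward pass looks like in both cases, using a toy (z ** 2).sum() loss as a stand-in since your original snippet isn't shown (the UserWarning fires on the first z.grad access):
>>> z = torch.tensor(np.array([1., 1.]), requires_grad=True).float()
>>> (z ** 2).sum().backward()
>>> print(z.grad)  # non-leaf: nothing was stored here
None
>>> z = torch.tensor([1., 1.], requires_grad=True).float()
>>> (z ** 2).sum().backward()
>>> z.grad
tensor([2., 2.])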
But notice that:
>>> z = torch.tensor(np.array([1., 1.]), requires_grad=True)
>>> z.is_leaf
True
To further explain this: when a tensor is first created it is a leaf node (.is_leaf returns True). As soon as you apply an operation to it, the result has the original tensor as a parent in the computational graph and is therefore no longer a leaf. Note that .float() is not an in-place operation: when the dtype actually changes (float64 to float32 here) it returns a new, non-leaf tensor; when the dtype already matches, as in the second snippet above, it simply returns the same tensor, which is why z stays a leaf there.
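You can check this quickly in the interpreter (a and b are just throwaway names for the check):
>>> a = torch.tensor(np.array([1., 1.]), requires_grad=True)  # float64 leaf
>>> a.is_leaf, a.float().is_leaf   # real cast -> new, non-leaf tensor
(True, False)
>>> b = torch.tensor([1., 1.], requires_grad=True)  # already float32
>>> b.float() is b                 # dtype matches -> same tensor is returned
True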
So really, there's nothing to fix... What you can do, though, is make sure the gradient is saved to z.grad when the backward pass is called. The second question then comes down to: how do you store/access the gradient on a non-leaf node?
Now, if you would like the gradient to be stored on the .backward() call, you can use retain_grad(), as explained in the warning message:
z = torch.tensor(np.array([1., 1.]), requires_grad=True).float()
z.retain_grad()
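With that in place the gradient is kept on the non-leaf z during the backward pass; for example, continuing with the same toy loss:
(z ** 2).sum().backward()
print(z.grad)  # tensor([2., 2.]) -- populated, and no warning this time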
Or, since we expected z to be a leaf node, solve it by using torch.FloatTensor to convert the numpy.array to a float32 torch.Tensor directly:
z = torch.FloatTensor(np.array([1., 1.]))
z.requires_grad = True
Alternatively, you could stick with torch.tensor and supply the desired dtype up front, so the .float() cast is no longer needed:
z = torch.tensor(np.array([1., 1.]), dtype=torch.float32, requires_grad=True)
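Either way, z is a leaf again, so the gradient lands directly on z.grad; a quick sanity check with the same toy loss:
(z ** 2).sum().backward()
print(z.is_leaf, z.grad)  # True tensor([2., 2.])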