You can use torch.stack:
torch.stack(li, dim=0)
after the for loop will give you a torch.Tensor of that size.
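A minimal sketch of this, assuming `li` is a list of 1-D tensors of length 768 (hypothetical sizes):

```python
import torch

# Hypothetical: a list of 1-D tensors, as collected inside a for loop.
li = [torch.randn(768) for _ in range(3)]

# Stack along a new leading dimension (dim=0).
stacked = torch.stack(li, dim=0)
print(stacked.shape)  # torch.Size([3, 768])
```

Each tensor in the list must have the same shape for torch.stack to work.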
Note that if you know in advance the size of the final tensor, you can allocate an empty tensor beforehand and fill it in the for loop:
x = torch.empty(size=(len(items), 768))
for i in range(len(items)):
    x[i] = calc_result
This is usually faster than building a list and stacking afterwards, since it avoids the extra copy that torch.stack performs.
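Both approaches produce the same result. A runnable sketch, with a hypothetical `items` list standing in for the per-iteration results:

```python
import torch

# Hypothetical inputs: four 768-dim tensors computed one per iteration.
items = [torch.randn(768) for _ in range(4)]

# Preallocate the output and fill it row by row.
x = torch.empty(size=(len(items), 768))
for i, item in enumerate(items):
    x[i] = item  # `item` stands in for the per-iteration result

# Same result as stacking a list after the loop.
assert torch.equal(x, torch.stack(items, dim=0))
```

Note that torch.empty leaves the memory uninitialized, so every row must be written before `x` is read.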