What is the difference between torch.tensor and torch.Tensor?

In PyTorch, torch.Tensor is the main tensor class, so all tensors are instances of torch.Tensor.

When you call torch.Tensor(), you get an empty tensor without any data.

In contrast, torch.tensor is a function which returns a tensor. The documentation says:

torch.tensor(data, dtype=None, device=None, requires_grad=False) → Tensor

Constructs a tensor with data.
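A minimal sketch of what this signature means in practice: torch.tensor copies the given data and infers the dtype from it unless one is passed explicitly.

```python
import torch

# dtype is inferred from the Python data (ints -> torch.int64)
a = torch.tensor([1, 2, 3])
print(a.dtype)  # torch.int64

# dtype and requires_grad can be set explicitly via the documented parameters
b = torch.tensor([1.0, 2.0], dtype=torch.float64, requires_grad=True)
print(b.dtype)  # torch.float64
```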

This also explains why it is not a problem to create an empty tensor instance of torch.Tensor without data by calling:

tensor_without_data = torch.Tensor()
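A quick sketch of what this call actually returns: an empty tensor holding zero elements, with PyTorch's default dtype (float32).

```python
import torch

# torch.Tensor() constructs an empty tensor with no data
tensor_without_data = torch.Tensor()
print(tensor_without_data)          # tensor([])
print(tensor_without_data.numel())  # 0
print(tensor_without_data.dtype)    # torch.float32
```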

But on the other hand:

tensor_without_data = torch.tensor()

will raise an error:

TypeError                                 Traceback (most recent call last)
<ipython-input-12-ebc3ceaa76d2> in <module>()
----> 1 torch.tensor()
TypeError: tensor() missing 1 required positional arguments: "data"

But in general, there is no reason to choose torch.Tensor over torch.tensor. Also, torch.Tensor lacks a docstring.

Similar behaviour for creating a tensor without data, like with torch.Tensor(), can be achieved by using:

torch.tensor(())

which results in:

tensor([])
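A minimal check of this equivalence: passing an empty tuple to torch.tensor yields an empty tensor with the same element count as torch.Tensor().

```python
import torch

# Both produce an empty tensor holding zero elements
empty_a = torch.Tensor()
empty_b = torch.tensor(())
print(empty_b)  # tensor([])
print(empty_a.numel() == empty_b.numel())  # True
```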

Reference: https://stackoverflow.com/questions/51911749/what-is-the-difference-between-torch-tensor-and-torch-tensor/51911888


