torch.Tensor.to

- Tensor.to(*args, **kwargs) → Tensor
Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs).

Note

If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device.
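A minimal sketch of this behavior on CPU (no CUDA assumed); the no-op call returns the very same object, while copy=True forces a fresh tensor:

>>> import torch
>>> t = torch.randn(2, 2)                  # dtype=torch.float32, device=cpu
>>> t.to(torch.float32) is t               # already matches: self is returned
True
>>> t.to(torch.float32, copy=True) is t    # copy=True always allocates a new tensor
False
>>> t.to(torch.float64).dtype              # an actual conversion returns a copy
torch.float64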
Note

If self requires gradients (requires_grad=True) but the target dtype specified is an integer type, the returned tensor implicitly has requires_grad=False. This is because only tensors with floating-point or complex dtypes can require gradients.
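A minimal sketch of the requires_grad behavior described in the note above:

>>> import torch
>>> x = torch.randn(3, requires_grad=True)
>>> x.to(torch.float64).requires_grad      # float -> float keeps gradient tracking
True
>>> x.to(torch.int64).requires_grad        # float -> integer silently drops it
False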
Here are the ways to call to:

- to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor
Returns a Tensor with the specified dtype.

- Args:
memory_format (torch.memory_format, optional): the desired memory format of the returned Tensor. Default: torch.preserve_format.
Note

According to C++ type conversion rules, converting a floating point value to an integer type will truncate the fractional part. If the truncated value cannot fit into the target type (e.g., casting torch.inf to torch.long), the behavior is undefined and the result may vary across platforms.
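A small sketch of the truncation rule for in-range values (out-of-range inputs such as torch.inf are omitted, since their result is platform-dependent):

>>> import torch
>>> f = torch.tensor([2.7, -2.7, 0.9])
>>> f.to(torch.long)                       # fractional parts truncated toward zero
tensor([ 2, -2,  0])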
- to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor

Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking is set to True, the function attempts to perform the conversion asynchronously with respect to the host, if possible. This asynchronous behavior applies to both pinned and pageable memory. However, caution is advised when using this feature. For more information, refer to the tutorial on good usage of non_blocking and pin_memory; a brief sketch follows the arguments below. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

- Args:
memory_format (torch.memory_format, optional): the desired memory format of the returned Tensor. Default: torch.preserve_format.
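A hedged sketch of the non_blocking usage mentioned above, assuming a CUDA device is available at 'cuda:0'; pinned (page-locked) host memory is what lets the host-to-device copy overlap with other host work:

>>> import torch
>>> cuda0 = torch.device('cuda:0')
>>> batch = torch.randn(1024, 1024, pin_memory=True)   # page-locked host memory
>>> gpu_batch = batch.to(cuda0, non_blocking=True)     # copy issued asynchronously w.r.t. the host
>>> torch.cuda.synchronize()                           # wait before relying on the result
>>> gpu_batch.device
device(type='cuda', index=0)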
- to(other, non_blocking=False, copy=False) → Tensor
Returns a Tensor with the same torch.dtype and torch.device as the Tensor other. When non_blocking is set to True, the function attempts to perform the conversion asynchronously with respect to the host, if possible. This asynchronous behavior applies to both pinned and pageable memory. However, caution is advised when using this feature. For more information, refer to the tutorial on good usage of non_blocking and pin_memory. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
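A minimal sketch of a common use of this overload: converting an input to match the dtype and device of an existing tensor (here, a layer's weight) without spelling them out; torch.nn.Linear is used purely for illustration:

>>> import torch
>>> linear = torch.nn.Linear(4, 2).to(torch.float64)   # reference parameters: float64, cpu
>>> x = torch.randn(8, 4)                              # float32, cpu
>>> x = x.to(linear.weight)                            # match the weight's dtype and device
>>> x.dtype
torch.float64
>>> linear(x).shape
torch.Size([8, 2])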
Example:
>>> tensor = torch.randn(2, 2)  # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64)

>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], device='cuda:0')

>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')

>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')