torch.addmm
- torch.addmm(input, mat1, mat2, out_dtype=None, *, beta=1, alpha=1, out=None) → Tensor
Performs a matrix multiplication of the matrices mat1 and mat2. The matrix input is added to the final result.

If mat1 is a (n × m) tensor and mat2 is a (m × p) tensor, then input must be broadcastable with a (n × p) tensor and out will be a (n × p) tensor.

alpha and beta are scaling factors on the matrix product between mat1 and mat2 and the added matrix input, respectively:

out = beta × input + alpha × (mat1 @ mat2)

If beta is 0, then the content of input will be ignored, and nan and inf in it will not be propagated.

For inputs of type FloatTensor or DoubleTensor, arguments beta and alpha must be real numbers; otherwise they should be integers.

This operation has support for arguments with sparse layouts. If input is sparse the result will have the same layout, and if out is provided it must have the same layout as input.

Warning
Sparse support is a beta feature and some layout(s)/dtype/device combinations may not be supported, or may not have autograd support. If you notice missing functionality please open a feature request.
This operator supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will use different precision for backward.
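The scaling formula above can be checked directly against an explicit beta * input + alpha * (mat1 @ mat2) computation; this is a minimal illustration of that identity, not part of the official documentation:

```python
import torch

torch.manual_seed(0)
M = torch.randn(2, 4)      # added matrix, broadcastable with the (2, 4) product
mat1 = torch.randn(2, 3)   # (n x m)
mat2 = torch.randn(3, 4)   # (m x p)

# out = beta * input + alpha * (mat1 @ mat2)
out = torch.addmm(M, mat1, mat2, beta=0.5, alpha=2.0)
expected = 0.5 * M + 2.0 * (mat1 @ mat2)
print(torch.allclose(out, expected))  # True
```

The same fused form exists for batched matrices as torch.baddbmm; the single-op fusion avoids materializing the intermediate product separately.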
- Parameters
input (Tensor) – matrix to be added
mat1 (Tensor) – the first matrix to be matrix multiplied
mat2 (Tensor) – the second matrix to be matrix multiplied
out_dtype (dtype, optional) – the dtype of the output tensor. Supported only on CUDA, and only for a torch.float32 output given torch.float16/torch.bfloat16 input dtypes
- Keyword Arguments
beta (Number, optional) – multiplier for input (β)
alpha (Number, optional) – multiplier for mat1 @ mat2 (α)
out (Tensor, optional) – the output tensor.
Example:
```
>>> M = torch.randn(2, 3)
>>> mat1 = torch.randn(2, 3)
>>> mat2 = torch.randn(3, 3)
>>> torch.addmm(M, mat1, mat2)
tensor([[-4.8716,  1.4671, -1.3746],
        [ 0.7573, -3.9555, -2.8681]])
```
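The beta=0 special case described above can also be observed directly: nan values in input do not leak into the output because input is skipped entirely rather than multiplied by zero. A small sketch:

```python
import torch

inp = torch.full((2, 2), float('nan'))  # input consisting only of nan
mat1 = torch.ones(2, 2)
mat2 = torch.ones(2, 2)

# With beta=0, input is ignored, so the nans are not propagated;
# the result is just alpha * (mat1 @ mat2) = a matrix of 2s.
out = torch.addmm(inp, mat1, mat2, beta=0)
print(out.isnan().any().item())  # False
```

Note that beta=0 behaves differently from a plain 0 * inp + mat1 @ mat2 expression, where 0 * nan would itself be nan.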