tf.keras.ops.einsum
Evaluates the Einstein summation convention on the operands.
tf.keras.ops.einsum(
subscripts, *operands
)
Args
    subscripts: Specifies the subscripts for summation as a comma-separated
        list of subscript labels. An implicit (classical Einstein summation)
        calculation is performed unless the explicit indicator "->" is
        included, along with subscript labels of the precise output form.
    operands: The operands to compute the Einstein sum of.
Returns
    The result of the calculation based on the Einstein summation convention.
Example:
from keras.src import ops
a = ops.arange(25).reshape(5, 5)
b = ops.arange(5)
c = ops.arange(6).reshape(2, 3)
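The subscripts can be given implicitly (repeated labels are summed) or explicitly with the "->" indicator. As a brief sketch of the two modes (an addition, not part of the original examples), applied to the vector b defined above:
ops.einsum("i,i", b, b)
30
ops.einsum("i,j -> ij", b, b)
array([[ 0,  0,  0,  0,  0],
       [ 0,  1,  2,  3,  4],
       [ 0,  2,  4,  6,  8],
       [ 0,  3,  6,  9, 12],
       [ 0,  4,  8, 12, 16]])
The first call sums over the repeated index i (an inner product); the second uses "->" to request the full outer product as the output form.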
Trace of a matrix:
ops.einsum("ii", a)
60
ops.einsum(a, [0, 0])
60
ops.trace(a)
60
ops.einsum("ii -> i", a)
array([ 0, 6, 12, 18, 24])
ops.einsum(a, [0, 0], [0])
array([ 0, 6, 12, 18, 24])
ops.diag(a)
array([ 0, 6, 12, 18, 24])
Sum over an axis:
ops.einsum("ij -> i", a)
array([ 10, 35, 60, 85, 110])
ops.einsum(a, [0, 1], [0])
array([ 10, 35, 60, 85, 110])
ops.sum(a, axis=1)
array([ 10, 35, 60, 85, 110])
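Omitting all output labels sums over every axis at once; as an added sketch (not in the original page), the same pattern reduces a to a scalar:
ops.einsum("ij ->", a)
300
ops.sum(a)
300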
For higher-dimensional tensors, summing over a single axis can be done
with an ellipsis:
ops.einsum("...j -> ...", a)
array([ 10, 35, 60, 85, 110])
ops.einsum(a, [..., 1], [...])
array([ 10, 35, 60, 85, 110])
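To make the higher-dimensional case concrete, here is an added sketch (not in the original page) using a hypothetical 3-D tensor d; the ellipsis keeps the two leading axes and sums only the last one:
d = ops.arange(24).reshape(2, 3, 4)
ops.einsum("...j -> ...", d)
array([[ 6, 22, 38],
       [54, 70, 86]])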
Compute a matrix transpose or reorder any number of axes:
ops.einsum("ji", c)
array([[0, 3],
[1, 4],
[2, 5]])
ops.einsum("ij -> ji", c)
array([[0, 3],
[1, 4],
[2, 5]])
ops.einsum(c, [1, 0])
array([[0, 3],
[1, 4],
[2, 5]])
ops.transpose(c)
array([[0, 3],
[1, 4],
[2, 5]])
Matrix-vector multiplication:
ops.einsum("ij, j", a, b)
array([ 30, 80, 130, 180, 230])
ops.einsum(a, [0, 1], b, [1])
array([ 30, 80, 130, 180, 230])
ops.einsum("...j, j", a, b)
array([ 30, 80, 130, 180, 230])
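einsum also expresses matrix-matrix products; as an added sketch (not in the original page), contracting the shared j axis of c with itself computes the product of c with its own transpose:
ops.einsum("ij, kj -> ik", c, c)
array([[ 5, 14],
       [14, 50]])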