tf.experimental.async_scope
Context manager for grouping async operations.
@tf_contextlib.contextmanager
tf.experimental.async_scope()
Ops and function calls inside the scope can return before their actual execution finishes. When the async scope exits, a synchronization barrier is automatically added to ensure that all async ops and function calls have completed, raising exceptions if async execution ended in an error state.
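For instance, a minimal sketch of the exit barrier (tensor shapes chosen arbitrarily for illustration): ops dispatched inside the scope may return before their kernels finish, and exiting the scope synchronizes so the results are safe to use afterwards.

import tensorflow as tf

x = tf.random.uniform([1024, 1024])
with tf.experimental.async_scope():
  # These matmuls may be dispatched asynchronously and return before
  # the underlying computation has finished executing.
  y = tf.matmul(x, x)
  z = tf.matmul(y, y)
# Exiting the scope adds a synchronization barrier, so by this point `z`
# has completed (or any deferred error has been raised).
print(z.shape)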
Users may write the following code to asynchronously invoke `train_step_fn` and log the `loss` metric every `num_steps` steps in a training loop. `train_step_fn` internally consumes data using `iterator.get_next()` and may throw an OutOfRangeError when it runs out of data. In that case:
try:
  with tf.experimental.async_scope():
    for _ in range(num_steps):
      # Step function updates the metric `loss` internally
      train_step_fn()
except tf.errors.OutOfRangeError:
  tf.experimental.async_clear_error()
logging.info('loss = %s', loss.numpy())
Yields
    Context manager for grouping async operations.