Base async callback handler.

``AsyncCallbackHandler()``

Run when the model starts running.
This method is called for non-chat models (regular text completion LLMs). If
you're implementing a handler for a chat model, use
``on_chat_model_start`` instead.

Run when a chat model starts running.

This method is called for chat models. If you're implementing a handler for
a non-chat model, use ``on_llm_start`` instead.
When overriding this method, the signature must include the two
required positional arguments ``serialized`` and ``messages``. Avoid
using ``*args`` in your override: doing so causes an ``IndexError``
in the fallback path when the callback system converts messages
to prompt strings for ``on_llm_start``. Always declare the
signature explicitly:
.. code-block:: python

    async def on_chat_model_start(
        self,
        serialized: dict[str, Any],
        messages: list[list[BaseMessage]],
        **kwargs: Any,
    ) -> None:
        raise NotImplementedError  # triggers fallback to on_llm_start
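The fallback path described above can be sketched as a self-contained illustration. This is not LangChain's actual dispatch code: the ``dispatch_chat_model_start`` helper and the minimal ``BaseMessage`` stand-in are hypothetical, showing only why the explicit ``serialized`` / ``messages`` signature matters when the system falls back to ``on_llm_start``:

```python
import asyncio
from typing import Any


class BaseMessage:
    """Minimal stand-in for langchain_core's BaseMessage (hypothetical)."""

    def __init__(self, content: str) -> None:
        self.content = content


class Handler:
    """A handler that implements only on_llm_start, not on_chat_model_start."""

    def __init__(self) -> None:
        self.prompts: list[str] = []

    async def on_chat_model_start(
        self,
        serialized: dict[str, Any],
        messages: list[list[BaseMessage]],
        **kwargs: Any,
    ) -> None:
        raise NotImplementedError  # triggers fallback to on_llm_start

    async def on_llm_start(
        self, serialized: dict[str, Any], prompts: list[str], **kwargs: Any
    ) -> None:
        self.prompts = prompts


async def dispatch_chat_model_start(
    handler: Handler,
    serialized: dict[str, Any],
    messages: list[list[BaseMessage]],
) -> None:
    # If the handler does not implement on_chat_model_start, flatten each
    # batch of messages into a single prompt string and fall back to
    # on_llm_start -- this is the fallback path the warning above refers to.
    try:
        await handler.on_chat_model_start(serialized, messages)
    except NotImplementedError:
        prompts = ["\n".join(m.content for m in batch) for batch in messages]
        await handler.on_llm_start(serialized, prompts)


handler = Handler()
asyncio.run(
    dispatch_chat_model_start(
        handler, {}, [[BaseMessage("hi"), BaseMessage("there")]]
    )
)
print(handler.prompts)  # ['hi\nthere']
```

The explicit positional parameters let the dispatching code pass ``serialized`` and ``messages`` through unambiguously; a ``*args`` signature would hide them from that path.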
Run on a new output token. Only available when streaming is enabled.
Applies to both chat models and non-chat models (legacy text completion LLMs).
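As an illustration of the streaming hook, the sketch below accumulates tokens as they arrive. The ``StreamCollector`` class is a hypothetical minimal handler, not part of the library; it assumes only that the callback receives one token string per call:

```python
import asyncio
from typing import Any


class StreamCollector:
    """Hypothetical handler that buffers streamed tokens."""

    def __init__(self) -> None:
        self.tokens: list[str] = []

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Called once per token while streaming is enabled.
        self.tokens.append(token)


async def main() -> str:
    collector = StreamCollector()
    # Simulate a model streaming its output in chunks.
    for token in ["Hel", "lo", ", ", "world"]:
        await collector.on_llm_new_token(token)
    return "".join(collector.tokens)


text = asyncio.run(main())
print(text)  # Hello, world
```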
Run when the model ends running.
Run when LLM errors.
Run when a chain starts running.
Run when a chain ends running.
Run when chain errors.
Run when the tool starts running.
Run when the tool ends running.
Run when tool errors.
Run on arbitrary text.
Run on a retry event.
Run on agent action.
Run on agent end.
Run on retriever start.
Run on retriever end.
Run on retriever error.
Override to define a handler for custom events.
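A custom-event override can be as simple as recording the event name and payload. The sketch below is a hedged illustration, assuming only that the handler receives a name string and an arbitrary data object; ``EventRecorder`` is hypothetical, not a library class:

```python
import asyncio
from typing import Any


class EventRecorder:
    """Hypothetical handler that records custom events as (name, data) pairs."""

    def __init__(self) -> None:
        self.events: list[tuple[str, Any]] = []

    async def on_custom_event(self, name: str, data: Any, **kwargs: Any) -> None:
        # Store each dispatched event for later inspection.
        self.events.append((name, data))


recorder = EventRecorder()
asyncio.run(recorder.on_custom_event("progress", {"pct": 50}))
print(recorder.events)  # [('progress', {'pct': 50})]
```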
Whether to raise an error if an exception occurs.
Whether to run the callback inline.
Whether to ignore LLM callbacks.
Whether to ignore retry callbacks.
Whether to ignore chain callbacks.
Whether to ignore agent callbacks.
Whether to ignore retriever callbacks.
Whether to ignore chat model callbacks.
Whether to ignore custom events.