A single agent can usually perform well using a small set of tools to solve a specific problem. However, even powerful models like GPT-4 may struggle when given many different tools to solve a complex problem.
One way to handle complicated tasks is a "divide-and-conquer" strategy: create a specialized agent for each sub-task and route each task to the correct "expert".
In this notebook, we will see how two agents, each given different tools, can work together to solve a problem that requires the use of all available tools.
The code in the notebook is adapted from the LangGraph tutorial: Multi-agent Collaboration.
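To make the pattern concrete before diving into the notebook, here is a minimal sketch of two "experts" and a router wired together with LangGraph. The node names and routing logic are illustrative placeholders, not the notebook's actual code:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    task: str
    result: str


def researcher(state: State) -> dict:
    # Stand-in for an agent equipped with research tools.
    return {"result": f"research notes for: {state['task']}"}


def coder(state: State) -> dict:
    # Stand-in for an agent equipped with a code-execution tool.
    return {"result": f"code for: {state['task']}"}


def route(state: State) -> str:
    # Toy router: a real one would ask an LLM which expert fits the task.
    return "coder" if "code" in state["task"] else "researcher"


workflow = StateGraph(State)
workflow.add_node("researcher", researcher)
workflow.add_node("coder", coder)
workflow.add_conditional_edges(START, route, {"researcher": "researcher", "coder": "coder"})
workflow.add_edge("researcher", END)
workflow.add_edge("coder", END)

graph = workflow.compile()
print(graph.invoke({"task": "write code to plot GDP"})["result"])
```

The notebook builds a richer version of this graph, in which the agents also hand work back and forth to each other instead of each running once.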
- Clone this repository to your local computer by running:

  ```bash
  git clone https://github.com/TCLee/multi-agent-collab
  ```

- You will need conda in order to install the required packages to run the notebook. See the conda documentation for installation instructions.

- Make sure the current working directory is this cloned project's directory:

  ```bash
  cd /path/to/multi-agent-collab
  ```

- Create the environment from the `environment.yml` file:

  ```bash
  conda env create -f environment.yml -p ./env
  ```

  This will create a new environment in a subdirectory of the project directory called `env` (i.e., `project-dir/env`).

- Activate the environment:

  ```bash
  conda activate ./env
  ```
This project makes use of python-dotenv to load the environment variables from a `.env` file. Create a `.env` file in the root directory of this cloned repository (i.e., `project-dir/.env`):

```sh
# Google Gemini API
GOOGLE_API_KEY="your-google-secret-key"

# Optional. Recommended to see what's going on
# under the hood of LangGraph and LangChain.
LANGSMITH_API_KEY="your-langsmith-secret-key"
LANGCHAIN_TRACING_V2="true"
LANGCHAIN_PROJECT="Multi-Agent Collaboration"
```

Fill it in with your own API keys.
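For reference, this is all it takes to pull those variables into the notebook's process environment (a minimal sketch using python-dotenv; the notebook may wrap this slightly differently):

```python
from dotenv import load_dotenv

# Reads project-dir/.env and exports its key-value pairs
# into os.environ for the rest of the session.
load_dotenv()
```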
The LLM that we will use in the notebook is Google's Gemini 1.5 Flash. It is fast and offers a generous free tier for us to play around with.
To use the Gemini API, you'll need an API key. If you do not already have one, create a key in Google AI Studio.
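Before running the full notebook, you can sanity-check your key with a short script like the one below. This is a minimal sketch, assuming the `langchain-google-genai` package is available in the conda environment (check `environment.yml` if the import fails):

```python
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI

load_dotenv()  # makes GOOGLE_API_KEY available to the client

# The same model the notebook uses.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
reply = llm.invoke("Reply with a one-sentence greeting.")
print(reply.content)
```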
Many of the applications you build with LangChain will contain multiple steps with multiple LLM calls. As these applications get more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.
The conda environment includes an installation of Jupyter Lab. Start Jupyter Lab from your terminal:
```bash
jupyter lab
```

In Jupyter Lab, open the notebook `multi-agent-collab.ipynb` and follow the instructions there.