
Multi-Agent Collaboration

Overview

A single agent can usually perform well using a small set of tools to solve a specific problem. However, even powerful models like GPT-4 may struggle when given many different tools to solve a complex problem.

One way to handle complicated tasks is to divide and conquer: create a specialized agent for each sub-task and route each task to the correct "expert".

In this notebook, we will see how two agents, each given different tools, can work together to solve a problem that requires the use of all available tools.

The code in the notebook is adapted from the LangGraph tutorial: Multi-agent Collaboration.
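The core idea is a shared graph state that both agents read from and write to, with a router that hands work back and forth until one agent signals it is done. The following is a minimal structural sketch of that wiring, assuming the langgraph and langchain-core packages installed by the conda environment described below. The node functions (researcher_node, chart_node) are simple placeholders standing in for the tool-using LLM agents built in the notebook; they are not the notebook's actual code.

import operator
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langgraph.graph import END, StateGraph


# Shared state: both agents append messages here, so each can see
# the other's work. "sender" records who produced the last message.
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    sender: str


def researcher_node(state: AgentState) -> dict:
    # Placeholder for the LLM agent bound to a search tool.
    return {"messages": [AIMessage(content="Here is the data I found.")],
            "sender": "researcher"}


def chart_node(state: AgentState) -> dict:
    # Placeholder for the LLM agent bound to a charting tool.
    return {"messages": [AIMessage(content="Chart created. FINAL ANSWER")],
            "sender": "chart_generator"}


def router(state: AgentState) -> str:
    # Keep handing work to the other agent until one of them
    # signals completion with "FINAL ANSWER".
    last_message = state["messages"][-1]
    return "end" if "FINAL ANSWER" in last_message.content else "continue"


workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher_node)
workflow.add_node("chart_generator", chart_node)
workflow.add_conditional_edges(
    "researcher", router, {"continue": "chart_generator", "end": END})
workflow.add_conditional_edges(
    "chart_generator", router, {"continue": "researcher", "end": END})
workflow.set_entry_point("researcher")

graph = workflow.compile()
result = graph.invoke(
    {"messages": [HumanMessage(content="Plot last year's GDP.")],
     "sender": "user"})
print(result["messages"][-1].content)

Each agent only needs to know its own tools; the shared message list is what lets the "experts" build on each other's output.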

Setup

Git

Clone this repository to your local computer by running:

git clone https://github.com/TCLee/multi-agent-collab

Conda

  1. You will need conda to install the packages required to run the notebook. See Installing conda for instructions.

  2. Make sure the current working directory is this cloned project's directory:

    cd /path/to/multi-agent-collab
  3. Create the environment from the environment.yml file:

    conda env create -f environment.yml -p ./env

    This creates a new environment in a subdirectory of the project directory named env (i.e., project-dir/env).

  4. Activate the environment:

    conda activate ./env

Environment variables

This project uses python-dotenv to load environment variables from a .env file.

Create a .env file in the root directory of this cloned repository (i.e., project-dir/.env):

# Google Gemini API
GOOGLE_API_KEY="your-google-secret-key"

# Optional. Recommended to see what's going on 
# under the hood of LangGraph and LangChain.
LANGSMITH_API_KEY="your-langsmith-secret-key"
LANGCHAIN_TRACING_V2="true"
LANGCHAIN_PROJECT="Multi-Agent Collaboration"

Fill it in with your own API keys.
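At runtime, the notebook loads these variables with python-dotenv. A minimal sketch of what that call looks like (the notebook may wrap it differently):

from dotenv import load_dotenv

# Read project-dir/.env and export its key-value pairs
# as environment variables for the current process.
load_dotenv()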

Google Gemini

The LLM we will use in the notebook is Google's Gemini 1.5 Flash. It is fast and offers a generous free tier to experiment with.

To use the Gemini API, you'll need an API key. If you do not already have one, create a key in Google AI Studio.

Get an API key
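With GOOGLE_API_KEY set, the model can be created through LangChain's langchain-google-genai integration. A minimal sketch, assuming that package is available in the conda environment (the notebook may configure the model with different parameters):

from langchain_google_genai import ChatGoogleGenerativeAI

# Assumes GOOGLE_API_KEY is already in the environment
# (loaded via python-dotenv as shown above).
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

response = llm.invoke("Say hello in one short sentence.")
print(response.content)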

(Optional) LangSmith

Many of the applications you build with LangChain will contain multiple steps and multiple LLM calls. As these applications become more complex, it is crucial to be able to inspect exactly what is going on inside your chain or agent. The best way to do this is with LangSmith.

Jupyter Notebook

The conda environment includes an installation of Jupyter Lab. Start Jupyter Lab from your terminal:

jupyter lab

In Jupyter Lab, open the notebook multi-agent-collab.ipynb and follow the instructions there.
