# Telephonic-Grade Voice AI: WebRTC-Ready Framework
Piopiy AI is an open-source, telephony-grade framework for building real-time voice agents that blend large language models (LLMs), automatic speech recognition (ASR), and text-to-speech (TTS) engines. Purchase numbers, configure agents, and let Piopiy handle call routing, audio streaming, and connectivity while you focus on conversation design. Combine cloud or open-source providers to tailor the voice stack to your latency, privacy, and cost targets.
## Installation

Requires Python 3.10+.

```shell
pip install piopiy-ai
```

To install extras for the providers you plan to use:

```shell
pip install "piopiy-ai[cartesia,deepgram,openai]"
```

Set provider API keys in the environment (for example, `OPENAI_API_KEY`).
## Quick Start

```python
import asyncio
import os

from piopiy.agent import Agent
from piopiy.voice_agent import VoiceAgent
from piopiy.services.deepgram.stt import DeepgramSTTService
from piopiy.services.openai.llm import OpenAILLMService
from piopiy.services.cartesia.tts import CartesiaTTSService


async def create_session(agent_id, call_id, from_number, to_number, metadata=None):
    print(f"Incoming call {call_id} from {from_number} to {to_number}")
    if metadata:
        print(f"Call Metadata: {metadata}")

    voice_agent = VoiceAgent(
        instructions="You are an advanced voice AI.",
        greeting="Hello! How can I help you today?",
    )

    stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"))
    llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"))
    tts = CartesiaTTSService(api_key=os.getenv("CARTESIA_API_KEY"))

    await voice_agent.Action(stt=stt, llm=llm, tts=tts)
    await voice_agent.start()


async def main():
    agent = Agent(
        agent_id=os.getenv("AGENT_ID"),
        agent_token=os.getenv("AGENT_TOKEN"),
        create_session=create_session,
        debug=True,  # Enable debug logging (optional, default: False)
    )
    await agent.connect()


if __name__ == "__main__":
    asyncio.run(main())
```

You can control the verbosity of the logs using the `debug` parameter in the `Agent` constructor.
- `debug=True`: Enables INFO-level logging and prints full debug information, including internal events and third-party provider logs (e.g., Deepgram, websockets). Useful during development.
- `debug=False` (default): Sets logging to ERROR level and suppresses noisy third-party logs. This keeps your console clean and focused on your application's output (such as metadata).
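The effect of the flag is roughly what a plain `logging` setup would do. The sketch below is a conceptual illustration of the behavior described above, not Piopiy's actual internals:

```python
import logging

def configure_logging(debug: bool) -> None:
    """Rough equivalent of what a debug flag like this typically toggles."""
    level = logging.INFO if debug else logging.ERROR
    logging.getLogger().setLevel(level)
    # Quiet chatty third-party loggers when not debugging.
    for name in ("websockets", "deepgram"):
        logging.getLogger(name).setLevel(level)

# Default behavior: only errors reach the console.
configure_logging(debug=False)
```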
The create_session function receives metadata as a dictionary if it was passed when initiating the call.
- Automatic Parsing: Piopiy automatically parses JSON metadata strings into Python dictionaries.
- Key-Value Access: You can access properties directly, e.g., `metadata.get('customer_name')`.
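For illustration, the automatic parsing step is equivalent to a `json.loads` call followed by dictionary access. This is a minimal sketch; the field names are hypothetical:

```python
import json

# Metadata as it might be passed, as a JSON string, when initiating a call.
raw_metadata = '{"customer_id": "C-1023", "customer_name": "Ada"}'

# Piopiy hands your session factory the already-parsed dictionary.
metadata = json.loads(raw_metadata)

customer_name = metadata.get("customer_name")  # "Ada"
missing = metadata.get("not_present")          # None, no KeyError
```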
```python
async def create_session(agent_id, call_id, metadata=None, **kwargs):
    if metadata:
        customer_id = metadata.get("customer_id")
        print(f"Handling call for customer: {customer_id}")
```

Piopiy AI supports 40+ provider integrations across STT, LLM, and TTS services.

### STT Providers
| Provider | Speed | Accuracy | Best For |
|---|---|---|---|
| Deepgram | ⚡⚡⚡ | ⭐⭐⭐ | Real-time, low latency |
| AssemblyAI | ⚡⚡ | ⭐⭐⭐ | High accuracy |
| Azure Speech | ⚡⚡ | ⭐⭐ | Enterprise, budget |
| Google Cloud | ⚡⚡ | ⭐⭐⭐ | Multi-language |
| Gladia | ⚡⚡ | ⭐⭐ | Real-time |
| Speechmatics | ⚡⚡ | ⭐⭐⭐ | Enterprise |
| OpenAI Whisper | ⚡ | ⭐⭐⭐ | High accuracy |
| Local Whisper | ⚡ | ⭐⭐⭐ | Privacy, offline |
### LLM Providers

| Provider | Speed | Quality | Best For |
|---|---|---|---|
| Groq | ⚡⚡⚡ | ⭐⭐ | Ultra-fast responses |
| Cerebras | ⚡⚡⚡ | ⭐⭐ | Ultra-fast inference |
| OpenAI | ⚡⚡ | ⭐⭐⭐ | Best overall quality |
| Anthropic Claude | ⚡⚡ | ⭐⭐⭐ | Complex reasoning |
| Google Gemini | ⚡⚡ | ⭐⭐⭐ | Multimodal |
| Mistral | ⚡⚡ | ⭐⭐⭐ | European AI |
| DeepSeek | ⚡⚡ | ⭐⭐ | Cost-effective |
| Perplexity | ⚡⚡ | ⭐⭐⭐ | Search-augmented |
| Together AI | ⚡⚡ | ⭐⭐ | Open-source models |
| Fireworks | ⚡⚡⚡ | ⭐⭐ | Fast inference |
| OpenRouter | ⚡⚡ | ⭐⭐⭐ | Multi-provider access |
| Ollama | ⚡ | ⭐⭐ | Local/offline |
### TTS Providers

| Provider | Speed | Quality | Best For |
|---|---|---|---|
| Cartesia | ⚡⚡⚡ | ⭐⭐⭐ | Ultra-low latency |
| ElevenLabs | ⚡⚡ | ⭐⭐⭐ | Highest quality |
| PlayHT | ⚡⚡ | ⭐⭐⭐ | Voice cloning |
| LMNT | ⚡⚡⚡ | ⭐⭐⭐ | Low latency |
| Deepgram Aura | ⚡⚡⚡ | ⭐⭐ | Fast, budget-friendly |
| Azure | ⚡⚡ | ⭐⭐ | Enterprise |
| Google | ⚡⚡ | ⭐⭐ | Multi-language |
| OpenAI | ⚡⚡ | ⭐⭐⭐ | Good quality |
| Hume AI | ⚡⚡ | ⭐⭐⭐ | Empathic voice |
| Murf.ai | ⚡⚡ | ⭐⭐⭐ | Professional voices |
See `example/providers/` for complete examples of all providers.
Recommended stacks:

- Ultra-Low Latency (< 500 ms response time): Deepgram (STT) + Groq (LLM) + Cartesia (TTS)
- Premium Quality (best accuracy and naturalness): AssemblyAI (STT) + Claude 3.5 Sonnet (LLM) + ElevenLabs (TTS)

## Documentation

- Getting Started - Installation, setup, and your first voice agent
- Developer Guide - Core concepts, building agents, and advanced features
- API Reference - Complete API documentation
- Telephony Setup - Phone numbers, deployment, and production best practices
- Supported Providers - 40+ LLM, STT, and TTS providers
- Examples - Code examples and use cases
- New to Piopiy? Start with Getting Started
- Building your agent? Read the Developer Guide
- Need API details? Check the API Reference
- Deploying to production? Follow Telephony Setup
Piopiy AI supports advanced features like switching providers mid-call (e.g., swapping TTS voices or STT models based on user commands).
Check out the Switching Providers Examples to see how to implement dynamic provider switching with `ServiceSwitcher`.
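The underlying pattern is an active-service registry. This pure-Python sketch illustrates the idea only; `ServiceSwitcher`'s real API in Piopiy may differ:

```python
class SimpleServiceSwitcher:
    """Holds interchangeable services and routes calls to the active one.

    Illustrative sketch of the switching pattern, not Piopiy's ServiceSwitcher.
    """

    def __init__(self, services: dict):
        self._services = dict(services)
        self._active = next(iter(self._services))  # first entry starts active

    def switch_to(self, name: str) -> None:
        if name not in self._services:
            raise KeyError(f"Unknown service: {name}")
        self._active = name

    @property
    def active(self):
        return self._services[self._active]


# Hypothetical stand-ins for two TTS backends.
switcher = SimpleServiceSwitcher({"cartesia": "voice-a", "elevenlabs": "voice-b"})
switcher.switch_to("elevenlabs")  # e.g., triggered by a user command mid-call
```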
Piopiy AI supports 40+ providers. Here are some of the most popular ones:
- LLM: OpenAI, Anthropic, Google Gemini, Groq, unsloth (via Ollama)
- STT: Deepgram, Speechmatics, Google, Azure, AssemblyAI, Whisper
- TTS: ElevenLabs, Cartesia, PlayHT, Azure, Google, Rime
See the full list of Supported Providers.
Enable interruption handling with Silero voice activity detection:
```shell
pip install "piopiy-ai[silero]"
```

Silero VAD detects speech during playback, allowing callers to interrupt the agent.
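Conceptually, a VAD flags audio frames that contain speech. The naive energy gate below is only an illustration of that idea; Silero uses a trained neural model and is far more robust:

```python
def frame_energy(samples):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in samples) / len(samples)

def is_speech(samples, threshold=0.01):
    """Naive energy-based VAD decision for a single frame.

    Illustrative only: real VADs such as Silero replace the fixed
    threshold with a model trained on speech data.
    """
    return frame_energy(samples) > threshold

silence = [0.0] * 160        # a quiet 10 ms frame at 16 kHz
speech = [0.3, -0.25] * 80   # a loud frame of the same length
```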
Pair Piopiy's realtime orchestration with open-source engines across the full speech stack:
| Layer | Default | Alternatives |
|---|---|---|
| LLM | Ollama running llama3.1 (or another local model) | LM Studio, GPT4All via Ollama-compatible APIs |
| ASR | WhisperSTTService with Whisper small/medium models | mlx-whisper for Apple silicon |
| TTS | ChatterboxTTSService pointed at a self-hosted Chatterbox TTS server | Piper, XTTS, Kokoro |
Install the optional dependencies and runtimes:
```shell
pip install "piopiy-ai[whisper]"
# Install and run Ollama separately: https://ollama.ai
# Start the Chatterbox TTS WebSocket server (https://github.com/piopiy-ai/chatterbox-tts)
```

Example session factory using the open-source trio:
```python
from piopiy.voice_agent import VoiceAgent
from piopiy.services.whisper.stt import WhisperSTTService
from piopiy.services.ollama.llm import OLLamaLLMService
from piopiy.services.opensource.chatterbox.tts import ChatterboxTTSService


async def create_session():
    voice_agent = VoiceAgent(
        instructions="You are a helpful local-first voice assistant.",
        greeting="Hi there! Running fully on open-source models today.",
    )

    stt = WhisperSTTService(model="small")
    llm = OLLamaLLMService(model="llama3.1")  # points to your local Ollama runtime
    tts = ChatterboxTTSService(base_url="ws://localhost:6078")

    await voice_agent.Action(stt=stt, llm=llm, tts=tts, vad=True)
    await voice_agent.start()
```

Swap in other open-source providers such as Piper, XTTS, or Kokoro for TTS, and adjust the Chatterbox base URL or voice ID for your deployment. You can also run Whisper on Apple silicon with the mlx-whisper extra. Piopiy's abstraction layer lets you mix these with managed services whenever needed.
Connect phone calls in minutes using the Piopiy dashboard:
1. Sign in at dashboard.piopiy.com and purchase a phone number.
2. Create a voice AI agent to receive `AGENT_ID` and `AGENT_TOKEN`.
3. Use those credentials with the SDK for instant connectivity.

No SIP setup or third-party telephony vendors are required; Piopiy handles the calls so you can focus on your agent logic.
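Once you have the credentials from the dashboard, export them before starting your agent. The values and the script name below are placeholders:

```shell
export AGENT_ID="your-agent-id"
export AGENT_TOKEN="your-agent-token"

# Then run your session script, e.g.:
# python agent.py
```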
Thanks to Pipecat for making client SDK implementation easy.