
Layercode Conversational AI Backend (Express)

This open source project demonstrates how to build a real-time voice agent using Layercode Voice Agents, with an Express backend to drive the agent's responses.

Read the companion guide: Express Backend Guide

Features

  • Browser or Phone Voice Interaction: Users can speak to the agent directly from their browser or phone (see Layercode docs for more details on connecting these channels)
  • Session State: Conversation history is stored in memory. You can easily switch to a database or Redis to persist sessions.
  • LLM Integration: User queries are sent to Gemini 2.0 Flash.
  • Streaming Responses: LLM responses are streamed back to Layercode, which handles the conversion to speech and playback to the user.

How It Works

  1. Frontend:
    See the Layercode docs for details about connecting a Web Voice Agent frontend or Phone channel to the agent. This backend can also be tested in the Layercode Dashboard Playground.

  2. Transcription & Webhook:
    Layercode transcribes user speech. For each complete message, it sends a webhook containing the transcribed text to the /agent endpoint.

  3. Backend Processing:
    The transcribed text is sent to the LLM (Gemini 2.0 Flash) to generate a response.

  4. Streaming & Speech Synthesis:
    As soon as the LLM starts generating a response, the backend streams the output back as SSE messages to Layercode, which converts it to speech and delivers it to the frontend for playback in real time (a sketch of this flow follows this list).
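
To make the flow concrete, here is a minimal sketch of what the /agent endpoint can look like. It is an illustration under assumptions, not the code in index.ts: the request field names, the response.tts / response.end event names and the generateAssistantReply helper are placeholders, and the exact webhook payload and SSE event schema are documented in the Express Backend Guide.

// sketch.ts - illustrative only; field and event names are placeholders
import express from "express";

const app = express();
app.use(express.json());

// In-memory session state, keyed by the session id Layercode sends with each webhook.
// Swap this Map for a database or Redis if you need sessions to survive restarts.
type Turn = { role: "user" | "assistant"; content: string };
const sessions = new Map<string, Turn[]>();

// Stand-in for the Gemini 2.0 Flash call; it just yields a canned chunk.
async function* generateAssistantReply(history: Turn[]): AsyncGenerator<string> {
  yield "Hello from the sketch!";
}

app.post("/agent", async (req, res) => {
  // Hypothetical payload shape: a session id plus the transcribed user text.
  const { session_id: sessionId, text } = req.body;

  const history = sessions.get(sessionId) ?? [];
  history.push({ role: "user", content: text });

  // Stream the reply back as Server-Sent Events while the LLM is still generating.
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");

  let reply = "";
  for await (const chunk of generateAssistantReply(history)) {
    reply += chunk;
    // Each event carries a chunk of text for Layercode to turn into speech.
    res.write(`data: ${JSON.stringify({ type: "response.tts", content: chunk })}\n\n`);
  }

  history.push({ role: "assistant", content: reply });
  sessions.set(sessionId, history);

  res.write(`data: ${JSON.stringify({ type: "response.end" })}\n\n`);
  res.end();
});

app.listen(3001);

Because the response is written chunk by chunk, Layercode can start synthesising audio before the LLM has finished its answer.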

Getting Started

# Clone and enter the repo
$ git clone https://github.com/layercodedev/example-backend-express.git && cd example-backend-express

Requires Bun 1.0+

# Install dependencies
bun install

Edit your .env file to set your environment variables. You'll need to add the following (an example .env follows this list):

  • GOOGLE_GENERATIVE_AI_API_KEY - Your Google AI API key
  • LAYERCODE_WEBHOOK_SECRET - Your Layercode agent's webhook secret, found in the Layercode dashboard (go to your agent, click Edit in the Your Backend box and copy the webhook secret shown)
  • LAYERCODE_API_KEY - Your Layercode API key, found in the Layercode dashboard settings
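
For reference, a filled-in .env might look like the following (the values are placeholders, not real keys):

GOOGLE_GENERATIVE_AI_API_KEY=your-google-ai-api-key
LAYERCODE_WEBHOOK_SECRET=your-layercode-webhook-secret
LAYERCODE_API_KEY=your-layercode-api-key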

If running locally, set up a tunnel (we recommend cloudflared, which is free for dev) to your localhost so the Layercode webhook can reach your backend. Follow our tunneling guide here: https://docs.layercode.com/tunnelling
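
For example, once cloudflared is installed, a quick tunnel to the default port looks like this (the command prints a public https URL that forwards to your local server):

# Expose localhost:3001 through a temporary public URL
cloudflared tunnel --url http://localhost:3001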

If you didn't follow the tunneling guide and are deploying this example to the internet, remember to set the Webhook URL in the Layercode dashboard (click Edit in the Your Backend box) to your publicly accessible backend URL.

Now run the backend:

bun run index.ts

The server will listen on port 3001 by default.

The easiest way to talk to your agent is to use the Layercode Dashboard Playground.

Tip: If you don't hear any response from your voice agent, check the Webhook Logs tab in your agent in the Layercode Dashboard to see the response from your backend.

License

MIT
