A web application built with NestJS to visualize GitHub Copilot CLI request/response logs from .jsonl files. Features a Chrome DevTools-like interface with intelligent parsing of OpenAI chat completion requests and responses.
- Drag & Drop Interface - Simply drop your `.jsonl` log file onto the page
- Chrome DevTools-like UI - Familiar interface with request list and detail panels
- Streaming Response Merging - Automatically merges chunked/streaming responses
- Large File Support - Handles log files up to 50MB
- Smart Request Parsing - Structured display of messages, tools, and metadata
- Message Summary - Shows counts of user/system/assistant/tool messages at a glance
- Tool Call Navigation - Click to navigate between tool calls and their results
- Collapsible Messages - Expand/collapse individual messages for better readability
- Token Usage Display - Input/output token counts for each request
- Reasoning Field Support - Displays `reasoning_text`, `reasoning_opaque`, and other special fields
- Path Filtering - Filter requests by API endpoint (multi-select dropdown)
- Persistent Filters - Your filter selections are remembered across sessions
- Request Highlighting - Currently selected request is visually highlighted
- Interactive Tool IDs - Click tool call IDs to jump between assistant messages and tool results
- Tabbed Detail View - Separate tabs for request and response details
- Collapsible Sections - All cards can be expanded/collapsed for focused reading
- Status Badges - Color-coded HTTP status codes
- Timing Information - Request duration and timestamps
- Syntax Highlighting - Pretty-printed JSON with proper formatting
```
copilot-log-visualizer/
├── src/
│   ├── main.ts            # Application entry point
│   ├── app.module.ts      # Main application module
│   ├── app.controller.ts  # HTTP request controller
│   └── log.service.ts     # Log parsing logic
├── public/
│   ├── index.html         # Main HTML page
│   └── app.js             # Frontend JavaScript
├── scripts/               # Utility scripts
├── mitm/                  # Mitmproxy configuration for capturing logs
├── package.json
└── tsconfig.json
```
- Node.js (v16 or higher)
- npm
- Clone or download this repository
- Install dependencies:
  ```bash
  npm install
  ```
- Build the TypeScript code:
  ```bash
  npm run build
  ```
- Start the server:
  ```bash
  npm start
  ```
- Open your browser and navigate to http://localhost:3001
- Follow `mitm/README.md` to proxy Copilot CLI requests and write logs, then upload the `.jsonl` logs to the website.
To capture logs from GitHub Copilot CLI, see mitm/README.md for detailed setup instructions.
- Upload Log File: Drag and drop your `.jsonl` log file onto the page, or click to browse
- Browse Requests: All HTTP requests appear in the left sidebar with method, status, and timing
- View Details: Click on any request to view its full details
- Switch Tabs: Use "Request" and "Response" tabs to see headers and body
- Open Filter: Click the "Filter" dropdown at the top of the request list
- Select Paths: Check/uncheck API paths to show only relevant requests
- Persistent Selection: Your filter choices are saved and restored on reload
- Auto-Reset: Filters automatically clear when loading a file with different paths
For OpenAI chat completion requests with tool calls:
- From Tool Call to Result: Click the tool call ID in an assistant message to jump to the corresponding tool message
- From Result to Tool Call: Click the tool_call_id in a tool message to jump back to the specific tool call
- Auto-Expand: Messages are automatically expanded when navigating
- Highlight: Target element is temporarily highlighted for easy identification
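A minimal sketch of how this navigation could be wired up in `app.js`, assuming each message card exposes its ID through a hypothetical `data-tool-call-id` attribute (the actual markup and class names may differ):

```typescript
// Sketch of tool-call navigation. Assumes (hypothetically) that each
// message card carries a data-tool-call-id attribute.
function jumpToToolCall(toolCallId: string): void {
  const target = document.querySelector<HTMLElement>(
    `[data-tool-call-id="${toolCallId}"]`
  );
  if (!target) return;

  // Auto-expand: open the card before scrolling to it.
  target.classList.remove('collapsed');
  target.scrollIntoView({ behavior: 'smooth', block: 'center' });

  // Temporarily highlight the target for easy identification.
  target.classList.add('nav-highlight');
  setTimeout(() => target.classList.remove('nav-highlight'), 1500);
}
```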
The application uses mitmproxy to intercept and capture HTTP/HTTPS traffic from GitHub Copilot CLI:
- Proxy Setup: Mitmproxy runs as a man-in-the-middle proxy on `127.0.0.1:8080`
- Traffic Capture: A custom Python script (`mitm-to-json.py`) captures each request/response in real time
- JSON Conversion: Each HTTP transaction is converted to a JSON object with:
  - Request/response headers and body
  - Timing information (start timestamp and completion time)
  - HTTP method, URL, and status code
- JSONL Output: Each transaction is written as a single line to `out.jsonl`
The captured log file is then loaded into this visualizer for analysis. See mitm/README.md for detailed setup instructions.
- LogService: Parses `.jsonl` files, aggregates chunked responses, and handles Server-Sent Events (SSE)
- AppController: Provides endpoints for serving the UI and parsing log files
- JSON Parsing: Automatically detects and parses JSON in request/response bodies
- Drag & Drop: File upload with drag-and-drop support
- Request List: Displays all parsed requests with method, status, URL, and timing
- Detail View: Shows complete request/response information including headers and body
- Responsive: Clean, modern interface inspired by Chrome DevTools
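As an illustration of the JSON parsing step above, a minimal detection helper might look like this (a sketch; the actual `log.service.ts` code may differ):

```typescript
// Sketch of "detect and parse JSON" for request/response bodies
// (illustrative helper, not the exact log.service.ts implementation).
function tryParseJson(body: string): unknown {
  const trimmed = body.trim();
  // Only attempt parsing on strings that look like JSON documents.
  if (!trimmed.startsWith('{') && !trimmed.startsWith('[')) return body;
  try {
    return JSON.parse(trimmed);
  } catch {
    return body; // Leave non-JSON bodies untouched.
  }
}
```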
- `GET /` - Serves the main HTML page
- `POST /parse` - Accepts log content and returns parsed requests
The application expects .jsonl files where each line is a JSON object with this structure:
```json
{
  "timestamp": "ISO 8601 timestamp",
  "completed": "ISO 8601 timestamp (optional)",
  "url": "https://api.example.com/path",
  "method": "POST",
  "status_code": 200,
  "request": {
    "headers": {},
    "body": "string or JSON"
  },
  "response": {
    "headers": {},
    "body": "string, JSON, or SSE format"
  }
}
```

Multiple log entries for the same request (identified by timestamp and URL) are automatically merged:
- Streaming response chunks are combined intelligently
- Duration is calculated from first to last chunk
- All response data is preserved
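A minimal sketch of that grouping step, with illustrative type and function names (the real merge logic lives in `log.service.ts`):

```typescript
// Sketch of grouping log entries by (timestamp, url) before merging.
// Names are illustrative, not the actual log.service.ts implementation.
interface LogEntry {
  timestamp: string;
  url: string;
  response?: { body?: unknown };
}

function groupEntries(entries: LogEntry[]): Map<string, LogEntry[]> {
  const groups = new Map<string, LogEntry[]>();
  for (const entry of entries) {
    // Entries sharing timestamp + URL belong to the same logical request.
    const key = `${entry.timestamp}|${entry.url}`;
    const group = groups.get(key) ?? [];
    group.push(entry);
    groups.set(key, group);
  }
  return groups;
}
```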
For /chat/completions requests with streaming responses:
- Delta Merging: Combines partial `content` strings across chunks
- Tool Calls: Merges tool call deltas by index, building complete function calls
- Field Preservation: All fields (including reasoning_text, reasoning_opaque, refusal, etc.) are preserved
- Usage Statistics: Token counts from the final chunk are included
- Structured View: Merged response is rendered with expandable Choices and Metadata cards
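A sketch of the delta-merging idea under the usual OpenAI streaming chunk shape (illustrative names; not the exact `log.service.ts` implementation):

```typescript
// Sketch of OpenAI streaming delta merging. Each SSE chunk carries a
// partial delta; content concatenates, tool calls merge by index.
interface ToolCallDelta {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
}

interface Delta {
  content?: string;
  tool_calls?: ToolCallDelta[];
}

function mergeDeltas(deltas: Delta[]) {
  let content = '';
  const toolCalls: { id?: string; name: string; arguments: string }[] = [];

  for (const delta of deltas) {
    // Partial content strings are concatenated across chunks.
    if (delta.content) content += delta.content;

    // Tool call fragments are merged by index into complete calls.
    for (const tc of delta.tool_calls ?? []) {
      const call = (toolCalls[tc.index] ??= { name: '', arguments: '' });
      if (tc.id) call.id = tc.id;
      if (tc.function?.name) call.name += tc.function.name;
      if (tc.function?.arguments) call.arguments += tc.function.arguments;
    }
  }
  return { content, toolCalls };
}
```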
The request list shows context-specific information:
For /chat/completions requests:
- HTTP method (left) | Message/tool summary (center) | Status code (right)
- URL path on second line
- Timing and token usage on third line
- Message counts: `U2/S1/A3/R2/T5` = 2 user, 1 system, 3 assistant, 2 tool messages, 5 tools (see the sketch after this list)
- Token usage: `1234/567` = input tokens / output tokens
For other requests:
- HTTP method (left) | Status code (right)
- URL path on second line
- Timing on third line
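A sketch of how the summary string could be computed, following the U/S/A/R/T abbreviations above (illustrative, not the actual `app.js` code):

```typescript
// Sketch of the message/tool summary shown in the request list.
interface ChatMessage {
  role: 'user' | 'system' | 'assistant' | 'tool';
}

function summarize(messages: ChatMessage[], toolCount: number): string {
  const counts = { user: 0, system: 0, assistant: 0, tool: 0 };
  for (const m of messages) counts[m.role]++;
  // e.g. 2 user, 1 system, 3 assistant, 2 tool messages, 5 tools
  // renders as "U2/S1/A3/R2/T5".
  return `U${counts.user}/S${counts.system}/A${counts.assistant}/R${counts.tool}/T${toolCount}`;
}
```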
Request messages are rendered with special handling:
- Content: Main message content (text or JSON)
- Refusal: Displayed with red styling if present
- Reasoning Fields: Any reasoning-related fields shown with blue styling
- Tool Calls: Rendered as structured cards with function name, arguments, and clickable IDs
- Tool Messages: Show tool_call_id in header for navigation back to the original call
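For illustration, the styling rules above could be driven by a small classifier like this (hypothetical helper; names are assumptions):

```typescript
// Sketch of per-field classification used when rendering a message.
function classifyField(key: string): 'reasoning' | 'refusal' | 'other' {
  if (key.startsWith('reasoning')) return 'reasoning'; // blue styling
  if (key === 'refusal') return 'refusal'; // red styling
  return 'other';
}
```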
Filter selections are stored in browser localStorage:
- Selections survive page reloads
- Maintained when loading new files (if paths match)
- Automatically reset only when no selected paths exist in new data
- Stored as JSON array of path strings
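A sketch of this persistence logic (the storage key shown is an assumption, not necessarily the one `app.js` uses):

```typescript
// Sketch of filter persistence via localStorage.
const FILTER_KEY = 'copilot-log-visualizer:selected-paths'; // hypothetical key

function saveFilters(paths: string[]): void {
  // Stored as a JSON array of path strings.
  localStorage.setItem(FILTER_KEY, JSON.stringify(paths));
}

function restoreFilters(availablePaths: string[]): string[] {
  const saved: string[] = JSON.parse(localStorage.getItem(FILTER_KEY) ?? '[]');
  // Keep only saved paths that exist in the newly loaded file;
  // reset when none of them match.
  return saved.filter((p) => availablePaths.includes(p));
}
```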
All content sections can be collapsed/expanded:
- Request View: General, Messages, Tools, Metadata sections
- Response View: Choices, Metadata, Merged Body, Raw Body sections
- Messages: Individual message cards can be collapsed
- Smart Defaults: Most-used sections expanded by default
- Status Badges: Green for 2xx, red for errors
- Method Badges: Color-coded HTTP methods
- Token Counts: Tooltips explain abbreviations
- Hover Effects: Subtle highlighting on interactive elements
- Selection Highlight: Active request shown with blue background
- main.ts: Application bootstrap, configures Express with 50MB body limit
- app.module.ts: Main application module
- app.controller.ts: HTTP endpoints (GET / and POST /parse)
- log.service.ts: Core log parsing and SSE handling logic
- index.html: UI structure and all CSS styles
- app.js: All frontend logic, including:
  - File upload handling
  - Request list rendering and filtering
  - Request/response detail rendering
  - OpenAI-specific parsing and merging
  - Navigation and interaction handlers
  - LocalStorage persistence
- `GET /` - Serves the main HTML page
- `POST /parse` - Accepts `{ content: string }` and returns `ParsedRequest[]`
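For reference, an example of calling the parse endpoint from a script (assumes the server is running locally on port 3001 and Node 18+ for the global `fetch`):

```typescript
// Example client for POST /parse. Assumes the server is running on
// http://localhost:3001 (as configured in main.ts) and Node 18+.
import { readFileSync } from 'node:fs';

async function parseLog(path: string) {
  const content = readFileSync(path, 'utf8');
  const res = await fetch('http://localhost:3001/parse', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  });
  return res.json(); // ParsedRequest[]
}

parseLog('out.jsonl').then((requests) => console.log(requests.length));
```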
The application expects .jsonl files where each line is a JSON object:
```json
{
  "timestamp": "2024-01-20T08:00:00.000Z",
  "completed": "2024-01-20T08:00:01.234Z",
  "url": "https://api.openai.com/v1/chat/completions",
  "method": "POST",
  "status_code": 200,
  "request": {
    "headers": { "Content-Type": "application/json" },
    "body": { "messages": [...], "tools": [...] }
  },
  "response": {
    "headers": { "Content-Type": "text/event-stream" },
    "body": "data: {...}\n\ndata: {...}\n\n" // or JSON object
  }
}
```

The tool handles multiple response body formats:
- JSON Object: Standard JSON response
- SSE Format: Server-Sent Events with `data:` lines
- Array: Pre-parsed SSE chunks as an array of objects
- Chunked Array: Multiple response bodies merged into an array
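A sketch of how the SSE format could be normalized into chunk objects (illustrative; the real handling lives in `log.service.ts`):

```typescript
// Sketch of parsing an SSE-format response body into chunk objects.
function parseSseBody(body: string): unknown[] {
  const chunks: unknown[] = [];
  for (const line of body.split('\n')) {
    if (!line.startsWith('data:')) continue;
    const payload = line.slice('data:'.length).trim();
    if (payload === '[DONE]') continue; // OpenAI stream terminator
    try {
      chunks.push(JSON.parse(payload));
    } catch {
      chunks.push(payload); // Keep non-JSON data lines verbatim.
    }
  }
  return chunks;
}
```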
Contributions are welcome! Please feel free to submit issues or pull requests.
- Make your changes to the TypeScript/HTML/CSS files
- Build: `npm run build`
- Test: `npm start` and open http://localhost:3001
- Verify all existing features still work
MIT
Built for visualizing GitHub Copilot CLI logs with a focus on OpenAI chat completion requests.