A comprehensive interview preparation tool that combines real-time audio transcription, manual AI analysis control, and Chrome extension integration for LeetCode, HackerRank, and CoderPad. Features advanced token optimization to minimize API costs while providing intelligent interview insights.
- Manual AI Analysis Control: Click "Analyze Conversation" button to trigger LLM analysis (prevents unwanted token usage)
- Smart Transcription Filtering: Automatically filters out "BLANK AUDIO", filler words, and low-quality speech
- Conversation Buffer Management: Manual reset with "Clear Conversation" button and real-time character count
- Chrome Extension Support: Extract interview questions from LeetCode, HackerRank, and CoderPad
- Real-time Audio Processing: Voice Activity Detection (VAD) for efficient transcription
- Token Usage Optimization: Quality filtering prevents expensive LLM calls on poor audio
- Centralized Logging: Cross-platform logging system with configurable log levels
- Performance Metrics: Track transcription and analysis performance (View → Performance Metrics)
- Global Shortcuts: Cmd/Ctrl + Arrow Keys to move window, Cmd/Ctrl + M to randomize both main window and indicator positions, Cmd/Ctrl + Shift + S to start recording, Cmd/Ctrl + Shift + X to stop recording
- Floating Indicator: Always-visible overlay window showing recording status and shortcut reminders
- Responsive UI: Side-by-side layout for controls and analysis
- Hot Reloading: Development mode with automatic restarts
- ESLint Integration: Code quality enforcement across Electron + React stack
- Node.js (v14 or higher)
- npm
- A DashScope API key for AI analysis
- Clone the repository and navigate to the project directory.
- Install dependencies:
  `npm install`
- Download the Whisper model:
  `npx whisper-node download`
  This downloads the required Whisper model files for offline transcription.
- Create a `.env` file in the root directory with your DashScope API key:
  `DASHSCOPE_API_KEY=your_api_key_here`
  You can obtain an API key from DashScope.
- Start the application:
  `npm start`
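The app reads the key from the environment at startup. A minimal sketch of how the key might be loaded in the Electron main process, assuming the `dotenv` package; the actual loading code in `main.js` may differ:

```js
// Load environment variables from .env before anything else (assumes the dotenv package).
require('dotenv').config();

const apiKey = process.env.DASHSCOPE_API_KEY;
if (!apiKey) {
  // Fail fast with a clear message instead of letting analysis requests error later.
  console.error('DASHSCOPE_API_KEY is not set. Add it to your .env file.');
  process.exit(1);
}
```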
- Click "Start Recording" or use global shortcut Cmd/Ctrl+Shift+S to begin recording.
- Speak into your microphone - speech is transcribed in real-time.
- Click "Stop Recording" or use global shortcut Cmd/Ctrl+Shift+X to stop.
- View transcription history in the scrollable bottom section.
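Under the hood, recording relies on browser audio capture in the renderer. A minimal sketch, assuming standard Web Audio APIs (hypothetical helper, not the exact `AudioManager.js`/`VADManager.js` code):

```js
// Request microphone access and wire the stream into the Web Audio API.
async function startMicrophoneCapture() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioContext = new AudioContext();
  const source = audioContext.createMediaStreamSource(stream);

  // An AnalyserNode exposes time-domain samples that a VAD can inspect for speech activity.
  const analyser = audioContext.createAnalyser();
  source.connect(analyser);

  return { stream, audioContext, analyser };
}
```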
InterviewMate runs as a floating window that can be positioned anywhere on your screen using global shortcuts:
- Move Window: Use `Cmd/Ctrl + Arrow Keys` to move the window in any direction (50px steps)
- Random Position: Press `Cmd/Ctrl + M` to randomly reposition the window on your screen
- These shortcuts work even when InterviewMate is not the active window
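A minimal sketch of how such window-movement shortcuts can be registered with Electron's `globalShortcut` module (illustrative only; the real bindings live in `main.js`):

```js
const { app, BrowserWindow, globalShortcut } = require('electron');

const STEP = 50; // pixels per key press, matching the 50px steps described above

function registerMoveShortcuts(win) {
  globalShortcut.register('CommandOrControl+Right', () => {
    const [x, y] = win.getPosition();
    win.setPosition(x + STEP, y);
  });
  globalShortcut.register('CommandOrControl+Left', () => {
    const [x, y] = win.getPosition();
    win.setPosition(x - STEP, y);
  });
  // Up/Down are registered the same way; Cmd/Ctrl + M picks a random position instead.
}

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 900, height: 600 });
  registerMoveShortcuts(win);
});
```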
- No automatic analysis - you control when to spend tokens
- Click "π§ Analyze Conversation" to send current buffer to LLM
- Analysis appears in the right panel with interview insights
- Use "π§Ή Clear Conversation" to reset buffer between topics
- Monitor "Buffer: X chars" to see accumulated conversation
- Install the extension from the `interview-extension/` folder
- Navigate to LeetCode, HackerRank, or CoderPad problems
- Click extension icon and "Extract Question"
- Question data appears in InterviewMate for enhanced analysis
- Frontend: React components with manual AI analysis controls
- Audio Processing: Web Audio API with Voice Activity Detection (VAD)
- Transcription: OpenAI Whisper with quality filtering and artifact removal
- AI Analysis: Qwen 3-Max via DashScope API (manual trigger only)
- IPC: Electron for secure main/renderer communication
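A minimal `preload.js`-style sketch of how a secure IPC bridge is typically set up with `contextBridge`; the channel and method names here are assumptions:

```js
const { contextBridge, ipcRenderer } = require('electron');

// Expose a narrow, explicit API to the renderer instead of the full ipcRenderer.
contextBridge.exposeInMainWorld('api', {
  // Channel names below are illustrative, not necessarily those used by InterviewMate.
  analyzeConversation: (buffer) => ipcRenderer.invoke('analyze-conversation', buffer),
  onTranscription: (callback) =>
    ipcRenderer.on('transcription', (_event, text) => callback(text)),
});
```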
- Content Scripts: Extract interview questions from LeetCode, HackerRank, CoderPad
- Background Service: Handles server communication and data processing
- Popup UI: Simple interface for question extraction
- Local Server: Express.js server for extension → Electron communication
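A hedged sketch of what the local Express endpoint receiving extracted questions might look like; the `/question` route is an assumption, and the port matches the `localhost:8080` mentioned under Troubleshooting:

```js
const express = require('express');

const app = express();
app.use(express.json());

// Hypothetical route: the extension's background script POSTs the extracted question here.
app.post('/question', (req, res) => {
  const { title, platform } = req.body;
  console.log(`Received question from ${platform}: ${title}`);
  // Forward the question into the Electron app (e.g. via an event emitter or IPC).
  res.json({ ok: true });
});

app.listen(8080, () => console.log('InterviewMate local server listening on port 8080'));
```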
- Transcription Filtering: Removes "BLANK AUDIO", filler words, and low-quality segments
- Token Optimization: Manual analysis prevents unwanted API calls
- Centralized Logging: Cross-platform logging with configurable levels
- Performance Metrics: Track transcription and analysis performance
- `npm start`: Start the application in production mode
- `npm run dev`: Start the application in development mode with hot reloading
- `npm run lint`: Run ESLint to check for code issues
- `npm run lint:fix`: Run ESLint and automatically fix fixable issues
interviewmate/
├── src/                      # React frontend
│   ├── App.js                # Main React component
│   ├── AudioManager.js       # Audio processing logic
│   ├── Constants.js          # Application constants
│   ├── LocalServer.js        # Express server for extension comms
│   ├── Logging.js            # Centralized logging system
│   ├── MetricsManager.js     # Performance tracking
│   ├── TranscriptEntry.js    # Transcription display component
│   └── VADManager.js         # Voice Activity Detection
├── interview-extension/      # Chrome extension
│   ├── manifest.json         # Extension configuration
│   ├── background.js         # Service worker
│   ├── content-script.js     # Page data extraction
│   ├── popup.html/js         # Extension UI
│   └── README.md             # Extension documentation
├── static/                   # Static assets
│   ├── InterviewMate.png     # Main interface screenshot
│   └── chrome-extension.png  # Extension screenshot
├── main.js                   # Electron main process
├── preload.js                # IPC security bridge
├── index.html                # Main UI template
├── package.json              # Dependencies and scripts
└── README.md                 # This file
This project uses ESLint to maintain code quality and catch potential issues during development. The linter is configured to handle:
- Main Process: Node.js environment rules for Electron main process
- Renderer Process: React and browser environment rules
- ES6 Modules: Modern JavaScript import/export syntax
- React Hooks: Proper usage of React hooks
Run npm run lint before committing to ensure code quality.
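A minimal `.eslintrc.js` sketch showing how separate environments for the main and renderer processes can be configured; the repository's actual config may differ:

```js
module.exports = {
  root: true,
  extends: ['eslint:recommended'],
  parserOptions: { ecmaVersion: 2022 },
  overrides: [
    {
      // Electron main process and preload script: Node.js environment, CommonJS imports.
      files: ['main.js', 'preload.js'],
      env: { node: true },
      parserOptions: { sourceType: 'script' },
    },
    {
      // React renderer: browser environment, ES6 modules, React and hooks rules.
      files: ['src/**/*.js'],
      env: { browser: true, es2022: true },
      parserOptions: { sourceType: 'module', ecmaFeatures: { jsx: true } },
      extends: ['plugin:react/recommended', 'plugin:react-hooks/recommended'],
    },
  ],
};
```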
InterviewMate includes several features to minimize API costs:
- No automatic LLM calls - you control when analysis happens
- "Analyze Conversation" button prevents unwanted token usage
- Buffer management with manual reset capabilities
- Removes "BLANK AUDIO" artifacts from Whisper
- Filters filler words: "um", "uh", "ah", "er", "hmm", "mhm"
- Quality scoring: Only meaningful speech reaches the conversation buffer
- Annotation cleanup: Removes `[LAUGHTER]`, `(audience)`, etc. from transcripts
- Real-time buffer tracking: See "Buffer: X chars" status
- Performance metrics: Track transcription and analysis times
- Manual control: Clear conversation buffer between topics
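A hedged sketch of the kind of segment filtering described above (hypothetical helper; the real rules live in the transcription pipeline):

```js
const FILLER_WORDS = new Set(['um', 'uh', 'ah', 'er', 'hmm', 'mhm']);

// Returns a cleaned segment, or null if the segment is not worth buffering.
function filterTranscriptSegment(text) {
  // Drop Whisper artifacts such as "BLANK AUDIO" and annotations like [LAUGHTER] or (audience).
  let cleaned = text
    .replace(/\[[^\]]*\]/g, '')
    .replace(/\([^)]*\)/g, '')
    .trim();

  // Remove standalone filler words.
  const words = cleaned
    .split(/\s+/)
    .filter((w) => !FILLER_WORDS.has(w.toLowerCase().replace(/[.,!?]/g, '')));
  cleaned = words.join(' ');

  // Only meaningful speech reaches the conversation buffer.
  return cleaned.length >= 3 ? cleaned : null;
}
```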
The included Chrome extension extracts interview questions for enhanced AI analysis:
- Open Chrome and navigate to `chrome://extensions/`
- Enable "Developer mode" in the top right
- Click "Load unpacked" and select the `interview-extension/` folder
- The extension will appear in your toolbar

Supported platforms:
- LeetCode (`leetcode.com`)
- HackerRank (`hackerrank.com`)
- CoderPad (`coderpad.io`)
- Navigate to any supported interview platform
- Click the InterviewMate extension icon
- Click "Extract Question" to send problem data to the app
- Question appears in InterviewMate for priority analysis
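A hedged sketch of the extraction flow from the popup's point of view; the button id, message type, and `/question` route are assumptions:

```js
// Popup button handler: ask the content script for the question, then forward it to the app.
document.getElementById('extract').addEventListener('click', async () => {
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });

  // The content script is assumed to respond to an "extract-question" message.
  const question = await chrome.tabs.sendMessage(tab.id, { type: 'extract-question' });

  // Send the extracted question to the local InterviewMate server.
  await fetch('http://localhost:8080/question', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(question),
  });
});
```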
InterviewMate uses a centralized logging system across all components:
- `error`: Errors and exceptions
- `warn`: Warnings and potential issues
- `info`: General information (default)
- `debug`: Detailed debugging information
Set the log level using environment variable:
LOG_LEVEL=debug npm start # Show all log levels
LOG_LEVEL=error npm start # Show only errors
- Electron Main Process: Uses CommonJS imports
- React Renderer Process: Uses ES6 imports
- Consistent formatting across all components
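A minimal sketch of a level-aware logger in the spirit of `Logging.js`; the actual implementation may differ:

```js
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };
const currentLevel = LEVELS[process.env.LOG_LEVEL] ?? LEVELS.info; // info is the default

function log(level, message) {
  // Print only messages at or above the configured level.
  if (LEVELS[level] <= currentLevel) {
    console.log(`[${level.toUpperCase()}] ${new Date().toISOString()} ${message}`);
  }
}

log('info', 'Transcription started');  // printed at the default level
log('debug', 'VAD frame accepted');    // printed only when LOG_LEVEL=debug
```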
- Ensure microphone permissions are granted in browser/Electron
- Check that the API key is correctly set in `.env`
- For audio issues, verify Web Audio API support
- Try restarting the app if VAD (Voice Activity Detection) fails
- "Free tier exhausted": Upgrade your DashScope plan or reduce analysis frequency
- No analysis appearing: Click "Analyze Conversation" button (analysis is manual-only)
- Poor analysis quality: Clear conversation buffer and try again with focused speech
- Extension not loading: Ensure "Developer mode" is enabled in `chrome://extensions/`
- Question not extracting: Refresh the interview page and try again
- Connection failed: Ensure InterviewMate app is running on localhost:8080
- High token usage: Use manual analysis mode and clear buffer regularly
- Slow transcription: Check Whisper model download with `npx whisper-node download`
- UI lag: Clear conversation buffer if it gets too large (>10,000 chars)
- Set `LOG_LEVEL=debug` to see detailed logs
- Check console for `[INFO]`, `[ERROR]`, `[WARN]` messages
- Use "Performance Metrics" from the View menu to track performance
In development mode (NODE_ENV=development), the indicator window will automatically open DevTools after 1 second. This makes debugging the indicator window much easier.
How it works:
- Set `NODE_ENV=development` in your `.env` file
- Run the app with `npm start` or `npm run dev`
- The indicator window will auto-open DevTools after loading
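A hedged sketch of how that auto-open behaviour can be implemented in the main process (`indicatorWindow` is the assumed BrowserWindow variable):

```js
// After the indicator window has loaded, open DevTools with a short delay in development.
indicatorWindow.webContents.once('did-finish-load', () => {
  if (process.env.NODE_ENV === 'development') {
    setTimeout(() => {
      indicatorWindow.webContents.openDevTools({ mode: 'detach' });
    }, 1000);
  }
});
```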
Manual DevTools Shortcut:
- Press `Cmd+Shift+U` (Mac) or `Ctrl+Shift+U` (Windows/Linux) to manually open DevTools on the indicator window
The floating indicator window shows:
- Recording Status: "Ready" (gray) or "Recording..." (red)
- Active Shortcuts: Highlights which shortcuts are currently available
- Global Shortcuts: Shows all available shortcuts for reference
The indicator window is always visible and updates in real-time as you start/stop recording.
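A hedged sketch of how the main process might push status updates to the indicator window in real time; the `recording-status` channel name is an assumption:

```js
// Main process: notify the indicator window whenever recording starts or stops.
function updateIndicator(indicatorWindow, isRecording) {
  indicatorWindow.webContents.send('recording-status', { isRecording });
}

// Indicator renderer (via a preload bridge): react to status changes as they arrive.
// ipcRenderer.on('recording-status', (_event, { isRecording }) => {
//   statusElement.textContent = isRecording ? 'Recording...' : 'Ready';
// });
```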

