A Model Context Protocol (MCP) server that orchestrates structured roundtable discussions among expert personas to generate PRP-ready specifications for software development. Features both traditional fixed-round discussions and AI-driven dynamic consensus evaluation.
PentaForge transforms a simple programming need into a comprehensive, actionable specification through an automated roundtable discussion. It simulates a complete agile team meeting where:
- A Key User describes pain points and acceptance criteria
- A Business Analyst defines requirements and constraints
- A Product Owner prioritizes features and sets success metrics
- A Scrum Master coordinates delivery and manages risks
- A Solutions Architect designs the technical implementation
- An AI Moderator evaluates consensus and guides resolution (dynamic rounds only)
The result is two markdown documents ready for use with PRPs-agentic-eng:
- DISCUSSION.md: Full transcript with consensus metrics (when applicable)
- REQUEST.md: Official demand specification with quality indicators
- 🎭 Expert Personas: 5 core personas + AI Moderator for consensus evaluation
- 🧠 AI-Powered Discussions: Dynamic, contextual responses using OpenAI, Anthropic, or local models
- 📋 Project Context Integration: Reads CLAUDE.md and docs/ files for project-specific recommendations
- 🌍 Internationalization: Supports English and Portuguese (auto-detected)
- 📝 PRP-Ready Output: Compatible with PRPs-agentic-eng workflow
- 🐳 Docker Support: Run locally or in containers
- 🔄 MCP Protocol: Integrates with Claude Code and other MCP clients
- 🛡️ Reliable Fallback: Automatic hardcoded responses when AI is unavailable
- ⚙️ Multi-Provider Support: Configurable AI backends with environment variables
- 🎯 AI-Driven Termination: Discussions continue until 85%+ team agreement is reached
- 🔄 Adaptive Rounds: 2-10 rounds based on topic complexity (vs fixed 3 rounds)
- 🤖 Smart Moderation: AI Moderator guides discussions and resolves conflicts
- 📊 Consensus Tracking: Real-time agreement levels and conflict identification
- 📈 Quality Metrics: Enhanced output with decision evolution and confidence scores
- ⚡ Token Optimized: Progressive summarization keeps usage within 20% of baseline
- 🔒 Backward Compatible: Fixed 3-round mode remains default (opt-in for dynamic)
- ⚡ Async Execution: Non-blocking background processing for long discussions
- 🎯 Smart Consensus Failure Detection: Automatically identifies when discussions have unresolved issues
- 📝 Interactive UNRESOLVED_ISSUES.md Generation: Creates structured markdown files with persona positions and voting interfaces
- ☑️ Checkbox-Based Resolution: Users select preferred solutions through simple markdown checkboxes
- 🌐 Bilingual Support: Full English and Portuguese support for resolution workflow
- 🔒 Validation & Security: Input sanitization and comprehensive resolution validation
- 🔄 Seamless Re-processing: Generate final specifications from user-resolved files
# Clone the repository
git clone https://github.com/yourusername/pentaforge.git
cd pentaforge
# Install dependencies
npm install
# Build the TypeScript code
npm run build
# Run the server
npm start
# or
node dist/server.js
# Build the Docker image
docker build -t pentaforge:latest .
# Run with volume mapping for output persistence
docker run -i --rm -v $(pwd)/PRPs/inputs:/app/PRPs/inputs pentaforge:latest
docker run -i --rm -v ${PWD}/PRPs/inputs:/app/PRPs/inputs pentaforge:latest
PentaForge personas are powered by AI to generate dynamic, contextual responses. The system supports multiple AI providers with automatic fallback to ensure reliability.
- OpenAI (GPT models) - gpt-4o-mini, gpt-4, gpt-3.5-turbo
- Anthropic (Claude models) - claude-3-haiku-20240307, claude-3-sonnet-20240229
- Ollama (Local models) - mistral:latest, deepseek-coder:latest, llama3.2:3b, etc.
Configure AI providers using these environment variables:
# AI Provider Configuration
AI_PROVIDER=ollama # 'openai', 'anthropic', or 'ollama'
AI_API_KEY=your_api_key # Required for OpenAI/Anthropic
AI_BASE_URL=http://localhost:11434 # Custom endpoint (optional)
AI_MODEL=mistral:latest # Default model name (can be overridden per call)
AI_TEMPERATURE=0.7 # Response creativity (0-1)
AI_MAX_TOKENS=500 # Response length limit
Recommended models:
- OpenAI: gpt-4o-mini (fast, cost-effective)
- Anthropic: claude-3-haiku-20240307 (efficient, reliable)
- Ollama: mistral:latest (local, privacy-focused)
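As a rough illustration of how the environment variables above could map to a resolved provider configuration, here is a minimal sketch. The names (`resolveConfig`, `DEFAULTS`) and the default base URLs for OpenAI/Anthropic are illustrative assumptions, not PentaForge's actual internals:

```typescript
// Hypothetical sketch: map the documented AI_* environment variables to a
// provider configuration. Defaults mirror the values documented above.
type Provider = 'openai' | 'anthropic' | 'ollama';

interface AIConfig {
  provider: Provider;
  apiKey?: string;
  baseUrl: string;
  model: string;
  temperature: number;
  maxTokens: number;
}

// Base URLs for hosted providers are assumptions for illustration only.
const DEFAULTS: Record<Provider, { baseUrl: string; model: string }> = {
  openai: { baseUrl: 'https://api.openai.com/v1', model: 'gpt-4o-mini' },
  anthropic: { baseUrl: 'https://api.anthropic.com', model: 'claude-3-haiku-20240307' },
  ollama: { baseUrl: 'http://localhost:11434', model: 'mistral:latest' },
};

function resolveConfig(env: Record<string, string | undefined>): AIConfig {
  const provider = (env.AI_PROVIDER ?? 'ollama') as Provider;
  const apiKey = env.AI_API_KEY;
  // OpenAI and Anthropic need an API key; Ollama runs locally without one.
  if (provider !== 'ollama' && !apiKey) {
    throw new Error(`AI_API_KEY is required for provider "${provider}"`);
  }
  return {
    provider,
    apiKey,
    baseUrl: env.AI_BASE_URL ?? DEFAULTS[provider].baseUrl,
    model: env.AI_MODEL ?? DEFAULTS[provider].model,
    temperature: Number(env.AI_TEMPERATURE ?? 0.7),
    maxTokens: Number(env.AI_MAX_TOKENS ?? 500),
  };
}
```

Note how the per-call `model` parameter (described next) would simply override the `model` field resolved here.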
You can specify a different model for individual roundtable calls, useful when you have multiple Ollama models:
{
"prompt": "Create a REST API for user management",
"model": "deepseek-coder:latest",
"dryRun": true
}
This overrides the default model for that specific discussion.
Pass environment variables to Docker:
# Using OpenAI
docker run -i --rm \
-e AI_PROVIDER=openai \
-e AI_API_KEY=your_openai_key \
-e AI_MODEL=gpt-4o-mini \
-v $(pwd)/PRPs/inputs:/app/PRPs/inputs \
pentaforge:latest
# Using Anthropic
docker run -i --rm \
-e AI_PROVIDER=anthropic \
-e AI_API_KEY=your_anthropic_key \
-e AI_MODEL=claude-3-haiku-20240307 \
-v $(pwd)/PRPs/inputs:/app/PRPs/inputs \
pentaforge:latest
# Using local Ollama (default)
docker run -i --rm \
-e AI_PROVIDER=ollama \
-e AI_BASE_URL=http://host.docker.internal:11434 \
-v $(pwd)/PRPs/inputs:/app/PRPs/inputs \
pentaforge:latest
When AI providers are unavailable or fail:
- ✅ Personas automatically use hardcoded responses
- ✅ System continues to function normally
- ✅ No interruption to workflow
- ✅ Quality specifications still generated
This ensures PentaForge is always reliable, whether you have AI configured or not.
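The fallback behavior can be pictured as a thin wrapper around the AI call. This is a sketch, not PentaForge's actual code (the real logic lives in `src/lib/aiService.ts`); the function names here are hypothetical:

```typescript
// Hypothetical fallback wrapper: try the configured AI provider, and on
// any failure (network error, bad key, missing model) return a
// deterministic hardcoded persona response so the roundtable keeps going.
type GenerateFn = (prompt: string) => Promise<string>;

async function respondWithFallback(
  aiGenerate: GenerateFn,
  hardcodedResponse: string,
  prompt: string,
): Promise<{ text: string; usedFallback: boolean }> {
  try {
    const text = await aiGenerate(prompt);
    return { text, usedFallback: false };
  } catch {
    // Swallow the provider error and degrade gracefully.
    return { text: hardcodedResponse, usedFallback: true };
  }
}
```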
To use local AI models with Ollama:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Download models
ollama pull mistral:latest # General purpose model (~4GB)
ollama pull deepseek-coder:latest # Code-focused model (~1GB)
# Verify it's running
ollama list
# PentaForge will connect automatically
Register PentaForge with Claude Code to use it as an MCP tool:
# Using Docker
claude mcp add pentaforge -- docker run -i --rm -v ${PWD}/PRPs/inputs:/app/PRPs/inputs pentaforge:latest
# Using a local build
claude mcp add pentaforge -- node /path/to/pentaforge/dist/server.js
Once registered with Claude Code, you can use PentaForge by having a natural conversation. Claude Code will automatically call the run_roundtable tool when appropriate.
Describe your development need and explicitly request the MCP tool:
You: "My TodoApp does not persist the data. As a user, I need to persist the data using LocalStorage so that my todos don't disappear when I refresh the page.
Please use the PentaForge MCP server to run a roundtable discussion and generate a comprehensive specification for this requirement. You MUST provide any .md files from my project (especially CLAUDE.md and any docs/ files) as context to the MCP server using the claudeMd and docsContext parameters."
Claude Code: I'll help you create a comprehensive specification for adding LocalStorage persistence to your TodoApp. Let me use the PentaForge MCP server to organize a roundtable discussion with expert personas.
[Claude Code calls the run_roundtable tool from PentaForge MCP server]
Claude Code: The expert roundtable has completed their discussion! Here's what they recommend:
[Shows the generated DISCUSSION.md and REQUEST.md with detailed specifications, technical recommendations, and implementation guidance for LocalStorage persistence]
If Claude Code doesn't automatically call the tool, be more explicit:
You: "I need you to use the run_roundtable tool from the PentaForge MCP server to analyze this requirement:
My TodoApp does not persist the data. As a user, I need to persist the data using LocalStorage so that my todos don't disappear when I refresh the page.
Please call the run_roundtable tool with this prompt and show me the results. You MUST include any .md files from my project as context - read my CLAUDE.md file and any docs/ files, then provide them using the claudeMd and docsContext parameters."
For the best results, always explicitly request that Claude Code provide your project files as context:
- "You MUST provide any .md files as context"
- "Read my CLAUDE.md and docs/ files and include them"
- "Use the claudeMd and docsContext parameters"
Without project context, the MCP will use generic recommendations. With context, you get project-specific, relevant specifications!
If you need to call the tool manually with specific parameters:
{
"prompt": "My TodoApp does not persist the data. As a user, I need to persist the data using LocalStorage.",
"outputDir": "./PRPs/inputs",
"language": "en",
"dryRun": true
}
Example with project context:
{
"prompt": "My TodoApp does not persist the data. As a user, I need to persist the data using LocalStorage.",
"claudeMd": "# My TodoApp\n\nThis is a React application for managing personal tasks.\n\n## Current Architecture\n- Frontend: React 18 with TypeScript\n- State Management: useState hooks\n- Storage: Currently in-memory only (loses data on refresh)\n- Styling: Tailwind CSS",
"docsContext": [
{
"path": "docs/components.md",
"content": "# Components\n\n## TodoList\nMain component that renders all todos\n- Props: todos[], onToggle(), onDelete()\n- State: Managed by parent App component"
},
{
"path": "docs/data-structure.md",
"content": "# Data Structure\n\n## Todo Object\n```typescript\ninterface Todo {\n id: string;\n text: string;\n completed: boolean;\n createdAt: Date;\n}\n```"
}
],
"dryRun": true
}
- prompt (required): The programming demand or problem statement
- outputDir (optional): Directory for output files (default: ./PRPs/inputs)
- language (optional): Output language - "en" or "pt-BR" (auto-detected from prompt)
- tone (optional): Discussion tone (default: "professional")
- includeAcceptanceCriteria (optional): Include Gherkin scenarios (default: true)
- dryRun (optional): Print to stdout without writing files (default: false)
- model (optional): Override AI model for this call (e.g., mistral:latest, deepseek-coder:latest)
- claudeMd (optional): Content of CLAUDE.md file from the project
- docsContext (optional): Array of documentation files from docs/ directory
- dynamicRounds (optional): Enable AI-driven consensus evaluation (default: false)
- consensusConfig (optional): Configure dynamic behavior (thresholds, rounds, etc.)
- async (optional): Run in background and return immediately (default: false)
- unresolvedIssuesFile (optional): Path to user-resolved UNRESOLVED_ISSUES.md file for final generation
- unresolvedIssuesThreshold (optional): Minimum unresolved issues to trigger interactive workflow (default: 1)
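The parameter list above can be summarized as a TypeScript shape. Field names mirror the documented parameters and defaults; the exact schema used internally may differ:

```typescript
// Input shape implied by the documented run_roundtable parameters.
// Comments note the documented defaults.
interface DocContext {
  path: string;    // e.g. "docs/components.md"
  content: string; // raw markdown content
}

interface ConsensusConfig {
  minRounds?: number;          // default: 2
  maxRounds?: number;          // default: 10
  consensusThreshold?: number; // default: 85 (%)
  conflictTolerance?: number;  // default: 15
  moderatorEnabled?: boolean;  // default: true
}

interface RunRoundtableInput {
  prompt: string;                      // required
  outputDir?: string;                  // default: "./PRPs/inputs"
  language?: 'en' | 'pt-BR';           // auto-detected from prompt
  tone?: string;                       // default: "professional"
  includeAcceptanceCriteria?: boolean; // default: true
  dryRun?: boolean;                    // default: false
  model?: string;                      // per-call AI model override
  claudeMd?: string;                   // CLAUDE.md content
  docsContext?: DocContext[];          // docs/ files
  dynamicRounds?: boolean;             // default: false
  consensusConfig?: ConsensusConfig;
  async?: boolean;                     // default: false
  unresolvedIssuesFile?: string;       // path to resolved UNRESOLVED_ISSUES.md
  unresolvedIssuesThreshold?: number;  // default: 1
}
```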
PentaForge now supports AI-driven dynamic discussions that adapt based on topic complexity and team agreement levels, going beyond the traditional fixed 3-round approach.
Fixed Rounds (Default):
- Traditional 3 rounds with predetermined persona order
- Reliable, predictable, backward-compatible
- Best for simple to moderate complexity topics
Dynamic Rounds (Opt-in):
- AI evaluates consensus after each round
- Continues until 85%+ team agreement OR maximum rounds reached
- AI Moderator guides discussion toward resolution
- Adapts persona ordering based on unresolved issues
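The termination rule described above — continue until the agreement threshold is met, bounded by minimum and maximum rounds — can be sketched as a small predicate. This is an illustration of the documented behavior, not the engine's actual code (which lives in `src/engine/`):

```typescript
// Sketch of the dynamic-round termination rule: always run at least
// minRounds, never exceed maxRounds, and otherwise stop once the
// AI-evaluated agreement reaches the consensus threshold.
interface ConsensusResult {
  agreement: number;          // 0-100 (%)
  unresolvedIssues: string[]; // topics still contested
}

function shouldContinue(
  completedRounds: number,
  result: ConsensusResult,
  opts: { minRounds: number; maxRounds: number; threshold: number },
): boolean {
  if (completedRounds < opts.minRounds) return true;   // enforce the minimum
  if (completedRounds >= opts.maxRounds) return false; // hard cap, no infinite loops
  return result.agreement < opts.threshold;            // stop once consensus is reached
}
```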
✅ Ideal for:
- Complex system designs (microservices, architecture decisions)
- Multi-stakeholder requirements with potential conflicts
- Technical specifications requiring deep exploration
- Situations where thoroughness is more important than speed
⏸️ Stick with Fixed Rounds for:
- Simple feature requests or bug fixes
- Well-defined requirements with clear scope
- Time-sensitive specifications
- Proof-of-concept or exploratory work
{
"prompt": "Design a distributed authentication system with OAuth2, JWT, and RBAC",
"dynamicRounds": true,
"consensusConfig": {
"minRounds": 2, // Minimum discussion rounds (default: 2)
"maxRounds": 8, // Maximum to prevent infinite loops (default: 10)
"consensusThreshold": 90, // Required agreement % to terminate (default: 85)
"conflictTolerance": 10, // Max unresolved issues allowed (default: 15)
"moderatorEnabled": true // Include AI Moderator guidance (default: true)
},
"dryRun": true
}
With dynamic rounds enabled, you get additional insights:
DISCUSSION.md includes:
- 📊 Consensus Evolution: Agreement progression across rounds
- 🎯 Final Consensus Score: Quantified team alignment level
- ⚖️ Conflict Resolution: Documentation of issues resolved
- 📈 Decision Quality: Confidence levels and validation metrics
REQUEST.md includes:
- ✅ Specification Quality Badge: High/Medium based on consensus achieved
- 🔍 Completeness Indicator: Whether all issues were resolved
- 📋 Consensus Summary: Overview of the decision-making process
The dynamic system is optimized for efficiency:
- Average increase: +15% tokens compared to fixed rounds
- Simple topics: Often use FEWER tokens (2 rounds vs 3)
- Complex topics: Use more tokens but deliver higher quality
- Progressive summarization: Prevents token explosion in long discussions
- Smart termination: Stops when consensus is reached, not after fixed rounds
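Progressive summarization can be pictured as keeping the most recent rounds verbatim while collapsing older rounds into a short digest. The real summarizer is AI-driven; this toy version merely truncates, and exists only to show the shape of the idea:

```typescript
// Toy progressive summarization: keep the last `keepVerbatim` rounds
// intact and collapse everything older into one bounded digest line,
// so prompt size stays roughly constant as the discussion grows.
function summarizeHistory(
  rounds: string[],
  keepVerbatim = 2,
  digestChars = 200,
): string[] {
  if (rounds.length <= keepVerbatim) return rounds;
  const older = rounds.slice(0, rounds.length - keepVerbatim);
  const recent = rounds.slice(rounds.length - keepVerbatim);
  const digest =
    `Summary of rounds 1-${older.length}: ` + older.join(' ').slice(0, digestChars);
  return [digest, ...recent];
}
```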
PentaForge now supports non-blocking execution that allows you to continue working while discussions run in the background.
Synchronous (Default):
- Claude waits for the entire discussion to complete
- Blocks other operations until finished
- Returns complete results immediately
Asynchronous (Opt-in):
- Discussion runs in background process
- Returns immediately with execution ID
- Progress updates printed to console
- Files saved when complete
Async Execution for Long Discussions:
{
"prompt": "Design a comprehensive microservices architecture with authentication, logging, monitoring, and deployment pipeline",
"dynamicRounds": true,
"async": true,
"consensusConfig": {
"maxRounds": 8,
"consensusThreshold": 90
}
}
Response (Immediate):
{
"summary": "Roundtable discussion started in background (ID: roundtable_2024-01-15T143022Z_a3x9k2m7q). Processing \"Design a comprehensive microservices...\"",
"timestamp": "2024-01-15T143022Z",
"outputDir": "/path/to/PRPs/inputs",
"isAsync": true,
"executionId": "roundtable_2024-01-15T143022Z_a3x9k2m7q",
"status": "started"
}
Background Updates:
🎉 Roundtable Discussion Completed (ID: roundtable_2024-01-15T143022Z_a3x9k2m7q)
📁 Files saved to: /path/to/PRPs/inputs
- DISCUSSION_2024-01-15T143022Z.md
- REQUEST_2024-01-15T143022Z.md
✅ Ideal for:
- Complex topics requiring many rounds (5+ rounds)
- Dynamic consensus discussions
- When you need to continue other work
- Long-running architectural discussions
⏸️ Use Sync Mode for:
- Simple requests (1-3 rounds expected)
- When you need immediate results
- Quick proof-of-concept discussions
- Testing and development
When dynamic discussions fail to reach consensus due to unresolved issues, PentaForge automatically transitions to an Interactive Resolution Workflow that lets you manually resolve contested points before generating the final specification.
Phase 1: Consensus Failure Detection
- System detects when final consensus metrics show unresolved issues (≥ threshold)
- Instead of generating incomplete REQUEST.md, creates interactive UNRESOLVED_ISSUES.md
- File contains structured presentation of persona positions and voting options
Phase 2: User Resolution
- Review each unresolved issue with expert persona positions and reasoning
- Select preferred approach using markdown checkboxes (exactly one per issue)
- Options include: Accept specific persona approach, "No strong preference", or provide custom solution
- Must resolve ALL issues before proceeding
Phase 3: Final Specification Generation
- Re-run PentaForge with the unresolvedIssuesFile parameter pointing to your resolved file
- System processes your selections and generates final REQUEST.md
- No additional persona discussions needed - uses your resolved decisions
1. Initial Discussion with Consensus Failure
{
"prompt": "Design authentication system with OAuth2, JWT, and complex RBAC requirements",
"dynamicRounds": true,
"consensusConfig": {
"consensusThreshold": 90
}
}
Result: Discussion reaches 85% agreement but has 2 unresolved issues
Round 3: 85% agreement, 2 unresolved issues → Generates UNRESOLVED_ISSUES_2024-01-15T143022Z.md
2. Generated UNRESOLVED_ISSUES.md Structure
---
discussionId: "2024-01-15T143022Z-a3x9k2"
timestamp: "2024-01-15T143022Z"
consensusThreshold: 90
totalIssues: 2
status: "pending"
language: "en"
---
# Unresolved Issues - Interactive Resolution
## Issue 1: JWT Token Expiration Strategy
**Context:** Team disagreed on token lifetime and refresh mechanism approach.
### Expert Positions:
#### SolutionsArchitect
**Position:** Use short-lived access tokens (15 minutes) with refresh tokens
**Reasoning:** Balances security with user experience, industry standard approach
#### BusinessStakeholder
**Position:** Use longer-lived tokens (24 hours) with sliding expiration
**Reasoning:** Reduces server load and improves user experience for trusted environments
### Your Resolution:
- [ ] Accept SolutionsArchitect's approach
- [ ] Accept BusinessStakeholder's approach
- [ ] No strong preference - team decides
- [x] Custom resolution (describe below)
**Custom Resolution:** Use 1-hour access tokens with 7-day refresh tokens. Implement automatic refresh in the frontend. Provide an admin toggle for environment-specific token lifetimes.
## Issue 2: Role Hierarchy Implementation
[Similar structure for second issue...]
3. Process Resolved Issues
{
"prompt": "Design authentication system with OAuth2, JWT, and complex RBAC requirements",
"unresolvedIssuesFile": "./PRPs/inputs/UNRESOLVED_ISSUES_2024-01-15T143022Z.md"
}
Result: Generates final REQUEST.md incorporating your resolved decisions
✅ Triggers Interactive Resolution:
- finalConsensus.unresolvedIssues.length >= unresolvedIssuesThreshold (default: 1)
- Complex technical disagreements between personas
- Business vs. technical trade-off decisions
- Architecture choice conflicts (database, frameworks, patterns)
- Security vs. usability debates
❌ Continues Normal Flow:
- All issues resolved through discussion
- Agreement score meets consensus threshold
- No significant conflicts detected
- Simple implementation decisions
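The trigger condition above amounts to a one-line check on the final consensus. Types here are illustrative; only the condition itself comes from the documentation:

```typescript
// Sketch of the interactive-resolution trigger: fire when the final
// consensus still carries at least `unresolvedIssuesThreshold` issues.
interface FinalConsensus {
  agreement: number;                   // final agreement (%)
  unresolvedIssues: { topic: string }[];
}

function needsInteractiveResolution(
  finalConsensus: FinalConsensus,
  unresolvedIssuesThreshold = 1, // documented default
): boolean {
  return finalConsensus.unresolvedIssues.length >= unresolvedIssuesThreshold;
}
```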
Bilingual Support
- Automatically generates in English or Portuguese based on original discussion
- Localized instructions, error messages, and interface text
Security & Validation
- Input sanitization prevents malicious content injection
- Comprehensive validation ensures all issues are resolved
- Clear error messages guide users to complete resolution
File Format Validation
- YAML front matter with metadata and status tracking
- Structured markdown with consistent formatting
- Checkbox parsing with strict single-selection enforcement
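The single-selection rule can be sketched as a validator over one issue's checkbox block. This is a simplified stand-in for the real parser in `src/lib/unresolvedIssuesParser.ts`:

```typescript
// Simplified single-selection validator for one issue's "Your Resolution"
// block: exactly one "[x]" must be ticked, and ticking the custom option
// requires a non-empty description.
function validateIssueSelection(
  checkboxLines: string[],
  customDescription: string,
): { ok: boolean; error?: string } {
  const checked = checkboxLines.filter(l => /^\s*-\s*\[[xX]\]/.test(l));
  if (checked.length === 0) return { ok: false, error: 'No selection made' };
  if (checked.length > 1) return { ok: false, error: 'Multiple selections' };
  const isCustom = /custom resolution/i.test(checked[0]);
  if (isCustom && customDescription.trim() === '') {
    return { ok: false, error: 'Custom resolution needs a description' };
  }
  return { ok: true };
}
```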
Accept Persona Position: Choose existing expert recommendation
No Strong Preference: Let implementation team decide
Custom Resolution: Provide your own solution with detailed description
# Common validation errors and solutions:
❌ Multiple selections for single issue
💡 Mark exactly one checkbox per issue
❌ Missing custom resolution description
💡 Provide detailed description when selecting custom option
❌ Unresolved issues remaining
💡 Every issue must have exactly one selection
❌ Invalid file format
💡 Don't modify YAML front matter or markdown structure
Customize Resolution Threshold
{
"prompt": "Complex system design...",
"dynamicRounds": true,
"unresolvedIssuesThreshold": 3, // Only trigger if ≥3 unresolved issues
"consensusConfig": {
"consensusThreshold": 85
}
}
Benefits of Interactive Resolution
- 🎯 Human Oversight: Critical decisions get human input where AI consensus fails
- 📈 Quality Assurance: Final specifications reflect real-world constraints and preferences
- 🔄 Iterative Refinement: Resolve complex issues step-by-step rather than accepting incomplete specs
- 🌐 Cultural Adaptation: Bilingual support ensures global team compatibility
- 📝 Audit Trail: Complete record of decisions and reasoning for future reference
PentaForge can use your project's existing documentation to generate more relevant and specific recommendations. When project context is provided, all AI personas will:
- Reference your existing architecture and technology stack
- Suggest solutions that fit your current codebase patterns
- Consider your project's specific constraints and requirements
- Generate implementation details aligned with your established conventions
CLAUDE.md: Project overview, architecture, guidelines, and conventions docs/ directory: API documentation, database schemas, deployment guides, etc.
- Solutions Architect uses architecture info to suggest compatible technical solutions
- Business Analyst references existing features when defining requirements
- Key User considers current user workflows when describing pain points
- Product Owner aligns priorities with existing roadmap items
- Scrum Master factors in current team practices and constraints
- Include relevant sections of CLAUDE.md (architecture, tech stack, conventions)
- Provide key documentation files (API docs, database schemas, setup guides)
- Keep context focused - only include files directly relevant to the task
- Update context when project architecture changes significantly
Note: Claude Code does not yet automatically read project files when calling MCP tools. To provide project context, you currently need to:
- Manual Context (Workaround): Include your project information directly in the conversation:
  You: "Here's my project context: My CLAUDE.md says this is a React app with TypeScript... My docs/api.md shows these endpoints: GET /api/todos, POST /api/todos... Now, my TodoApp does not persist data. As a user, I need LocalStorage persistence."
- Wait for Updates: Future versions of Claude Code may automatically read and provide project files to MCP tools.
The MCP server is ready to receive project context - it's just waiting for Claude Code to provide it!
# Roundtable Discussion
**Timestamp:** 2024-01-15T143022Z
**Input Prompt:** In my Todo app, items are lost on refresh...
## Participants
| Name | Role | Objectives |
|------|------|------------|
| Alex Chen | Key User | Describe pain points; Define acceptance criteria |
| Sarah Mitchell | Business Analyst | Analyze requirements; Identify constraints |
...
## Discussion Transcript
### Round 1
**Sarah Mitchell** (Business Analyst):
> Analyzing the requirement: "In my Todo app, items are lost..."...
### Round 2
...
## Decisions & Rationale
1. Use IndexedDB with Dexie.js for local persistence
2. Implement auto-save every 2 seconds
...
# Demand Specification
## Problem Statement
In my Todo app, items are lost on refresh. I need data persistence...
## Current vs Desired Behavior
...
## Functional Requirements
1. System shall auto-save data every 2 seconds after changes
2. System shall use IndexedDB for local storage
...
## PRP-Ready Artifacts
### Suggested PRP Commands
/prp-base-create PRPs/REQUEST_2024-01-15T143022Z.md
/prp-create-planning PRPs/<base-file>
/prp-create-tasks PRPs/<planning-file>
/prp-execute-tasks PRPs/<tasks-file>
After generating the specification with PentaForge:
1. Create base PRP document:
   /prp-base-create PRPs/REQUEST_<timestamp>.md
2. Generate planning document:
   /prp-create-planning PRPs/<base-file-from-step-1>
3. Create task breakdown:
   /prp-create-tasks PRPs/<planning-file-from-step-2>
4. Execute implementation:
   /prp-execute-tasks PRPs/<tasks-file-from-step-3>
- TZ: Timezone (default: UTC)
- LANG: Language locale (default: en_US.UTF-8)
- LOG_LEVEL: Logging level - DEBUG, INFO, WARN, ERROR (default: INFO)
- PENTAFORGE_OUTPUT_DIR: Override output directory (default: /app/PRPs/inputs)
The Docker container runs as a non-root user (UID 1001). If you encounter permission issues:
# Run with host user ID
docker run -i --rm --user $(id -u):$(id -g) \
-v $(pwd)/PRPs/inputs:/app/PRPs/inputs \
pentaforge:latest
Permissions are typically handled automatically.
Use the provided docker-compose.yml for easier management:
# Build and run
docker-compose up
# Run in background
docker-compose up -d
# Stop
docker-compose down
- npm run build - Compile TypeScript
- npm start - Run the server
- npm run dev - Run with ts-node (development)
- npm test - Run unit tests
- npm run lint - Run ESLint
- npm run docker:build - Build Docker image
- npm run docker:run - Run Docker container
# Run all tests
npm test
# Run with coverage
npm test -- --coverage
# Run specific test file
npm test personas.test.ts
pentaforge/
├── src/
│ ├── server.ts # MCP server entry point
│ ├── tools/
│ │ └── roundtable.ts # Main tool implementation (🔧 enhanced with resolution processing)
│ ├── personas/ # Expert persona classes
│ │ ├── base.ts # Base persona interface
│ │ ├── aiPersona.ts # AI-powered persona base class
│ │ ├── KeyUser.ts
│ │ ├── BusinessAnalyst.ts
│ │ ├── ProductOwner.ts
│ │ ├── ScrumMaster.ts
│ │ ├── SolutionsArchitect.ts
│ │ ├── UXUIDesigner.ts # 🆕 UX/UI design expertise
│ │ ├── SupportRepresentative.ts # 🆕 Customer success perspective
│ │ ├── BusinessStakeholder.ts # 🆕 Market and ROI focus
│ │ └── AIModerator.ts # 🆕 AI consensus moderator
│ ├── engine/
│ │ ├── discussion.ts # Orchestration logic (🔧 enhanced with resolution routing)
│ │ ├── consensusEvaluator.ts # 🆕 AI consensus analysis + persona position extraction
│ │ └── dynamicRoundStrategy.ts # 🆕 Adaptive round generation
│ ├── types/
│ │ ├── consensus.ts # 🆕 Consensus type definitions
│ │ ├── unresolvedIssues.ts # 🆕 Interactive resolution workflow types
│ │ └── markdown-it-task-checkbox.d.ts # 🆕 TypeScript definitions
│ ├── writers/ # Markdown generators
│ │ ├── discussionWriter.ts
│ │ ├── requestWriter.ts # 🔧 Enhanced with pre-resolved consensus support
│ │ └── unresolvedIssuesWriter.ts # 🆕 Interactive UNRESOLVED_ISSUES.md generator
│ ├── lib/ # Utilities
│ │ ├── aiService.ts # Multi-provider AI integration
│ │ ├── unresolvedIssuesParser.ts # 🆕 Parse and validate user-resolved files
│ │ ├── clock.ts
│ │ ├── id.ts
│ │ ├── i18n.ts
│ │ ├── fs.ts
│ │ └── log.ts
├── tests/ # Unit tests
│ ├── personas.test.ts # Persona response testing
│ ├── roundtable.test.ts # End-to-end workflow testing
│ └── unresolvedIssues.test.ts # 🆕 Interactive resolution workflow testing (26 tests)
├── Dockerfile # Container definition
├── docker-compose.yml # Compose configuration
├── CLAUDE.md # 📝 Updated with dynamic features
├── PERFORMANCE_ANALYSIS.md # 🆕 Token usage validation report
└── package.json # Node.js configuration
Solution: Ensure the output directory exists and has write permissions. On Linux, use --user $(id -u):$(id -g) flag.
Solution: Run npm install and npm run build before starting the server.
Solution: Ensure Docker Desktop is running and you have Node.js 20+ specified in package.json.
Solution: Restart Claude Code after registration. Check logs with claude mcp list.
Solution: Check your AI configuration:
- Verify AI_PROVIDER is set correctly (openai, anthropic, or ollama)
- Ensure AI_API_KEY is valid for OpenAI/Anthropic
- For Ollama, verify it's running: curl http://localhost:11434/api/tags
- Check logs for AI service errors: LOG_LEVEL=DEBUG npm start
Solution:
- Install Ollama: curl -fsSL https://ollama.ai/install.sh | sh
- Download models: ollama pull mistral:latest and/or ollama pull deepseek-coder:latest
- Verify models exist: ollama list
- Check Ollama is running: ollama serve (if not auto-started)
- If using a custom model, specify it in the model parameter or set the AI_MODEL environment variable
Solution:
- Verify API key is correct and has sufficient credits
- Check model name is valid (e.g., gpt-4o-mini, claude-3-haiku-20240307)
- Monitor rate limits in provider dashboard
- PentaForge will fall back to hardcoded responses automatically
MIT License - See LICENSE file for details.
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
For issues, questions, or suggestions:
- Open an issue on GitHub
- Check existing issues for solutions
- Consult the PRP documentation at https://github.com/Wirasm/PRPs-agentic-eng