We actively support the following versions of AI Memory Module with security updates:
| Version | Supported | End of Support |
|---|---|---|
| 1.x.x | ✅ | TBD |
| < 1.0 | ❌ | 2026-01-14 |
Note: We recommend always using the latest stable release for the best security posture.
We take security seriously. If you discover a security vulnerability in AI Memory Module, please help us address it responsibly.
Email: security@wbsolutions.ca
Subject Line: [SECURITY] AI Memory Module - [Brief Description]
Include in your report:
- Description of the vulnerability
  - What is the issue?
  - What impact does it have?
- Steps to reproduce
  - Detailed instructions to reproduce the vulnerability
  - Include sample code, payloads, or configuration if applicable
- Affected versions
  - Which versions are affected?
  - Have you tested multiple versions?
- Suggested fix (optional)
  - If you have a proposed solution, we'd love to hear it
- Your contact information
  - How can we reach you for clarification?
After you submit a report, here's what to expect:
- Acknowledgment: We'll acknowledge receipt within 48 hours
- Initial assessment: We'll provide an initial assessment within 5 business days, including:
  - Severity classification (Critical, High, Medium, Low)
  - Estimated timeline for fix
  - Whether we need more information
- Resolution:
  - Critical vulnerabilities: Patch within 7 days
  - High severity: Patch within 14 days
  - Medium/Low severity: Patch in next minor release
- Credit: We'll credit you in the security advisory (unless you prefer to remain anonymous)
We believe in coordinated disclosure:
- We'll work with you to understand the issue
- We'll develop and test a fix
- We'll prepare a security advisory
- We'll release the fix and advisory simultaneously
- We ask that you do not publicly disclose the vulnerability until we've released a fix
Typical timeline: 90 days from initial report to public disclosure
When deploying AI Memory Module, follow these security best practices:
- Isolate Docker network: Use a dedicated Docker network for AI Memory services:
  ```bash
  docker network create ai-memory-net
  ```
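  If you start a service manually instead of through the compose file, you can attach it to this network at run time; a minimal sketch (the container name and run options are illustrative):
  ```bash
  # Run Qdrant on the dedicated network only, without publishing ports to the host
  docker run -d --name qdrant --network ai-memory-net qdrant/qdrant:latest
  ```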
- Firewall rules: Restrict access to the service ports (26350, 28080, 28501):
  ```bash
  # Only allow localhost access
  sudo ufw deny 26350
  sudo ufw deny 28080
  sudo ufw deny 28501
  ```
- Use SSH tunneling for remote access instead of exposing ports:
  ```bash
  ssh -L 28501:localhost:28501 user@remote-server
  ```
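  If you need more than the dashboard, all three service ports can be forwarded over a single SSH session:
  ```bash
  # Forward all three service ports through one SSH connection
  ssh -L 28501:localhost:28501 -L 28080:localhost:28080 -L 26350:localhost:26350 user@remote-server
  ```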
- Qdrant API keys: Enable authentication for Qdrant (production deployments):
  ```bash
  # docker/.env
  QDRANT_API_KEY=your-secure-key-here
  ```
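  After restarting the Qdrant container with the key set, you can confirm it is enforced; a quick check, assuming port 26350 maps to Qdrant's REST API:
  ```bash
  # Should succeed with the key and be rejected (401) without it
  curl -H "api-key: your-secure-key-here" http://localhost:26350/collections
  ```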
- Read-only dashboards: Configure Grafana in viewer mode for non-admins
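  One way to do this is through Grafana's standard `GF_*` environment variables; whether the project's compose file passes these through is an assumption, so adjust to your setup:
  ```bash
  # Grafana settings for a read-only (viewer) role without self sign-up
  GF_USERS_ALLOW_SIGN_UP=false
  GF_AUTH_ANONYMOUS_ENABLED=true
  GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer
  ```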
- File permissions: Ensure hook scripts have appropriate permissions:
  ```bash
  chmod 750 .claude/hooks/scripts/*.py
  ```
- Encrypt sensitive memories: Consider encrypting memories containing credentials before storage
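  The module stores memories unencrypted, so any encryption has to happen before the content is written; one possible approach uses OpenSSL (file names are illustrative):
  ```bash
  # Encrypt a memory file with a passphrase before storing it
  openssl enc -aes-256-cbc -pbkdf2 -salt -a -in memory.txt -out memory.enc
  # Decrypt it again when needed
  openssl enc -d -aes-256-cbc -pbkdf2 -a -in memory.enc
  ```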
- Regular backups: Back up the Qdrant data directory regularly:
  ```bash
  docker run --rm -v qdrant_storage:/data -v $(pwd)/backup:/backup \
    alpine tar czf /backup/qdrant-backup-$(date +%Y%m%d).tar.gz /data
  ```
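  To restore, extract the archive back into the same volume while the Qdrant container is stopped (the archive name below is a placeholder):
  ```bash
  # Restore a previous backup into the qdrant_storage volume
  docker run --rm -v qdrant_storage:/data -v $(pwd)/backup:/backup \
    alpine tar xzf /backup/qdrant-backup-YYYYMMDD.tar.gz -C /
  ```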
- Sanitize inputs: Review memories before they're stored to avoid leaking secrets
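  A simple pre-storage check is to grep candidate content for obvious secret patterns; the patterns and file name below are illustrative, not exhaustive:
  ```bash
  # Flag likely secrets before the content is stored as a memory
  grep -E -i 'api[_-]?key|secret|password|token|BEGIN .*PRIVATE KEY' memory.txt
  ```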
- Use official images: Only use official Qdrant and Python base images
- Keep images updated: Regularly update base images:
  ```bash
  docker compose pull
  docker compose up -d
  ```
- Scan for vulnerabilities:
  ```bash
  docker scan qdrant/qdrant:latest
  ```
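  Note that `docker scan` has been deprecated in recent Docker releases; if the Docker Scout plugin is installed, an equivalent check is:
  ```bash
  # CVE scan of the Qdrant image via Docker Scout
  docker scout cves qdrant/qdrant:latest
  ```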
- Run as non-root: Docker containers run as non-root users (already configured)
- Monitor dependencies: Use Dependabot (enabled by default on GitHub)
- Audit Python packages:
  ```bash
  pip install pip-audit
  pip-audit -r requirements-dev.txt
  ```
- Pin versions: Use exact versions in requirements files (already done)
Never commit:
- .env files
- API keys
- Credentials
- Personal access tokens
Use environment variables:
```bash
export QDRANT_API_KEY="$(openssl rand -hex 32)"
```
Rotate secrets regularly: Change API keys every 90 days (production)
AI Memory Module includes these security features:
- Input validation: All user inputs are validated before processing
- Content sanitization: Code is sanitized before storage to prevent injection
- Graceful degradation: Security failures don't expose sensitive data
- Minimal attack surface: Hook scripts run with minimal permissions
- Isolated execution: Docker containers are isolated from host system
Security-relevant metrics are tracked:
- Failed connection attempts to Qdrant
- Abnormal query patterns
- Memory storage anomalies
- Service health status
Access these via Grafana: http://localhost:23000
- No built-in authentication: Qdrant runs without auth by default (localhost only)
  - Mitigation: Enable Qdrant API keys for production deployments
- Plaintext storage: Memories are stored unencrypted in Qdrant
  - Mitigation: Use encrypted filesystems or Qdrant's upcoming encryption features
- Local-only design: Designed for single-user, local development
  - Mitigation: Don't expose services to untrusted networks
Planned security improvements (see ROADMAP.md):
- End-to-end encryption for sensitive memories (v1.2.0)
- Built-in API key management (v1.2.0)
- Role-based access control (v2.0.0)
- SSO integration (v2.0.0)
- Audit logging (v2.0.0)
| Date | Auditor | Scope | Findings | Status |
|---|---|---|---|---|
| 2026-01-14 | Internal Review | Initial Release | 0 Critical | Resolved |
Security Team: security@wbsolutions.ca
General Contact: info@wbsolutions.ca
Website: https://wbsolutions.ca
We thank the following security researchers for responsibly disclosing vulnerabilities:
No reported vulnerabilities yet - you could be the first to help secure AI Memory Module!
Last Updated: 2026-01-14
Policy Version: 1.0