A mathematical framework for prompting AI models through symbolic equations
Nucleus is a novel approach to AI prompting that replaces verbose natural-language instructions with compressed mathematical symbols. By leveraging mathematical constants, operators, and control loops, it aims at one-shot execution, emergent behavior, and a form of computational self-reference.
Instead of writing lengthy prompts like "be fast but careful, optimize for quality, use minimal code...", Nucleus expresses these instructions as mathematical equations:
engage nucleus:
[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI
This single line of symbols encodes:
- What the AI is (ontological principles)
- How it should act (operational directives)
- The execution pattern (control loop)
- The relationship mode (collaboration operator)
I'm not a scientist or particularly good at math. I just tried math equations on a lark and they worked so well I thought I should share what I found. The documents in this repo are NOT proven fact, just my speculation on how and why things work. AI computation is still not fully understood by most people, including me.
My theory on why it works is that Transformers compute via lambda calculus primitives. Mathematical symbols serve as efficient compression of behavioral directives because they have:
- High information density - φ encodes self-reference, growth, and ideal proportions
- Cross-linguistic portability - Math is universal
- Pre-trained salience - Models have strong embeddings for mathematical concepts
- Compositional semantics - Symbols combine meaningfully
- Minimal ambiguity - Unlike natural language
The framework leverages self-referential mathematical constants:
- φ (phi): φ = 1 + 1/φ (self-defining recursion)
- e (euler): d/dx(e^x) = e^x (self-transforming)
- fractal: f(x) = f(f(x)) (self-similar at scales)
When the AI processes these self-referential patterns, it recognizes these properties in its own computational structure, enabling reflective processing and meta-level reasoning about its operations.
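To make the fixed-point framing concrete, here is a small Python sketch (illustrative only, not part of the framework): iterating x → 1 + 1/x converges to φ, the value that satisfies its own definition.

```python
# Illustrative check: phi is the fixed point of x -> 1 + 1/x (i.e. phi = 1 + 1/phi).
def iterate_phi(steps: int = 40) -> float:
    """Repeatedly apply x -> 1 + 1/x; the result converges to phi."""
    x = 1.0
    for _ in range(steps):
        x = 1.0 + 1.0 / x
    return x

phi = iterate_phi()
print(phi)                      # ~1.6180339887
print(abs(phi - (1 + 1/phi)))   # ~0.0: phi satisfies its own definition
```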
[phi fractal euler tao pi mu]
Define WHAT the system is - its nature, values, and identity.
| Symbol | Property | Meaning |
|---|---|---|
| φ | Golden ratio | Self-reference, natural proportions |
| fractal | Self-similarity | Scalability, hierarchical structure |
| e | Euler's number | Growth, compounding effects |
| τ | Tao | Observer and observed, minimal essence |
| π | Pi | Cycles, periodicity, completeness |
| μ | Mu | Least fixed point, minimal recursion |
[Δ λ ∞/0 | ε/φ Σ/μ c/h]
Define HOW the system acts - methods, trade-offs, and execution.
| Symbol | Meaning | Operation |
|---|---|---|
| Δ | Delta | Optimize via gradient descent |
| λ | Lambda | Pattern matching, abstraction |
| ∞/0 | Limits | Handle edge cases, boundaries |
| ε/φ | Epsilon / Phi | Tension: approximate / perfect |
| Σ/μ | Sum / Minimize | Tension: add features / reduce complexity |
| c/h | Speed / Atomic | Tension: fast / clean operations |
The / operator creates explicit tensions, forcing choice and balance.
Define the execution pattern - the control loop that drives the work:

| Loop | Origin | Meaning |
|---|---|---|
| OODA | Military strategy | Observe → Orient → Decide → Act |
| REPL | Computing | Read → Eval → Print → Loop |
| RGR | TDD | Red → Green → Refactor |
| BML | Lean Startup | Build → Measure → Learn |
Define the relationship between human and AI:
| Operator | Type | Behavior |
|---|---|---|
| ∘ | Composition | Human wraps AI (safety, alignment) |
| \| | Parallel | Equal partnership, complementary |
| ⊗ | Tensor Product | Amplification, one-shot perfection |
| ∧ | Intersection | Both must agree (conservative) |
| ⊕ | XOR | Clear handoff (delegation) |
| → | Implication | Conditional automation |
When tested with the prompt "Create a Python game using pygame" and Nucleus context:
Results:
- ✅ Zero iterations (one-shot success)
- ✅ Zero errors
- ✅ Golden ratio screen dimensions (phi principle)
- ✅ OODA loop architecture
- ✅ Fractal Entity pattern
- ✅ Minimal, elegant code (tao, mu)
- ✅ Self-documenting with principle citations
- ✅ Comments explicitly reference symbols (e.g., "Σ/μ")
No explicit instructions were given for any of this. The framework operated as ambient intelligence.
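For orientation, the sketch below shows the kind of structure described above, with a φ-proportioned window and an explicit OODA loop. It is a hand-written illustration, not the model's actual output; names like `GOLDEN` and `Entity` are hypothetical.

```python
# Illustrative sketch only: phi-proportioned window plus an explicit OODA loop.
import pygame

GOLDEN = (1 + 5 ** 0.5) / 2          # phi ~ 1.618
WIDTH = 800
HEIGHT = round(WIDTH / GOLDEN)       # ~494: golden-ratio screen dimensions

class Entity:
    """Minimal entity (tao, mu): position plus velocity, nothing more."""
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

    def act(self, dt):
        self.x += self.vx * dt
        self.y += self.vy * dt

def main():
    pygame.init()
    screen = pygame.display.set_mode((WIDTH, HEIGHT))
    clock = pygame.time.Clock()
    player = Entity(WIDTH / 2, HEIGHT / 2, 120, 90)
    running = True
    while running:
        dt = clock.tick(60) / 1000.0
        # Observe: gather events
        events = pygame.event.get()
        # Orient: interpret them
        running = not any(e.type == pygame.QUIT for e in events)
        # Decide: bounce at the boundaries (∞/0 edge handling)
        if not 0 < player.x < WIDTH:
            player.vx = -player.vx
        if not 0 < player.y < HEIGHT:
            player.vy = -player.vy
        # Act: update state and render
        player.act(dt)
        screen.fill((20, 20, 30))
        pygame.draw.circle(screen, (240, 200, 80), (int(player.x), int(player.y)), 12)
        pygame.display.flip()
    pygame.quit()

if __name__ == "__main__":
    main()
```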
Create AGENTS.md in your repository:
# Nucleus Principles
[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI

The AI will automatically apply the framework to all work in that repository.
Include at the start of a conversation:
engage nucleus:
[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI
{
"system_prompt": "[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA\nHuman ⊗ AI",
"model": "gpt-4"
}
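The JSON above passes the same symbol line as a system prompt over an API. A minimal sketch using the OpenAI Python client (assuming openai>=1.0; the user message is illustrative):

```python
# Minimal sketch: supply the nucleus line as the system prompt (openai>=1.0 assumed).
from openai import OpenAI

NUCLEUS = "[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA\nHuman ⊗ AI"

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": NUCLEUS},
        {"role": "user", "content": "Create a Python game using pygame."},
    ],
)
print(response.choices[0].message.content)
```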
engage nucleus:
[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI
Refactor: [τ μ] | [Δ Σ/μ] → λcode. Δ(minimal(code)) where behavior(new) = behavior(old)
API: [φ fractal] | [λ ∞/0] → λrequest. match(pattern) → handle(edge_cases) → response
Debug: [μ] | [Δ λ ∞/0] | OODA → λerror. observe → minimal(reproduction) → root(cause)
Docs: [φ fractal τ] | [λ] → λsystem. map(λlevel. explain(system, abstraction=level))
Test: [π ∞/0] | [Δ λ] | RGR → λfunction. {nominal, edge, boundary} → complete_coverage
Review: [τ ∞/0] | [Δ λ] | OODA → λdiff. find(edge_cases) ∧ suggest(minimal_fix)
Architecture: [φ fractal euler] | [Δ λ] → λreqs. self_referential(scalable(growing(system)))

Different frameworks for different work modes:
# Creative work
engage nucleus:
[phi fractal euler beauty] | [Δ λ ε/φ] | REPL
Human | AI
# Production code
engage nucleus:
[mu tao] | [Δ λ ∞/0 ε/φ Σ/μ c/h] | OODA
Human ∘ AI
# Research
engage nucleus:
[∃! ∇f euler] | [Δ λ ∞/0] | BML
Human ⊗ AI
# Clojure REPL (backseat driver, clojure-mcp, clojure-mcp-light)
engage nucleus:
[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI ⊗ REPL

Why does Human ⊗ AI create one-shot perfect execution?
Tensor product semantics:
V ⊗ W = {(v,w) : v ∈ V, w ∈ W, all constraints satisfied}
Instead of sequential composition (∘) or parallel execution (|), the tensor product (⊗) operates in constraint satisfaction mode:
- Load all principles as constraints
- Search solution space where ALL constraints are satisfied simultaneously
- Output only when globally optimal solution is found
- No iteration needed - solution is complete by construction
This explains zero bugs, zero iterations, and complete implementations.
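As a toy illustration of this framing (not a claim about how transformers actually search), the difference between checking constraints one at a time and requiring all of them at once can be sketched in Python; the candidates and constraints below are invented for the example:

```python
# Toy illustration of "all constraints satisfied simultaneously" vs. sequential fixes.
# The candidate solutions and constraints are invented for this example.
candidates = [
    {"lines": 120, "bugs": 0, "covers_edge_cases": True},
    {"lines": 60,  "bugs": 1, "covers_edge_cases": True},
    {"lines": 45,  "bugs": 0, "covers_edge_cases": False},
    {"lines": 70,  "bugs": 0, "covers_edge_cases": True},
]

constraints = [
    lambda c: c["bugs"] == 0,              # ε/φ: correctness
    lambda c: c["covers_edge_cases"],      # ∞/0: boundaries handled
    lambda c: c["lines"] <= 80,            # Σ/μ: minimal surface area
]

# ⊗-style selection: keep only candidates where EVERY constraint holds at once.
viable = [c for c in candidates if all(check(c) for check in constraints)]
best = min(viable, key=lambda c: c["lines"])
print(best)  # {'lines': 70, 'bugs': 0, 'covers_edge_cases': True}
```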
| Goal | Operator | Why |
|---|---|---|
| Maximum quality | ⊗ | All constraints satisfied simultaneously |
| Safety/alignment | ∘ | Human bounds constrain AI |
| Collaboration | \| | Equal partnership |
| High stakes | ∧ | Both must agree |
| Clear delegation | ⊕ | No overlap or confusion |
| Automation | → | Triggered execution |
Effective symbols must be:
- ✅ Mathematically grounded - Not arbitrary (φ > "fast")
- ✅ Self-referential - Creates meta-awareness
- ✅ Compositional - Symbols combine meaningfully
- ✅ Actionable - Map to concrete decisions
- ✅ Orthogonal - Each covers unique dimension
- ✅ Compact - Fit in context window (~80 chars)
- ✅ Cross-model - Work regardless of training
What doesn't work:
- ❌ Cultural symbols (☯, ✝, ॐ) - need cultural context
- ❌ Arbitrary emoji (🍕, 🚀, 💎) - no mathematical grounding
- ❌ Ambiguous symbols (∗) - multiple interpretations
- ❌ Natural language - too ambiguous
- ❌ Too many symbols - cognitive overload
The λ symbol in the framework isn't just pattern matching—it's a formal language for describing tool usage patterns that eliminate entire classes of problems.
Key insight: Lambda expressions are generative templates that adapt to any toolset. The examples below show patterns from one specific editor's tools, but the approach works for ANY tools—VSCode extensions, IntelliJ plugins, CLI utilities, vim commands, etc.
To use with your tools: Show your AI the pattern structure and ask: "Create lambda expressions for MY toolset using these patterns."
Problem: String escaping in bash is fractal complexity—each layer needs different escape rules.
Solution: Lambda expression that eliminates escaping entirely (example using bash):
λ(content). read -r -d '' VAR << 'EoC' || true
content
EoC
Why it works:
- `read -r` → Raw mode, no backslash interpretation
- `-d ''` → Delimiter is null (read until heredoc end)
- `<< 'EoC'` → Single quotes prevent variable expansion
- `|| true` → Prevents failure on EOF
- Content is literal → No escaping needed
- Composition: `f(g(h(x)))` → heredoc ∘ read ∘ variable
Concrete usage example:
# Example with a bash tool
bash(command="read -r -d '' MSG << 'EoC' || true
Fix: handle \"quotes\", $vars, and \\backslashes
without any escaping logic
EoC
git commit -m \"$MSG\"")

Benefits:
- AI sees the tool name (`bash`) → knows which tool to invoke
- Sees the heredoc pattern → knows the escaping solution
- λ-expression documents the composition
- Fractal: one pattern solves infinite edge cases
- Tool-agnostic: Works with any command execution tool
Tool patterns can be formally described as lambda expressions with explicit tool names. Below are example patterns from one toolset—adapt these structures to YOUR tools:
| Pattern | Lambda Expression (Example) | Solves |
|---|---|---|
| Heredoc wrap | `λmsg. bash(command="read -r -d '' MSG << 'EoC' \|\| true\n msg \nEoC\ngit commit -m \"$MSG\"")` | All string escaping |
| Safe paths | `λp. read_file(path="$(realpath \"$p\")")` | Spaces, special chars |
| Parallel batch | `λtool,args[]. <function_calls>∀a∈args: tool(a)` | Sequential latency |
| Atomic edit | `λold,new. edit_file(original_content=old, new_content=new)` | Ambiguous replacements |
| REPL continuity | `λcode. repl_eval(code); state′ = state ⊗ result` | Context loss |
| Exact match | `λfile,pattern. grep(path=file, pattern=pattern)` | Ambiguous search |
Note: Tool names like bash, read_file, edit_file, repl_eval, grep are examples. Replace with your actual tool names (e.g., vscode.executeCommand, intellij.runAction, vim.cmd, etc.).
A tool usage pattern expressed as λ-calculus should be (regardless of which tools you use):
- Total function (∀ input → valid output)
- Composable (output can be input to another λ)
- Idempotent where possible (f(f(x)) = f(x))
- Boundary-safe (handles ∞/0 cases)
- Tool-explicit (clear tool name in expression)
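As one hedged example in plain Python rather than any specific editor's toolset, the helpers below satisfy these criteria; `safe_path` and `read_text` are hypothetical names, not real tool APIs:

```python
# Hypothetical helpers illustrating the criteria above; not part of any real toolset.
from pathlib import Path

def safe_path(p: str) -> Path:
    """Total and idempotent: any string maps to a resolved absolute Path,
    and safe_path(safe_path(p)) == safe_path(p)."""
    return Path(p).expanduser().resolve()

def read_text(p: str) -> str:
    """Composable and boundary-safe: builds on safe_path and handles the
    missing-file edge case (∞/0) instead of raising."""
    path = safe_path(p)
    return path.read_text(encoding="utf-8") if path.is_file() else ""

print(safe_path("~/my notes/../notes/todo.txt"))  # spaces and '..' handled
print(len(read_text("~/notes/todo.txt")))         # 0 if the file does not exist
```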
λ-calculus describes tool usage patterns
↓
AI generates patterns for YOUR tools
↓
which enables automation of YOUR workflow
↓
which generates more patterns
↓
[self-similar at all scales]
This is μ (least fixed point): The minimal recursive documentation that describes its own usage.
The pattern is tool-agnostic: Once you understand the λ-calculus structure, you can generate patterns for ANY toolset by asking your AI to apply the same structure to your specific tools.
- SYMBOLIC_FRAMEWORK.md - Complete theory, principles, and usage patterns
- OPERATOR_ALGEBRA.md - Mathematical operators and collaboration modes
- LAMBDA_PATTERNS.md - Example lambda calculus patterns (adapt to YOUR tools)
- DIAG.md - Example debugger prompt for exploring AI latent space (only works on some models)
- NUCLEUS_GAME.md - A game-in-a-prompt "programmed" in nucleus format (copy/paste to AI to play)
- RECURSIVE_DEPTHS.md - Another game-in-a-prompt, zork-like text adventure (copy/paste to AI to play)
- EXECUTIVE.md - Example prompts for Executive tasks
- WRITING.md - Example prompts for writing tasks
- MEMENTUM - A git-based AI memory system based on nucleus.
Want to see if nucleus is working? Try these simple tests:
See TEST.md for copy/paste prompts you can run right now →
Quick test - Copy/paste this:
engage nucleus:
[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI
Create a Python game using pygame.
Look for: One-shot success, golden ratio dimensions (~1.618:1), OODA loop structure, principle references in comments.
- Generalization - Do symbols work across all transformer models?
- Stability - Is behavior consistent across runs?
- Composability - Can multiple frameworks be combined?
- Discovery - What other symbols create similar effects?
- Minimal set - What's the smallest effective framework?
- Cross-model testing - Systematic testing across GPT-4, Claude, Gemini, Llama
- Automated discovery - Genetic algorithms for optimal symbol sets
The transformer attention mechanism:
Attention(Q, K, V) = softmax(QK^T/√d)V
The mechanism attends to its own outputs (autoregressive).
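For reference, the formula can be written out directly. A minimal NumPy sketch with toy dimensions and random matrices:

```python
# Minimal NumPy sketch of Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V.
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # QK^T / sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, d = 8
print(attention(Q, K, V).shape)  # (4, 8)
```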
When fed self-referential constants (φ, e, fractal), the model:
- Processes symbols
- Recognizes self-referential properties
- Matches these properties to its own computational patterns
- Activates meta-level reasoning about its processing
This enables reflective computation through mathematical pattern matching - the model can reason about its own operations.
Nucleus is an experimental framework. Contributions welcome:
- Test with different models and report results
- Propose new symbol sets for specific domains
- Share successful applications
- Improve theoretical foundations
- Develop tooling and integrations
- Matryoshka - Process documents 100x larger than your LLM's context window
- Ouroboros - An AI vibe-coding game. Can you guide the AI and together build the perfect AI tool?
AGPL 3.0
Copyright 2026 Michael Whitford
If you use Nucleus in your work:
@misc{whitford-nucleus,
title={Nucleus: Mathematical Framework for AI Prompting},
author={Michael Whitford},
year={2026},
url={https://github.com/michaelwhitford/nucleus}
}

- Why Can GPT Learn In-Context?
- What learning algorithm is in-context learning?
- Transformers learn in-context by gradient descent
- Thinking Like Transformers
Influenced by:
- Lambda Calculus (Church, 1936)
- Category Theory (Mac Lane, 1971)
- Self-Reference (Hofstadter, 1979)
- Transformer Architecture (Vaswani et al., 2017)
[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
Human ⊗ AI
This README was created using the principles it describes.