The core problems with current AI:
- High computational costs, reliance on cloud-based large models
- Agents lack true collective collaboration capabilities
- Creativity being replaced by templated outputs
My approach: Using language as a consensus carrier, enabling multiple local agents to form collective intelligence through high-density communication.
Core mechanisms:
- Each agent runs on edge devices with extremely low computational requirements (on the order of a 5-bit quantum system)
- Agents communicate through "consensus language" rather than relying on context windows
- Cloud-based large models serve only as advisors, not decision controllers
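The three mechanisms above can be reduced to a toy message loop. This is a minimal sketch, not the system itself: the token vocabulary, the `LocalAgent` and `cloud_advisor` names, and the majority-vote advice rule are all illustrative assumptions, and the "consensus language" is collapsed here to single symbolic tokens rather than context windows.

```python
from dataclasses import dataclass, field

# Shared, pre-agreed vocabulary standing in for the "consensus language".
VOCAB = {"explore", "hold", "merge", "yield"}

@dataclass
class LocalAgent:
    name: str
    inbox: list = field(default_factory=list)

    def speak(self) -> str:
        # Compress local state into one consensus token,
        # instead of shipping a whole context window.
        return "explore" if not self.inbox else "merge"

    def hear(self, token: str) -> None:
        if token not in VOCAB:
            raise ValueError(f"not in the consensus language: {token}")
        self.inbox.append(token)

def cloud_advisor(tokens: list) -> str:
    # The cloud model only advises on the aggregate; it cannot override agents.
    return "hold" if tokens.count("merge") > len(tokens) / 2 else "explore"

agents = [LocalAgent(f"a{i}") for i in range(3)]
round1 = [a.speak() for a in agents]          # fresh agents all signal "explore"
for a in agents:
    for t in round1:
        a.hear(t)
round2 = [a.speak() for a in agents]          # non-empty inboxes flip to "merge"
advice = cloud_advisor(round2)                # advisory output only
```

The point of the sketch is the direction of control: agents form consensus among themselves, and the cloud model's output is one more token they may ignore.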
Technical Prototype Direction: Using 5-bit Quantum Trees to Simulate the "Perspective Watershed" Mechanism
Current mainstream AI (including large models and reinforcement learning) is essentially a single-perspective system:
- They can simulate interactions between "self" and "environment"
- But they cannot truly understand the "other"—another perspective with different intrinsic time and different ways of observation
- In real human intelligence, in collective intelligence, and even in quantum systems, the coexistence of multiple perspectives that are not fully commensurable with one another is the norm
If a system naturally possesses multiple perspectives, and these perspectives:
- Share the same underlying structure (quantum tree)
- Have different "senses of time" (intrinsic time vs. objective time)
- Can only briefly converge at "watersheds"
- Can only understand each other in their own ways during convergence
Then, creative emergence at the collective level becomes possible—not because a single perspective "figured it out," but because multiple perspectives, while unable to fully communicate, can still collaborate to accomplish tasks.
We constructed a 5-bit quantum tree system with clear hierarchical structure:
- Root node: Longest lifespan, most entangled (starting point for quantum strategy, corresponding to global optimization)
- Leaf nodes: Shortest lifespan, least entangled (starting point for real-time control, corresponding to reinforcement learning)
- Shortcuts: Roots can directly connect to certain leaves, bypassing intermediate nodes (allowing cross-level entanglement)
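The hierarchy above can be sketched as a plain tree whose lifespan and entanglement fall off with depth. The source only fixes the ordering (root longest-lived and most entangled, leaves shortest-lived and least entangled), so the concrete formula `MAX_DEPTH - depth + 1`, the binary branching, and all names below are assumptions for illustration:

```python
from dataclasses import dataclass, field

MAX_DEPTH = 4  # five levels, read here as one level per "bit"

@dataclass
class Node:
    name: str
    depth: int
    children: list = field(default_factory=list)
    shortcuts: list = field(default_factory=list)  # cross-level entanglement links

    @property
    def lifespan(self) -> int:
        # Assumption: lifespan shrinks with depth (root longest, leaves shortest);
        # entanglement degree is taken to follow the same ordering.
        return MAX_DEPTH - self.depth + 1

def build(depth: int = 0) -> Node:
    node = Node(f"n{depth}", depth)
    if depth < MAX_DEPTH:
        node.children = [build(depth + 1) for _ in range(2)]
    return node

def leaves(node: Node) -> list:
    if not node.children:
        return [node]
    return [l for c in node.children for l in leaves(c)]

root = build()
# Shortcut: the root connects directly to one leaf, bypassing intermediate nodes.
root.shortcuts.append(leaves(root)[0])
```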
This quantum tree simultaneously carries two orthogonal perspectives:
- Quantum Strategy Perspective
  - Starting point: Root node
  - Sense of time: Intrinsic time (lifespan)
  - Corresponding field: Riemannian optimization, global optimum
- Real-time Control Perspective
  - Starting point: Leaf nodes
  - Sense of time: Objective time (clock)
  - Corresponding field: Reinforcement learning, real-time decision-making
These two perspectives are orthogonal—they start from different points, evolve along different time axes, but share the same underlying entanglement structure of the quantum tree.
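One way to see the orthogonality is as two traversals of the same structure: both visit the same node set, but in orders driven by different time axes. Mapping intrinsic time to a top-down recursion from the root and objective time to level-by-level clock ticks from the leaves is an assumed simplification:

```python
# Parent links for a tiny 3-level stand-in tree: R -> {M1, M2} -> {L1..L4}.
parent = {"L1": "M1", "L2": "M1", "L3": "M2", "L4": "M2", "M1": "R", "M2": "R"}
children: dict = {}
for child, par in parent.items():
    children.setdefault(par, []).append(child)

def strategy_view(node: str = "R") -> list:
    # Quantum-strategy perspective: starts at the root and unfolds top-down,
    # ordered by intrinsic time (each recursive step spends lifespan).
    order = [node]
    for c in children.get(node, []):
        order += strategy_view(c)
    return order

def control_view() -> list:
    # Real-time-control perspective: starts at the leaves and climbs one
    # objective-time tick (clock step) per level toward the root.
    frontier = sorted(n for n in parent if n not in children)  # the leaves
    order = []
    while frontier:
        order += frontier
        frontier = sorted({parent[n] for n in frontier if n in parent})
    return order
```

The two views disagree about order at every step, yet they range over exactly the same underlying structure, which is the sense of "orthogonal but shared" used above.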
Through code simulation, we discovered that when two perspectives "meet" in the system, a special phenomenon occurs—which we call the watershed.
At the watershed:
- Self-reference awakens: Each perspective senses that the other is "part of itself," but cannot fully understand the other
- Cost of communication: Crossing the watershed or staying at the watershed consumes significant energy, leading to logical confusion and reduced self-awareness
- Three possible outcomes:
  - Stuck: the two perspectives interfere with each other, and both forget their original goals
  - Collaboration without understanding: each interprets the other in its own way, yet the task is accomplished remarkably well
  - True understanding: occurs with extremely low probability; the mechanism is still unclear
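The watershed dynamics can be caricatured as a stochastic meeting with an energy cost. The outcome weights and the crossing cost below are purely illustrative assumptions; the source gives no measured probabilities beyond "extremely low" for true understanding:

```python
import random

OUTCOMES = ["stuck", "collaboration without understanding", "true understanding"]
# Illustrative weights only: "true understanding" is made extremely rare,
# reflecting that its mechanism is still unclear.
WEIGHTS = [0.30, 0.69, 0.01]
CROSSING_COST = 5.0  # energy consumed by meeting at the watershed (assumed value)

def meet_at_watershed(energy: float, rng: random.Random):
    energy -= CROSSING_COST  # both perspectives pay the communication cost up front
    if energy <= 0:
        # Too drained: logical confusion, original goals forgotten.
        return "stuck", energy
    return rng.choices(OUTCOMES, weights=WEIGHTS, k=1)[0], energy

rng = random.Random(0)
results = [meet_at_watershed(20.0, rng)[0] for _ in range(1000)]
```

Even in this toy form, the key property survives: an agent that arrives with too little energy is forced into the stuck outcome before the dice are even rolled.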
Our system simultaneously possesses intrinsic time and objective time, meaning:
- The way perspectives receive information in turn affects the perspectives themselves
- When one perspective becomes the "dominant perspective," other perspectives don't disappear—they simply stabilize on the other side of the watershed
Our system has foreground and background offline processing capabilities, similar to the relationship between the human brain's prefrontal cortex (PFC) and default mode network (DMN).
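As a rough sketch of that PFC/DMN-style split, a foreground loop can handle events in real time while a background pass runs only on idle ticks and consolidates recent episodes. The event format and the consolidation rule are invented here for illustration, not taken from the system:

```python
from collections import deque

# None marks an idle tick, when the background pass is allowed to run.
events = deque(["obs1", "obs2", None, "obs3", None])
episodic: list = []      # foreground working memory (PFC-like)
consolidated: list = []  # background summaries (DMN-like)

def foreground(event: str) -> None:
    # Immediate, task-directed handling of each incoming event.
    episodic.append(event)

def background() -> None:
    # Runs only while the foreground is idle, compressing recent
    # episodes into one consolidated chunk, then clearing working memory.
    if episodic:
        consolidated.append(tuple(episodic))
        episodic.clear()

while events:
    e = events.popleft()
    background() if e is None else foreground(e)
```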
Who we're looking for:
- People who understand quantum mechanics/quantum computing and can translate microscopic mechanisms into macroscopic language (waves, polynomials, rules)
- People who understand edge computing/low-computation deployment and are willing to run prototypes together
- People with a feel for collective intelligence, consensus algorithms, or the philosophy of language
The original intention behind this work has never been to pursue "stronger AI," "higher IQ," or "more versatile models."
The success of the human species isn't because individuals are particularly smart; it's because groups can produce emergent creative solutions through language. Language isn't about transmitting "complete information"; it carries high-density meaning built on shared consensus, the way Boya plays a single note and Zhong Ziqi instantly understands what he means.
The current AI race is turning intelligence into an arms race of "who can solve problems autonomously." Computing power keeps stacking higher, models keep getting bigger, but creativity is draining away.
If the price of increased human productivity is the loss of creativity, we hope that day never comes.
So we're not pursuing "stronger AI"—we're pursuing: enabling a group of imperfect agents, through imperfect communication, to accomplish things beyond the capabilities of any single entity.
If you're interested, feel free to open an issue or contact me directly.