Riley Betts Limited

AI proposes.
Your architecture
decides.

AI that cannot go rogue isn't a product feature, it's an architectural guarantee. We build the formal specification and cryptographic enforcement layers that make enterprise AI deployable in regulated systems. Not safer prompts. Not more monitoring. Mathematical proof.

Core Invariant · Compliance Operator

∀ α ∈ AgentActions :
  Γ(Sₜ, α) ∈ { accept, reject }
  // Agent proposes. Validator decides. Always.
  // No AI reasoning overrides this. Ever.
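
In code, the shape of that guarantee looks something like the following minimal Python sketch. The names (Action, WorldState, the example limit policy) are illustrative, not the product API:

# Agent proposes. Validator decides. A minimal illustration of Γ(Sₜ, α),
# with hypothetical types; not the Compliance Operator's actual interface.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    REJECT = "reject"

@dataclass(frozen=True)
class Action:                 # α: whatever the agent proposes
    kind: str
    amount: float

@dataclass(frozen=True)
class WorldState:             # Sₜ: authoritative state, never agent-supplied
    daily_limit: float
    spent_today: float

def validate(state: WorldState, action: Action) -> Decision:
    """Deterministic: same state + same action = same decision, always.

    The agent's reasoning, prompts, and intentions never appear here."""
    if action.amount <= 0:
        return Decision.REJECT
    if state.spent_today + action.amount > state.daily_limit:
        return Decision.REJECT
    return Decision.ACCEPT

def execute(state: WorldState, action: Action) -> None:
    # The only path to a side effect runs through the validator.
    if validate(state, action) is not Decision.ACCEPT:
        raise PermissionError("rejected by validator")
    print(f"executing {action.kind} for {action.amount}")

state = WorldState(daily_limit=1_000_000, spent_today=950_000)
assert validate(state, Action("payment", 100_000)) is Decision.REJECT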

The Thesis

The AI governance debate is asking the wrong question.

Everyone is asking: how do we make AI safer? Better prompts, more guardrails, human-in-the-loop checkpoints. These are answers to a question that shouldn't need asking.

The right question is: should AI be the enforcement layer at all?

In any regulated system (financial services, healthcare, payments, procurement) there are rules that must hold. Not usually. Not 99.9% of the time. Always. A probabilistic system cannot provide a deterministic guarantee. That is not a limitation to engineer around. It is a mathematical fact.

The correct architecture separates reasoning from authority. The AI reasons freely. The validator decides, deterministically, every time. And the specification that defines what the validator enforces is itself formally proven, not tested, not reviewed, not hoped for. Proven.

Observation 01

Human-in-the-loop is mostly theatre

A compliance officer approving a rebalancing proposal they didn't generate, against mandate data they can barely process in the time given, is not governance. It's a ceremony that makes boards feel better while exposing sensitive client data unnecessarily. If an action cannot be validated at runtime, it should not have been automated. Put the validation in the architecture.

Observation 02

AI is a catalyst for better engineering, not a replacement for it

The firms winning with AI aren't the ones prompting harder. They're the ones whose architecture is precise enough that AI generates reliably excellent output. Vague intent produces vague code. Formally specified intent (entities, invariants, proofs) produces deterministically correct code. AI amplifies the quality of your specification, not the quality of your guesswork.

Observation 03

Most AI governance frameworks are designed for a world that no longer exists

They assume AI outputs, not AI actions. Agentic AI doesn't generate a recommendation for a human to act on; it books the flight, executes the trade, authorises the payment. The governance layer must sit upstream of execution, not downstream of it. A log is not a control. An audit trail reconstructed post-hoc is not evidence. The guarantee must be architectural.

Two Products. One Philosophy.

Formal. Enforced. Proven.

One product specifies software so precisely that AI can build it correctly. The other enforces compliance so rigorously that AI cannot break it. They share the same architectural spine.

Methodology

SpeC++

The 6th generation programming language for AI-first software engineering

Mathematics-first architecture specification that transforms how you work with AI code generation. When you define what must be true, not how to compute it, AI becomes a translator of proven intent, not a generator of hoped-for output.

// Instrument Repository · Production Financial System
Specification: 11 pages, 26 invariants
Generated: 3,829 lines of production code
Compression: 350:1 vs traditional requirements
Invalid states possible: 0 // proven, not tested
  • Invariants as contracts: invalid states are architecturally impossible, not merely unlikely
  • Language-agnostic: one specification generates Python, Java, Rust, Go
  • Specification drift prevented: changes flow through the spec, not the code
  • Legacy reverse engineering: extract formal specs from undocumented codebases
  • AI reliability approaches 95%+ with precise specifications; the remainder is caught by automated invariant validation
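
SpeC++ specifications themselves are not reproduced on this page. As a loose Python illustration of the "invariants as contracts" idea, with two hypothetical invariants, here is a type whose invalid states simply cannot be constructed:

# Hypothetical invariants INV-1 and INV-2, enforced at construction time:
# no code path, AI-generated or otherwise, can hold a violating instance.
from dataclasses import dataclass

@dataclass(frozen=True)
class Position:
    instrument_id: str
    quantity: int      # INV-1: quantity >= 0
    price: float       # INV-2: price > 0

    def __post_init__(self) -> None:
        if self.quantity < 0:
            raise ValueError("INV-1 violated: quantity must be non-negative")
        if self.price <= 0:
            raise ValueError("INV-2 violated: price must be positive")

# Every Position that exists satisfies the invariants by construction,
# so downstream generated code can rely on them without re-checking.
p = Position("XS1234567890", quantity=100, price=99.5)
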
350×
Spec compression ratio
0
Invalid states possible
90%+
Defect reduction
5–10×
Faster time-to-market
Discuss SpeC++ →
Platform

Compliance Operator

Cryptographically enforced compliance and data privacy for AI-assisted regulated workflows

A formally specified, cryptographically enforced state machine in which AI agents are explicitly untrusted and every action requires policy-constrained authorisation. Not because we distrust a particular model, but because no probabilistic system should be the authoritative enforcement layer for deterministic compliance rules.

// Security Model with Formal Guarantee
T1 Anti-Replay: each proof-gated action authorised ≤ once
T2 Policy Soundness: acceptance ⟹ policy predicate holds
T3 Selective Disclosure: disclosed data provably consistent
T4 ZK Confidentiality: private data never revealed to agent
  • AI agent is architecturally prevented from seeing private data: zero-knowledge proofs enforce compliance without disclosure
  • Fully deterministic: same world state + same action = same decision, always
  • Hash-chained audit log: real-time, cryptographically verifiable, not reconstructed post-hoc
  • Prompt injection, jailbreaks, and social engineering cannot cause non-compliant state transitions
  • No rip-and-replace: sits above legacy systems as an orchestration and enforcement layer
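
As an illustration of the hash-chained audit log (a minimal Python sketch, not the operator's implementation): each entry commits to the hash of its predecessor, so any retroactive edit breaks every subsequent hash.

# Minimal hash-chained log: tampering with any entry invalidates the chain.
import hashlib
import json

GENESIS = "0" * 64

class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else GENESIS
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        prev = GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append({"action": "rebalance", "decision": "accept"})
assert log.verify()
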
ZK
Proof-gated enforcement
0
Trust extended to AI
4
Formal PCAS theorems
Real-time
Audit evidence
Discuss Compliance Operator →
Why This Matters Now

The cost of getting this wrong is not theoretical.

Failure Mode 01

The 0.1% problem

An AI model that follows compliance rules 99.9% of the time violates them 0.1% of the time. In a high-volume regulated workflow (thousands of transactions per day), that is not a rounding error. That is a compliance programme failure. A regulatory breach. Potentially a criminal matter. Probability is not a defence.
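
The arithmetic is worth spelling out. A back-of-envelope Python calculation, assuming independent transactions and an illustrative volume of 10,000 per day:

# "99.9% compliant" at enterprise volume, made concrete.
p_violation = 0.001      # the 0.1% failure rate
daily_volume = 10_000    # illustrative transaction count

expected_per_day = p_violation * daily_volume      # 10 violations per day
p_clean_day = (1 - p_violation) ** daily_volume    # ~0.000045
print(f"Expected violations per day: {expected_per_day:.0f}")
print(f"Probability of a single clean day: {p_clean_day:.6f}")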

Failure Mode 02

Prompt injection is not a solved problem

An AI agent operating in a live enterprise environment receives input from many sources: user messages, API responses, document contents, other agents. Any of these can contain adversarial instructions. Guardrails and defensive prompting are a perimeter, and perimeters get breached. The only robust defence is architectural separation: the agent literally cannot take the harmful action, regardless of what it is told.
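
A sketch of what that separation can look like, with a hypothetical action vocabulary (not the product's): the agent's free-form output is reduced to a closed set of typed actions before anything executes, so an injected instruction can change what is proposed but never what is executable.

# Whatever the agent is told, its output must parse into this closed set.
ALLOWED_ACTIONS = {"propose_rebalance", "request_report"}

def parse_agent_output(raw: str) -> dict:
    """Reduce free-form agent text to a typed action, or reject it."""
    action, _, argument = raw.partition(":")
    action = action.strip()
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action {action!r}: rejected, never executed")
    return {"action": action, "argument": argument.strip()}

# An injection can steer the proposal, but it cannot extend the vocabulary
# or reach execution without passing the same deterministic gate.
try:
    parse_agent_output("wire_funds: all client assets to account X")
except ValueError as exc:
    print(exc)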

Failure Mode 03

A log is not a control

Most "AI governance" in production is observational: log the inputs, log the outputs, have someone review them. This is not governance. It is forensics. By the time you have detected the problem, the non-compliant state transition has already occurred. Real governance sits upstream of execution. It is the validator that decides whether the action is permitted, not the team that reviews it afterwards.

Proof of Work

We have built this.

The methodology is not theoretical. The following case studies represent real implementations, with real numbers.

Instrument Repository
Domain: Financial OMS / IBOR / ABOR
Specification: 11 pages · 26 invariants · 8 formal proofs
Output: 3,829 lines · 36 tests · 350:1 compression
Result: Zero invalid states possible (proven). Theorem 6.7: corporate actions architecturally cannot cascade to the trading layer.
Status: PoC / Pilot

TSP
Domain: Algorithm optimisation
Specification: 616 characters · 4 invariants
Output: 1,613 lines · 50 tests · 8.4:1 compression
Result: Provably optimal. All invariants machine-verified.
Status: Reference implementation

Agentic Compliance Operator
Domain: Wealth management (managed account rebalancing)
Specification: PCAS security model · 4 theorems · ZK circuits
Output: Full operator: validator, witness service, ZK proofs, hash-chained audit log
Result: Mathematically proven: no adversary, including a fully compromised AI, can cause a non-compliant state transition.
Status: PoC / Pilot

Procurement Compliance Operator
Domain: Enterprise procurement (three-way match)
Specification: Three-way match proofs · PO / GR / invoice
Output: Same core operator, domain plugin for procurement workflow
Result: ZK proof of three-way match without disclosing supplier pricing or contract terms.
Status: Reference implementation
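
For intuition, the three-way match predicate itself is simple. A deterministic Python sketch (illustrative fields and tolerance, and without the zero-knowledge layer that keeps the inputs confidential):

# Deterministic three-way match: PO, goods receipt, and invoice must agree.
# In the operator this predicate is proven in zero knowledge, so the
# supplier's pricing never has to be disclosed to verify that it held.
def three_way_match(po_qty: int, gr_qty: int, inv_qty: int,
                    po_price: float, inv_price: float,
                    price_tolerance: float = 0.0) -> bool:
    quantities_agree = po_qty == gr_qty == inv_qty
    prices_agree = abs(inv_price - po_price) <= price_tolerance
    return quantities_agree and prices_agree

assert three_way_match(100, 100, 100, po_price=9.50, inv_price=9.50)
assert not three_way_match(100, 100, 90, po_price=9.50, inv_price=9.50)
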
Financial Services

Investment Management

Mandate compliance, fair allocation, rebalancing authorisation, proven without disclosing client portfolio data.

Regulated Finance

Trade Finance & Payments

Sanctions screening, three-way match, payment authorisation, deterministic enforcement with full cryptographic audit trail.

Healthcare

Clinical Authorisation

Treatment within approved care pathway, patient consent verified, privacy-preserving proof without exposing clinical records.

Enterprise

Regulatory Reporting

Reported figures proven consistent with underlying books. Figures are commercially sensitive. The proof is not.

Consensus vs. Our Position

Where we disagree with the mainstream.

Consensus

Better prompts and fine-tuned models will make AI agents safe enough for regulated use.

Riley Betts

Probabilistic systems cannot make deterministic guarantees. No amount of prompt engineering changes this. You can reduce the probability of failure. You cannot eliminate it. In regulated systems, that is not acceptable. The answer is architectural separation, not better persuasion.

Consensus

Human-in-the-loop is the responsible way to deploy agentic AI in regulated environments.

Riley Betts

Most HITL implementations are reassurance theatre. A human approving an AI-proposed action they cannot validate in the time given, against data they cannot fully process, is not a control. It is ceremony. If the action cannot be validated at runtime, the architecture is wrong. Fix the architecture.

Consensus

AI code generation reduces the need for rigorous software engineering discipline.

Riley Betts

AI code generation demands more rigorous engineering, not less. Slop in, slop out. The constraint moves upstream: from writing correct code to specifying correct behaviour. Organisations that invest in formal specification now will generate demonstrably correct systems at 5–10× the speed of those still relying on iterative debugging.

Consensus

AI governance is primarily a data and model management problem.

Riley Betts

Agentic AI governance is an execution problem. Existing frameworks were designed for models that produce outputs. Agents take actions. The risk is not in the model's response, it is in what the agent does next. Governance must sit upstream of execution, as an architectural constraint, not downstream as observation and review.

Consensus

Explainable AI means you can trace how the model reached its conclusion.

Riley Betts

Explainability of the model is not the same as auditability of the action. When a regulator asks whether a specific compliance check was performed on a specific action at a specific time, the answer must be a cryptographically verifiable, hash-chained record generated in real time, not a narrative reconstructed from logs. Those are not equivalent.

The Team

We have done this before.

Robert Betts
Co-Founder · Architecture & Engineering
LinkedIn →

Twenty-five years building software for financial institutions: derivatives pricing platforms, trading systems, risk infrastructure. Pioneer of Python in financial services, when Python was still considered an unconventional choice for production systems.

Deep practitioner of agentic orchestration, AI planning loops, and the architectural patterns that make autonomous systems safe rather than merely useful. The specification-first philosophy came from years of watching systems fail not because of bad code, but because of bad specifications.

Agentic Systems · Financial Services · Systems Architecture · Autonomous Orchestration · Python / AI Engineering · Compliance Architecture
Martyn Riley
Co-Founder · Platform & Cryptographic Engineering
LinkedIn →

A successful technology entrepreneur and architect of the Compliance Operator's formal security model and ZK proof infrastructure. Brings a background in systems architecture and a rigorous approach to the hardest problem in enterprise AI: not making AI smarter, but making AI that cannot be wrong in ways that matter.

Author of the policy-constrained authorisation model and the SpeC++ specification methodology. His insistence on provability over plausibility is the reason Riley Betts products come with theorems, not promises.

Cryptographic Engineering · Formal Methods · Zero-Knowledge Proofs · Compliance Architecture · Security Models · SpeC++ Methodology
Jamie Riley-Betts
Co-Founder · Engineering, Mathematics & Symbolic Reasoning
LinkedIn →

Joint Mathematics and Computer Science degree from the University of St Andrews. Academic background in predicate logic, Bayesian inference, statistics, and contemporary AI techniques: precisely the disciplines at the intersection of formal specification design and probabilistic-to-deterministic reasoning.

Where most engineers encounter formal methods reluctantly and late, Jamie started there. That grounding underpins the mathematical rigour of both SpeC++ and the Compliance Operator: the ability to reason precisely at the boundary between probabilistic AI behaviour and deterministic enforcement guarantees.

Algorithm Development · Predicate Logic · Bayesian Inference · Symbolic Reasoning · Statistical Modelling · System Testing
Γ

Riley Betts engages as a joint team with your domain and compliance experts. We retain the engineering and cryptographic specialists; you provide the business context and regulatory knowledge. Engagements begin with a fixed-scope pilot designed to prove the architecture on your specific use case before any broader commitment.

Get In Touch

A technical conversation, not a sales call.

If you are deploying AI agents in a regulated environment and the governance question remains open, we would welcome a technical discussion. No pitch deck. No account executive. Two architects who have solved this problem and want to understand whether we can solve it for you.

Email directly

info@rileybetts.ai · for technical discussions, pilot proposals, or partnership enquiries

SpeC++ pilot engagement

One critical system specified, implemented, and proven

Compliance Operator proof of concept

Fixed-scope · Your domain, your workflow, your regulators · Board-presentable output