NOMA Logo

NOMA

Neural-Oriented Machine Architecture

A research-driven systems language for machine learning where autodiff is a compiler pass and model parameters are explicit, growable memory.


Language Guide · Contributing · Discord


🔥 News

All updates are listed in the changelog.


What is NOMA?

NOMA explores a different boundary between language and ML framework:

  • Reverse-mode autodiff as a compiler transformation (LLVM IR)
  • Training loops as a language construct (optimize { ... }; see the sketch after this list)
  • Learnable parameters as explicit buffers you can alloc / realloc / free
  • Intent: topology changes (growth, resizing) should be mechanically defined, including what happens to optimizer state
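
A minimal sketch of the core construct, reusing only syntax that appears in the realloc example further down; treat it as illustrative rather than normative, and see examples/ in the repository for canonical programs:

fn main() {
    learn W = tensor [[0.0], [0.0]];   // learnable parameter: an explicit buffer the optimizer updates in place

    // The training loop is a language construct; reverse-mode autodiff is inserted by the compiler.
    optimize(W) with adam(0.01) until loss < 0.01 {
        let pred = matmul(X, W);                   // X, Y: training data, assumed in scope
        let loss = mean((pred - Y) * (pred - Y));
        minimize loss;
    }

    return W;
}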

Where the idea came from (short origin story)

I started NOMA after running into the same friction many times: in mainstream ML stacks, changing a model’s topology mid-training often means rebuilding graphs, copying weights, and resetting optimizer state.

Biology suggests a different mental model. Nervous systems remain functional while continuously reshaping their micro-structure. Work on dendritic spine structural plasticity connects local structural remodeling to synaptic efficacy and learning/memory. That makes it a reasonable working hypothesis (not a claim of equivalence) that some forms of “local learning” can be preserved while the global structure changes.

On the ML side, there is also prior work showing that you can change network structure while preserving function (or at least reusing learned information) to reduce the cost of re-training (e.g., Net2Net, Network Morphism).

NOMA is my attempt to make these topology changes explicit and well-defined at the language level.


Dynamic topology growth (what realloc is meant to do)

realloc grows a learnable parameter buffer while preserving existing values. The goal is to also preserve optimizer state for the existing portion (e.g., Adam moments), so training can continue without a full restart.

fn main() {
    learn W = tensor [[0.1], [0.2]];  // start small

    optimize(W) with adam(0.01) until loss < 0.01 {
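        // X, Y: training data tensors, assumed to be defined or loaded outside this snippet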
        let pred = matmul(X, W);
        let loss = mean((pred - Y) * (pred - Y));

        if loss > 0.5 {
            realloc W = [10, 1];      // grow capacity, keep training
        }

        minimize loss;
    }

    return W;
}
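
For context, the "Adam moments" mentioned above are the first- and second-moment estimates of the standard Adam update (this is the textbook formulation, not NOMA-specific notation), where g_t is the gradient of the loss with respect to a parameter entry:

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 \\
\theta_t &= \theta_{t-1} - \alpha\, \hat m_t / (\sqrt{\hat v_t} + \epsilon), \qquad \hat m_t = \frac{m_t}{1-\beta_1^t}, \quad \hat v_t = \frac{v_t}{1-\beta_2^t}
\end{aligned}
$$

The intent of realloc is that the entries of m and v belonging to the pre-existing rows of W carry over unchanged, while entries for the newly added rows start from zero, as they would for a freshly allocated parameter; how the bias-correction step count is treated for new rows is left to the implementation.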

XOR demo (first result, kept short)

We include a small, fully reproducible self-growing XOR toy benchmark to sanity-check the semantics.

Self-growing XOR loss curve

In this demo, after the growth step, the configuration that preserves optimizer state across realloc reconverges faster than a baseline that resets state. This is an early, limited result on a toy task: useful as a first signal, not a performance claim.

Full scripts, plots, and notes are in demo_self_growing_xor/.


Quick start

git clone https://github.com/pierridotite/Noma.git
cd Noma
cargo build --release

# Interpreter mode
cargo run -- run examples/03_gradient_descent.noma

# Compile to a standalone binary
cargo run -- build-exe examples/12_linear_regression.noma -o model
./model

Project status (alpha)

Working today (high level):

  • parser + AST
  • reverse-mode autodiff
  • LLVM IR codegen (basic tensor ops)
  • SGD / Adam / RMSprop
  • alloc / realloc / free
  • user-defined functions with autodiff support
  • batch processing + I/O (CSV, Safetensors)

Known limitations (high level):

  • single numeric type (f64)
  • no module system (single-file programs)
  • control flow is limited
  • debugging is minimal

Scientific context (selected references)

Topology changes while reusing learned information

Biological inspiration: structural plasticity and stability

Compiler-level / IR-level autodiff (adjacent area)


Contributing

Issues and PRs are welcome, especially around ops, diagnostics, optimizers, docs, and backend work.

See: CONTRIBUTING.md

Citation

If you want to cite NOMA in a scientific paper, please use the following BibTeX entry:

@software{NOMA,
  author  = {NOMA Authors},
  title   = {NOMA: Neural-Oriented Machine Architecture},
  year    = {2025},
  version = {alpha},
  url     = {https://github.com/pierridotite/NOMA},
  note    = {Accessed: [date you accessed, e.g. 12/30/2025]}
}
