MathSketch is a lightweight, cloud‑deployed digit recognition service. It was originally built as a learning and exploration project around ML‑assisted math tooling, with an emphasis on clean dependency boundaries, reproducibility, and fast iteration.
The project deliberately separates model development and conversion from runtime serving, so the deployed system stays small, stable, and predictable.
MathSketch is split into three distinct dependency environments:

- **Runtime (API / serving):** what actually runs in production. No TensorFlow.
- **Training / Conversion:** offline environment for training and converting models (TensorFlow → ONNX).
- **Tooling:** developer tools used to manage dependencies (e.g. pip‑tools).
This separation is intentional and enforced.
The project follows a strict intent vs artifact model:
- `*.in` files express human intent (allowed version ranges).
- `*.txt` files are machine‑generated lockfiles (fully pinned, reproducible).

You install from `*.txt`. You edit `*.in`. You never hand‑edit lockfiles.
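The artifact side of this contract is easy to sanity-check mechanically. Below is a minimal sketch (the `is_fully_pinned` helper is illustrative, not part of the repo) that verifies every requirement line in a lockfile is an exact `==` pin:

```python
def is_fully_pinned(lockfile_text: str) -> bool:
    """Return True if every requirement line pins an exact version (==)."""
    for line in lockfile_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line or line.startswith("-"):
            continue                           # skip option lines like --hash
        if "==" not in line:
            return False
    return True

lock = "fastapi==0.110.0\nonnxruntime==1.17.1\n"
intent = "fastapi>=0.110,<1.0\n"
print(is_fully_pinned(lock))    # → True: exact pins only
print(is_fully_pinned(intent))  # → False: ranges are intent, not a lockfile
```

A check like this could run in CI to catch a hand-edited lockfile before it ships.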
```
mathsketch/
├─ requirements/
│  ├─ runtime.in      # Runtime dependency intent (no TensorFlow)
│  ├─ runtime.txt     # Runtime lockfile (used for dev + deploy)
│  ├─ train_CPU.in    # Model training/conversion intent (TensorFlow, tf2onnx), CPU
│  ├─ train_CPU.txt   # Frozen training/conversion lockfile, CPU
│  ├─ train_GPU.in    # Model training/conversion intent, GPU
│  ├─ train_GPU.txt   # Frozen training/conversion lockfile, GPU
│  └─ tools.txt       # Tooling dependencies (pip-tools, pip version)
│
├─ mathsketch/        # FastAPI application code
├─ models/            # Keras/TF models and ONNX models used at runtime
├─ static/            # HTML/CSS/JavaScript static files
├─ tools/             # Training/conversion scripts
├─ Dockerfile         # Deployment image-building process
├─ README.md
└─ ...
```
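For illustration, `runtime.in` for the dependency set described below might look like this (package names come from this README; any version ranges you add are up to you, and the real file is the source of truth):

```
# requirements/runtime.in (runtime intent; TensorFlow is deliberately absent)
# Add allowed version ranges here as needed, e.g. "fastapi<1.0".
fastapi
onnxruntime
pillow
psycopg2-binary
sqlalchemy
uvicorn
```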
## .venv-runtime
The runtime environment is intentionally minimal:
- FastAPI
- ONNX Runtime
- Pillow
- psycopg2-binary
- SQLAlchemy
- Uvicorn
TensorFlow is never installed in this environment — not in dev, not in prod.
```shell
pip install -r requirements/runtime.txt
```

This is the environment used for:
- Local API development
- CI
- Deployment (fly.io)
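Because "no TensorFlow" is a hard rule, a CI step can assert it directly. A minimal sketch (hypothetical helper, not part of the repo) using only the standard library:

```python
import importlib.util

def training_only_leaks(names=("tensorflow", "tf2onnx")):
    """Return any training-only packages importable in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is not None]

# In the runtime venv this should print an empty list.
print(training_only_leaks())
```

Running this in CI against `.venv-runtime` turns the dependency boundary from a convention into a failing check.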
## .venv-train
TensorFlow is used only for model authoring and conversion.
CPU only:

```shell
pip install -r requirements/train_CPU.txt
```

GPU:

```shell
pip install -r requirements/train_GPU.txt
```

This environment is:
- Offline / local
- Rarely changed
- Treated as a known‑good artifact
The resulting ONNX models are what get checked in and served.
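The "commit ONNX, serve ONNX" rule can also be checked mechanically. A sketch (the `non_onnx_models` helper is hypothetical) that flags TensorFlow-era model files under `models/`:

```python
from pathlib import Path

TRAINING_SUFFIXES = {".h5", ".keras", ".pb"}  # common TensorFlow model formats

def non_onnx_models(models_dir):
    """Return model files under models_dir that are not ONNX artifacts."""
    return sorted(
        str(p) for p in Path(models_dir).rglob("*")
        if p.is_file() and p.suffix in TRAINING_SUFFIXES
    )
```

Wired into a pre-commit hook or CI, this would catch a stray Keras checkpoint before it lands in the repo.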
## .venv-tools
Dependencies are locked using pip‑tools.
Downgrade pip:

```shell
pip install "pip<26"
```

Install tools:

```shell
pip install -r requirements/tools.txt
```

Note: `pip` is soft‑pinned here to avoid known incompatibilities with pip‑tools.
Compile lockfiles from intent files:

```shell
pip-compile requirements/runtime.in
pip-compile requirements/train_CPU.in
pip-compile requirements/train_GPU.in
```

To upgrade pins:

```shell
pip-compile --upgrade requirements/runtime.in
```

Lockfiles should only change via `pip-compile`.
A typical development workflow:

- Install runtime deps
- Run the FastAPI app locally
- (Optional) Switch to conversion env to update models
- Commit ONNX artifacts, not TensorFlow models
This keeps dev and prod behavior aligned.
Deployment uses the runtime lockfile only, which means:
- No TensorFlow wheels
- Small image size
- Fast cold starts
The deployed system is deterministic and reproducible.
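A hypothetical sketch of the Dockerfile pattern this implies (the base image, paths, and the `mathsketch.main:app` entry point are assumptions; the repo's actual Dockerfile is authoritative):

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Only the runtime lockfile is ever installed in the image.
COPY requirements/runtime.txt requirements/runtime.txt
RUN pip install --no-cache-dir -r requirements/runtime.txt

# Application code, ONNX models, and static assets.
COPY mathsketch/ mathsketch/
COPY models/ models/
COPY static/ static/

CMD ["uvicorn", "mathsketch.main:app", "--host", "0.0.0.0", "--port", "8080"]
```

Installing from the lockfile before copying application code keeps the dependency layer cacheable, so rebuilds after code-only changes stay fast.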
Design principles:

- Separation of concerns
- Deterministic builds
- Explicit dependency boundaries
- Minimal production surface area
- Reproducibility over novelty
This project is stable and primarily maintained as:
- A reference architecture
- A portfolio artifact
- A testbed for clean ML‑adjacent workflows
All Rights Reserved