This toolkit hosts curated scripts and neural network architectures for quantization, clipping, pruning, and ANN-to-SNN conversion experiments. The codebase is organized as a Python package so it is easy to browse, document, and extend, even when you are only showcasing results.
- Model zoo (`NeuroComp_toolkit/models`): MobileNet, VGG, AlexNet, LeNet/MLP, and SVHN classifiers with spiking counterparts bundled by family.
- Core tooling (`NeuroComp_toolkit/core`): modular helpers for weight and activation quantization, clipping, and noise injection.
- Dataset access (`NeuroComp_toolkit/data`): reusable loaders for the CIFAR, Tiny ImageNet, SVHN, and MNIST benchmark datasets.
- Pipelines (`NeuroComp_toolkit/pipelines`): the high-level `run_pipeline` entry point orchestrating the ANN→SNN workflow described in the accompanying project report.
- Legacy utilities (`NeuroComp_toolkit/utils.py`): a backwards-compatible API surface that re-exports the reorganized modules.
```
NeuroComp_toolkit/
├── __init__.py
├── core/
│   ├── __init__.py
│   └── quantization.py
├── data/
│   ├── __init__.py
│   └── loaders.py
├── models/
│   ├── mobilenet/   # MobileNet family (ANN, clipped, quantized, spiking)
│   ├── vgg/         # VGG variants for CIFAR-10/100
│   ├── alexnet/     # AlexNet baselines and spiking conversions
│   ├── lenet/       # LeNet/MLP models
│   ├── svhn/        # SVHN ANN/SNN implementations
│   └── shared/      # Common activation/quantization utilities
├── pipelines/
│   ├── __init__.py
│   └── ann_to_snn.py
├── utils.py
├── spiking.py
├── tile_count.py
├── prj_mlp.ini
└── ...
```
The reorganization keeps legacy functionality intact while making it clear where
quantization, pruning, dataset management, and orchestration code reside.
Consumers can import the structured namespaces directly or continue using the
compatibility layer in `utils.py`.
- Configure a workflow via an INI file (see `NeuroComp_toolkit/prj_mlp.ini` for an example).
- Launch the pipeline:

  ```shell
  python -m NeuroComp_toolkit.main --config-file prj_mlp.ini
  ```

- From Python, call the orchestrator directly:

  ```python
  from NeuroComp_toolkit.pipelines import run_pipeline

  run_pipeline("prj_mlp.ini")
  ```
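The INI schema is defined by the pipeline code rather than documented here, so the fragment below is purely illustrative: the section and key names are assumptions, not the actual contents of `prj_mlp.ini`.

```ini
; Hypothetical layout only — consult NeuroComp_toolkit/prj_mlp.ini for the real keys.
[model]
family = lenet
variant = quantized

[quantization]
weight_bits = 6
activation_bits = 6
```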
Note: Some flows require pretrained checkpoints or datasets that are not bundled in the repository. Treat the scripts as structured reference implementations.
The results and methodology originally produced with this codebase are
summarized in `Project_report.pdf`. The report references
quantization, clipping, and ANN→SNN experiments that map directly to the modules
highlighted above.
- Add new quantization techniques by creating modules under `NeuroComp_toolkit/core` and re-exporting them through `utils.py` if backward compatibility is needed.
- Introduce alternate datasets by placing new loader functions in `NeuroComp_toolkit/data/loaders.py`.
- Build custom experiment pipelines by following the pattern in `NeuroComp_toolkit/pipelines/ann_to_snn.py`.
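As a sketch of what a new core quantization module might look like — `quantize_uniform` and its signature are hypothetical, not part of the existing `quantization.py` API:

```python
import numpy as np


def quantize_uniform(weights: np.ndarray, bits: int = 6) -> np.ndarray:
    """Symmetric uniform quantization to a 2**bits-level grid (illustrative only)."""
    # Map the largest magnitude to the top positive quantization level.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    if scale == 0:
        return weights.copy()
    # Snap to the integer grid, then scale back to float weights.
    return np.round(weights / scale) * scale


w = np.array([-1.0, -0.3, 0.0, 0.49, 1.0])
wq = quantize_uniform(w, bits=4)
```

A module like this would live under `NeuroComp_toolkit/core` and be re-exported from `utils.py` for legacy callers.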
Contributions, reorganizations, or documentation improvements are welcome. The objective is to make it easy to showcase neuromorphic workflows without recreating a full training stack from scratch.
The reorganized modules map to the key sections of the project report:
- Quantization & Pruning Experiments – see `NeuroComp_toolkit/core/quantization.py` for reusable helpers and `NeuroComp_toolkit/pipelines/ann_to_snn.py` for the orchestration of clipping, quantization, and noise-injection sweeps.
- ANN to SNN Conversion – implemented in `NeuroComp_toolkit/spiking.py` and invoked through the pipeline's spiking stage, showcasing hybrid ANN/SNN evaluation and spike statistics.
- Hardware Co-Design Metrics – activation/weight clipping utilities live in `NeuroComp_toolkit/utils.py`, while `NeuroComp_toolkit/tile_count.py` offers a clean CLI for reproducing tile-usage analyses.
- Model Zoo Benchmarks – curated under `NeuroComp_toolkit/models/__init__.py`, where `MODEL_REGISTRY` groups baseline, BN-folded, clipped, quantized, and spiking variants used throughout the documented experiments.
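The registry pattern can be pictured roughly as follows; this is a guess at the shape of the lookup, not the actual contents of `models/__init__.py`, and the class and function names are invented for illustration:

```python
# Hypothetical sketch of a model registry: each family maps variant names
# to constructors, so callers can request e.g. ("lenet", "quantized").
class LeNetBaseline:
    name = "lenet/baseline"


class LeNetQuantized:
    name = "lenet/quantized"


MODEL_REGISTRY = {
    "lenet": {
        "baseline": LeNetBaseline,
        "quantized": LeNetQuantized,
    },
}


def get_model(family: str, variant: str):
    """Instantiate a registered model, raising a clear error for unknown names."""
    try:
        return MODEL_REGISTRY[family][variant]()
    except KeyError as exc:
        raise ValueError(f"unknown model {family}/{variant}") from exc
```

Grouping variants by family this way keeps baseline, clipped, quantized, and spiking versions of a network discoverable from one place.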
- Quantization/clipping sweeps stay within ~0.5% of baseline accuracy while reducing activation precision to 6 bits, confirming the hardware-aware optimizations documented in `Project_report.pdf`.
- ANN→SNN conversion with percentile-derived thresholds preserves >90% CIFAR-10 accuracy over 90 time steps, demonstrating viable neuromorphic deployment.
- Hybrid ANN/SNN inference and tile-count tooling expose compute hotspots, enabling co-design guidance for accelerator-aware pruning.
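Percentile-derived thresholds of the kind mentioned above can be sketched in a few lines — the 99.9th-percentile choice and the function name are illustrative assumptions, not the toolkit's actual settings:

```python
import numpy as np


def percentile_threshold(activations: np.ndarray, pct: float = 99.9) -> float:
    """Pick a spiking threshold as a high percentile of observed ANN
    activations, so rare outliers do not inflate the firing threshold."""
    return float(np.percentile(activations, pct))


# Synthetic activations stand in for values recorded from a real ANN layer.
acts = np.abs(np.random.default_rng(0).normal(size=10_000))
thr = percentile_threshold(acts, 99.9)
```

Using a percentile instead of the raw maximum is the usual rationale: a single outlier activation would otherwise force every neuron in the layer to fire far below threshold.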