NeuroComp Toolkit: ANN→SNN Experimentation Suite

This toolkit hosts curated scripts and neural network architectures for quantization, clipping, pruning, and ANN→SNN conversion experiments. The codebase is organized as a Python package so it is easy to browse, document, and extend, even when it is used only to showcase results.

Highlights

  • Model zoo (NeuroComp_toolkit/models): MobileNet, VGG, AlexNet, LeNet/MLP, and SVHN classifiers with spiking counterparts bundled by family.
  • Core tooling (NeuroComp_toolkit/core): modular helpers for weight and activation quantization, clipping, and noise injection.
  • Dataset access (NeuroComp_toolkit/data): reusable loaders for CIFAR, Tiny ImageNet, SVHN, and MNIST benchmark datasets.
  • Pipelines (NeuroComp_toolkit/pipelines): the high-level run_pipeline entry point orchestrating the ANN→SNN workflow described in the accompanying project report.
  • Legacy utilities (NeuroComp_toolkit/utils.py): a backward-compatible API surface that re-exports the reorganized modules.

Repository Layout

NeuroComp_toolkit/
├── __init__.py
├── core/
│   ├── __init__.py
│   └── quantization.py
├── data/
│   ├── __init__.py
│   └── loaders.py
├── models/
│   ├── mobilenet/        # MobileNet family (ANN, clipped, quantized, spiking)
│   ├── vgg/              # VGG variants for CIFAR-10/100
│   ├── alexnet/          # AlexNet baselines and spiking conversions
│   ├── lenet/            # LeNet/MLP models
│   ├── svhn/             # SVHN ANN/SNN implementations
│   └── shared/           # Common activation/quantization utilities
├── pipelines/
│   ├── __init__.py
│   └── ann_to_snn.py
├── utils.py
├── spiking.py
├── tile_count.py
├── prj_mlp.ini
└── ...

The reorganization keeps legacy functionality intact while making it clear where quantization, pruning, dataset management, and orchestration code reside. Consumers can import the structured namespaces directly or continue using the compatibility layer in utils.py.
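
For example, both import styles look like this (module names match the layout above; treat this as a minimal sketch rather than a guaranteed export list):

    # Structured namespaces introduced by the reorganization
    from NeuroComp_toolkit.core import quantization
    from NeuroComp_toolkit.data import loaders
    from NeuroComp_toolkit.pipelines import run_pipeline

    # Compatibility layer that re-exports the same functionality
    from NeuroComp_toolkit import utils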

Usage Overview

  1. Configure a workflow via an INI file (see NeuroComp_toolkit/prj_mlp.ini for an example).
  2. Launch the pipeline:
    python -m NeuroComp_toolkit.main --config-file prj_mlp.ini
  3. From Python, call the orchestrator directly:
    from NeuroComp_toolkit.pipelines import run_pipeline
    
    run_pipeline("prj_mlp.ini")

Note: Some flows require pretrained checkpoints or datasets that are not bundled in the repository. Treat the scripts as structured reference implementations.

Project Report

The results and methodology originally produced with this codebase are summarized in Project_report.pdf. The report references quantization, clipping, and ANN→SNN experiments that map directly to the modules highlighted above.

Extending the Toolkit

  • Add new quantization techniques by creating modules under NeuroComp_toolkit/core and re-exporting them through utils.py if backward compatibility is needed (see the sketch after this list).
  • Introduce alternate datasets by placing new loader functions in NeuroComp_toolkit/data/loaders.py.
  • Build custom experiment pipelines by following the pattern in NeuroComp_toolkit/pipelines/ann_to_snn.py.
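
As an illustration of the first bullet, here is a minimal sketch of what such a module could contain. The file name, function, and re-export line are hypothetical, not existing APIs:

    # NeuroComp_toolkit/core/my_quantization.py (hypothetical new module)
    import numpy as np

    def uniform_quantize(weights: np.ndarray, num_bits: int = 6) -> np.ndarray:
        """Symmetric uniform quantization of a weight tensor to num_bits."""
        max_abs = float(np.max(np.abs(weights)))
        if max_abs == 0.0:
            return weights
        levels = 2 ** (num_bits - 1) - 1   # signed integer grid
        scale = max_abs / levels
        # Snap each weight to the nearest grid point, then map back to float.
        return np.round(weights / scale) * scale

    # For backward compatibility, add to utils.py:
    # from NeuroComp_toolkit.core.my_quantization import uniform_quantize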

Contributions, reorganizations, or documentation improvements are welcome. The objective is to make it easy to showcase neuromorphic workflows without recreating a full training stack from scratch.

Report Cross-Reference

The reorganized modules map to the key sections of the project report:

  • Quantization & Pruning Experiments – see NeuroComp_toolkit/core/quantization.py for reusable helpers and NeuroComp_toolkit/pipelines/ann_to_snn.py for the orchestration of clipping, quantization, and noise-injection sweeps.
  • ANN→SNN Conversion – implemented in NeuroComp_toolkit/spiking.py and invoked through the pipeline’s spiking stage, showcasing hybrid ANN/SNN evaluation and spike statistics.
  • Hardware Co-Design Metrics – activation/weight clipping utilities live in NeuroComp_toolkit/utils.py, while NeuroComp_toolkit/tile_count.py offers a clean CLI for reproducing tile-usage analyses.
  • Model Zoo Benchmarks – curated under NeuroComp_toolkit/models/__init__.py, where MODEL_REGISTRY groups baseline, BN-folded, clipped, quantized, and spiking variants used throughout the documented experiments (a lookup sketch follows below).
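
The internal structure of MODEL_REGISTRY is not spelled out here; assuming it behaves like a nested dict mapping family names to variant constructors, a lookup might read:

    # Hypothetical usage -- the registry keys and nesting are assumptions,
    # not documented APIs; consult models/__init__.py for the real layout.
    from NeuroComp_toolkit.models import MODEL_REGISTRY

    variants = MODEL_REGISTRY["mobilenet"]   # family entry (illustrative key)
    model_cls = variants["quantized"]        # pick one variant
    model = model_cls()                      # instantiate the network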

Key Takeaways from the Project Report

  • Quantization/clipping sweeps stay within ~0.5% of baseline accuracy while reducing activation precision to 6 bits, confirming the hardware-aware optimizations documented in Project_report.pdf.
  • ANN→SNN conversion with percentile-derived thresholds preserves >90% CIFAR-10 accuracy over 90 time steps, demonstrating viable neuromorphic deployment (a threshold sketch follows this list).
  • Hybrid ANN/SNN inference and tile-count tooling expose compute hotspots, enabling co-design guidance for accelerator-aware pruning.
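
Percentile-derived thresholds are a standard conversion recipe: record each layer’s ANN activations on calibration data and take a high percentile, rather than the raw maximum, as the spiking threshold so that rare outliers do not suppress firing rates. A minimal NumPy sketch, not taken from the repository:

    import numpy as np

    def percentile_threshold(activations: np.ndarray, pct: float = 99.9) -> float:
        """Spiking threshold from the pct-th percentile of ANN activations."""
        return float(np.percentile(activations.ravel(), pct))

    # Example: derive per-layer thresholds from recorded activation tensors.
    recorded = {"conv1": np.random.rand(10000), "fc1": np.random.rand(10000)}
    thresholds = {name: percentile_threshold(act) for name, act in recorded.items()}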
