Stars
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
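A minimal sketch of the Transformers pipeline API; the task string and example text are arbitrary, and the default checkpoint is downloaded on first use.

```python
from transformers import pipeline

# A task-level pipeline wraps tokenizer + model + post-processing.
classifier = pipeline("sentiment-analysis")
print(classifier("Starred repositories make a surprisingly good reading list."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```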
Tensors and Dynamic neural networks in Python with strong GPU acceleration
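This is PyTorch's description; a small sketch of its two headline features, GPU-placed tensors and dynamic autograd, with arbitrary sizes.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4, 4, device=device, requires_grad=True)
loss = (x @ x.T).sum()
loss.backward()       # the graph is built dynamically, per forward pass
print(x.grad.shape)   # torch.Size([4, 4])
```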
Models and examples built with TensorFlow
A high-throughput and memory-efficient inference and serving engine for LLMs
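A hedged sketch of vLLM's offline-inference API; the model id is just an example and any compatible causal LM could be substituted.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")           # example checkpoint, not a recommendation
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```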
The world's simplest facial recognition api for Python and the command line
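A short sketch of the face_recognition API; the image paths are placeholders for files you would supply yourself.

```python
import face_recognition

known = face_recognition.load_image_file("known_person.jpg")
unknown = face_recognition.load_image_file("unknown_person.jpg")

known_encoding = face_recognition.face_encodings(known)[0]
unknown_encoding = face_recognition.face_encodings(unknown)[0]

# compare_faces returns one boolean per known encoding
print(face_recognition.compare_faces([known_encoding], unknown_encoding))
```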
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
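A minimal sketch of timm's factory API; 'resnet50' stands in for any of the listed backbones, and pretrained weights are fetched on first use.

```python
import timm
import torch

model = timm.create_model("resnet50", pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```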
You like pytorch? You like micrograd? You love tinygrad! ❤️
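A hedged micrograd-style sketch with tinygrad's Tensor; the values are arbitrary and the import path follows the current README layout.

```python
from tinygrad import Tensor

x = Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
y = (x @ x).relu().sum()
y.backward()
print(x.grad.numpy())
```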
Official inference framework for 1-bit LLMs
FAIR's research platform for object detection, implementing popular algorithms like Mask R-CNN and RetinaNet.
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
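The canonical usage pattern from the vit-pytorch README, lightly trimmed; the hyperparameters are the example values, not recommendations.

```python
import torch
from vit_pytorch import ViT

v = ViT(
    image_size=256,
    patch_size=32,
    num_classes=1000,
    dim=1024,
    depth=6,
    heads=16,
    mlp_dim=2048,
)

img = torch.randn(1, 3, 256, 256)
preds = v(img)  # (1, 1000)
```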
Fast and memory-efficient exact attention
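A hedged sketch of the flash_attn_func interface: q/k/v are (batch, seqlen, nheads, headdim) half-precision tensors and the call requires a CUDA device; the shapes here are arbitrary.

```python
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")

out = flash_attn_func(q, k, v, causal=True)  # same shape as q
print(out.shape)
```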
Universal LLM Deployment Engine with ML Compilation
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
Janus-Series: Unified Multimodal Understanding and Generation Models
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Toolkit for linearizing PDFs for LLM datasets/training
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RN…
Open source code for AlphaFold 2.
Command line interface for testing internet bandwidth using speedtest.net
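Besides the CLI, the speedtest-cli package exposes a Python module; a hedged sketch of measuring download and upload against the nearest server.

```python
import speedtest

st = speedtest.Speedtest()
st.get_best_server()
st.download()
st.upload()
# ping, download/upload in bits per second, plus server metadata
print(st.results.dict())
```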
Minimal reproduction of DeepSeek R1-Zero
A curated list of papers on object detection using deep learning.
NumPy aware dynamic Python compiler using LLVM
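A minimal sketch of Numba's JIT on a NumPy-aware loop; the first call triggers LLVM compilation, and subsequent calls run the compiled machine code.

```python
import numpy as np
from numba import njit

@njit
def sum_of_squares(a):
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * a[i]
    return total

x = np.arange(1_000_000, dtype=np.float64)
print(sum_of_squares(x))
```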
Hackable and optimized Transformers building blocks, supporting a composable construction.
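A hedged sketch of xFormers' memory-efficient attention kernel; tensors are (batch, seq_len, heads, head_dim) in half precision on a CUDA device, with arbitrary shapes.

```python
import torch
import xformers.ops as xops

q = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")

out = xops.memory_efficient_attention(q, k, v)  # same shape as q
print(out.shape)
```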



