- Calgary, Alberta, Canada
Stars
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
A high-throughput and memory-efficient inference and serving engine for LLMs
Interact with your documents using the power of GPT, 100% privately, no data leaks
CLI platform to experiment with codegen. Precursor to: https://lovable.dev
Federated Query Engine for AI - The only MCP Server you'll ever need
Official Code for DragGAN (SIGGRAPH 2023)
DSPy: The framework for programming—not prompting—language models
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Run any open-source LLMs, such as DeepSeek and Llama, as an OpenAI-compatible API endpoint in the cloud.
Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (full-featured DragGAN implementation with an online demo and local deployment; code and models fully open-sourced; supports Windows, macOS, and Linux)
Plug-and-play implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that elevates model reasoning by at least 70%
Entropy Based Sampling and Parallel CoT Decoding
Code for Paper: “Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors
Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".
Simplified installers for oobabooga/text-generation-webui.
Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning"
💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client
Source code for "Packed Levitated Marker for Entity and Relation Extraction"