Starred repositories
A Slack Bot for summarizing arXiv papers, powered by OpenAI LLMs.
Arena-Hard-Auto: An automatic LLM benchmark.
verl: Volcano Engine Reinforcement Learning for LLMs
Official repository for the AnnoMI dataset: the first public collection of expert-annotated motivational interviewing (MI) transcripts.
Fully customizable AI chatbot component for your website
PATIENT-Ψ: Using Large Language Models to Simulate Patients for Training Mental Health Professionals (EMNLP 2024)
An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI
Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by OpenAI Solution team.
[ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale
A beginner's guide to using Llama 3 and the services it relies on.
Evaluation data, LLM query code, and results for "Large Language Models as Zero-Shot Conversational Recommenders" (CIKM 2023).
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Train transformer language models with reinforcement learning.
A beautiful, simple, clean, and responsive Jekyll theme for academics
Large Language Model-enhanced Recommender System Papers
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
🔥Highlighting the top ML papers every week.
CRSLab is an open-source toolkit for building Conversational Recommender System (CRS).
RUCAIBox / UniCRS
Forked from wxl1999/UniCRS. [KDD22] Official PyTorch implementation for "Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning".
RUCAIBox / KGSF
Forked from Lancelot39/KGSF. [KDD 2020] Improving Conversational Recommender Systems via Knowledge Graph based Semantic Fusion.
A high-throughput and memory-efficient inference and serving engine for LLMs
State-of-the-Art Text Embeddings
The definitive Web UI for local AI, with powerful features and easy setup.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)