- AI Institute, University of South Carolina
- Columbia, South Carolina
- https://vr25.github.io/
- @vrawte
Stars
This repository contains LLM (large language model) interview questions asked at top companies such as Google, Nvidia, Meta, Microsoft, and other Fortune 500 companies.
Public code of Dr. Ivan Reznikov used in posts, articles, and conferences
LlamaIndex is the leading framework for building LLM-powered agents over your data.
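A minimal sketch of the kind of retrieval-augmented pipeline this enables, assuming the classic `llama_index` import layout (newer releases expose the same classes under `llama_index.core`), a placeholder `data/` directory of documents, and default OpenAI-backed embeddings and LLM:

```python
# Minimal retrieval-augmented query over local documents with LlamaIndex.
# NOTE: import paths follow the classic `llama_index` layout; newer releases
# expose the same classes under `llama_index.core`. The `data/` folder and the
# question are placeholders, and the defaults require an OpenAI API key.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # load files from disk
index = VectorStoreIndex.from_documents(documents)      # embed and index them
query_engine = index.as_query_engine()                  # wrap the index for Q&A
print(query_engine.query("What does this corpus say about hallucination?"))
```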
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
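A toy sketch of the underlying idea, not the repository's API: sentences that are poorly supported by independently sampled responses from the same model are flagged as likely hallucinations. The unigram-overlap score and the hard-coded samples below are stand-ins for the paper's scoring variants and a real temperature-sampled LLM:

```python
# Toy illustration of the SelfCheckGPT idea: a response sentence that is not
# supported by other stochastic samples is likely hallucinated. The overlap
# score is a simplistic stand-in for the paper's BERTScore / QA / n-gram
# variants, and `samples` would normally come from the same LLM at temp > 0.
def support_score(sentence: str, samples: list[str]) -> float:
    """Fraction of the sentence's words that appear in at least one sample."""
    words = set(sentence.lower().split())
    if not words:
        return 1.0
    covered = {w for w in words if any(w in s.lower().split() for s in samples)}
    return len(covered) / len(words)

response_sentences = [
    "Marie Curie won two Nobel Prizes.",
    "She was born in Vienna in 1867.",   # contains an unsupported detail
]
samples = [
    "Marie Curie received Nobel Prizes in Physics and Chemistry.",
    "Curie, born in Warsaw in 1867, won two Nobel Prizes.",
]
for sent in response_sentences:
    print(f"{support_score(sent, samples):.2f}  {sent}")  # lower = more suspect
```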
🐍 Python Implementation and Extension of RDF2Vec
Materials for ACL-2022 tutorial: Knowledge-Augmented Methods for Natural Language Processing
Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
Pen and paper exercises in machine learning
Large-scale pretrained models for goal-directed dialog
The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. EMNLP, 2021.
Extracting Summary Knowledge Graphs from Long Documents
Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
Author's implementation of the CRAFT neural architecture from "Trouble on the Horizon" (EMNLP '19)
Graphic notes on Gilbert Strang's "Linear Algebra for Everyone"
Deep Learning based Semantic Textual Similarity Metric for Applications in Translation Technology
Detailed and tailored guide for undergraduate students or anybody who wants to dig deep into the field of AI with a solid foundation.
Code for the EMNLP 2020 paper titled "Chapter Captor: Text Segmentation in Novels"
Notes and exercise attempts for "An Introduction to Statistical Learning"
Student Solutions to An Introduction to Statistical Learning with Applications in R
Long Range Arena for Benchmarking Efficient Transformers
🐍💯 pySBD (Python Sentence Boundary Disambiguation) is a rule-based sentence boundary detection module that works out-of-the-box.
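A minimal out-of-the-box usage sketch, assuming the `pysbd` package is installed (the example text is illustrative):

```python
# Rule-based sentence boundary detection with pySBD.
import pysbd

seg = pysbd.Segmenter(language="en", clean=False)
text = "My name is Dr. Smith. I earned my Ph.D. at MIT. What about you?"
for sentence in seg.segment(text):   # abbreviations do not trigger false splits
    print(sentence)
```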
PyTorch implementation of the paper "Class-Balanced Loss Based on Effective Number of Samples"
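The paper reweights classes by the inverse of their "effective number" of samples, E_n = (1 - beta^n) / (1 - beta). A small sketch of how such weights could be plugged into a standard cross-entropy loss; the class counts and beta below are illustrative, not taken from the repository:

```python
# Class-balanced weights from effective numbers (Cui et al., CVPR 2019):
# E_n = (1 - beta**n) / (1 - beta), class weight proportional to 1 / E_n.
# Class counts and beta are made-up values for illustration.
import torch
import torch.nn.functional as F

def class_balanced_weights(samples_per_class, beta=0.999):
    counts = torch.tensor(samples_per_class, dtype=torch.float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights / weights.sum() * len(samples_per_class)  # sum to C

weights = class_balanced_weights([5000, 500, 50])       # imbalanced toy counts
logits = torch.randn(8, 3)                              # batch of 8, 3 classes
labels = torch.randint(0, 3, (8,))
loss = F.cross_entropy(logits, labels, weight=weights)  # class-balanced CE
```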
BERT fine-tuning for POS tagging task (Keras)
Align the token outputs from spaCy and Hugging Face to help understand what language structures transformers see
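One way such an alignment can be computed is by comparing character offsets from both tokenizers; the sketch below is a generic illustration of that idea, not this repository's implementation, and the model names are examples only:

```python
# Line up spaCy tokens with a Hugging Face fast tokenizer's subwords via
# character offsets. Requires `en_core_web_sm` to be downloaded.
import spacy
from transformers import AutoTokenizer

nlp = spacy.load("en_core_web_sm")
hf_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Transformers don't see words the way spaCy does."
doc = nlp(text)
enc = hf_tok(text, return_offsets_mapping=True, add_special_tokens=False)
subwords = hf_tok.convert_ids_to_tokens(enc["input_ids"])

for (start, end), subword in zip(enc["offset_mapping"], subwords):
    # spaCy tokens whose span covers the first character of this subword
    covering = [t.text for t in doc if t.idx <= start < t.idx + len(t.text)]
    print(f"{subword:>12}  ->  {covering}")
```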