Model Interpretability

As more and more decisions are driven by machine learning models, it is important to understand how a model arrives at a decision. NLP models in particular operate over a very large number of dimensions, which makes interpretability even more important.

This repository demonstrates the use of LIME to explain a simple logistic-regression-based text classification model. I will keep adding more models in the future to broaden the scope of this analysis and to provide a useful reference for related work.

The data is taken from Kaggle's Jigsaw Toxic Comment Classification challenge and can be downloaded from here. The original task is multi-label, but I reduced it to a binary classification problem to simplify the explanation of the generated results.
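The repository presumably relies on the `lime` package's `LimeTextExplainer`. To illustrate the underlying mechanism without that dependency, here is a minimal sketch of a LIME-style explanation: train a TF-IDF + logistic regression classifier, perturb one comment by randomly dropping words, and fit a distance-weighted linear model on the perturbations so its coefficients indicate each word's local contribution. The toy comments and the kernel width are placeholder assumptions, not the Jigsaw data or the library's defaults.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, Ridge

# Toy stand-in for the Jigsaw comments (label 1 = toxic).
texts = ["you are a stupid idiot", "have a nice day friend",
         "what an idiot move", "thanks for the help",
         "stupid stupid comment", "great work well done"]
labels = [1, 0, 1, 0, 1, 0]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def explain(text, num_samples=500, top_k=3, seed=0):
    """LIME-style local explanation: perturb the text by dropping words,
    then fit a locally weighted linear surrogate on the perturbations."""
    rng = np.random.default_rng(seed)
    words = text.split()
    # Each row is a binary mask: 1 = keep that word, 0 = drop it.
    masks = rng.integers(0, 2, size=(num_samples, len(words)))
    masks[0, :] = 1  # keep the original instance in the sample
    perturbed = [" ".join(w for w, m in zip(words, row) if m)
                 for row in masks]
    probs = clf.predict_proba(vec.transform(perturbed))[:, 1]
    # Exponential kernel: perturbations that drop more words count less.
    dist = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)
    local = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
    order = np.argsort(-np.abs(local.coef_))[:top_k]
    return [(words[i], float(local.coef_[i])) for i in order]

for word, weight in explain("you are a stupid idiot"):
    print(f"{word}: {weight:+.3f}")
```

On this toy data the surrogate assigns its largest positive weights to the words that push the classifier toward the toxic class, which is the same kind of per-word attribution the notebook's explanation chart visualizes.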

Notebook

Explanation Chart
