basavyr committed
Commit eaaa9e1 · verified · 1 Parent(s): e730fe0

Upload TriLM_3.9B_Unpacked_quant_IQ2_TN.gguf


Adds the `IQ2_TN` quantized version of the ~4B TriLM model. The quantization was done on Metal (i.e., the GPU of the M3 Pro silicon) via [`ik_llama.cpp`](https://github.com/ikawrakow/ik_llama.cpp).

[This PR](https://github.com/ikawrakow/ik_llama.cpp/pull/13) adds the `IQ2_TN` format with support for CUDA, AVX, and even Metal (thanks [ikawrakow](https://github.com/ikawrakow) 🙏). I therefore decided to test quantization and inference for the 3.9B TriLM variant. It seems to work quite nicely on the GPU.

.gitattributes CHANGED
@@ -35,3 +35,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  TriLM_3.9B_Unpacked_quant_TQ1_0.gguf filter=lfs diff=lfs merge=lfs -text
  TriLM_3.9B_Unpacked_quant_TQ2_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TriLM_3.9B_Unpacked_quant_IQ2_TN.gguf filter=lfs diff=lfs merge=lfs -text
TriLM_3.9B_Unpacked_quant_IQ2_TN.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab9b72b7d05352679efe430929b6d4e3d04ff8c4574d6851eed3496e4fab94c6
+ size 1166745280
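The file added above is not the `.gguf` itself but a git-lfs v1 pointer: three `key value` lines recording the spec version, the sha256 object id, and the byte size, while the actual weights live in LFS storage. A minimal sketch of reading such a pointer (the helper name `parse_lfs_pointer` is mine, not part of any library; the oid and size are the ones from this commit):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs v1 pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:ab9b72b7d05352679efe430929b6d4e3d04ff8c4574d6851eed3496e4fab94c6
size 1166745280
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 1166745280 bytes, i.e. the ~1.17 GB IQ2_TN .gguf
```

After `git lfs pull`, comparing the file's sha256 digest against the `oid` field is a quick way to confirm the download is intact.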