Mirror of https://github.com/ikawrakow/ik_llama.cpp.git
Bitnet changes (#106)
* Adapting iq2_bn to work without separate scale tensors
Why? It is becoming burdensome to maintain the special Bitnet
conversion in convert_hf_to_gguf.py, so I think it is better
to make iq1_bn and iq2_bn just work with the mainline
conversion script (which does not generate scales). A sketch
of such a per-row scale layout follows these notes.
* Adapting iq1_bn to work without separate scale tensors
* Adapting iq2_bn: CUDA dequantize
* Adapting iq2_bn: CUDA works
* Adapting iq1_bn: CUDA works
* Adapting iq1_bn, iq2_bn: NEON
* Adapting iq1_bn, iq2_bn: Metal
Dequantize works, but there is still something wrong
with the dot products.
* WIP
I absolutely don't see what is wrong with the iq1_bn and iq2_bn
vector dot product kernels.
* Remove iq1_tn and iq2_tn - Part 1
Now that iq1_bn and iq2_bn have per-row scales, there is no
reason to also have iq1_tn and iq2_tn.
* Remove iq1_tn and iq2_tn - Part 2
* Bitnet: use the standard llm_build_kv to build self attention
My main motivation was to enable FA. But FA does not work anyway
because the head size is 100 for the Bitnet ternary models
(and I had forgotten this little detail). A head-size check
illustrating this is sketched after these notes.
* Revert "Avoid rebuild of GGML graph for each token (#98)"
This reverts commit f2d315b46f.
As far as I can tell, the commit breaks Metal TG.
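For illustration, a minimal sketch of a ternary row that carries
its own scale, assuming the scale is stored as a float at the
start of the row. The struct name, packing order, and code
mapping are assumptions, not the actual iq2_bn layout; the point
is only that the scale travels with the row data, so the
converter no longer needs a separate scale tensor.

#include <stdint.h>

/* Hypothetical row layout for a ternary quant with a per-row
 * scale. NOT the actual iq2_bn layout; packing order and code
 * mapping are assumed for illustration. */
typedef struct {
    float   d;    /* per-row scale, stored inline with the data */
    uint8_t qs[]; /* 2-bit codes, 4 weights per byte            */
} row_ternary_sketch;

/* Decode one packed byte into 4 scaled weights.
 * Codes {0,1,2} map to ternary values {-1,0,+1} (assumed). */
static void decode_byte(float d, uint8_t q, float y[4]) {
    for (int i = 0; i < 4; ++i) {
        int code = (q >> (2*i)) & 3;
        y[i] = d * (float)(code - 1);
    }
}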
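And a tiny illustration of why FA fails for head size 100:
fused-attention kernels are typically compiled for a fixed set
of head sizes. The supported set below is an assumption for
illustration, not the actual list in ik_llama.cpp.

#include <stdbool.h>
#include <stddef.h>

/* Illustrative only: an assumed set of FA-supported head sizes.
 * The actual set in ik_llama.cpp may differ; what matters is
 * that 100 (the Bitnet ternary models) is not in it. */
static bool fa_supports_head_size(int head_size) {
    static const int supported[] = {64, 80, 96, 112, 128, 256};
    for (size_t i = 0; i < sizeof supported / sizeof supported[0]; ++i) {
        if (supported[i] == head_size) return true;
    }
    return false; /* e.g. fa_supports_head_size(100) == false */
}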
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
@@ -175,8 +175,6 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_IQ4_K = 140, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ5_K = 141, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ6_K = 142, // except 1d tensors
-    LLAMA_FTYPE_MOSTLY_IQ2_TN = 143, // except 1d tensors
-    LLAMA_FTYPE_MOSTLY_IQ1_TN = 144, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_KS = 145, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ3_KL = 146, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ2_KS = 147, // except 1d tensors