* WIP: absorb adding input into std_attn and std_ffn
* WIP: NCCL infra
* WIP: add reduce and fake_cpy ops
* WIP
* WIP: graph appears to work, layer is broken
* WIP: Qwen3-MoE works with graph, layer still broken
* WIP: GLM-4.5 graph works
* WIP: fix sm layer (dense)
* WIP: fix sm layer (MoE)
* WIP: fast PP with bespoke 4-GPU NCCL
I guess I'm not using NCCL the right way, as PP performance is very
low with a single communicator group for 3 or more GPUs.
But if I create 4 communicator groups for pairs of GPUs
((0,1), (2,3), (0,2), (1,3)) and use those, PP is fast: I'm hitting
1500 t/s for L3-70B on the 4x3090 system, which is
~20% better than the previous sm graph without NCCL.
But that cannot be the solution (I cannot be creating pairwise
communicators and the associated logic for every possible number of GPUs).
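For illustration, a minimal C++ sketch of the pairwise-communicator experiment described above (the `PairComm` struct and `make_pair_comms` helper are hypothetical names, not the actual code; error handling is omitted):
```cpp
#include <nccl.h>
#include <vector>

struct PairComm {
    int        dev[2];   // the two GPUs in this pair
    ncclComm_t comm[2];  // one NCCL communicator handle per GPU
};

// Create one 2-GPU communicator group for each of the pairs
// (0,1), (2,3), (0,2), (1,3) used in the 4-GPU experiment.
static std::vector<PairComm> make_pair_comms() {
    const int pairs[4][2] = { {0,1}, {2,3}, {0,2}, {1,3} };
    std::vector<PairComm> result(4);
    for (int i = 0; i < 4; ++i) {
        result[i].dev[0] = pairs[i][0];
        result[i].dev[1] = pairs[i][1];
        // ncclCommInitAll builds a clique over the listed devices; with
        // two devices this yields a dedicated pairwise communicator.
        ncclCommInitAll(result[i].comm, 2, pairs[i]);
    }
    return result;
}
```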
* WIP: Cohere2
* Explicitly set device
* Bespoke 3-GPU case
* WIP
* Do not repeat get_rows multiple times
* Fix 3 GPUs
* OK, let's leave it in
* Simple async
* This sync seems enough
* Only do async for 4 or more backends
With 2 GPUs (so, 3 backends), not using async is slightly faster
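Purely illustrative (the helper name is hypothetical): the async path is gated on the backend count, so it only kicks in from 4 backends upward.
```cpp
// Gate the async compute path on the backend count: with 2 GPUs there are
// 3 backends and the synchronous path measured slightly faster.
static bool use_async_compute(int n_backends, bool async_requested) {
    return async_requested && n_backends >= 4;
}
```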
* Scheduler changes
* Use OpenMP if available
Surprisingly (at least to me), this is quite a bit faster than
std::thread and std::barrier. GLM-4.5-AIR with 4 GPUs is now
at 105 t/s at zero context!
* Do not use OpenMP if there are tensor overrides
* Set omp max active levels
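A minimal sketch of the OpenMP approach from the bullets above; `compute_backend_split` is a hypothetical stand-in for the real per-backend work, and the max-active-levels call is what lets that work open its own nested OpenMP team.
```cpp
#include <cstdio>
#include <omp.h>

// Hypothetical stand-in for the real per-backend compute work.
static void compute_backend_split(int ib) {
    std::printf("computing split on backend %d\n", ib);
}

// One OpenMP thread per backend instead of std::thread + std::barrier.
static void compute_splits_parallel(int n_backends) {
    // Allow nested parallel regions, since the per-backend work may itself
    // spawn an OpenMP team (hence "set omp max active levels").
    omp_set_max_active_levels(2);
    #pragma omp parallel num_threads(n_backends)
    {
        compute_backend_split(omp_get_thread_num());
    }
}
```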
* Be more careful about setting the device before using a stream
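The rule this refers to, as a small CUDA sketch: a stream belongs to the device that was current when it was created, so that device should be current again before work is enqueued on the stream (`enqueue_copy` is an illustrative helper, not the actual code).
```cpp
#include <cstddef>
#include <cuda_runtime.h>

// Make the stream's device current before touching the stream, then enqueue
// an async device-to-device copy on it.
static void enqueue_copy(int device, cudaStream_t stream,
                         void * dst, const void * src, size_t nbytes) {
    cudaSetDevice(device);
    cudaMemcpyAsync(dst, src, nbytes, cudaMemcpyDeviceToDevice, stream);
}
```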
* Command line option to turn on async. Set to false by default for now
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* This should do the trick for PP
* Command line option to set max. extra VRAM that the scheduler can use
* Fix bug and cleanup
* Looks like with this change it is working with tensor overrides
* Nah, it is not working
* OK, this seems to be working
* Disable split scheduling with tensor overrides
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Rearrange graph nodes
So that graph portions that are the same on 2 or more GPUs can be
computed at the same time.
* Separate graph compute implementation for split mode graph
* This is better
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Offload only activated experts
* This seems to do the trick for -fmoe
* Do not recalculate activated experts for fused up/gate
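A rough sketch of the idea in the previous bullets, assuming (for illustration only) a contiguous per-expert layout; `activated_experts` and `upload_activated_experts` are hypothetical names, not the actual -fmoe code.
```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>
#include <cuda_runtime.h>

// Collect the unique experts selected by the router for this batch once,
// so the same list can be reused for the fused up/gate projection.
static std::vector<int32_t> activated_experts(const int32_t * ids, size_t n_ids) {
    std::unordered_set<int32_t> seen(ids, ids + n_ids);
    return std::vector<int32_t>(seen.begin(), seen.end());
}

// Upload only the activated experts' slices of an expert weight tensor,
// each expert occupying expert_nbytes contiguous bytes on host and device.
static void upload_activated_experts(void * dst_dev, const void * src_host,
                                     const std::vector<int32_t> & experts,
                                     size_t expert_nbytes, cudaStream_t stream) {
    for (int32_t e : experts) {
        cudaMemcpyAsync((char *)dst_dev        + (size_t)e * expert_nbytes,
                        (const char *)src_host + (size_t)e * expert_nbytes,
                        expert_nbytes, cudaMemcpyHostToDevice, stream);
    }
}
```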
* Log out-of-bounds access details
* Add a command line argument
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Adapting iq2_bn to work without separate scale tensors
Why? It is becoming burdensome to maintain the special Bitnet
conversion in convert_hf_to_gguf.py, so I think it is better
to make iq1_bn and iq2_bn just work with the mainline
conversion script (which does not generate scales).
* Adapting iq1_bn to work without separate scale tensors
* Adapting iq2_bn: CUDA dequantize
* Adapting iq2_bn: CUDA works
* Adapting iq1_bn: CUDA works
* Adapting iq1_bn, iq2_bn: NEON
* Adapting iq1_bn, iq2_bn: Metal
Dequantize works, but there is still something wrong
with the dot products.
* WIP
I absolutely don't see what is wrong with the iq1_bn and iq2_bn
vector dot product kernels.
* Remove iq1_tn and iq2_tn - Part 1
Now that iq1_bn and iq2_bn have per row scales, there is no
reason to also have iq1_tn and iq2_tn.
* Remove iq1_tn and iq2_tn - Part 2
* Bitnet: use the standard llm_build_kv to build self attention
My main motivation was to enable FA. But FA does not work anyway
because head size is 100 for the Bitnet ternary models
(and I had forgotten this little detail).
* Revert "Avoid rebuild of GGML graph for each token (#98)"
This reverts commit f2d315b46f.
As far as I can tell, the commit breaks Metal TG.
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Introduces caching of the GGML graph to avoid an unnecessary full rebuild for each token.
KV cache parameters, which change with each token, are updated directly in the cached GGML
graph. Can be disabled with the GGML_DISABLE_GRAPH_CACHING environment variable.
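A minimal, self-contained sketch of the caching scheme described above (all type and helper names here are illustrative stand-ins, not the real ggml/llama code):
```cpp
#include <cstdlib>

// Illustrative stand-ins for the real graph type and helpers.
struct cached_graph { int kv_head = 0; };
static cached_graph * build_graph_full()               { return new cached_graph(); }
static void update_kv_params(cached_graph * g, int kv) { g->kv_head = kv; }

// Caching can be switched off with the environment variable mentioned above.
static bool graph_caching_enabled() {
    return std::getenv("GGML_DISABLE_GRAPH_CACHING") == nullptr;
}

// Rebuild the graph only when caching is disabled or nothing is cached yet;
// otherwise patch the per-token KV-cache parameters into the cached graph.
static cached_graph * get_token_graph(cached_graph *& cache, int kv_head) {
    if (!graph_caching_enabled() || cache == nullptr) {
        delete cache;
        cache = build_graph_full();
    } else {
        update_kv_params(cache, kv_head);
    }
    return cache;
}
```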
* Merging mainline - WIP
* Merging mainline - WIP
AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often
the case with llama.cpp/ggml after some "improvements" have been made.
* Merging mainline - fix Metal
* Remove check
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>