* Copy reduce result to other GPUs if necessary
* Avoid ggml_get_rows for TG
* For the output ops use the result of the split that ran on the main GPU
* More models
* A hopefully more efficient adaptive_p sampling
* While at it, let's fix the formatting too
* More formatting
* Hopefully better
* This should be better
* Correctly accumulate adaptive_p sampling time
* AVX2
* adaptive-p sampler: fix zeroed orig_probs bug and refactor
- Fix bug where original probabilities were captured as zero, by calculating
them from the logits in the new llama_prep_adaptive_p.
- Replace vector with unordered_map to track candidate probabilities,
filtering for relevance via a logit delta (16.6f); see the sketch after this list.
- Standardize API naming: llama_<action/verb>_<focus/name/topic>_<extra/info>
- Update function signatures to match those of most other samplers.
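
A minimal, hypothetical sketch of the change described in the list above. The function name prep_adaptive_p_probs and the normalization details are illustrative assumptions; only the unordered_map tracking and the 16.6f logit-delta filter come from the commit itself:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative sketch only (not the actual ik_llama.cpp code): recompute the
// original probabilities from the raw logits and keep just the candidates whose
// logit lies within a fixed delta of the maximum, stored in an unordered_map
// keyed by token id instead of a flat vector.
static std::unordered_map<int32_t, float> prep_adaptive_p_probs(const std::vector<float> & logits) {
    constexpr float kLogitDelta = 16.6f; // candidates further than this from the max are discarded

    float max_logit = -INFINITY;
    for (float l : logits) max_logit = std::max(max_logit, l);

    // softmax denominator over all logits so the kept probabilities stay normalized
    double sum = 0.0;
    for (float l : logits) sum += std::exp(double(l) - double(max_logit));

    std::unordered_map<int32_t, float> probs;
    for (int32_t id = 0; id < (int32_t) logits.size(); ++id) {
        if (max_logit - logits[id] <= kLogitDelta) {
            probs[id] = float(std::exp(double(logits[id]) - double(max_logit)) / sum);
        }
    }
    return probs;
}
```

The point of the map is to keep an original probability per surviving candidate without storing the full vocabulary; the logit-delta filter bounds its size.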
* resolve merge bug
* adaptive-p: revert reordering function definitions
* Attempt to fix the many-GPU issue in split mode graph
* WIP: this seems more stable
Still hanging after a while if I try to use all 7 GPUs
* Reenable OpenMP in scheduler async
Seems solid up to 4 GPUs. It did hang with --max-gpu 6.
* printf cleanup
* Add ability to merge up+gate exps to more models
* Of course we need to pass the merged tensor to build_ffn
* All the others
* Also Qwen3VL-MoE
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* WIP - not working
* WIP - not working
* WIP - GPT-OSS working
However, the approach is extremely stupid: the only way I could correctly repack
the up/gate experts is to copy up and gate into host buffers, repack
into another host buffer, and copy back into the ffn_up_gate_exps tensor.
This is going to be very slow for giant 500 GB models.
My attempts to do this via a compute graph on the backend holding
the tensors were unsuccessful.
For GPT-OSS-20B I see ~6-7% better PP when using the original
ik_llama.cpp fused_up_gate CUDA implementation, and ~10% when
using the small batch size implementation.
Other models are not working yet on CUDA as I need to fix the
fused mul-unary implementation.
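
For reference, a rough sketch of the host-buffer round trip described in the GPT-OSS commit above, assuming the merged ffn_up_gate_exps tensor stores each expert's up block followed by its gate block. The helper name and the per-expert packing order are assumptions; only the copy-to-host / repack / copy-back pattern is taken from the commit message:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

#include "ggml.h"
#include "ggml-backend.h"

// Illustrative sketch only: pull the up and gate expert tensors into host
// buffers, repack them per expert into a third host buffer, and write the
// result back into the merged up+gate tensor. The per-expert layout is an
// assumption, not necessarily the layout the repack actually uses.
static void repack_up_gate_on_host(const ggml_tensor * up, const ggml_tensor * gate,
                                   ggml_tensor * up_gate, int64_t n_expert) {
    const size_t up_bytes   = ggml_nbytes(up);
    const size_t gate_bytes = ggml_nbytes(gate);

    std::vector<uint8_t> up_host(up_bytes), gate_host(gate_bytes);
    ggml_backend_tensor_get(up,   up_host.data(),   0, up_bytes);
    ggml_backend_tensor_get(gate, gate_host.data(), 0, gate_bytes);

    const size_t up_per_expert   = up_bytes   / n_expert;
    const size_t gate_per_expert = gate_bytes / n_expert;

    std::vector<uint8_t> merged(up_bytes + gate_bytes);
    for (int64_t e = 0; e < n_expert; ++e) {
        uint8_t * dst = merged.data() + e * (up_per_expert + gate_per_expert);
        std::memcpy(dst,                 up_host.data()   + e * up_per_expert,   up_per_expert);
        std::memcpy(dst + up_per_expert, gate_host.data() + e * gate_per_expert, gate_per_expert);
    }

    ggml_backend_tensor_set(up_gate, merged.data(), 0, merged.size());
}
```

The full device-to-host-to-device round trip is exactly why this is expected to be very slow for the giant ~500 GB models mentioned above.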
* WIP
* WIP - Qwen3-MoE (and hopefully all others) working
But when I say here and in the previous commit "working",
I mean PP is working. TG is still broken.
* WIP: TG seems to be working
* Minor
* Add command line option to merge experts up/gate
* Add merge up/gate command line parameter to llama-bench
* Turn off merge_up_gate_exps when using split mode graph
It is not yet implemented there
* When there is no bias, allow merging up/gate with tensor overrides
* Arghh, we need to increase the context size again
* Cleanup
* server: improve speed of speculative decoding
- change logs
- rpc: add recompute
- spec dec fix
* Fix n_batch_size not being set to the context size for the draft model
---------
Co-authored-by: firecoperana <firecoperana>
* Better VRAM utilization strategy for split mode graph
* Fix assert when --max-gpu is less than the number of available GPUs
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Mimo-2 support
* Fix bug when head sizes are not the same
It still does not solve the Mimo-2 quantized cache issue.
* Fix quantized cache
* Minor
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>