Commit Graph

327 Commits

Author SHA1 Message Date
yurko
627d46912c qwen3next: disable flash-attn for cpu-only contexts 2026-02-08 01:04:38 -08:00
yurko
670434ea8e qwen3next: clean up chunked delta-net shape handling 2026-02-08 00:49:37 -08:00
yurko
343e335ff0 qwen3next: warn when forcing fused decode mode 2026-02-08 00:08:33 -08:00
yurko
64099e71c0 qwen3next: make fused delta safe by default and fix fused tensor layout 2026-02-08 00:06:29 -08:00
yurko
143e88ae77 qwen3next: add decode-only fused delta mode 2026-02-07 23:05:19 -08:00
yurko
9930f4d961 qwen3next: default fused delta-net off and document quality checks 2026-02-07 22:56:51 -08:00
yurko
b33cef68ad qwen3next: add runtime switch for fused delta-net path 2026-02-07 17:31:17 -08:00
yurko
6dd990d15a qwen3next: add fused delta-net op and wire model path 2026-02-07 14:32:16 -08:00
yurko
5a6c4e8da5 qwen3next: keep recurrent state in 4d layout through delta path 2026-02-07 14:00:09 -08:00
yurko
de5bf44e8c qwen3next: drop redundant cont before recurrent state flatten 2026-02-07 13:45:37 -08:00
yurko
43edfa237b qwen3next: avoid extra cont on linear attention output 2026-02-07 13:30:29 -08:00
yurko
0e3891b348 qwen3next: remove redundant v_conv cont in delta path 2026-02-07 13:25:34 -08:00
yurko
a1163d0b68 qwen3next: trim delta-net graph overhead in chunking path 2026-02-07 13:21:02 -08:00
yurko
fffd27e3c8 qwen3next: harden seq-state flow and support optional dense FFN layers 2026-02-07 13:12:26 -08:00
yurko
6db8dc86ca qwen3next: split cpu/cuda eval builds and tune PP scheduling 2026-02-06 19:28:17 -08:00
Yurko
e64b43392f cuda: reduce qwen3next moe/ssm sync overhead and refresh eval 2026-02-06 14:46:59 +00:00
yurko
9fbb50481e qwen3next: optimize broadcast sub and single-seq ssm conv 2026-02-06 12:50:43 +00:00
yurko
a7df116441 qwen3next: add architecture support and recurrent-state fixes 2026-02-06 12:13:09 +00:00
Kawrakow
33308908db Merge pull request #1211 from ikawrakow/ik/reduce_mla3_compute_buffer_size
Reduce CUDA compute buffer size for mla=3
2026-01-31 14:24:14 +02:00
Kawrakow
b85a2a50d5 Reduce compute buffer size for mla=3 2026-01-31 10:43:05 +00:00
Kawrakow
4d13ae03b5 Also these other two places 2026-01-30 15:36:29 +00:00
Kawrakow
098b1a2e04 Fix MiniMax-M2 KV-cache loading/saving 2026-01-30 13:38:07 +00:00
Kawrakow
686fd1ebec Use standard output calculation for MiniMax-M2 graph parallel (#1199) 2026-01-29 09:06:40 +02:00
Kawrakow
68ed62447c Split mode graph for Minimax-M2 (#1195)
* Split mode graph for Minimax-M2

* Cleanup

* Forgotten ffn_exp_probs_b
2026-01-29 07:27:06 +02:00
Kawrakow
30381fc1fc Faster hybrid inference when shared experts (#1191) 2026-01-26 07:22:05 +02:00
Kawrakow
478b56871f Faster long context TG on CUDA for GLM-4.5/4.6/4.7/AIR (part 2) (#1190)
* This works

* Make quantized KV cache work

* Remove the glm45 graph building changes

* Add condition
2026-01-26 07:21:47 +02:00
Kawrakow
28f8320f3a Much faster rng sampling (#1187) 2026-01-25 09:11:27 +02:00
Kawrakow
04beeffa4e Faster long context TG on CUDA for GLM-4.5/4.6/4.7/AIR (#1183)
* Similar hack to #1182 for GLM-4.5/6/7

* Refinements

* Disable when the KV cache is not f16
2026-01-24 09:39:29 +02:00
Kawrakow
2a7cc09149 Remove llamafile remnants (#1179) 2026-01-22 13:20:23 +02:00
Kawrakow
851fda3509 Split mode graph: use CUDA graphs (#1177)
* Use CUDA graphs also when there are tensor overrides

* Change graph key

* This seems to work
2026-01-22 12:38:36 +02:00
Kawrakow
987651e54c Make comments more precise when experts gating function is missing (#1175) 2026-01-21 09:12:40 +02:00
Kawrakow
9e07839ba3 Correct GLM-4.7-Flash gating function (#1174)
* Correct GLM-4.7-Flash gating function

* This is better
2026-01-21 07:53:18 +02:00
Kawrakow
996e77047a Avoid ggml_get_rows if not necessary (#1160)
* Copy reduce result to other GPUs if necessary

* Avoid ggml_get_rows for TG

* For the output ops use the result of the split that ran on the main GPU

* More models
2026-01-20 15:38:21 +02:00
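
The "Avoid ggml_get_rows for TG" bullet above rests on the observation that during token generation there is a single token and a single output row, so gathering output rows is a no-op that can be skipped. A hedged sketch of that short-circuit follows; the names n_outputs and inp_out_ids follow common llama.cpp graph-builder conventions and are assumptions about this fork, not its actual code.

```cpp
#include "ggml.h"

// Skip the gather when there is exactly one token and one output row (the TG
// case); otherwise select the requested output rows as usual.
static struct ggml_tensor * maybe_select_outputs(
        struct ggml_context * ctx,
        struct ggml_tensor  * cur,          // per-token hidden states
        struct ggml_tensor  * inp_out_ids,  // indices of rows that need output
        int n_tokens, int n_outputs) {
    if (n_tokens == 1 && n_outputs == 1) {
        return cur;                          // nothing to gather
    }
    return ggml_get_rows(ctx, cur, inp_out_ids);
}
```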
Kawrakow
132a01d25d GLM-4.7-Flash support (#1168)
* GLM-4.7-Flash support

* Model type

* Make FA work for mla != 0
2026-01-20 12:46:52 +02:00
Kawrakow
ef5f17940c sampling: refactor sorting (#1166)
* sampling: refactor sorting

* Couldn't look at it without fixing it.
2026-01-19 16:48:54 +02:00
Kawrakow
98b30e5e81 Faster adaptive_p sampling (#1165)
* A hopefully more efficient adaptive_p sampling

* While at it, let's fix the formatting too

* More formatting

* Hopefully better

* This should be better

* Correctly accumulate adaptive_p sampling time

* AVX2
2026-01-19 16:03:09 +02:00
Kawrakow
fa58c20c42 A hopefully more efficient adaptive_p sampling (#1161)
* A hopefully more efficient adaptive_p sampling

* While at it, let's fix the formatting too

* More formatting

* Correctly accumulate sampling time for adaptive_p
2026-01-19 15:01:55 +02:00
Kawrakow
0c0b6e4b8b Copy reduce result to other GPUs if necessary (#1156) 2026-01-19 08:40:26 +02:00
dungquixote42
6dfbef27ec Adaptive p: bugfix + optimization + refactor (#1155)
* adaptive-p sampler: fix zeroed orig_probs bug and refactor

- Fix bug where original probabilities were captured as zero by calculating
  them from logits in llama_prep_adaptive_p (new).
- Replace vector with unordered_map to track candidate probabilities,
  filtering for relevance via logit delta (16.6f).
- Standardize API naming: llama_<action/verb>_<focus/name/topic>_<extra/info>
- Update function signatures to follow most other samplers.

* resolve merge bug

* adaptive-p: revert reordering function definitions
2026-01-18 08:26:06 +02:00
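
The entry above describes two concrete changes: recomputing the original probabilities from the raw logits instead of relying on values that arrived zeroed, and tracking candidates in an unordered_map filtered by a logit delta of 16.6f. A minimal self-contained sketch of that filtering idea, with hypothetical names (this is not the actual llama_prep_adaptive_p signature):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a token candidate; not the actual llama.cpp struct.
struct candidate { int32_t id; float logit; };

// Recompute the "original" probabilities directly from the logits and keep only
// candidates whose logit is within `delta` of the maximum (the commit mentions
// 16.6f; exp(-16.6) is ~6e-8, so anything further away contributes nothing).
std::unordered_map<int32_t, float>
prep_probs_from_logits(const std::vector<candidate> & cands, float delta = 16.6f) {
    float max_logit = -INFINITY;
    for (const auto & c : cands) max_logit = std::max(max_logit, c.logit);

    std::unordered_map<int32_t, float> probs;
    double sum = 0.0;
    for (const auto & c : cands) {
        if (max_logit - c.logit > delta) continue;      // filtered as irrelevant
        const float p = std::exp(c.logit - max_logit);  // unnormalized softmax term
        probs[c.id] = p;
        sum += p;
    }
    for (auto & kv : probs) kv.second /= (float) sum;   // normalize the survivors
    return probs;
}
```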
firecoperana
d71a3ec315 Server: refactor and rename functions (#1151)
* Server: rename functions and refactor code

rename functions

refactor update slots

rename params_base

rename timings

* change

* Revert kv cache name changes

* Revert 2

* fix test build error

---------

Co-authored-by: firecoperana <firecoperana>
2026-01-18 08:16:57 +02:00
Kawrakow
7024fdbc72 Additional graph reduce types for split mode graph (#1154)
* WIP: add Q8_0 and BF16 as possible reduce types

Does not work - there is a bug somewhere

* This finally works
2026-01-18 08:02:49 +02:00
Kawrakow
709e1a5375 Fixing split mode graph with many GPUs (#1152)
* Attempt to fix the many GPU issue in split mode graph

* WIP: this seems more stable

Still hanging after a while if I try to use all 7 GPUs

* Reenable OpenMP in scheduler async

Seems solid up to 4 GPUs. It did hang with --max-gpu 6.

* printf cleanup
2026-01-17 08:05:24 +02:00
Kawrakow
cb1063f6cd Fix experts/shared experts split (#1147) 2026-01-14 15:35:16 +02:00
Kawrakow
978202a754 Merge ffn_up and ffn_gate experts tensors (part 2) (#1139)
* Add ability to merge up+gate exps to more models

* We need to of course pass the merged tensor to build_ffn

* All the others

* Also Qwen3VL-MoE

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2026-01-13 08:07:52 +02:00
firecoperana
1a461525d5 server: stop processing the prompt when client disconnects (#1134)
implement generator-based API for task results

Update httplib.h to 0.27.0

Fix embedding error

Stop prompt processing when disconnected

Co-authored-by: firecoperana <firecoperana>
2026-01-13 07:56:59 +02:00
Kawrakow
c03c2d7cc6 Merge ffn_up and ffn_gate experts tensors (#1137)
* WIP - not working

* WIP - not working

* WIP - GPT-OSS working

However, extremely stupid. The only way I could correctly repack the
up/gate experts is to copy up and gate into host buffers, repack
into another host buffer, copy back into the ffn_up_gate_exps tensor.
This is going to be very slow for giant 500 GB models.

My attempts to do this via a compute graph on the backend holding
the tensors was unsuccessful.

For GPT-OSS-20B I see ~6-7% better PP when using the original
ik_llama.cpp fused_up_gate CUDA implementation, and ~10% when
using the small batch size implementation.

Other models are not working yet on CUDA as I need to fix the
fused mul-unary implementation.

* WIP

* WIP - Qwen3-MoE (and hopefully all others) working

But when I say here and in the previous commit "working",
I mean PP is working. TG is still broken.

* WIP: TG seems to be working

* Minor

* Add command line option to merge experts up/gate

* Add merge up/gate command line parameter to llama-bench

* Turn off merge_up_gate_exps if split mode graph

It is not yet implemented

* When no bias, allow merging up/gate with tensor overrides

* Arghh, we need to increase the context size again

* Cleanup
2026-01-12 18:30:53 +02:00
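
For reference, the host-side repack route described above (copy up and gate into host buffers, repack into another host buffer, copy the result back into the ffn_up_gate_exps tensor) can be sketched roughly as below. The layout is an assumption: the real packing in ik_llama.cpp may interleave differently and handles quantized data, which this f32-only sketch does not.

```cpp
#include <cstddef>
#include <cstring>

// Host-side repack sketch: per expert, copy that expert's ffn_up slice and then
// its ffn_gate slice into one contiguous merged buffer. Assumes f32 data and an
// "up rows then gate rows per expert" layout, which is illustrative only.
void merge_up_gate_host(const float * up,     // [n_expert][slice] elements
                        const float * gate,   // [n_expert][slice] elements
                        float * merged,       // [n_expert][2 * slice] elements
                        int n_expert, size_t slice) {
    for (int e = 0; e < n_expert; ++e) {
        std::memcpy(merged + (size_t) e*2*slice,         up   + (size_t) e*slice, slice*sizeof(float));
        std::memcpy(merged + (size_t) e*2*slice + slice, gate + (size_t) e*slice, slice*sizeof(float));
    }
}
```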
Kawrakow
738dc60b78 We don't need these 2026-01-10 15:32:21 +00:00
dungquixote42
52ad1c6421 Implement Adaptive-P Sampler (#1100)
* initial implementation of adaptive-p sampler

* explicitly mark candidates unsorted + cleanup qualifiers

* cosmetic update

* reorg prototypes

* lockstep with mainline

* add _impl for _init + reorg

* add LLAMA_API to prototypes

* update sharpness to 10

* lockstep: rng seed

* delete llama_sampling member in llama_sampler_adaptive_p

* fix LLAMA_API return type

* lockstep: rng seed cont

* actually correct implementation

* lockstep: sorting behavior

* const -> constexpr for known constants

* add missing space

* fix softmax usage in adaptive p sampler

* cosmetic changes

* implement do-not-sort version of softmax

* simplify rng seed, add static to constexpr

* refactor: remove iface + use shared rng + use actually original probabilities

* adaptive-p: add dedicated rng back in

* fix initial max_logit + add float vector to adaptive p sampler context + stochastic sampling

* adaptive-p: fuse first softmax with transformation

* adaptive-p: implement binary search selection

* adaptive-p: update comment
2026-01-10 07:58:53 +02:00
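
The last few bullets above (stochastic sampling, binary search selection) boil down to drawing a uniform variate against a cumulative sum of the transformed probabilities and locating the chosen index with a binary search. A generic sketch of just that selection step, independent of the adaptive-p transformation itself (sharpness and the fused softmax are not shown):

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Accumulate the (already transformed) probabilities, draw a uniform number in
// [0, total), and return the first index whose cumulative sum exceeds it.
int sample_by_binary_search(const std::vector<float> & probs, std::mt19937 & rng) {
    std::vector<float> cum(probs.size());
    std::partial_sum(probs.begin(), probs.end(), cum.begin());

    std::uniform_real_distribution<float> dist(0.0f, cum.back());
    const float u = dist(rng);

    return (int) (std::upper_bound(cum.begin(), cum.end(), u) - cum.begin());
}
```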
Kawrakow
dd3c3f72f2 Fix split mode graph for GPT-OSS with partial offload (#1128)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2026-01-10 07:57:43 +02:00
Kawrakow
08a0da389c Better VRAM utilization strategy for split mode graph (#1126)
* Better VRAM utilization strategy for split mode graph

* Fix assert when --max-gpu is less than available GPUs

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2026-01-09 13:36:02 +02:00