Commit Graph

9 Commits

Author    SHA1        Message                                                                   Date
yurko     64099e71c0  qwen3next: make fused delta safe by default and fix fused tensor layout   2026-02-08 00:06:29 -08:00
yurko     143e88ae77  qwen3next: add decode-only fused delta mode                               2026-02-07 23:05:19 -08:00
yurko     9930f4d961  qwen3next: default fused delta-net off and document quality checks        2026-02-07 22:56:51 -08:00
yurko     81e788e2f6  docs: refresh qwen3next perf review and benchmark matrix                  2026-02-07 17:31:17 -08:00
yurko     6db8dc86ca  qwen3next: split cpu/cuda eval builds and tune PP scheduling              2026-02-06 19:28:17 -08:00
Yurko     e64b43392f  cuda: reduce qwen3next moe/ssm sync overhead and refresh eval             2026-02-06 14:46:59 +00:00
yurko     c767cfa1d3  docs: update qwen3next perf report for cuda MoE/SSM tuning                2026-02-06 13:52:54 +00:00
yurko     9fbb50481e  qwen3next: optimize broadcast sub and single-seq ssm conv                 2026-02-06 12:50:43 +00:00
Kawrakow  0ceeb11721  Merge mainline llama.cpp (#3)                                             2024-07-27 07:55:01 +02:00

    * Merging mainline - WIP

    * Merging mainline - WIP

    AVX2 and CUDA appear to work.
    CUDA performance seems slightly (~1-2%) lower, as is so often
    the case with llama.cpp/ggml after some "improvements" have been made.

    * Merging mainline - fix Metal

    * Remove check

    ---------

    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>