Fused FFN_UP+FFN_GATE op (#741)

* Fused up+gate+unary for regular (not MoE) FFN - CPU (a sketch of the fused computation follows these notes)

* WIP CUDA

* Seems to be working on CUDA

For a dense model we get a 2-3% speedup for prompt processing (PP) and ~0.6% for token generation (TG).

* Add command line option

This time the option is ON by default; it can be turned off via -no-fug or
--no-fused-up-gate (see the parsing sketch below).
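
For illustration, a minimal CPU sketch of what the fused op computes, assuming
the usual SiLU activation and row-major weights. The names, shapes, and scalar
loop here are illustrative stand-ins, not the fork's actual ggml kernel:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// SiLU, the usual unary in up*unary(gate) FFNs.
static float silu(float v) { return v / (1.0f + std::exp(-v)); }

// Illustrative fused kernel: out[i] = silu(Wg[i]·x) * (Wu[i]·x).
// Both dot products are computed in one pass over each weight row, so the
// activations x are streamed once instead of once per mul_mat, and the
// intermediate gate/up tensors never hit memory.
static void fused_up_gate_silu(const std::vector<std::vector<float>> & Wg,
                               const std::vector<std::vector<float>> & Wu,
                               const std::vector<float> & x,
                               std::vector<float> & out) {
    const std::size_t n_ff = Wg.size();
    out.resize(n_ff);
    for (std::size_t i = 0; i < n_ff; ++i) {
        float g = 0.0f, u = 0.0f;
        for (std::size_t j = 0; j < x.size(); ++j) {
            g += Wg[i][j] * x[j]; // gate projection
            u += Wu[i][j] * x[j]; // up projection
        }
        out[i] = silu(g) * u; // unary(gate) * up, fused with the projections
    }
}
```

The unfused graph would instead run two separate mul_mats followed by an
elementwise multiply of the unary, writing and re-reading the intermediate
tensors; avoiding those extra passes is the kind of saving that produces the
small PP/TG wins reported above.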
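The flag names -no-fug and --no-fused-up-gate come from this commit, but the
parsing code itself is not part of the excerpt below. A hedged sketch of how
such a flag is typically handled in a common.cpp-style argument loop
(gpt_params_stub and handle_arg are hypothetical stand-ins):

```cpp
#include <string>

struct gpt_params_stub {
    bool fused_up_gate = true; // fused path is ON by default
};

// Hypothetical stand-in for the usual argument-handling loop.
static bool handle_arg(const std::string & arg, gpt_params_stub & params) {
    if (arg == "-no-fug" || arg == "--no-fused-up-gate") {
        params.fused_up_gate = false; // opt out of the fused up*unary(gate) op
        return true;
    }
    return false;
}
```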

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Kawrakow, 2025-08-31 18:16:36 +03:00 (committed by GitHub)
parent d55e98519f · commit 8de297b795
10 changed files with 276 additions and 12 deletions


@@ -191,6 +191,7 @@ struct gpt_params {
     int mla_attn = 0;               // MLA 0: standard attention, 1: MLA with K and transposed V cache, 2: MLA with just K cache
     int attn_max_batch = 0;         // Max batch size to use when computing attention (only applicable if flash_attn = false)
     bool fused_moe_up_gate = false; // fused up*unary(gate) op for MoE models
+    bool fused_up_gate = true;      // fused up*unary(gate) op
     int min_experts = -1;
     float thresh_experts = 0;
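
Only the two flags and their defaults above come from the diff. The dispatch
below is an illustrative stand-in for how build code might pick a path
(ffn_is_moe and use_fused_ffn are hypothetical names, not the fork's API):

```cpp
// Hypothetical dispatch: MoE fusion stays opt-in (false by default),
// while the new dense-FFN fusion is opt-out (true by default).
struct ffn_flags {
    bool fused_moe_up_gate = false;
    bool fused_up_gate     = true;
};

static bool use_fused_ffn(bool ffn_is_moe, const ffn_flags & f) {
    return ffn_is_moe ? f.fused_moe_up_gate : f.fused_up_gate;
}
```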