Mirror of https://github.com/ikawrakow/ik_llama.cpp.git, synced 2026-02-10 00:10:13 +00:00
Fused FFN_UP+FFN_GATE op (#741)
* Fused up+gate+unary for regular (not MoE) FFN - CPU

* WIP CUDA

* Seems to be working on CUDA

  For a dense model we get 2-3% speedup for PP and ~0.6% for TG.

* Add command line option

  This time the option is ON by default, and one needs to turn it off via
  -no-fug or --no-fused-up-gate

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
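For context, the op fuses the two FFN projections and the unary activation between them into a single graph node, so the gate matmul, the up matmul, and the activation are evaluated together instead of as three separate ops. The scalar sketch below is purely illustrative and not the actual ggml kernel; the function name, the choice of SiLU as the unary op, and the row-major weight layout are all assumptions:

#include <math.h>

// Illustrative only: out[j] = unary(gate_proj(x)[j]) * up_proj(x)[j],
// shown here with SiLU as the unary op. The real op works on ggml
// tensors and runs on CPU/CUDA; names and layout are hypothetical.
static float silu(float v) { return v / (1.0f + expf(-v)); }

void fused_up_gate_row(const float *x, const float *w_up, const float *w_gate,
                       float *out, int n_embd, int n_ff) {
    for (int j = 0; j < n_ff; ++j) {
        float up = 0.0f, gate = 0.0f;
        for (int i = 0; i < n_embd; ++i) {   // both projections reuse the same input row
            up   += w_up  [(size_t)j*n_embd + i] * x[i];
            gate += w_gate[(size_t)j*n_embd + i] * x[i];
        }
        out[j] = silu(gate) * up;            // unary activation fused with the two matmuls
    }
}

Keeping the two matmuls and the activation in one op avoids materializing the separate gate and up intermediates, which is presumably where the small PP/TG gains quoted above come from.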
@@ -419,7 +419,8 @@ extern "C" {
         bool  flash_attn;        // whether to use flash attention [EXPERIMENTAL]
         int   mla_attn;          // whether to use MLA attention [EXPERIMENTAL]
         int   attn_max_batch;    // maximum batch size for attention computations [EXPERIMENTAL]
-        bool  fused_moe_up_gate; // whether to use fused MoE up/down op [EXPERIMENTAL]
+        bool  fused_moe_up_gate; // whether to use fused MoE up/gate op
+        bool  fused_up_gate;     // whether to use fused up/gate op [EXPERIMENTAL]
         int   min_experts;
         float thresh_experts;
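On the API side, the new flag can be toggled when the context is created. A minimal sketch, assuming the struct shown in the hunk is llama_context_params and that the usual llama.cpp entry points llama_context_default_params() and llama_new_context_with_model() apply (neither is shown in the diff itself):

#include "llama.h"

// Hedged sketch: assumes the fields above belong to llama_context_params
// and are honored when the context is created.
struct llama_context *make_context(struct llama_model *model) {
    struct llama_context_params cparams = llama_context_default_params();
    cparams.fused_moe_up_gate = true;   // fused MoE up/gate op
    cparams.fused_up_gate     = true;   // new fused up/gate op for dense FFN
                                        // (ON by default per the commit message)
    return llama_new_context_with_model(model, cparams);
}

From the command line, the commit message states the fused op is ON by default and can be disabled with -no-fug or --no-fused-up-gate.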