CUDA: prompt processing optimizations for MoE models (#739)

* Skip the row id computation for the ffn_down op

Sadly, almost negligible performance gain.

* Also this doesn't do much

* Also this barely moves the needle

* This is slightly better

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Author: Kawrakow
Date: 2025-08-30 12:09:41 +03:00 (committed by GitHub)
parent f529c3a808
commit d55e98519f
5 changed files with 117 additions and 57 deletions


@@ -13342,7 +13342,7 @@ struct llm_build_context {
     // whether to use n_tokens as the matrix dimension during multiplication or n_head
     // n_tokens is higher during prompt processing, this allows to optimize for this case
-    bool pp_opt = n_tokens >= 128; // Is it a fixed constant or is it somehow related to n_head? original: n_tokens > n_head;
+    bool pp_opt = n_tokens >= 32; //128; // Is it a fixed constant or is it somehow related to n_head? original: n_tokens > n_head;
     for (int il = 0; il < n_layer; ++il) {
         struct ggml_tensor * inpSA = inpL;