mirror of https://github.com/ikawrakow/ik_llama.cpp.git
Offload only activated experts to the GPU (#698)
* Offload only activated experts
* This seems to do the trick for -fmoe
* Do not recalculate activated experts for fused up/gate
* Log out-of-bounds access details
* Add a command line argument

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
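The change builds on how MoE routing works: the router selects only a few experts per token, so for a given batch only the union of the selected expert ids ever touches the weights, and only those experts' tensors need to be uploaded to the GPU. A minimal conceptual sketch of that idea (hypothetical names and shapes, not the actual ik_llama.cpp implementation):

// Conceptual sketch only: given the router's top-k expert ids for a batch,
// collect the unique set of activated experts so that only their weights
// would need to be copied to the GPU.
#include <cstdio>
#include <set>
#include <vector>

// one row of k expert ids per token, as produced by the router (hypothetical input)
std::vector<int> active_experts(const std::vector<std::vector<int>> & topk_ids) {
    std::set<int> active;
    for (const auto & row : topk_ids) {
        active.insert(row.begin(), row.end());
    }
    return {active.begin(), active.end()};
}

int main() {
    // 3 tokens, top-2 routing over some larger number of experts
    std::vector<std::vector<int>> topk_ids = {{1, 4}, {4, 7}, {1, 2}};
    for (int e : active_experts(topk_ids)) {
        // in the real code, this is where only the activated experts'
        // up/gate/down tensors would be offloaded
        printf("would offload expert %d\n", e);
    }
    return 0;
}

Per the commit message, the real implementation also avoids recomputing this set a second time for the fused up/gate op.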
@@ -424,6 +424,7 @@ extern "C" {
     bool fused_up_gate; // whether to use fused up/gate op [EXPERIMENTAL]
     int min_experts;
     float thresh_experts;
+    bool only_active_experts;
 
     // Abort callback
     // if it returns true, execution of llama_decode() will be aborted
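A hedged usage sketch: assuming, as the surrounding members suggest, these fields belong to llama_context_params in ik_llama.cpp's llama.h, a caller could opt in like this (model loading and error handling omitted; field placement is an assumption, not confirmed by this diff):

// Hedged sketch, not taken from the PR: enable the new flag alongside
// the fused up/gate op, assuming both live in llama_context_params.
#include "llama.h"

llama_context_params make_params() {
    llama_context_params params = llama_context_default_params();
    params.fused_up_gate       = true; // fused up/gate op (-fmoe) [EXPERIMENTAL]
    params.only_active_experts = true; // offload only activated experts
    return params;
}

The commit message also mentions a command line argument for this; its exact name is not shown in this diff, so it is not reproduced here.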