* Bump GGML_MAX_CONTEXTS to allow loading more shards. This variable prevents more than 64 shards from being loaded, which is specifically relevant for large models such as DeepSeek R1.
* https://github.com/ikawrakow/ik_llama.cpp/pull/611#issuecomment-3072175559
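
For illustration, a minimal sketch of the kind of change involved, assuming the limit is the `GGML_MAX_CONTEXTS` macro in `ggml.h` as in upstream ggml; the new value below is a placeholder, not necessarily the value chosen in PR #611:

```c
/* ggml.h -- illustrative sketch only.
 * Upstream ggml caps the context pool at 64; since each loaded shard
 * occupies a context, models split into more than 64 GGUF files cannot
 * be loaded. Raising the cap lifts that restriction (placeholder value).
 */
//#define GGML_MAX_CONTEXTS 64
#define GGML_MAX_CONTEXTS 2048
```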