mirror of
https://github.com/ikawrakow/ik_llama.cpp.git
synced 2026-01-26 17:20:01 +00:00
491 lines
29 KiB
Markdown
### 🐛 [#522](https://github.com/ikawrakow/ik_llama.cpp/issues/522) - Bug: disabling CUDA graphs due to mul_mat_id
| **Author** | `SlavikCA` |
| :--- | :--- |
| **State** | ❌ **Closed** |
| **Created** | 2025-06-12 |
| **Updated** | 2025-06-12 |

---

#### Description

### What happened?

Equipment:

- Chinese 48 GB VRAM mod of an RTX 4090 D
- Intel Xeon 5218 (16 cores)
- 6 channels of 64 GB DDR4-2666 (384 GB total)

```
nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.57.08              Driver Version: 575.57.08      CUDA Version: 12.9     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   1  NVIDIA GeForce RTX 4090 D      On  |   00000000:00:11.0 Off |                  Off |
| 36%   56C    P0             95W /  425W |   42265MiB /  49140MiB |     39%      Default |
|                                         |                        |                  N/A |
```

When I run llama-server or llama-sweep-bench, I get many `disabling CUDA graphs due to mul_mat_id` messages in the log. Inference runs fine, so can I just ignore them, or what are they telling me?

### Name and Version

```
cmake -B ./build -DGGML_CUDA=ON -DGGML_BLAS=0FF -DCMAKE_CUDA_HOST_COMPILER=/usr/bin/g++-12
```

```
./build/bin/llama-server --version
version: 3745 (a0ac16b9)
built with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
```

### What operating system are you seeing the problem on?

Linux

### Relevant log output

```shell
CUDA_VISIBLE_DEVICES=0 ./build/bin/llama-sweep-bench \
> --model /mnt/models/ollama/models--ubergarm--DeepSeek-R1-0528-GGUF/snapshots/076fc03e6aa0827dc90b2b18dfd3da35d537bc52/IQ2_K_R4/DeepSeek-R1-0528-IQ2_K_R4-00001-of-00005.gguf \
> --ctx-size 32768 \
> -ctk q8_0 -fa -mla 3 \
> -b 4096 -ub 4096 \
> -amb 512 \
> -fmoe \
> --temp 0.6 --top-p 0.95 \
> --n-gpu-layers 999 \
> --override-tensor "blk\.([1-9])\.ffn_.*=CUDA0" \
> --override-tensor exps=CPU \
> --parallel 1 \
> --threads 16
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090 D, compute capability 8.9, VMM: yes
llama_model_loader: additional 4 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 52 key-value pairs and 1147 tensors from /mnt/models/ollama/models--ubergarm--DeepSeek-R1-0528-GGUF/snapshots/076fc03e6aa0827dc90b2b18dfd3da35d537bc52/IQ2_K_R4/DeepSeek-R1-0528-IQ2_K_R4-00001-of-00005.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = deepseek2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = DeepSeek R1 0528
llama_model_loader: - kv 3: general.version str = 0528
llama_model_loader: - kv 4: general.basename str = DeepSeek-R1
llama_model_loader: - kv 5: general.size_label str = 256x21B
llama_model_loader: - kv 6: deepseek2.block_count u32 = 61
llama_model_loader: - kv 7: deepseek2.context_length u32 = 163840
llama_model_loader: - kv 8: deepseek2.embedding_length u32 = 7168
llama_model_loader: - kv 9: deepseek2.feed_forward_length u32 = 18432
llama_model_loader: - kv 10: deepseek2.attention.head_count u32 = 128
llama_model_loader: - kv 11: deepseek2.attention.head_count_kv u32 = 128
llama_model_loader: - kv 12: deepseek2.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 13: deepseek2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 14: deepseek2.expert_used_count u32 = 8
llama_model_loader: - kv 15: general.file_type u32 = 338
llama_model_loader: - kv 16: deepseek2.leading_dense_block_count u32 = 3
llama_model_loader: - kv 17: deepseek2.vocab_size u32 = 129280
llama_model_loader: - kv 18: deepseek2.attention.q_lora_rank u32 = 1536
llama_model_loader: - kv 19: deepseek2.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 20: deepseek2.attention.key_length u32 = 192
llama_model_loader: - kv 21: deepseek2.attention.value_length u32 = 128
llama_model_loader: - kv 22: deepseek2.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 23: deepseek2.expert_count u32 = 256
llama_model_loader: - kv 24: deepseek2.expert_shared_count u32 = 1
llama_model_loader: - kv 25: deepseek2.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 26: deepseek2.expert_weights_norm bool = true
llama_model_loader: - kv 27: deepseek2.expert_gating_func u32 = 2
llama_model_loader: - kv 28: deepseek2.rope.dimension_count u32 = 64
llama_model_loader: - kv 29: deepseek2.rope.scaling.type str = yarn
llama_model_loader: - kv 30: deepseek2.rope.scaling.factor f32 = 40.000000
llama_model_loader: - kv 31: deepseek2.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 32: deepseek2.rope.scaling.yarn_log_multiplier f32 = 0.100000
llama_model_loader: - kv 33: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 34: tokenizer.ggml.pre str = deepseek-v3
llama_model_loader: - kv 35: tokenizer.ggml.tokens arr[str,129280] = ["<|begin▁of▁sentence|>", "<<3C>...
llama_model_loader: - kv 36: tokenizer.ggml.token_type arr[i32,129280] = [3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 37: tokenizer.ggml.merges arr[str,127741] = ["Ġ t", "Ġ a", "i n", "Ġ Ġ", "h e...
llama_model_loader: - kv 38: tokenizer.ggml.bos_token_id u32 = 0
llama_model_loader: - kv 39: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 40: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 41: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 42: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 43: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 44: general.quantization_version u32 = 2
llama_model_loader: - kv 45: quantize.imatrix.file str = /mnt/raid/models/ubergarm/DeepSeek-R1...
llama_model_loader: - kv 46: quantize.imatrix.dataset str = ubergarm-imatrix-calibration-corpus-v...
llama_model_loader: - kv 47: quantize.imatrix.entries_count i32 = 721
llama_model_loader: - kv 48: quantize.imatrix.chunks_count i32 = 812
llama_model_loader: - kv 49: split.no u16 = 0
llama_model_loader: - kv 50: split.count u16 = 5
llama_model_loader: - kv 51: split.tensors.count i32 = 1147
llama_model_loader: - type f32: 361 tensors
llama_model_loader: - type q5_0: 61 tensors
llama_model_loader: - type iq4_ks: 116 tensors
llama_model_loader: - type iq5_ks: 435 tensors
llama_model_loader: - type iq2_k_r4: 116 tensors
llama_model_loader: - type iq3_k_r4: 58 tensors
llm_load_vocab: special tokens cache size = 818
llm_load_vocab: token to piece cache size = 0.8223 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = deepseek2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 129280
llm_load_print_meta: n_merges = 127741
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 163840
llm_load_print_meta: n_embd = 7168
llm_load_print_meta: n_layer = 61
llm_load_print_meta: n_head = 128
llm_load_print_meta: n_head_kv = 128
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_swa_pattern = 1
llm_load_print_meta: n_embd_head_k = 192
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 24576
llm_load_print_meta: n_embd_v_gqa = 16384
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18432
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = yarn
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 0.025
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 671B
llm_load_print_meta: model ftype = IQ2_K_R4 - 2.375 bpw
llm_load_print_meta: model params = 672.050 B
llm_load_print_meta: model size = 219.019 GiB (2.799 BPW)
llm_load_print_meta: repeating layers = 217.886 GiB (2.793 BPW, 670.196 B parameters)
llm_load_print_meta: general.name = DeepSeek R1 0528
llm_load_print_meta: BOS token = 0 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: PAD token = 1 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 131 'Ä'
llm_load_print_meta: max token length = 256
llm_load_print_meta: n_layer_dense_lead = 3
llm_load_print_meta: n_lora_q = 1536
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: rope_yarn_log_mul = 0.1000
llm_load_tensors: ggml ctx size = 0.93 MiB
Tensor blk.1.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.1.ffn_gate.weight buffer type overriden to CUDA0
Tensor blk.1.ffn_down.weight buffer type overriden to CUDA0
Tensor blk.1.ffn_up.weight buffer type overriden to CUDA0
Tensor blk.2.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.2.ffn_gate.weight buffer type overriden to CUDA0
Tensor blk.2.ffn_down.weight buffer type overriden to CUDA0
Tensor blk.2.ffn_up.weight buffer type overriden to CUDA0
Tensor blk.3.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.3.ffn_gate_inp.weight buffer type overriden to CUDA0
Tensor blk.3.ffn_gate_exps.weight buffer type overriden to CUDA0
Tensor blk.3.ffn_down_exps.weight buffer type overriden to CUDA0
Tensor blk.3.ffn_up_exps.weight buffer type overriden to CUDA0
Tensor blk.3.ffn_gate_shexp.weight buffer type overriden to CUDA0
Tensor blk.3.ffn_down_shexp.weight buffer type overriden to CUDA0
Tensor blk.3.ffn_up_shexp.weight buffer type overriden to CUDA0
Tensor blk.4.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.4.ffn_gate_inp.weight buffer type overriden to CUDA0
Tensor blk.4.ffn_gate_exps.weight buffer type overriden to CUDA0
Tensor blk.4.ffn_down_exps.weight buffer type overriden to CUDA0
Tensor blk.4.ffn_up_exps.weight buffer type overriden to CUDA0
Tensor blk.4.ffn_gate_shexp.weight buffer type overriden to CUDA0
Tensor blk.4.ffn_down_shexp.weight buffer type overriden to CUDA0
Tensor blk.4.ffn_up_shexp.weight buffer type overriden to CUDA0
Tensor blk.5.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.5.ffn_gate_inp.weight buffer type overriden to CUDA0
Tensor blk.5.ffn_gate_exps.weight buffer type overriden to CUDA0
Tensor blk.5.ffn_down_exps.weight buffer type overriden to CUDA0
Tensor blk.5.ffn_up_exps.weight buffer type overriden to CUDA0
Tensor blk.5.ffn_gate_shexp.weight buffer type overriden to CUDA0
Tensor blk.5.ffn_down_shexp.weight buffer type overriden to CUDA0
Tensor blk.5.ffn_up_shexp.weight buffer type overriden to CUDA0
Tensor blk.6.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.6.ffn_gate_inp.weight buffer type overriden to CUDA0
Tensor blk.6.ffn_gate_exps.weight buffer type overriden to CUDA0
Tensor blk.6.ffn_down_exps.weight buffer type overriden to CUDA0
Tensor blk.6.ffn_up_exps.weight buffer type overriden to CUDA0
Tensor blk.6.ffn_gate_shexp.weight buffer type overriden to CUDA0
Tensor blk.6.ffn_down_shexp.weight buffer type overriden to CUDA0
Tensor blk.6.ffn_up_shexp.weight buffer type overriden to CUDA0
Tensor blk.7.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.7.ffn_gate_inp.weight buffer type overriden to CUDA0
Tensor blk.7.ffn_gate_exps.weight buffer type overriden to CUDA0
Tensor blk.7.ffn_down_exps.weight buffer type overriden to CUDA0
Tensor blk.7.ffn_up_exps.weight buffer type overriden to CUDA0
Tensor blk.7.ffn_gate_shexp.weight buffer type overriden to CUDA0
Tensor blk.7.ffn_down_shexp.weight buffer type overriden to CUDA0
Tensor blk.7.ffn_up_shexp.weight buffer type overriden to CUDA0
Tensor blk.8.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.8.ffn_gate_inp.weight buffer type overriden to CUDA0
Tensor blk.8.ffn_gate_exps.weight buffer type overriden to CUDA0
Tensor blk.8.ffn_down_exps.weight buffer type overriden to CUDA0
Tensor blk.8.ffn_up_exps.weight buffer type overriden to CUDA0
Tensor blk.8.ffn_gate_shexp.weight buffer type overriden to CUDA0
Tensor blk.8.ffn_down_shexp.weight buffer type overriden to CUDA0
Tensor blk.8.ffn_up_shexp.weight buffer type overriden to CUDA0
Tensor blk.9.ffn_norm.weight buffer type overriden to CUDA0
Tensor blk.9.ffn_gate_inp.weight buffer type overriden to CUDA0
Tensor blk.9.ffn_gate_exps.weight buffer type overriden to CUDA0
Tensor blk.9.ffn_down_exps.weight buffer type overriden to CUDA0
Tensor blk.9.ffn_up_exps.weight buffer type overriden to CUDA0
Tensor blk.9.ffn_gate_shexp.weight buffer type overriden to CUDA0
Tensor blk.9.ffn_down_shexp.weight buffer type overriden to CUDA0
Tensor blk.9.ffn_up_shexp.weight buffer type overriden to CUDA0
Tensor blk.10.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.10.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.10.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.11.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.11.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.11.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.12.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.12.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.12.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.13.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.13.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.13.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.14.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.14.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.14.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.15.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.15.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.15.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.16.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.16.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.16.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.17.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.17.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.17.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.18.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.18.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.18.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.19.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.19.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.19.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.20.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.20.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.20.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.21.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.21.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.21.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.22.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.22.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.22.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.23.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.23.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.23.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.24.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.24.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.24.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.25.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.25.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.25.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.26.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.26.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.26.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.27.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.27.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.27.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.28.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.28.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.28.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.29.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.29.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.29.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.30.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.30.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.30.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.31.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.31.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.31.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.32.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.32.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.32.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.33.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.33.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.33.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.34.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.34.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.34.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.35.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.35.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.35.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.36.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.36.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.36.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.37.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.37.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.37.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.38.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.38.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.38.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.39.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.39.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.39.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.40.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.40.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.40.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.41.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.41.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.41.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.42.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.42.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.42.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.43.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.43.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.43.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.44.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.44.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.44.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.45.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.45.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.45.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.46.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.46.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.46.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.47.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.47.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.47.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.48.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.48.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.48.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.49.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.49.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.49.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.50.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.50.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.50.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.51.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.51.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.51.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.52.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.52.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.52.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.53.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.53.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.53.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.54.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.54.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.54.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.55.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.55.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.55.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.56.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.56.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.56.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.57.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.57.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.57.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.58.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.58.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.58.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.59.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.59.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.59.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.60.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.60.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.60.ffn_up_exps.weight buffer type overriden to CPU
llm_load_tensors: offloading 61 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 62/62 layers to GPU
llm_load_tensors: CPU buffer size = 16849.34 MiB
llm_load_tensors: CPU buffer size = 44228.69 MiB
llm_load_tensors: CPU buffer size = 45768.69 MiB
llm_load_tensors: CPU buffer size = 44704.69 MiB
llm_load_tensors: CPU buffer size = 43745.14 MiB
llm_load_tensors: CPU buffer size = 580.45 MiB
llm_load_tensors: CUDA0 buffer size = 36627.32 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 32768
llama_new_context_with_model: n_batch = 4096
llama_new_context_with_model: n_ubatch = 4096
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: mla_attn = 3
llama_new_context_with_model: attn_max_b = 512
llama_new_context_with_model: fused_moe = 1
llama_new_context_with_model: ser = -1, 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 0.025
llama_kv_cache_init: CUDA0 KV buffer size = 1166.65 MiB
llama_new_context_with_model: KV self size = 1166.62 MiB, c^KV (q8_0): 1166.62 MiB, kv^T: not used
llama_new_context_with_model: CUDA_Host output buffer size = 0.49 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 4104.02 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 624.05 MiB
llama_new_context_with_model: graph nodes = 8245
llama_new_context_with_model: graph splits = 104

main: n_kv_max = 32768, n_batch = 4096, n_ubatch = 4096, flash_attn = 1, n_gpu_layers = 999, n_threads = 16, n_threads_batch = 16

|    PP |     TG |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |
|-------|--------|--------|----------|----------|----------|----------|
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to mul_mat_id
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to too many consecutive updates
|  4096 |   1024 |      0 |   49.803 |    82.24 |  153.659 |     6.66 |
```
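A side note on the `--override-tensor "blk\.([1-9])\.ffn_.*=CUDA0"` pattern above: the `[1-9]` group matches a single digit, so only `blk.1` through `blk.9` can match it, and the `exps` tensors of `blk.10` onward fall through to the `exps=CPU` rule — exactly the split the override messages in the log show. A quick sketch of that matching behavior (illustrative; `blocks_matched_by_override` is a made-up helper, but ik_llama.cpp applies the same kind of regex to tensor names):

```cpp
#include <regex>
#include <string>
#include <vector>

// Returns the block indices in [0, n_layer) whose expert tensors match the
// single-digit override pattern used in the command above.
std::vector<int> blocks_matched_by_override(int n_layer) {
    std::regex pattern(R"(blk\.([1-9])\.ffn_.*)");
    std::vector<int> matched;
    for (int n = 0; n < n_layer; ++n) {
        std::string name = "blk." + std::to_string(n) + ".ffn_up_exps.weight";
        if (std::regex_match(name, pattern)) {
            matched.push_back(n);  // only blk.1 .. blk.9 can get here
        }
    }
    return matched;
}
```

To also offload two-digit blocks, the pattern would need something like `blk\.([1-9]|1[0-5])\.ffn_.*` (hypothetical range), since `[1-9]` never matches two characters.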

---

#### 💬 Conversation

👤 **ikawrakow** commented on **2025-06-12** at **05:03:54**:<br>

This warning is hidden behind `#ifdef NDEBUG`, so it should not appear in a release build.
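In other words, the message is compiled in only when `NDEBUG` is undefined, and CMake's Release configuration defines `NDEBUG`. A minimal sketch of that guard pattern (illustrative only, not the actual ik_llama.cpp source; `maybe_warn_mul_mat_id` and `warning_enabled` are made-up names):

```cpp
#include <cstdio>

// Release builds define NDEBUG, so the fprintf below is compiled out of
// them entirely; it only exists in debug builds.
void maybe_warn_mul_mat_id() {
#ifndef NDEBUG
    std::fprintf(stderr, "disabling CUDA graphs due to mul_mat_id\n");
#endif
    // Either way, execution falls back to regular (non-graph) kernel launches.
}

// The same condition expressed at runtime: the warning fires only when
// NDEBUG would be absent, i.e. in a debug build.
bool warning_enabled(bool ndebug_defined) {
    return !ndebug_defined;
}
```

So seeing the message at all is a strong hint the binary was built without `NDEBUG`, i.e. not a proper Release build.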

---

👤 **SlavikCA** commented on **2025-06-12** at **05:07:30**:<br>

So, safe to ignore?

---

👤 **ikawrakow** commented on **2025-06-12** at **05:15:20**:<br>

Yes, the warning is safe to ignore. But you should make sure that you are using a Release build (where this warning should normally not appear), else your performance will be very low. Try adding `-DCMAKE_BUILD_TYPE=Release` to your `cmake` command. If you still see this message, ask your `cmake` vendor why `NDEBUG` is not defined in a release build.
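Combining this suggestion with the configure command from this thread, the Release configuration would look something like the following (a sketch, adjust paths to taste; note `-DGGML_BLAS=OFF` with the letter O — the `0FF` in the original command appears to be a typo, and CMake only recognizes spellings like `OFF`, `0`, `NO`, or `FALSE` as false):

```shell
cmake -B ./build -DGGML_CUDA=ON -DGGML_BLAS=OFF \
      -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_CUDA_HOST_COMPILER=/usr/bin/g++-12
cmake --build ./build --config Release -j "$(nproc)"
```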

---

👤 **SlavikCA** commented on **2025-06-12** at **05:19:05**:<br>

I did this:

```
cmake -B ./build -DGGML_CUDA=ON -DGGML_BLAS=0FF -DCMAKE_CUDA_HOST_COMPILER=/usr/bin/g++-12
cmake --build ./build --config Release -j $(nproc)
```

I'll try with `-DCMAKE_BUILD_TYPE=Release`.