Add GitHub data: filename sanitization (#640)

Commit eaa2510a28 (parent 3600d82e98) by Thomas, 2025-07-23 13:31:53 +02:00, committed by GitHub.
626 changed files with 0 additions and 0 deletions.


@@ -0,0 +1,19 @@
### 🔀 [#310](https://github.com/ikawrakow/ik_llama.cpp/pull/310) - Metal: FA and FlashMLA
| **Author** | `ikawrakow` |
| :--- | :--- |
| **State** | ❌ **Closed** |
| **Created** | 2025-04-03 |
| **Updated** | 2025-04-03 |
---
#### Description
Performance is not great, but it works with standard attention and all 3 MLA options.
"Works" here means:
* `f16` KV cache works for all combinations of `fa` and `mla`
* I have allowed only `Q8_0` quantized cache
* Quantized cache only works with standard attention (`-mla 0`) without FA
* With FA, a quantized cache kind of works, but we get messages such as `ggml_metal_get_buffer: error: tensor 'v-26' buffer is nil`; I am not sure why. PPL is also slightly higher than without FA
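
The combinations above can be exercised from the command line. This is a sketch only: the binary name, model path, and exact flag spellings (`-fa`, `-mla`, `-ctk`, `-ctv`) are assumed from the usual llama.cpp-style CLI and may differ in your build:

```shell
# f16 KV cache: works with any combination of -fa and -mla 0..3
./llama-cli -m model.gguf -fa -mla 3 -ctk f16 -ctv f16 -p "Hello"

# Q8_0 quantized cache: only reliable with standard attention and no FA
./llama-cli -m model.gguf -mla 0 -ctk q8_0 -ctv q8_0 -p "Hello"

# Q8_0 cache with FA: runs, but may print errors like
#   ggml_metal_get_buffer: error: tensor 'v-26' buffer is nil
./llama-cli -m model.gguf -fa -mla 0 -ctk q8_0 -ctv q8_0 -p "Hello"
```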