mirror of https://github.com/ikawrakow/ik_llama.cpp.git
Add GitHub data: filename sanitization (#640)
github-data/pull_requests/210 - Repack also experts.md (new file, 15 lines)
@@ -0,0 +1,15 @@
### 🔀 [#210](https://github.com/ikawrakow/ik_llama.cpp/pull/210) - Repack also experts

| **Author** | `ikawrakow` |
| :--- | :--- |
| **State** | ❌ **Closed** |
| **Created** | 2025-02-19 |
| **Updated** | 2025-02-19 |

---

#### Description
When I implemented run-time repacking, I required a tensor to be 2D to be eligible for repacking, I guess to simplify the code. But I forgot about MoE models, where the expert weights are stored in 3D tensors.
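To make the 2D-vs-3D distinction concrete, here is a minimal C++ sketch of the eligibility change. It assumes a ggml-style `ne[4]` dimension array; the `tensor_dims` struct and the `n_dims`/`repack_eligible_*` helpers are hypothetical names for illustration, not the actual ik_llama.cpp API.

```cpp
#include <cstdint>

struct tensor_dims {
    int64_t ne[4];   // ggml-style dimension sizes; unused trailing dims are 1
};

// Number of meaningful dimensions, ggml-style: trailing dims of size 1 don't count.
static int n_dims(const tensor_dims & t) {
    int n = 4;
    while (n > 1 && t.ne[n - 1] == 1) --n;
    return n;
}

// Before this PR: only plain 2D weight matrices qualified for repacking,
// so 3D MoE expert tensors were silently skipped.
static bool repack_eligible_before(const tensor_dims & t) {
    return n_dims(t) == 2;
}

// After this PR: a 3D expert tensor is viewed as ne[2] stacked 2D matrices,
// each of which can be repacked independently.
static bool repack_eligible_after(const tensor_dims & t) {
    const int n = n_dims(t);
    return n == 2 || n == 3;
}
```

With the relaxed check, repacking a 3D expert tensor conceptually reduces to repacking each of its `ne[2]` 2D expert slices in turn.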
This PR fixes that, which leads to a very significant performance gain. E.g., for DeepSeek-Lite quantized with `IQ4_XS`, we get `PP-512 = 545 t/s` on the main branch and `PP-512 = 677 t/s` with this PR when using run-time repacking, a ~24% speedup.