### ✨ [#26](https://github.com/ikawrakow/ik_llama.cpp/issues/26) - Feature Request: Improve CPU processing speed for large contexts
| **Author** | `ikawrakow` |
| :--- | :--- |
| **State** | ✅ **Open** |
| **Created** | 2024-08-22 |
|
---
#### Description
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md).
- [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [X] I reviewed the [Discussions](https://github.com/ggerganov/llama.cpp/discussions), and have a new and useful enhancement to share.
### Feature Description
Recent open-source / open-weight models provide long context windows, so it would be useful to improve CPU processing speed for large prompts.
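As a concrete way to quantify this, prompt-processing throughput at long context can be measured with `llama-bench`. A minimal sketch, assuming the `llama-bench` tool inherited from upstream llama.cpp is available; the model path, prompt length, and thread count below are placeholders, not values from this request:

```bash
# Hypothetical benchmark run: measure CPU prompt-processing speed
# at a large prompt length (no token generation, -n 0).
# Adjust the model path, prompt size, and thread count for your setup.
./llama-bench -m models/model.gguf -p 8192 -n 0 -t 16
```

The `pp` tokens/second figure reported for the 8192-token prompt is the quantity this request aims to improve.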
### Motivation
See #25
### Possible Implementation
See #25