### ✨ [#26](https://github.com/ikawrakow/ik_llama.cpp/issues/26) - Feature Request: Improve CPU processing speed for large contexts

| **Author** | `ikawrakow` |
| :--- | :--- |
| **State** | ✅ **Open** |
| **Created** | 2024-08-22 |

---

#### Description

### Prerequisites

- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md).
- [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [X] I reviewed the [Discussions](https://github.com/ggerganov/llama.cpp/discussions), and have a new and useful enhancement to share.

### Feature Description

Recent open-source / open-weight models provide long context windows, so it would be useful to improve CPU processing speed for large prompts.

### Motivation

See #25

### Possible Implementation

See #25
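As a concrete way to observe the behavior this request targets, CPU prompt-processing throughput can be measured with the `llama-bench` tool inherited from upstream llama.cpp. The sketch below is an assumption about a typical invocation, not a command taken from this issue: `-p` sets the prompt lengths to test (comma-separated), `-n 0` skips the text-generation test, `-t` sets the thread count, and `model.gguf` is a placeholder path.

```bash
# Hypothetical measurement sketch: compare CPU prompt-processing (pp)
# speed at increasing prompt lengths. Flags are standard upstream
# llama-bench options; model.gguf is a placeholder model path.
./llama-bench -m model.gguf -p 512,4096,16384 -n 0 -t 16
```

Because attention cost per token grows with context length, the reported pp tokens/s typically drops as the prompt gets longer, which is the slowdown this request aims to reduce.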