🔀 #497 - Make prompt cache saving and restoring MLA aware

Author saood06
State Closed
Created 2025-06-06
Updated 2025-06-06

Description

Tested and working with both a long prompt (3.5K tokens) and a short prompt, with both matching the expected size. The long prompt was also tested on a fresh launch of the server to confirm it produced output consistent with the information contained in the prompt.

Closes #436
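
For context, a minimal sketch of how a prompt cache is typically saved and restored from caller code, using the session/state API names from upstream llama.cpp (`llama_state_save_file` / `llama_state_load_file`). These names are an assumption here; the exact entry points in ik_llama.cpp may differ, and the MLA-specific handling this PR adds lives inside the library's KV-cache serialization, not in caller code like this.

```cpp
// Sketch only: caller-side save/restore of a prompt cache, assuming the
// upstream llama.cpp state API. The MLA-aware changes from this PR apply
// inside these library calls when the model uses an MLA KV-cache layout.
#include "llama.h"
#include <vector>

// Save the current KV cache plus the evaluated token sequence to disk.
static bool save_prompt_cache(llama_context * ctx,
                              const char * path,
                              const std::vector<llama_token> & tokens) {
    return llama_state_save_file(ctx, path, tokens.data(), tokens.size());
}

// Restore a previously saved prompt cache; returns the cached tokens so the
// caller can skip re-evaluating them and only process the new suffix.
static bool load_prompt_cache(llama_context * ctx,
                              const char * path,
                              std::vector<llama_token> & tokens_out) {
    tokens_out.resize(llama_n_ctx(ctx));
    size_t n_loaded = 0;
    if (!llama_state_load_file(ctx, path, tokens_out.data(), tokens_out.size(), &n_loaded)) {
        return false;
    }
    tokens_out.resize(n_loaded);
    return true;
}
```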


💬 Conversation

👤 ikawrakow submitted a review on 2025-06-06 at 08:33:36: APPROVED