mirror of
https://github.com/turboderp-org/exllamav2.git
synced 2026-04-30 11:11:33 +00:00
Update README.md
This commit is contained in:
@@ -1,3 +1,7 @@
+# Note
+
+**This project is archived for now**. Development continues on [ExLlamaV3](https://github.com/turboderp-org/exllamav3).
+
 # ExLlamaV2
 
 ExLlamaV2 is an inference library for running local LLMs on modern consumer GPUs.