ik_llama.cpp/pyrightconfig.json
Kawrakow 0ceeb11721 Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often
the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27 07:55:01 +02:00

{
  "extraPaths": ["gguf-py"],
  "pythonVersion": "3.9",
  "pythonPlatform": "All",
  "reportUnusedImport": "warning",
  "reportDuplicateImport": "error",
  "reportDeprecated": "warning",
  "reportUnnecessaryTypeIgnoreComment": "warning",
  "executionEnvironments": [
    {
      // TODO: make this version override work correctly
      "root": "gguf-py",
      "pythonVersion": "3.8",
    },
    {
      // uses match expressions in steps.py
      "root": "examples/server/tests",
      "pythonVersion": "3.10",
    },
  ],
}
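
Note that this file is not strict JSON: pyright parses `pyrightconfig.json` leniently, so the `//` comments and trailing commas above are accepted, but Python's standard `json` module will reject them. A minimal sketch of reading such a config from other tooling, assuming a naive comment/trailing-comma strip is acceptable (it would break if a string value ever contained `//`):

```python
import json
import re

# Inlined copy of the config above; in practice this would be read from
# pyrightconfig.json with open(...).read().
JSONC = """
{
  // global default interpreter version
  "pythonVersion": "3.9",
  "executionEnvironments": [
    {"root": "gguf-py", "pythonVersion": "3.8",},
    {"root": "examples/server/tests", "pythonVersion": "3.10",},
  ],
}
"""

def load_jsonc(text: str) -> dict:
    """Parse pyright-style JSONC by stripping // comments and trailing commas."""
    # Drop // line comments (naive: assumes no "//" inside string values).
    text = re.sub(r"//[^\n]*", "", text)
    # Drop trailing commas before a closing brace or bracket.
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)

config = load_jsonc(JSONC)
# Each executionEnvironment overrides the global pythonVersion for its root.
for env in config.get("executionEnvironments", []):
    version = env.get("pythonVersion", config["pythonVersion"])
    print(f'{env["root"]}: {version}')
```

This mirrors how pyright resolves versions: the per-root `pythonVersion` in an execution environment takes precedence over the top-level default for files under that root.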