mrhaoxx
7a9daf0cd4
[feat](kt-kernel): support avx2-only inference for bf16, fp8, and gptq int4 (#1892)
* feat: support avx2 bf16 fp8 inference
* feat: support avx2 gptq int4 inference
* fix: numeric issues in fp8 dequant
* Tutorial avx2 (#1900)
* fix: prevent injecting -DLLAMA_AVX512=ON on AVX2-only machines
* docs: add AVX2 tutorial for running KTransformers on AVX2-only CPUs
* Tutorial avx2 (#1901)
* fix: prevent injecting -DLLAMA_AVX512=ON on AVX2-only machines
* docs: add AVX2 tutorial for running KTransformers on AVX2-only CPUs
* docs: update README.md
---------
Co-authored-by: Benjamin F <159887351+yyj6666667@users.noreply.github.com>
2026-03-27 14:45:02 +08:00
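The AVX-512 flag-injection fix listed above amounts to gating CMake SIMD options on the host CPU's actual feature flags, so an AVX2-only machine never receives `-DLLAMA_AVX512=ON`. A minimal sketch of that idea, assuming Linux `/proc/cpuinfo` for detection; the helper names are hypothetical illustrations, not KTransformers' actual build code:

```python
def cmake_simd_flags(cpu_flags):
    """Pick CMake SIMD options from a set of CPU feature-flag strings.

    AVX-512 is only enabled when the baseline avx512f flag is present,
    so AVX2-only hosts get just the AVX2 option and -DLLAMA_AVX512=ON
    is never injected (the failure mode the fix above addresses).
    Flag names mirror the -DLLAMA_AVX512=ON option quoted in the log;
    -DLLAMA_AVX2=ON is assumed by analogy.
    """
    opts = []
    if "avx2" in cpu_flags:
        opts.append("-DLLAMA_AVX2=ON")
    if "avx512f" in cpu_flags:
        opts.append("-DLLAMA_AVX512=ON")
    return opts


def host_cpu_flags(cpuinfo_path="/proc/cpuinfo"):
    """Parse the 'flags' line of /proc/cpuinfo into a set (Linux only)."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()
```

On an AVX2-only host, `cmake_simd_flags(host_cpu_flags())` would return only `["-DLLAMA_AVX2=ON"]`, which is the behavior the fix restores.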