mirror of
https://github.com/kvcache-ai/ktransformers.git
synced 2026-03-15 02:47:22 +00:00
* Fix Qwen3.5 FP8 load for VL detection
1. For VL models (Qwen3.5), change base_key: model.layers.{N} -> model.language_model.layers.{N}
2. Remove the duplicated class BF16SafeTensorLoader(SafeTensorLoader), keeping only the first override.
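The base_key change in step 1 can be sketched as a small helper. This is an illustrative sketch only: the function name `base_key` and the `is_vl` flag are assumptions for demonstration, not the actual ktransformers API.

```python
def base_key(layer_idx: int, is_vl: bool) -> str:
    """Build the per-layer weight key prefix.

    For VL models (e.g. Qwen3.5), weights live under
    model.language_model.layers.{N} rather than model.layers.{N}.
    (Hypothetical helper; names are not from the ktransformers codebase.)
    """
    if is_vl:
        return f"model.language_model.layers.{layer_idx}"
    return f"model.layers.{layer_idx}"

# Example: the same layer index resolves to different prefixes.
print(base_key(0, is_vl=True))   # VL model prefix
print(base_key(0, is_vl=False))  # text-only model prefix
```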
* Indent type
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>