* Fixing Gigachat support
* Gigachat: CUDA FA (needs 192 x 192 for MLA = 3)
* Gigachat: CPU FA (needs 192 x 192 for MLA = 3)
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
The handle_apply_template handler was defined but never registered as
an HTTP endpoint, causing 404 errors when calling POST /apply-template.
This commit adds the missing endpoint registration to match the
functionality available in llama.cpp mainline.
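A minimal sketch of the missing registration, assuming the cpp-httplib style routing used by the llama.cpp server (the server object name and handler wiring here are assumptions, not necessarily this codebase's exact code):

// register the already-defined handler so POST /apply-template no longer returns 404
svr->Post("/apply-template", handle_apply_template);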
Co-authored-by: Gapeleon <gapeleon@users.noreply.github.com>
The logic to skip the logprobs of the stop token was originally from
ggml-org/llama.cpp#2849, and was later modified as part of
ggml-org/llama.cpp#10643 to be applied only to STOP_TYPE_WORD.
The latter change wasn't included in #723. Then, after #958 got merged,
the logic got inadvertently applied to GLM-4.5/4.6 and Kimi K2,
resulting in truncated logprobs when streaming is off.
This commit reverts the logic from ggml-org/llama.cpp#2849, such that
the logprobs of the stop token will always be included in the response,
when logprobs is enabled. From testing, this matches the behavior of the
Fireworks inference server for both the chat completions and text
completions endpoints.
Also fix logprobs param handling for the text completion endpoint.
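For illustration only, the reverted trimming could be sketched as follows; the type, enum, and function names are placeholders, not the server's actual identifiers:

#include <vector>

enum stop_type { STOP_TYPE_NONE, STOP_TYPE_WORD, STOP_TYPE_EOS };

// Sketch: which logprobs to report after the revert.
template <typename TokenProb>
std::vector<TokenProb> probs_to_report(const std::vector<TokenProb> & probs, stop_type /*stop*/) {
    // Old behavior: size_t safe_offset = (stop == STOP_TYPE_WORD) ? 1 : 0;
    //               return {probs.begin(), probs.end() - safe_offset};
    // New behavior: always keep the stop token's logprobs when logprobs is enabled.
    return probs;
}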
* Fix 'changes meaning' compiler warnings
* A couple more warnings and formatting fixes
---------
Co-authored-by: firecoperana <firecoperana>
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Add mainline compatible FA command line option
* Graph reuse: add command line argument to turn it on
* WIP
* This seems to work
* This is perhaps cleaner
* Change the command line option to -gr
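A sketch of what the flag handling could look like; -gr is from the commit, while the long-form name, struct member, and helper function are assumptions:

#include <cstring>

struct graph_params {           // illustrative subset of the real params struct
    bool graph_reuse = false;
};

// Sketch: -gr simply toggles graph reuse on.
static bool parse_graph_reuse_flag(const char * arg, graph_params & params) {
    if (std::strcmp(arg, "-gr") == 0 || std::strcmp(arg, "--graph-reuse") == 0) {
        params.graph_reuse = true;
        return true;
    }
    return false;
}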
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Fix q5_0_r4
The issue was in the tail part. Since almost all models have tensor
rows whose length is a multiple of 128, that part was never triggered in
testing. But the gpt-oss models have an embedding size of 2880, so we
end up there and trigger the bug (see the sketch after this list).
* Fix q6_0_r4
Same fix as q5_0_r4
* Fix q4_0_r8
* Fix q5_0_r4 and q6_0_r4 also on Zen4
* Fix q4_0_r8 also on Zen4
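For context, a sketch of why gpt-oss reaches the tail path; the 128-value chunking is taken from the description above, while the function shape is purely illustrative:

// Rows are walked in 128-value chunks; 2880 = 22*128 + 64, so gpt-oss leaves
// a 64-value tail that rows with lengths divisible by 128 never produce.
void process_row(const float * x, int n) {
    const int nb   = n / 128;        // full chunks: 22 for n = 2880
    const int tail = n - 128 * nb;   // remainder:   64 for n = 2880
    for (int ib = 0; ib < nb; ++ib) {
        (void)(x + 128 * ib);        // main SIMD path, hit by virtually every model
    }
    if (tail > 0) {
        (void)(x + 128 * nb);        // tail path, where the r4/r8 bug was hiding
    }
}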
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
so that more recent users who haven't followed the history of FlashMLA
evolution, and hence don't know about the MLA options, get the best setting
without having to add -mla 3 on the command line.
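A minimal sketch of the implied default change; the field name is assumed from the -mla flag, and the previous default value is not spelled out here:

struct attn_params {
    int mla_attn = 3;   // default to the FlashMLA-3 path instead of requiring -mla 3
};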
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Use new-new-mma also for MLA=3, and use mask bounds
This gives us ~25% better PP at 32k tokens compared to main
* This seems better
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Fuse concat and copy into K cache
* Avoid ggml_cont() when n_token = 1
Combined effect: about +2% in TG performance with full GPU offload
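A sketch of the ggml_cont() avoidance; ggml_cont() and ggml_context are existing ggml API, while the function and tensor names below are illustrative:

#include "ggml.h"

// Skip the extra copy for single-token decode, where the attention output view
// can be used as-is instead of being materialized with ggml_cont().
static struct ggml_tensor * maybe_cont(struct ggml_context * ctx, struct ggml_tensor * kqv, int n_tokens) {
    if (n_tokens == 1) {
        return kqv;                 // no ggml_cont(): saves a copy (~+2% TG, full offload)
    }
    return ggml_cont(ctx, kqv);     // multi-token path keeps the explicit contiguous copy
}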
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
* Add command line argument for draft model
* Remove second context of draft model
* Format print
* Print usage if parsing -draft fails
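A sketch of the -draft parsing and its failure path; the flag name is from the commit, while the struct member and helper function are assumptions:

#include <string>

struct draft_params {               // illustrative subset of the real params struct
    std::string model_draft;        // path to the draft model used for speculation
};

// Sketch: -draft takes the draft model path; a missing value flags the parse as
// invalid so the caller prints usage, matching "print usage if parsing -draft fails".
static bool parse_draft_arg(int argc, char ** argv, int & i, draft_params & params, bool & invalid) {
    if (std::string(argv[i]) != "-draft") return false;
    if (++i >= argc) { invalid = true; return true; }
    params.model_draft = argv[i];
    return true;
}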
---------
Co-authored-by: firecoperana <firecoperana>
This commit enables IQK quantization operations on ARM-based systems,
specifically tested on NVIDIA DGX Spark with GB10 Grace Blackwell.
Changes:
- Enable IQK_IMPLEMENT macro for ARM NEON operations
- Add arm_neon.h header include for ARM SIMD intrinsics
- Fix compilation errors related to missing NEON types and functions
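A sketch of the guarded enablement; IQK_IMPLEMENT and arm_neon.h are named in the commit, but the exact guard macros and placement in the source may differ:

#if defined(__aarch64__) && defined(__ARM_NEON)
#define IQK_IMPLEMENT          // enable the IQK quantization kernels on ARM
#include <arm_neon.h>          // float32x4_t, vld1q_f32, FP16 intrinsics, ...
#endif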
Build requirements for ARM:
cmake .. -DGGML_CUDA=ON \
-DCMAKE_CXX_FLAGS="-march=armv8.2-a+dotprod+fp16" \
-DCMAKE_C_FLAGS="-march=armv8.2-a+dotprod+fp16"
Tested on:
- Platform: NVIDIA DGX Spark (aarch64)
- CPU: GB10 Grace Blackwell Superchip
- Memory: 128GB unified memory
Fixes build errors:
- 'float32x4_t' does not name a type
- 'vld1q_f32' was not declared in this scope
- 'v_expf' was not declared in this scope
- Missing FP16 NEON intrinsics