ik_llama.cpp/ggml/src/ggml-cann.cpp
firecoperana d5cd99f9c8 Merge vulkan code from mainline up to commit of 6/28/2025 (#563)
* Merge vulkan code from mainline up to commit of 6/28/2025

* Vulkan Optimizations and Fixes (#8959)

* Optimize Vulkan REPEAT performance

* Use Vulkan GLSL fused multiply-add instruction where possible

* Add GGML_VULKAN_PERF option to output performance data per operator

* Rework and fix Vulkan descriptor set and descriptor pool handling

* Fix float32 concat f16 shader validation error

* Add Vulkan GROUP_NORM eps parameter

* Fix validation error with transfer queue memory barrier flags

* Remove trailing whitespaces

vulkan : do not use tensor->extra (#9407)

* vulkan : do not use tensor->extra

This patch allows using the Vulkan backend with the RPC backend as
tensor->extra is no longer used.

Ref: #8536

* Adapt GGML_VULKAN_CHECK_RESULTS to extra removal (#2)

---------

Co-authored-by: 0cc4m <picard12@live.de>
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan : fix build (#0)

ggml-ci

Improve Vulkan shader build system (#9239)

* Improve Vulkan shader builds system

- Add dependency to vulkan-shaders-gen to rebuild shaders when changing the shader compilation utility.
- Add option to generate debug info for Vulkan shaders to provide shader source to Vulkan shader profiling tools

* remove not required self dependency

ggml : fix build break for the vulkan-debug (#9265)

- windows build : Ok.
- linux build : Ok.

Signed-off-by: Changyeon Kim <cyzero.kim@samsung.com>

vulkan: correctly report support for OP_CONT (ggml/946)

test-backend-ops fails because ggml_cont aborts
when invoked with an unsupported type.

This commit makes the ggml_cont tests pass.

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

vulkan: add dryrun support to sin and cos ops (ggml/947)

sin and cos failed test-backend-ops because they
tried to dereference a context pointer that is null
on dry runs.

This commit prevents that segfault.

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

Overlap cmdbuffer creation and cmdbuffer execution in Vulkan backend by submitting smaller cmdbuffers early. (#9118)

* Overlap cmdbuffer creation and cmdbuffer execution in Vulkan backend by submitting smaller cmdbuffers early.

* fix compile issues

* Fix issues where the last submit wasn't executed or handled properly.

* remove trailing whitespace

* Repair GGML_VULKAN_CHECK_RESULTS

* Increase submit counter only if actual work has been submitted and increase submit count to 100.

* Fix some nodes not being checked when GGML_VULKAN_CHECK_RESULTS is enabled.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

Enable use of the rebar feature to upload buffers to the device. (#9251)

vulkan : argsort barriers must be under uniform control flow (ggml/951)

A return executed by only some threads in a workgroup before a barrier
leads to UB.
While the old code actually works on some devices,
it fails on some others (i.e. "smaller" GPUs).

BTW, I think it would be better to set specialization constants
when the graph is built, in that way the local workgroup
could be sized appropriately.
But it would take a lot of work.

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

vulkan : fix build for GGML_VULKAN_RUN_TESTS, add TFLOPS to log (ggml/961)

vulkan : multithread pipeline creation (ggml/963)

vulkan : mul_mat: fix UB with small warps (ggml/952)

When the device's warp size is less than 16,
loadstride_a (mul_mm.comp:114) and loadstride_b (mul_mm.comp:115)
can end up set to 0, because they are calculated as the workgroup size
multiplied by LOAD_VEC_* (which can be 1) and divided by 16, and the
workgroup size is set to be the same as the warp/subgroup size.

The loadstride_* variables are used as increments in the
loops that populate the buffers used for the multiplication.

When they are 0 they cause an infinite loop.
But infinite loops without side-effects are UB and the
values of loadstride_* are known at compile time.
So, the compiler quietly optimizes all the loops away.
As a consequence, the buffers are not populated and
the multiplication result is just a matrix with all elements
set to 0.

We prevent the UB by making sure that the workgroup size
will never be less than 16, even if our device has a
smaller warp size (e.g. 8).

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
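
As a rough C++ illustration of the failure mode and the fix described above
(warp_size and LOAD_VEC here are stand-ins, not the shader's actual spec
constants):

    #include <algorithm>
    #include <cstdio>

    int main() {
        const int LOAD_VEC  = 1;  // can be 1 for some data types
        const int warp_size = 8;  // e.g. a small GPU with 8-wide subgroups

        // Old behaviour: workgroup size == warp size, so the stride can become 0.
        // A stride of 0 makes the buffer-fill loop infinite, and since side-effect-free
        // infinite loops are UB, the compiler silently removes the loop.
        int loadstride_bad = warp_size * LOAD_VEC / 16;        // 8 * 1 / 16 == 0

        // Fix: never let the workgroup size drop below 16.
        int workgroup_size  = std::max(warp_size, 16);
        int loadstride_good = workgroup_size * LOAD_VEC / 16;  // >= 1

        printf("old stride = %d, fixed stride = %d\n", loadstride_bad, loadstride_good);
        return 0;
    }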

vulkan : retry allocation with fallback flags (whisper/2451)

Co-authored-by: Samuel Morris <samuel.morris@artlist.io>

vulkan : improve ggml_vk_create_buffer error handling (#9898)

vulkan: Fix newly added tests for permuted mul_mat and 1D im2col (#10226)

vulkan: Throttle the number of shader compiles during the build step. (#10222)

Fixes #9582

Spawning too many concurrent copies of glslc leads to "Failed to create pipes"
errors on Linux. This change applies the same throttling we use for
multithreaded pipeline creation.
# Conflicts:
#	ggml/src/vulkan-shaders/vulkan-shaders-gen.cpp
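
A minimal C++ sketch of the throttling idea, assuming a hypothetical
launch_glslc() helper and an illustrative concurrency cap:

    #include <condition_variable>
    #include <mutex>
    #include <thread>
    #include <vector>

    static std::mutex              compile_mutex;
    static std::condition_variable compile_cv;
    static int                     compiles_in_flight = 0;
    static const int               MAX_CONCURRENT_COMPILES = 16; // illustrative cap

    static void launch_glslc(int shader_id) { (void) shader_id; /* placeholder: spawn glslc here */ }

    static void compile_shader(int shader_id) {
        {
            std::unique_lock<std::mutex> lock(compile_mutex);
            compile_cv.wait(lock, [] { return compiles_in_flight < MAX_CONCURRENT_COMPILES; });
            ++compiles_in_flight;
        }
        launch_glslc(shader_id); // at most MAX_CONCURRENT_COMPILES of these run at once
        {
            std::lock_guard<std::mutex> lock(compile_mutex);
            --compiles_in_flight;
        }
        compile_cv.notify_one();
    }

    int main() {
        std::vector<std::thread> workers;
        for (int i = 0; i < 64; ++i) workers.emplace_back(compile_shader, i);
        for (auto & t : workers) t.join();
        return 0;
    }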

vulkan: Optimize contiguous copies (#10254)

* tests: Fix memory bandwidth calculation for perf tests

Add a flops calculation for flash attention.

Add one GGML_OP_CPY perf test.

* vulkan: Optimize contiguous copies

Add a variant of the copy shader for when the tensors are contiguous. Avoid
the complex addressing calculations, and do four elements per invocation
to hide some other overhead.

Apply similar changes to the scale shader, since scale is always contiguous.

Add a "progress bar" for shader compiles.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: Use macros to make the mat mul pipeline creation more concise (#10259)

Also add vk_matmul_pipeline2 to hold f16/f32 accumulator versions of a
pipeline. This isn't really used yet.

vulkan: Optimize binary ops (#10270)

Reuse the index calculations across all of src0/src1/dst. Add a shader
variant for when src0/src1 are the same dimensions and additional modulus
for src1 aren't needed. Div/mod are slow, so add "fast" div/mod that
have a fast path when the calculation isn't needed or can be done more
cheaply.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/acc.comp

ggml : vulkan logs (whisper/2547)

vulkan: Optimize some mat-vec mul quant shaders (#10296)

Compute two result elements per workgroup (for Q{4,5}_{0,1}). This reuses
the B loads across the rows and also reuses some addressing calculations.
This required manually partially unrolling the loop, since the compiler
is less willing to unroll outer loops.

Add bounds-checking on the last iteration of the loop. I think this was at
least partly broken before.

Optimize the Q4_K shader to vectorize most loads and reduce the number of
bit twiddling instructions.
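
A scalar C++ sketch of the "two result rows per workgroup" idea, where each B
element is loaded once and reused for both rows (plain float arrays here; the
real shader works on quantized blocks):

    #include <cstdio>
    #include <vector>

    // Compute y[r] and y[r+1] together so each x[k] load is shared by two rows.
    static void two_rows_per_group(const std::vector<float> & A, const std::vector<float> & x,
                                   std::vector<float> & y, int cols, int r) {
        float acc0 = 0.0f, acc1 = 0.0f;
        for (int k = 0; k < cols; ++k) {
            const float b = x[k];                 // one load of B, reused twice
            acc0 += A[(r + 0) * cols + k] * b;
            acc1 += A[(r + 1) * cols + k] * b;
        }
        y[r + 0] = acc0;
        y[r + 1] = acc1;
    }

    int main() {
        const int rows = 4, cols = 8;
        std::vector<float> A(rows * cols, 1.0f), x(cols, 2.0f), y(rows, 0.0f);
        for (int r = 0; r < rows; r += 2) two_rows_per_group(A, x, y, cols, r);
        printf("y[0] = %g\n", y[0]);              // 16
        return 0;
    }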

Vulkan: Fix device info output format specifiers (#10366)

* Vulkan: Fix device info output format specifiers

* Vulkan: Use zu printf specifier for size_t instead of ld

vulkan: remove use of null initializer (#10372)

Seems like this isn't working for vulkan-over-metal when the array is sized
by a spec constant. Maybe a spirv-cross limitation?

vulkan: Optimize soft_max (#10301)

* vulkan: Optimize soft_max

Large soft_max could already saturate memory, but small/medium sizes were
pretty slow. The bulk of the gains for them comes from using a smaller
workgroup size, and making the workgroup size match the subgroup size also
makes the barriers much cheaper.

Cache some values in locals to avoid refetching/recomputing. And stamp
out a few "template instantiations" so smaller cases will fully unroll.

Add a missing early return for OOB rows. This happens when there are more
than 512 rows and the dispatch is 512 x H.

* vulkan: Further soft_max optimizations

Restore the workgroup size of 512 case, use it for >1024.

Use unrollable loops for more iteration counts.

vulkan: further optimize mul_mat_vec using larger loads (#10387)

* vulkan: Use pipeline_robustness to disable robustness in mul_mat_vec.

Add some early returns for nonexistent rows in mul_mat_vec shaders. These
can only be hit when dispatching a 2D grid of workgroups. Fix the logic
for the 2D grid of workgroups to round up.

Enable the pipeline robustness extension if it's available, and use it to
disable robustness for these pipelines. The instructions to do the bounds
checking contend for the same ALU resources as the bit twiddling dequant
instructions.

* vulkan: Add GLSL structure aliases for quant types to allow larger loads

In Vulkan it's not possible to cast pointer types, so instead you have to
declare an aliased binding for the memory with a different type. This
commit adds aliases for the quant formats using 16b ints, and in a few
places where the struct size is a multiple of 4 also using 32b ints.
Currently only q4_k's aliases are used, but others will be used in
subsequent commits.

* vulkan: use larger loads in q5_k and q6_k shaders.

Similar to the optimization I did in q4_k recently, this vectorizes some loads
and reduces the number of bit twiddling instructions.

* vulkan: use larger K step per iteration in mul_mat_vec.

Add vec4 dequantization functions, and use them to do K=8 per iteration in
mul_mat_vec. This uses 16b loads for the quant values and 128b loads for B
which helps reduce the load on the memory system.

The K_PER_ITER==2 logic is still there, just for F16/F32, and really only
because they support unaligned sizes.

Tweak the num_iters/unrolling logic to be simpler and catch a couple missed
unrolling opportunities.

vulkan: copy iq4_nl LUT into shared memory (#10409)

vulkan: predicate max operation in soft_max shaders/soft_max (#10437)

Fixes #10434

vulkan: Fix a vulkan-shaders-gen argument parsing error (#10484)

The vulkan-shaders-gen was not parsing the --no-clean argument correctly:
the previous code only handled arguments that take a value, and --no-clean
takes none. This commit adds correct handling of arguments that have no value.
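
A minimal sketch of handling both value-taking and value-less options, in the
spirit of the fix (the parsing structure is illustrative, not the actual
vulkan-shaders-gen code):

    #include <cstring>
    #include <map>
    #include <string>

    int main(int argc, char ** argv) {
        std::map<std::string, std::string> args;
        for (int i = 1; i < argc; ++i) {
            if (strcmp(argv[i], "--no-clean") == 0) {
                args["--no-clean"] = "1";        // flag without a value
            } else if (i + 1 < argc) {
                args[argv[i]] = argv[i + 1];     // option followed by its value
                ++i;
            }
        }
        return args.count("--no-clean") ? 0 : 1;
    }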

vulkan: fix group_norm (#10496)

Fix bad calculation of the end of the range. Add a backend test that
covers the bad case (taken from stable diffusion).

Fixes https://github.com/leejet/stable-diffusion.cpp/issues/439.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: optimize Q2_K and Q3_K mul_mat_vec (#10459)

vulkan: skip integer div/mod in get_offsets for batch_idx==0 (#10506)

vulkan: further optimize q5_k mul_mat_vec (#10479)

vulkan: Handle GPUs with less shared memory (#10468)

There have been reports of failure to compile on systems with <= 32KB
of shared memory (e.g. #10037). This change makes the large tile size
fall back to a smaller size if necessary, and makes mul_mat_id fall
back to CPU if there's only 16KB of shared memory.
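
The fallback decision could look roughly like this (the 16KB/32KB thresholds
follow the commit text; the type and function names are invented):

    #include <cstddef>

    struct device_caps {
        size_t max_shared_memory; // bytes of shared memory per workgroup
    };

    enum class mulmat_path { LARGE_TILE, SMALL_TILE, CPU_FALLBACK };

    static mulmat_path pick_mul_mat_id_path(const device_caps & dev) {
        if (dev.max_shared_memory <= 16u * 1024u) return mulmat_path::CPU_FALLBACK; // not enough for mul_mat_id
        if (dev.max_shared_memory <= 32u * 1024u) return mulmat_path::SMALL_TILE;   // large tile won't fit
        return mulmat_path::LARGE_TILE;
    }

    int main() {
        device_caps dev { 32u * 1024u };
        return pick_mul_mat_id_path(dev) == mulmat_path::SMALL_TILE ? 0 : 1;
    }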

vulkan: define all quant data structures in types.comp (#10440)

vulkan: get the first command buffer submitted sooner (#10499)

This is an incremental improvement over #9118 to get work to the GPU a bit
sooner. The first part is to start with a smaller number of nodes before
the first submit, and ramp it up to the current 100 nodes/submit. The
second part is to reduce the dryrun overhead for all the nodes that just
need to request descriptor space.

With these changes I get around 1-2% speedup on RTX 4070 combined with my
old Haswell-era CPU.
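
One way to picture the ramp-up, as a sketch (the starting value and growth
factor are invented; only the 100 nodes/submit target comes from the commit):

    #include <algorithm>
    #include <cstdio>

    int main() {
        int nodes_per_submit = 20;                                  // assumed small initial batch
        for (int submit = 0; submit < 6; ++submit) {
            printf("submit %d covers up to %d nodes\n", submit, nodes_per_submit);
            nodes_per_submit = std::min(nodes_per_submit * 2, 100); // ramp toward 100 nodes/submit
        }
        return 0;
    }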

vulkan: Dynamic subgroup size support for Q6_K mat_vec (#10536)

* subgroup 64 version with subgroup add. 15% faster

scalable version

tested for subgroup sizes 16-128

* check for subgroup multiple of 16 and greater than 16

* subgroup sizes are always a power of 2 (https://github.com/KhronosGroup/GLSL/issues/45)

* force 16 sequential threads per block

* make 16 subgroup size a constant

vulkan: optimize and reenable split_k (#10637)

Use vector loads when possible in mul_mat_split_k_reduce. Use split_k
when there aren't enough workgroups to fill the shaders.

vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (#10642)
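
The general mul+shift trick behind a "fast divide", sketched in C++ for
divisors known ahead of time (this mirrors the technique, not the shader's
exact constants):

    #include <cstdint>
    #include <cstdio>

    // Precompute (mp, shift) so that x / d == (x * mp) >> shift for 0 <= x < 2^31.
    struct fastdiv_consts { uint64_t mp; uint32_t shift; };

    static fastdiv_consts make_fastdiv(uint32_t d) {
        uint32_t L = 0;
        while ((1u << L) < d) ++L;                    // L = ceil(log2(d))
        fastdiv_consts f;
        f.shift = 31 + L;
        f.mp    = (((uint64_t)1 << f.shift) / d) + 1; // rounded-up constant, exact for x < 2^31
        return f;
    }

    static uint32_t fast_divide(uint32_t x, const fastdiv_consts & f) {
        return (uint32_t)(((uint64_t)x * f.mp) >> f.shift); // one multiply and one shift, no div
    }

    int main() {
        const uint32_t divisors[] = { 3, 7, 8, 100 };
        for (uint32_t d : divisors) {
            fastdiv_consts f = make_fastdiv(d);
            for (uint32_t x = 0; x < 1000000; ++x) {
                if (fast_divide(x, f) != x / d) { printf("mismatch: %u / %u\n", x, d); return 1; }
            }
        }
        printf("mul+shift matches integer division for all tested divisors\n");
        return 0;
    }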

vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (#10206)

# Conflicts:
#	ggml/src/vulkan-shaders/dequant_funcs_cm2.comp
#	ggml/src/vulkan-shaders/flash_attn_cm2.comp
#	ggml/src/vulkan-shaders/mul_mm_cm2.comp

Vulkan: VK_KHR_cooperative_matrix support to speed up prompt processing (#10597)

* Vulkan: Implement VK_KHR_cooperative_matrix support in the matrix matrix multiplication shader

* Improve performance with better q4_k and q5_k dequant and store unrolling

* Add Vulkan MUL_MAT and MUL_MAT_ID accumulator precision selection

* Rework mulmat shader selection and compilation logic, avoid compiling shaders that won't get used by device

* Vulkan: Implement accumulator switch for specific mul mat mat shaders

* Vulkan: Unroll more loops for more mul mat mat performance

* Vulkan: Add VK_AMD_shader_core_properties2 support to read Compute Unit count for split_k logic

* Disable coopmat support on AMD proprietary driver

* Remove redundant checks

* Add environment variable GGML_VK_DISABLE_COOPMAT to disable VK_KHR_cooperative_matrix support

* Fix rebase typo

* Fix coopmat2 MUL_MAT_ID pipeline selection
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: compile a test shader in cmake to check for coopmat2 support (#10713)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/ggml-vulkan/CMakeLists.txt
#	ggml/src/vulkan-shaders/test_coopmat2_support.comp

Vulkan: fix NaN in tanh.comp with AMD proprietary driver on Windows (#10723)

* Vulkan: fix NaN in tanh.comp

* Faster NaN-free tanh

vulkan: fix compile warnings (#10731)

vulkan: disable spirv-opt for coopmat shaders (#10763)

There are some bugs in the 1.3.296 SDK, so disable this. It isn't strictly
necessary anyway.

Add missing dependency on vulkan-shaders-gen, so shaders get recompiled when it
changes.

Fix coopmat support reporting when glslc doesn't support NV_coopmat2.

vulkan: dynamic subgroup size for the remaining k quants (#10745)

* q5_k

q4_k

q3_k

q2_k

q6_k multi row example

* revert as multi row isn't faster for k quants

vulkan: request round-to-even for fp16 in im2col/rope_head (#10767)

Vulkan doesn't mandate a specific rounding mode, but the shader_float_controls
feature allows rounding mode to be requested if the implementation supports it.

Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats (#10721)

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* Fix subgroup size control extension support check

Add accf32 and accf16 checks for coopmats

* Also disable coopmats on amdvlk

Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (#10798)

vulkan: small mul_mat_vec optimizations (#10665)

* double the number of rows per workgroup

* Update ggml-vulkan.cpp

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* only increase the number of rows for amd and subgroup size 64

* fix missing NUM_ROWS for mul_mat_vec_iq4_nl_f16_f32, untested

* use subgroup min and max to check for gcn (requires https://github.com/ggerganov/llama.cpp/pull/10721)

* manual merge ggml-vulkan.cpp

* set min and max subgroup size in any case

* Also double the number of rows for Intel GPUs

Change Debug print name

add GGML_ROPE_TYPE_MROPE

rwkv6: add wkv6 support for Vulkan backend (#10829)

* rwkv_wkv6 vulkan shader

* RWKV_WKV6 Vulkan op tests passed

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* Apply code format changes

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* add [[unroll]] and remove unnecessary conditions

* add uma support

* fix errors reported by EditorConfig Checker

---------

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
Co-authored-by: Molly Sophia <mollysophia379@gmail.com>
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/wkv6.comp

vulkan: bugfixes for small subgroup size systems + llvmpipe test (#10809)

* ensure mul mat shaders work on systems with subgroup size less than 32

more fixes

add test

* only s_warptile_mmq needs to be run with 32 threads or more
# Conflicts:
#	.github/workflows/build.yml

vulkan : fix soft_max.comp division by zero (whisper/2633)

This change prevents a division by zero error when p.KY is 0.
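
A sketch of the guard, assuming p.KY is a length that may legitimately be zero:

    #include <cstdio>

    int main() {
        int KY = 0;                                          // the problematic case
        float inv_KY = (KY > 0) ? 1.0f / (float) KY : 0.0f;  // guard the reciprocal instead of dividing blindly
        printf("inv_KY = %g\n", inv_KY);
        return 0;
    }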

vulkan: optimize coopmat2 dequant functions (#10855)

Change the code to do 16b loads when possible and extract the appropriate
component late, so the code is effectively decoding a pair of elements and
then selecting one. This can allow more commoning to happen in the compiler
when neighboring elements are loaded.
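
A C++ analogy of "decode a pair, select late": load the byte holding two 4-bit
values, decode both nibbles, and pick the needed element at the end (the
packing shown is simplified, not the real block layout):

    #include <cstdint>
    #include <cstdio>

    // One byte packs two 4-bit quant values (low nibble = even index, high nibble = odd index).
    static float dequant_select(const uint8_t * packed, float scale, int idx) {
        uint8_t byte = packed[idx / 2];          // one load covers two elements
        float lo = (float)(byte & 0x0F) - 8.0f;  // decode both neighbours...
        float hi = (float)(byte >>   4) - 8.0f;
        float v  = (idx & 1) ? hi : lo;          // ...and select the wanted one late
        return v * scale;
    }

    int main() {
        uint8_t packed[2] = { 0x3A, 0x47 };      // made-up data
        for (int i = 0; i < 4; ++i) printf("%g ", dequant_select(packed, 0.5f, i));
        printf("\n");
        return 0;
    }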

vulkan: build fixes for 32b (#10927)

* vulkan: build fixes for 32b

Should fix #10923

* vulkan: initialize some buffer/offset variables

examples, ggml : fix GCC compiler warnings (#10983)

Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers]  (emitted for all struct field except first)
# Conflicts:
#	examples/export-lora/export-lora.cpp

vulkan: multi-row k quants (#10846)

* multi row k quant shaders!

* better row selection

* more row choices

* readjust row selection

* rm_kq=2 by default

vulkan: Use push constant offset to handle misaligned descriptors (#10987)

vulkan: im2col and matmul optimizations for stable diffusion (#10942)

* tests: Add im2col perf tests

* vulkan: optimize im2col, more elements per thread

* vulkan: increase small tile size for NV_coopmat2

* vulkan: change im2col to 512 elements per workgroup

vulkan: optimize mul_mat for small values of N (#10991)

Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.

Share some code for reducing the result values to memory in mul_mat_vec_base.
# Conflicts:
#	tests/test-backend-ops.cpp

fix: Vulkan shader gen binary path (#11037)

Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (#11074)

* Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver

* Add (TM) to AMD name check

fix lora print

Disable GL_KHR_cooperative_matrix Vulkan extension if not available. (#11117)

* Disable GL_KHR_cooperative_matrix Vulkan extension if not available.

* Perform Vulkan extensions checks in a more sensible order

* Remove unnecessary #ifdef directive
# Conflicts:
#	ggml/src/vulkan-shaders/test_coopmat_support.comp

llama: add support for QRWKV6 model architecture (#11001)

Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (#11161)

* Vulkan: Remove float16 use in shaders

* Fix validation error about subgroup_size_control extension

fix: ggml: fix vulkan-shaders-gen build (#10448)

* fix: ggml: fix vulkan-shaders-gen build

The vulkan-shaders-gen target was not being built correctly
in case of cross-compilation.
Other outputs need to be built for the cross compile target,
but vulkan-shaders-gen needs to be built for the host.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

- Add GGML_SHADERS_GEN_TOOLCHAIN CMake option.
- Auto-detect host toolchain if not set.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

Use configure_file to generate host_toolchain.cmake from template

* fix: ggml: Fix compile error

Fix compile error not finding vulkan-shaders-gen

* fix: vulkan-shaders-gen build and path handling

Fix build issues with vulkan-shaders-gen:
- Add target dependency for correct build order
- Use CMAKE_HOST_SYSTEM_NAME for executable suffix
- Fix MSVC output directory in host toolchain
- Normalize path handling for cross-compilation

* fix: improve host compiler detection in vulkan shader build

Improve host compiler detection for vulkan shader generation:
- Add NO_CMAKE_FIND_ROOT_PATH to all compiler searches
- Consolidate compiler detection logic
- Fix Windows-specific MSVC detection
- Ensure correct compiler search in cross-compilation

* refactor: Simplify CMake function for detecting host compiler

Simplified the CMake function to improve the process of detecting the host compiler.

* fix: Remove unnecessary Vulkan library linkage in CMakeLists.txt

Since `vulkan-shader-gen.cpp` only requires the `glslc` executable
and not the Vulkan headers or libraries, CMakeLists.txt needs to
be corrected.
(See: ecc93d0558fc3ecb8a5af69d2ece02fae4710ade)

* refactor: Rename host_toolchain.cmake.in

- Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in

* refactor: GGML_VULKAN_SHADERS_GEN_TOOLCHAIN

Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
# Conflicts:
#	ggml/src/ggml-vulkan/CMakeLists.txt

vulkan: scale caching for k quants + misc fixes (#11081)

* q6_k scale caching

* 16 bit unpack

* q4_k test (slow)

* revert it

* q3_k

* q2_k

* little stuff

* try precalculating products of a and q2_k scales

* Revert "try precalculating products of a and q2_k scales"

This reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b.

* unpack should be u16, add vim swap to gitignore (about time)

* better q4_k scales

* q5_k

* better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations

* q2_k better dequant

* q3_k optimizations

* q3_k use hmask simd from cpu avx version

* make the caches happy

* q3_k separate out calculation

* q2_k separate out

* little stuff

* use calc_superblock everywhere

* q2_k optimize scale calculation

* more barriers

vulkan: optimize coopmat2 q2_k dequant function (#11130)

vulkan: optimize coopmat2 q4_k/q5_k dequant functions. (#11206)

Do masking on whole dwords, fetch all scales at once.

vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (#11166)

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl

Shaders are based on cpy.cu.

* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32

* ggml: copy q->f32 assumes some contiguity in the destination
# Conflicts:
#	ggml/src/ggml-cpu/ggml-cpu.c
#	ggml/src/vulkan-shaders/copy_from_quant.comp
#	ggml/src/vulkan-shaders/copy_to_quant.comp

vulkan: fix coopmat2 flash attention for non-contiguous inputs (#11281)

Add code similar to mul_mm_cm2 to force alignment of strides, to avoid
a performance regression.

Add noncontiguous FA tests in test-backend-ops.

Fixes #11268.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: fix coopmat2 validation failures (#11284)

mul mat and flash attention shaders were loading f32 types directly into
A/B matrices, which happens to work but is technically invalid usage.
For FA, we can load it as an Accumulator matrix and convert and this
is not in the inner loop and is cheap enough. For mul mat, it's more
efficient to do this conversion in a separate pass and have the input(s)
be f16.

coopmat2 requires SPIR-V 1.6 (related to its use of LocalSizeId). LocalSizeId
requires maintenance4 to be enabled, and SPIR-V 1.6 requires Vulkan 1.3.

vulkan: fix diag_mask_inf (#11323)

With robustbufferaccess disabled, this shader was showing OOB stores. There
is a bounds check in the code, but the workgroup dimensions were reversed vs
CUDA and it was running the wrong number of threads. So fix the workgroup
dimensions and disable robustness for this pipeline.

vulkan: sort shaders for more deterministic binary (#11315)

Fixes #11306.

Vulkan-run-test: fix mmq_wg_denoms (#11343)

This appears to be a copy-and-paste error.

*mmq_wg_denoms should be used together with *warptile_mmq, instead of
wg_denoms.

vulkan: compile shaders on-demand (#11406)

Reduce first-run startup time and memory consumption.

Should fix #11339.

vulkan: Catch pipeline creation failure and print an error message (#11436)

* vulkan: Catch pipeline creation failure and print an error message

Also, fix some warnings from my on-demand compile change.

* vulkan: fix pipeline creation logging

vulkan: implement initial support for IQ2 and IQ3 quantizations (#11360)

* vulkan: initial support for IQ3_S

* vulkan: initial support for IQ3_XXS

* vulkan: initial support for IQ2_XXS

* vulkan: initial support for IQ2_XS

* vulkan: optimize Q3_K by removing branches

* vulkan: implement dequantize variants for coopmat2

* vulkan: initial support for IQ2_S

* vulkan: vertically realign code

* port failing dequant callbacks from mul_mm

* Fix array length mismatches

* vulkan: avoid using workgroup size before it is referenced

* tests: increase timeout for Vulkan llvmpipe backend

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq2_s.comp
#	ggml/src/vulkan-shaders/dequant_iq2_xs.comp
#	ggml/src/vulkan-shaders/dequant_iq2_xxs.comp
#	ggml/src/vulkan-shaders/dequant_iq3_s.comp
#	ggml/src/vulkan-shaders/dequant_iq3_xxs.comp

CUDA: non-contiguous (RMS) norm support (#11659)

vulkan: use smaller combined allocations to avoid fragmentation (#11551)

# Conflicts:
#	ggml/src/ggml-alloc.c

vulkan: initial support for IQ4_XS quantization (#11501)

# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq4_xs.comp

vulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)

* vulkan: optimize coopmat2 iq2/iq3 callbacks

* build: trigger CI on GLSL compute shader changes

vulkan: print shared memory size (#11719)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: account for lookup tables when checking shared memory size (#11502)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (#11592)

vulkan: linux builds + small subgroup size fixes (#11767)

* mm subgroup size

* upload vulkan x86 builds

vulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)

* vulkan: initial support for IQ1_S and IQ1_M quantizations

* vulkan: define MMV kernels for IQ1 quantizations

* devops: increase timeout of Vulkan tests again

* vulkan: simplify ifdef for init_iq_shmem
# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq1_m.comp
#	ggml/src/vulkan-shaders/dequant_iq1_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq1_m.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq1_s.comp

vulkan: support multi/vision rope, and noncontiguous rope (#11902)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/rope_multi.comp
#	ggml/src/vulkan-shaders/rope_vision.comp

vulkan: implement several ops relevant for ggml_opt (#11769)

* vulkan: support memset_tensor

* vulkan: support GGML_OP_SUM

* vulkan: implement GGML_OP_ARGMAX

* vulkan: implement GGML_OP_SUB

* vulkan: implement GGML_OP_COUNT_EQUAL

* vulkan: implement GGML_OP_OPT_STEP_ADAMW

* vulkan: fix check_results RWKV_WKV6 crash and memory leaks

* vulkan: implement GGML_OP_REPEAT_BACK

* tests: remove invalid test-backend-ops REPEAT_BACK tests

* vulkan: fix COUNT_EQUAL memset using a fillBuffer command
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/argmax.comp
#	ggml/src/vulkan-shaders/count_equal.comp
#	ggml/src/vulkan-shaders/opt_step_adamw.comp
#	ggml/src/vulkan-shaders/repeat_back.comp
#	ggml/src/vulkan-shaders/sub.comp
#	tests/test-backend-ops.cpp

vulkan: implement more backpropagation operators (#11914)

* vulkan: implement GGML_OP_ROPE_BACK

* vulkan: implement GGML_OP_RMS_NORM_BACK

* vulkan: implement GGML_OP_SILU_BACK

* vulkan: implement GGML_OP_SOFTMAX_BACK
# Conflicts:
#	ggml/src/vulkan-shaders/rms_norm_back.comp
#	ggml/src/vulkan-shaders/silu_back.comp
#	ggml/src/vulkan-shaders/soft_max_back.comp

Add memset tensor in all backend interface

SYCL: implement memset ggml backend buffer interface (#12580)

* SYCL: implement memset ggml backend buffer interface

* use GGML_ABORT macro

* Do not wait for all queues to finish for memset operation
# Conflicts:
#	ggml/src/ggml-sycl.cpp

add OP sigmoid (#12056)

Co-authored-by: Judd <foldl@boxvest.com>
# Conflicts:
#	ggml/src/vulkan-shaders/sigmoid.comp

vulkan: fix assertion when qy_needs_dequant (#12068)

Looks like a copy/paste bug from qx_needs_dequant.

vulkan: improve im2col (#11826)

* vulkan: improve im2col performance

vulkan: matmul dequantization improvements (#12015)

* faster dequant for old quants

* dont use unpack for iq4_nl

* vec2 unpack for q8

vulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizations (#11595)

* vulkan: implement specialized MMV kernels for IQ2 quantizations

* vulkan: add MMV kernels for IQ3 quants

* vulkan: Increase MMV batch size and unroll IQ LUT setup

* vulkan: fix init_iq_shmem for WG sizes larger than tables

* vulkan: common batch size for all I-quants
# Conflicts:
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_xs.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_xxs.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq3_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq3_xxs.comp

cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129)

ggml-ci

# Conflicts:
#	ggml/src/ggml-cuda.cu
#	tests/test-backend-ops.cpp

mat vec double buffer (#12188)

vulkan: fix bug in coopmat1 mul_mat_id (#12316)

* tests: run mul_mat_id with a larger N

* vulkan: fix bug in coopmat1 mul_mat_id

Update build.yml for Windows Vulkan builder to use Vulkan 1.4.304 SDK for VK_NV_cooperative_matrix2 support (#12301)

vulkan: Adjust coopmat2 tile sizes and selection heuristic (#12258)

vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking (#12273)

* vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking

vulkan: use fp32 in coopmat2 q4_k dequant function (#12309)

vulkan: subgroup size tuning (#12087)

* vulkan: subgroup size test

* Vulkan: Add device architecture enum and logic to recognize AMD generations

* vulkan: use new architecture logic to specify subgroup size

* Initial vulkan subgroup size tuning for RDNA3

* vulkan: commonize RDNA subgroup tuning

* vulkan: override subgroup size if required_subgroup_size = 0

* vulkan: disable warp 32 for RDNA3

* vulkan: fine tuned RDNA1 subgroup sizes

* vulkan: adjusted subgroup size map

* vulkan: fixed RDNA2 subgroup map

---------

Co-authored-by: 0cc4m <picard12@live.de>

vulkan: Add N/2 and N/4 optimized paths in coopmat2 shader (#12312)

ggml-vulkan: remove unused find_program(glslc) (#12416)

It's already found by FindVulkan.cmake in the parent CMakeLists

Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434)

vulkan: Submit once enough matmul work has been recorded (#12406)

I've been seeing significantly worse performance for tg with flash attention
enabled vs disabled, and it seems to be related to the submit heuristic.
Change the heuristic to check how many bytes worth of weight matrix are
used and flush every 100MB, and ramp up after the first few submits.
This seems to resolve the issue, and also increases perf for non-FA a bit.
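
A sketch of the byte-based flush heuristic (the 100MB threshold is from the
commit text; the per-node sizes are invented and the ramp-up after the first
few submits is omitted):

    #include <cstddef>
    #include <cstdio>

    int main() {
        const size_t flush_bytes = 100ull * 1024 * 1024;  // flush roughly every 100MB of weights used
        size_t pending = 0;
        int    submits = 0;
        const size_t node_weight_bytes[] = { 30u << 20, 50u << 20, 40u << 20, 10u << 20, 90u << 20 };

        for (size_t b : node_weight_bytes) {
            pending += b;
            if (pending >= flush_bytes) {                 // enough matmul work recorded: submit now
                printf("submit %d after %zu MB\n", ++submits, pending >> 20);
                pending = 0;
            }
        }
        if (pending) printf("final submit with %zu MB\n", pending >> 20);
        return 0;
    }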

vulkan: optimize iq1 coopmat2 dequant functions (#12427)

vulkan: workaround for AMD Windows driver 16 bit unpack8 bug (#12472)

Vulkan: RTE rounding for cpy to quant (#12480)

* Vulkan: RTE rounding for cpy to quant

Co-Authored-By: Jeff Bolz <jbolz@nvidia.com>

* remove trailing whitespace

* avoid duplicating pipeline_cpy_f32_quant

* fix copypasting issue

* remove duplicated code

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

vulkan: Optimize mul_mat_vec p021 and nc shaders (#12505)

* tests: add mul_mat perf/functional tests for p021/nc vulkan shaders

* vulkan: Optimize mul_mat_vec p021 and nc shaders.

These shaders are used in attention calculations, and when the KV cache grows
large they start to dominate the run time. For the nc shader (which is called
with large 'k' dimension), use unrolling and vector loads. For the p021 shader
(which is called with large 'm' and small 'k' dimensions), take advantage of
grouped query attention to reuse loads from the A matrix for the whole group,
and reduce the number of workgroups (too much overhead from tiny dispatches).

Using subgroupAdd in the p021 shader also helps, use that conditionally.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: fix mul_mat_vec failure in backend tests (#12529)

The OOB calculation could be wrong if the last iteration was during one of
the unrolled loops. Adjust the unrolling counts to avoid this. Add a couple
new backend tests that hit this failure on NVIDIA GPUs.

vulkan: fix coopmat shader generation when cross-compiling (#12272)

* vulkan: fix coopmat shader generation when cross-compiling

Previously the status of coopmat{,2} support wasn't passed to the
vulkan-shaders-gen project built on the host, which led to build
failures because the cross-compiling code expected coopmat{,2}
shaders that were never generated.

Fix this by passing the coopmat{,2} support status to vulkan-shaders
subproject.

Signed-off-by: Icenowy Zheng <uwu@icenowy.me>

* Only call coop-mat shaders once

* Fix whitespace

---------

Signed-off-by: Icenowy Zheng <uwu@icenowy.me>
Co-authored-by: bandoti <141645996+bandoti@users.noreply.github.com>

cmake: improve Vulkan cooperative matrix support checks (whisper/2966)

Co-authored-by: Sandro Hanea <me@sandro.rocks>

cmake : fix whitespace (#0)

Vulkan: Add DP4A MMQ and Q8_1 quantization shader (#12135)

* Vulkan: Add DP4A MMQ and Q8_1 quantization shader

* Add q4_0 x q8_1 matrix matrix multiplication support

* Vulkan: Add int8 coopmat MMQ support

* Vulkan: Add q4_1, q5_0 and q5_1 quants, improve integer dot code

* Add GL_EXT_integer_dot_product check

* Remove ggml changes, fix mmq pipeline picker

* Remove ggml changes, restore Intel coopmat behaviour

* Fix glsl compile attempt when integer vec dot is not supported

* Remove redundant code, use non-saturating integer dot, enable all matmul sizes for mmq

* Remove redundant comment

* Fix integer dot check

* Fix compile issue with unsupported int dot glslc

* Update Windows build Vulkan SDK version
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/mul_mmq.comp
#	ggml/src/vulkan-shaders/mul_mmq_funcs.comp
#	ggml/src/vulkan-shaders/quantize_q8_1.comp
#	ggml/src/vulkan-shaders/test_integer_dot_support.comp

vulkan: fix build when glslc doesn't support coopmat (#12683)

Vulkan: Fix mmq int dot float cache size (#12722)

vulkan: Implement grouped query attention in the coopmat2 FA shader (#12559)

When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:

dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))

previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.

This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.
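
The workgroup arithmetic from the example above, spelled out in C++ (gqa_ratio
is simply heads_q / heads_kv):

    #include <cstdio>

    int main() {
        // From the commit's example: q(128,1,32,1), k/v(128,16640,8,1)
        int heads_q   = 32;
        int heads_kv  = 8;
        int gqa_ratio = heads_q / heads_kv;           // 4 Q heads share each K/V head

        int workgroups_before = heads_q;              // 1 result per workgroup -> 32 workgroups
        int workgroups_after  = heads_q / gqa_ratio;  // 4 results per workgroup -> 8 workgroups

        printf("before: %d workgroups, after: %d workgroups (%d results each)\n",
               workgroups_before, workgroups_after, gqa_ratio);
        return 0;
    }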

cmake: remove caching from vulkan coopmat checks (#12719)

vulkan: Implement split_k for coopmat2 flash attention. (#12627)

When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_split_k_reduce.comp

vulkan: Fix missing cmake logic for dot product extension (#12721)

vulkan: set cmake minimum and project name in vulkan-shaders (#12744)

vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (#12630)

There seems to be a bubble waking up from waitForFences, which costs a few
percent performance and also increased variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete and we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
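
A rough sketch of the hybrid wait using the standard vkWaitForFences /
vkGetFenceStatus entry points; the surrounding structure and fence names are
invented, it needs a live device to actually run, and it assumes an x86
_mm_pause for the spin:

    #include <vulkan/vulkan.h>
    #include <immintrin.h>
    #include <cstdint>

    // Block on the "almost ready" fence (signaled ~80% through the graph), then
    // spin briefly on the final fence to avoid the wake-up bubble of a second
    // blocking wait.
    static void hybrid_wait(VkDevice device, VkFence almost_ready, VkFence final_fence) {
        vkWaitForFences(device, 1, &almost_ready, VK_TRUE, UINT64_MAX);
        while (vkGetFenceStatus(device, final_fence) == VK_NOT_READY) {
            _mm_pause();
        }
    }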

cmake: fix ggml-shaders-gen compiler paths containing spaces (#12747)

fixes error for compiler paths with spaces

Vulkan: Tune Vulkan mmq int dot shader for performance (#12767)

vulkan: Use unclamped loads for flash attention mask (#12720)

nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.

vulkan: fix NaN issue in flash attention shader (#12776)

Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
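
Why a finite initial maximum avoids the NaN, in a few lines of C++: with an
-inf start and a fully masked (all -inf) block, x - max becomes
-inf - (-inf) = NaN, while a -FLT_MAX/2 start keeps it at exp(-inf) = 0:

    #include <algorithm>
    #include <cfloat>
    #include <cmath>
    #include <cstdio>

    int main() {
        const float NEG_INF = -INFINITY;
        const float vals[4] = { NEG_INF, NEG_INF, NEG_INF, NEG_INF };  // e.g. a fully masked-out block

        float bad_max  = NEG_INF;        // old initial value
        float good_max = -FLT_MAX / 2;   // new initial value
        for (float v : vals) { bad_max = std::max(bad_max, v); good_max = std::max(good_max, v); }

        printf("old: %g  new: %g\n", std::exp(vals[0] - bad_max), std::exp(vals[0] - good_max));
        return 0;
    }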

vulkan: Use fp16 for the flash attention P*V multiplication (#12783)

This is consistent with the ggml-cuda behavior and the mul_mat fallback.

vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (#12833)

q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.

vulkan: use aligned loads for flash attention mask (#12853)

Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.

vulkan: enable coopmat2 FA gqa and split_k optimizations more often (#12931)

The grouped query attention optimization doesn't require a power of two ratio;
the only thing relying on it was the modulo operation written as bitwise &.

split_k need not depend on gqa_ratio - enable it any time there's only one
workgroup in the X dimension. The shader gets the split index from the x coord,
and multiple workgroups in the X dimension (pre-split) indicates a larger
FA operation that wouldn't need splitting.
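
The power-of-two assumption in one line of C++ (indices and names are
illustrative):

    #include <cstdio>

    int main() {
        unsigned gqa_ratio = 3;                        // not a power of two
        for (unsigned row = 0; row < 6; ++row) {
            unsigned with_and = row & (gqa_ratio - 1); // only correct when gqa_ratio is a power of 2
            unsigned with_mod = row % gqa_ratio;       // correct for any ratio
            printf("row %u: & -> %u, %% -> %u\n", row, with_and, with_mod);
        }
        return 0;
    }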

vulkan: support noncontiguous rms_norm (#13031)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: matmul gcn tuning (#13016)

* tune matmul for gcn

* this one is more power efficient

* Update ggml/src/ggml-vulkan/ggml-vulkan.cpp

Co-authored-by: 0cc4m <picard12@live.de>

* disable this tune for the proprietary driver

---------

Co-authored-by: 0cc4m <picard12@live.de>

vulkan: use uint array index to avoid glslang bug (#13193)

vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader (#13191)

* vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader

vulkan: Add bfloat16 support (#12554)

* vulkan: Add bfloat16 support

This adds bfloat16 matrix multiply support based on VK_KHR_shader_bfloat16.
The extension is required for coopmat multiply support, but matrix-vector
multiply trivially promotes bf16 to fp32 and doesn't require the extension.
The copy/get_rows shaders also don't require the extension.

It's probably possible to fall back to non-coopmat and promote to fp32 when
the extension isn't supported, but this change doesn't do that.

The coopmat support also requires a glslc that supports the extension, which
currently requires a custom build.

* vulkan: Support bf16 tensors without the bf16 extension or coopmat support

Compile a variant of the scalar mul_mm shader that will promote the bf16
values to float, and use that when either the bf16 extension or the coopmat
extensions aren't available.

* vulkan: bfloat16 fixes (really works without bfloat16 support now)

* vulkan: fix spirv-val failure and reenable -O
# Conflicts:
#	ggml/src/vulkan-shaders/test_bfloat16_support.comp
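
The trivial bf16-to-fp32 promotion that the matrix-vector path relies on, as a
standalone C++ sketch (bf16 handled here as its raw 16-bit pattern):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // bfloat16 is the top 16 bits of an IEEE-754 float, so promotion is just a shift.
    static float bf16_to_f32(uint16_t bits) {
        uint32_t u = (uint32_t) bits << 16;
        float    f;
        memcpy(&f, &u, sizeof(f));
        return f;
    }

    int main() {
        printf("%g\n", bf16_to_f32(0x3FC0));  // 0x3FC0 is the bf16 pattern for 1.5
        return 0;
    }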

vulkan: Additional type support for unary, binary, and copy (#13266)

Support f16->f32 copy.
Support f16->f16 and f32->f32 unary ops.
Support all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: Allow up to 4096 elements for mul_mat_id row_ids (#13326)

This assert fired running Qwen_Qwen3-30B-A3B-Q2_K.gguf:

GGML_ASSERT(nei0 * nei1 <= 3072);

The tensor is 8 x 512. Increase this array size to accommodate.

vulkan: scalar flash attention implementation (#13324)

* vulkan: scalar flash attention implementation

* vulkan: always use fp32 for scalar flash attention

* vulkan: use vector loads in scalar flash attention shader

* vulkan: remove PV matrix, helps with register usage

* vulkan: reduce register usage in scalar FA, but perf may be slightly worse

* vulkan: load each Q value once. optimize O reduction. more tuning

* vulkan: support q4_0/q8_0 KV in scalar FA

* CI: increase timeout to accommodate newly-supported tests

* vulkan: for scalar FA, select between 1 and 8 rows

* vulkan: avoid using Float16 capability in scalar FA
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/flash_attn.comp

vulkan: workaround FA compile failures on macos (#13517)

vulkan: KHR_coopmat flash attention (#13506)

This shader uses coopmat1 to do the Q*K^T multiply. The P*V multiply is more
difficult for various reasons so I haven't done it. Performance for this
shader is around 2.5x better than for the scalar shader when doing prompt
processing. Some of the benefit may be from other optimizations like staging
through shared memory, or splitting by rows.
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_cm1.comp

cmake: simplify vulkan shader test logic (#13263)

vulkan: use scalar FA rather than coopmat2 when N==1 (#13554)

Add pipeline_acc_f32

vulkan: move common FA code to flash_attn_base.comp (#13556)

* vulkan: move common FA code to flash_attn_base.comp

* vulkan: move common FA index/stride setup code to flash_attn_base.comp

* build fix
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_base.comp

cmake: use the current build config for vulkan-shaders-gen (#13595)

* fix: use the current build config for `vulkan-shaders-gen`

* fix: only pass a valid build type to `--config`

Vulkan: Add f32 accumulator support to quantized mul mat to fix GLM4 32B incoherence (#13607)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: fix warnings (#13626)

* small fixes

* remove ifdef

use LOG_WARN to replace `std::cerr` (#13657)

vulkan: Disable coopmat/coopmat2/bfloat extensions if glslc doesn't support it (#13696)

vulkan: support CPY from any type to itself (#13695)

Reuse the f16/f32 copy shaders, and just scale the number of elements
according to the type size.

add GGML_LOG_WARN

vulkan: mark IM2COL as supporting non-contig (#13783)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: use timestamp queries for GGML_VULKAN_PERF (#13817)

Also change it to be controlled by an env var rather than cmake flag

vulkan : Remove unexpected ; (ggml/1253)

vulkan: fix warnings in perf logger querypool code (#13937)

ggml-vulkan: adds support for op CONV_TRANSPOSE_1D (#13813)

* ggml-vulkan: adds op CONV_TRANSPOSE_1D

* test-backend-ops: adds more sophisticated tests for CONV_TRANSPOSE_1D

* Missing barrier added to shader.
Number of additional tests reduced to 108.

* Fixes typo in variable name.

* Removes extra whitespaces.

* Adds int64->int32 casts to prevent possible warnings.

* Problem size reduced in tests to pass tests with llvmpipe.

* supports_op condition moved from unintended position
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/conv_transpose_1d.comp

vulkan: Enable VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (#14001)

* allowing B580 and U9-288V

* experimenting code to detect Xe2

* allowing coopmat only for Xe2 GPUs

* fixed comment wording

* fixed comment wording

* removed unnecessary driver check

Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (#14099)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: force device 0 in CI (#14106)

Add GGML_LOG_INFO

vulkan: Track descriptor pools/sets per-context (#14109)

Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. It has a single vector
of pools and vector of sets, and a single counter to track requests and a single
counter to track use.
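
A sketch of the per-context bookkeeping described above (struct and field
names are invented; only the shape of the tracking is meant):

    #include <cstdint>
    #include <vector>
    #include <vulkan/vulkan.h>

    // Per-context tracking: one vector of pools, one flat vector of sets, plus
    // counters for how many descriptor sets were requested vs. actually used.
    struct vk_context_descriptors {
        std::vector<VkDescriptorPool> descriptor_pools;
        std::vector<VkDescriptorSet>  descriptor_sets;
        uint32_t descriptor_sets_requested = 0;
        uint32_t descriptor_sets_used      = 0;
    };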

vulkan: Better thread-safety for command pools/buffers (#14116)

This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: mutex around vkQueueSubmit (#14127)

This fixes the remaining crash in test-thread-safety on my system.

cmake: clean up external project logic for vulkan-shaders-gen (#14179)

* Remove install step for vulkan-shaders-gen

* Add install step to normalize msvc with make

* Regenerate modified shaders at build-time
# Conflicts:
#	.github/workflows/build.yml

cmake: remove shader-gen step-targets from ggml-vulkan (#14226)

* Remove step-targets from vulkan-shaders-gen

* Unset DESTDIR when building vulkan-shaders-gen

Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (#14249)

Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (#13792)

* Add support for VK_EXT_debug_utils to add labels to Vulkan objects. In step 1 compute pipelines are getting labeled.

* remove #ifdef for debug utils and add queue marker.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: update windows SDK in CI (#14334)

vulkan: update windows SDK in release.yml (#14344)

# Conflicts:
#	.github/workflows/release.yml

cmake: regen vulkan shaders when shaders-gen sources change (#14398)

* Add shaders-gen sources as target deps

vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (#14427)

This setting needs to be passed through to vulkan-shaders-gen

vulkan: lock accesses of pinned_memory vector (#14333)

vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378)

Fix cuda build error

test

* remove  new cpu backend and yml files

* remove new op and GGML_ROPE_TYPE_NEOX

* fix build error

* change cmake file to add matrix operation

* remove coopmat2 check in flash attention

* print gpu info for vulkan

* disable fuse to recover vulkan performance

---------

Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: firecoperana <firecoperana>
2025-07-02 08:49:42 +02:00


/*
* Copyright (c) 2023-2024 The ggml authors
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include "ggml-cann.h"
#include <acl/acl.h>
#include <stdarg.h>
#include <cmath>
#include <cstdio>
#include <cstring>
#include <mutex>
#include "ggml-backend-impl.h"
#include "ggml-cann/aclnn_ops.h"
#include "ggml-cann/common.h"
#define GGML_COMMON_DECL_C
#include "ggml-common.h"
/**
* @brief Default logging callback for GGML.
*
* This function is the default logging callback that logs messages to stderr.
*
* @param level The log level.
* @param msg The log message.
* @param user_data User data passed to the callback.
*/
static void ggml_cann_default_log_callback(enum ggml_log_level level,
const char* msg, void* user_data) {
GGML_UNUSED(level);
GGML_UNUSED(user_data);
fprintf(stderr, "%s", msg);
}
ggml_log_callback ggml_cann_log_callback = ggml_cann_default_log_callback;
void* ggml_cann_log_user_data = NULL;
GGML_API void ggml_backend_cann_log_set_callback(ggml_log_callback log_callback,
void* user_data) {
ggml_cann_log_callback = log_callback;
ggml_cann_log_user_data = user_data;
}
#define GGML_CANN_LOG_INFO(...) ggml_cann_log(GGML_LOG_LEVEL_INFO, __VA_ARGS__)
#define GGML_CANN_LOG_WARN(...) ggml_cann_log(GGML_LOG_LEVEL_WARN, __VA_ARGS__)
#define GGML_CANN_LOG_ERROR(...) \
ggml_cann_log(GGML_LOG_LEVEL_ERROR, __VA_ARGS__)
GGML_ATTRIBUTE_FORMAT(2, 3)
/**
* @brief Log a message using the current logging callback.
*
* This function formats a log message and passes it to the current logging
* callback.
*
* @param level The log level.
* @param format The format string for the log message.
* @param ... The arguments for the format string.
*/
static void ggml_cann_log(enum ggml_log_level level, const char* format, ...) {
if (ggml_cann_log_callback != NULL) {
va_list args;
va_start(args, format);
char buffer[128];
int len = vsnprintf(buffer, 128, format, args);
if (len < 128) {
ggml_cann_log_callback(level, buffer, ggml_cann_log_user_data);
} else {
// vsnprintf adds a null terminator
std::vector<char> buffer2(len + 1);
va_end(args);
va_start(args, format);
vsnprintf(&buffer2[0], buffer2.size(), format, args);
ggml_cann_log_callback(level, buffer2.data(),
ggml_cann_log_user_data);
}
va_end(args);
}
}
/**
* @brief Handles CANN errors by printing an error message and aborting.
*
* @param stmt The statement that caused the error.
* @param func The function in which the error occurred.
* @param file The file in which the error occurred.
* @param line The line number where the error occurred.
* @param msg The error message.
*/
[[noreturn]] void ggml_cann_error(const char* stmt, const char* func,
const char* file, int line, const char* msg) {
int32_t id = -1;
aclrtGetDevice(&id);
GGML_CANN_LOG_ERROR("CANN error: %s\n", msg);
GGML_CANN_LOG_ERROR(" current device: %d, in function %s at %s:%d\n", id, func,
file, line);
GGML_CANN_LOG_ERROR(" %s\n", stmt);
// abort with GGML_ASSERT to get a stack trace
GGML_ABORT("CANN error");
}
/**
* @brief Sets the device to be used by CANN.
*
* @param device The device ID to set.
*/
void ggml_cann_set_device(const int32_t device) {
// TODO: uncomment these lines after the empty-context issue has been fixed.
// int current_device;
// ACL_CHECK(aclrtGetDevice(&current_device));
// if (device == current_device) {
// return;
// }
ACL_CHECK(aclrtSetDevice(device));
}
/**
* @brief Retrieves the current device ID.
*
* @return The current device ID.
*/
int32_t ggml_cann_get_device() {
int32_t id;
ACL_CHECK(aclrtGetDevice(&id));
return id;
}
/**
* @brief Initialize the CANN device information.
*
* This function initializes the CANN device information by obtaining the
* device count and setting the memory allocation granularity for each device.
*
* @return A structure containing the device information.
*/
static ggml_cann_device_info ggml_cann_init() {
ggml_cann_device_info info = {};
aclError err = aclrtGetDeviceCount((uint32_t*)&info.device_count);
if (err != ACL_SUCCESS) {
GGML_CANN_LOG_ERROR("%s: failed to initialize CANN: %s\n",
__func__, aclGetRecentErrMsg());
return info;
}
GGML_ASSERT(info.device_count <= GGML_CANN_MAX_DEVICES);
for (int id = 0; id < info.device_count; ++id) {
aclrtPhysicalMemProp prop = {};
prop.handleType = ACL_MEM_HANDLE_TYPE_NONE;
prop.allocationType = ACL_MEM_ALLOCATION_TYPE_PINNED;
prop.memAttr = ACL_HBM_MEM_HUGE;
prop.location.type = ACL_MEM_LOCATION_TYPE_DEVICE;
prop.location.id = id;
prop.reserve = 0;
ACL_CHECK(aclrtMemGetAllocationGranularity(
&prop, ACL_RT_MEM_ALLOC_GRANULARITY_RECOMMENDED,
&info.devices[id].vmm_granularity));
}
// TODO: add more device info later.
return info;
}
/**
* @brief Retrieve the CANN device information.
*
* This function returns a reference to a structure containing the CANN device
* information. The device information is initialized once and reused on
* subsequent calls.
*
* @return A reference to the structure containing the device information.
*/
const ggml_cann_device_info& ggml_cann_info() {
static ggml_cann_device_info info = ggml_cann_init();
return info;
}
//#define DEBUG_CANN_MALLOC
/**
* @brief A pool of CANN buffers(legacy).
*
* This class manages a pool of CANN buffers for a specific device.
*/
struct ggml_cann_pool_leg : public ggml_cann_pool {
/**
* @brief The maximum number of buffers in the pool.
*/
static const int MAX_BUFFERS = 256;
/**
* @brief The device ID associated with this buffer pool.
*/
int device;
/**
* @brief Structure representing a CANN buffer.
*/
struct ggml_cann_buffer {
void* ptr = nullptr; ///< Pointer to the buffer memory.
size_t size = 0; ///< Size of the buffer.
};
/**
* @brief Array of CANN buffers in the pool.
*/
ggml_cann_buffer buffer_pool[MAX_BUFFERS] = {};
/**
* @brief Total size of all buffers in the pool.
*/
size_t pool_size = 0;
/**
* @brief Constructor to initialize the buffer pool for a specific device.
*
* @param device The device ID to associate with this buffer pool.
*/
explicit ggml_cann_pool_leg(int device) : device(device) {}
/**
* @brief Destructor to free all buffers in the pool.
*/
~ggml_cann_pool_leg() {
ggml_cann_set_device(device);
for (int i = 0; i < MAX_BUFFERS; ++i) {
ggml_cann_buffer& b = buffer_pool[i];
if (b.ptr != nullptr) {
ACL_CHECK(aclrtFree(b.ptr));
pool_size -= b.size;
}
}
GGML_ASSERT(pool_size == 0);
}
/**
* @brief Allocate a buffer of the given size.
*
* @param size The size of the buffer to allocate.
* @param actual_size A pointer to a variable to receive the actual size of
* the allocated buffer.
* @return A pointer to the allocated buffer.
*/
void* alloc(size_t size, size_t* actual_size) override {
#ifdef DEBUG_CANN_MALLOC
int nnz = 0;
size_t max_size = 0;
#endif
size_t best_diff = 1ull << 36;
int ibest = -1;
for (int i = 0; i < MAX_BUFFERS; ++i) {
ggml_cann_buffer& b = buffer_pool[i];
if (b.ptr != nullptr) {
#ifdef DEBUG_CANN_MALLOC
++nnz;
if (b.size > max_size) max_size = b.size;
#endif
if (b.size >= size) {
size_t diff = b.size - size;
if (diff < best_diff) {
best_diff = diff;
ibest = i;
if (!best_diff) {
void* ptr = b.ptr;
*actual_size = b.size;
b.ptr = nullptr;
b.size = 0;
return ptr;
}
}
}
}
}
if (ibest >= 0) {
ggml_cann_buffer& b = buffer_pool[ibest];
void* ptr = b.ptr;
*actual_size = b.size;
b.ptr = nullptr;
b.size = 0;
return ptr;
}
void* ptr;
size_t look_ahead_size = (size_t)(1.05 * size);
look_ahead_size = 256 * ((look_ahead_size + 255) / 256);
ggml_cann_set_device(device);
ACL_CHECK(
aclrtMalloc(&ptr, look_ahead_size, ACL_MEM_MALLOC_HUGE_FIRST));
*actual_size = look_ahead_size;
pool_size += look_ahead_size;
#ifdef DEBUG_CANN_MALLOC
GGML_CANN_LOG_INFO(
"%s[%d]: %d buffers, max_size = %u MB, pool_size = %u MB, "
"requested %u MB\n",
__func__, device, nnz, (uint32_t)(max_size / 1024 / 1024),
(uint32_t)(pool_size / 1024 / 1024),
(uint32_t)(size / 1024 / 1024));
#endif
return ptr;
}
/**
* @brief Free a buffer and return it to the pool.
*
* @param ptr Pointer to the buffer to free.
* @param size Size of the buffer to free.
*/
void free(void* ptr, size_t size) override {
for (int i = 0; i < MAX_BUFFERS; ++i) {
ggml_cann_buffer& b = buffer_pool[i];
if (b.ptr == nullptr) {
b.ptr = ptr;
b.size = size;
return;
}
}
// memory should always be buffered; this memory may still be needed by
// tasks in the stream.
// TODO: fix me.
GGML_ABORT("Cann buffer pool full, increase MAX_CANN_BUFFERS\n");
}
};
/**
* @brief A pool of CANN buffers with virtual memory.
*
* This class manages a pool of CANN buffers with virtual memory for a specific
* device.
*/
struct ggml_cann_pool_vmm : public ggml_cann_pool {
/**
* @brief The maximum size of the virtual memory pool (32 GB).
*/
static const size_t CANN_POOL_VMM_MAX_SIZE = 1ull << 35; // 32 GB
/**
* @brief The device ID associated with this buffer pool.
*/
int device;
/**
* @brief Pointer to the start of the virtual memory pool.
*/
void* pool_addr = 0;
/**
* @brief Amount of virtual memory used in the pool.
*/
size_t pool_used = 0;
/**
* @brief Total size of the virtual memory pool.
*/
size_t pool_size = 0;
/**
* @brief Allocation granularity for the virtual memory pool.
*/
size_t granularity;
/**
* @brief Handles for the physical memory allocated.
*/
std::vector<aclrtDrvMemHandle> handles;
/**
* @brief Offsets for the mapped memory regions.
*/
std::vector<void*> map_offsets;
/**
* @brief Constructor to initialize the buffer pool with virtual memory for
* a specific device.
*
* @param device The device ID to associate with this buffer pool.
*/
explicit ggml_cann_pool_vmm(int device)
: device(device),
granularity(ggml_cann_info().devices[device].vmm_granularity) {}
/**
* @brief Destructor to free all buffers in the virtual memory pool.
*/
~ggml_cann_pool_vmm() {
if (pool_addr != 0) {
for (auto& offset : map_offsets) {
ACL_CHECK(aclrtUnmapMem(offset));
}
for (auto& handle : handles) {
ACL_CHECK(aclrtFreePhysical(handle));
}
ACL_CHECK(aclrtReleaseMemAddress(pool_addr));
}
}
/**
* @brief Allocate a buffer of the given size in the virtual memory pool.
*
* @param size The size of the buffer to allocate.
* @param actual_size A pointer to a variable to receive the actual size of
* the allocated buffer.
* @return A pointer to the allocated buffer.
*/
void* alloc(size_t size, size_t* actual_size) override {
// round up the allocation size to the alignment to ensure that all
// allocations are aligned for all data types
const size_t alignment = 128;
size = alignment * ((size + alignment - 1) / alignment);
size_t avail = pool_size - pool_used;
if (size > avail) {
// round up to the next multiple of the granularity
size_t reserve_size = size - avail;
reserve_size =
granularity * ((reserve_size + granularity - 1) / granularity);
GGML_ASSERT(pool_size + reserve_size <= CANN_POOL_VMM_MAX_SIZE);
// allocate more physical memory
aclrtPhysicalMemProp prop = {};
prop.handleType = ACL_MEM_HANDLE_TYPE_NONE;
prop.allocationType = ACL_MEM_ALLOCATION_TYPE_PINNED;
prop.memAttr = ACL_HBM_MEM_HUGE;
prop.location.type = ACL_MEM_LOCATION_TYPE_DEVICE;
prop.location.id = device;
prop.reserve = 0;
aclrtDrvMemHandle handle;
ACL_CHECK(aclrtMallocPhysical(&handle, reserve_size, &prop, 0));
// reserve virtual address space (if not already reserved)
if (pool_addr == 0) {
ACL_CHECK(aclrtReserveMemAddress(
&pool_addr, CANN_POOL_VMM_MAX_SIZE, 0, NULL, 1));
}
// map at the end of the pool
ACL_CHECK(aclrtMapMem((char*)pool_addr + pool_size, reserve_size, 0,
handle, 0));
handles.push_back(handle);
map_offsets.push_back((char*)pool_addr + pool_size);
// add to the pool
pool_size += reserve_size;
// GGML_CANN_LOG_INFO("cann pool[%d]: size increased to %llu MB (
// reserved %llu MB)\n",
// device, (unsigned long long) (pool_size/1024/1024),
// (unsigned long long) (reserve_size/1024/1024));
}
GGML_ASSERT(pool_addr != 0);
void* ptr = (void*)((char*)pool_addr + pool_used);
*actual_size = size;
pool_used += size;
#ifdef DEBUG_CANN_MALLOC
GGML_CANN_LOG_INFO("cann pool[%d]: allocated %llu bytes at %llx\n", device,
(unsigned long long)size, (unsigned long long)ptr);
#endif
return ptr;
}
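// Worked example for the reservation logic above (illustrative numbers only):
// with granularity = 2 MiB, pool_size = 6 MiB and pool_used = 5 MiB, a 4 MiB
// request sees avail = 1 MiB, so reserve_size = 3 MiB rounds up to 4 MiB; the
// new physical chunk is mapped at pool_addr + 6 MiB, pool_size becomes 10 MiB,
// and the returned pointer is pool_addr + 5 MiB (the old pool_used).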
/**
* @brief Free a buffer and return it to the virtual memory pool.
*
* @param ptr Pointer to the buffer to free.
* @param size Size of the buffer to free.
*/
void free(void* ptr, size_t size) override {
#ifdef DEBUG_CANN_MALLOC
GGML_CANN_LOG_INFO("cann pool[%d]: freed %llu bytes at %llx\n", device,
(unsigned long long)size, (unsigned long long)ptr);
#endif
pool_used -= size;
// all deallocations must be in reverse order of the allocations
GGML_ASSERT(ptr == (void*)((char*)pool_addr + pool_used));
}
};
/**
* @brief Create a new CANN pool for a specific device.
*
* Factory method to create a new CANN pool object based on the device type.
*
* @param device The device ID for which to create the pool.
* @return A unique pointer to the created CANN pool.
*/
std::unique_ptr<ggml_cann_pool> ggml_backend_cann_context::new_pool_for_device(
int device) {
// return std::unique_ptr<ggml_cann_pool>(new ggml_cann_pool_leg(device));
return std::unique_ptr<ggml_cann_pool>(new ggml_cann_pool_vmm(device));
}
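// Minimal usage sketch of the pool interface returned here (hypothetical
// caller; bytes_a and bytes_b are placeholders). Note that the VMM pool
// requires frees in reverse order of the allocations:
//   size_t actual_a = 0, actual_b = 0;
//   void* a = pool->alloc(bytes_a, &actual_a);
//   void* b = pool->alloc(bytes_b, &actual_b);
//   /* ... enqueue kernels that use a and b ... */
//   pool->free(b, actual_b);  // LIFO order
//   pool->free(a, actual_a);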
// cann buffer
/**
* @brief Context for managing a CANN buffer associated with a specific device.
*
 * This structure holds information about a CANN buffer, including the device
 * ID and the pointer to the device memory allocated for the buffer.
*/
struct ggml_backend_cann_buffer_context {
int32_t device; ///< The device ID associated with this buffer context.
void* dev_ptr =
nullptr; ///< Pointer to the device memory allocated for the buffer.
/**
* @brief Constructor to initialize the CANN buffer context.
*
* @param device The device ID associated with this buffer context.
* @param dev_ptr Pointer to the device memory allocated for the buffer.
*/
ggml_backend_cann_buffer_context(int32_t device, void* dev_ptr)
: device(device),
dev_ptr(dev_ptr) {}
/**
* @brief Destructor to free the device memory allocated for the buffer.
*/
~ggml_backend_cann_buffer_context() { ACL_CHECK(aclrtFree(dev_ptr)); }
};
/**
* @brief Retrieve the name associated with a CANN buffer.
*
 * This function returns the fixed name "CANN" for CANN buffers; the buffer
 * argument itself is unused.
*
* @param buffer The CANN buffer whose name is to be retrieved.
* @return A pointer to a C-string containing the name of the buffer.
*/
GGML_CALL static const char* ggml_backend_cann_buffer_get_name(
ggml_backend_buffer_t buffer) {
return "CANN";
GGML_UNUSED(buffer);
}
/**
* @brief Check if a buffer is a CANN buffer.
*
* This function checks if a given buffer is a CANN buffer by comparing its
* `get_name` function pointer to `ggml_backend_cann_buffer_get_name`.
*
* @param buffer The buffer to check.
* @return true if the buffer is a CANN buffer, false otherwise.
*/
GGML_CALL static bool ggml_backend_buffer_is_cann(
ggml_backend_buffer_t buffer) {
return buffer->iface.get_name == ggml_backend_cann_buffer_get_name;
}
/**
* @brief Free resources associated with a CANN buffer.
*
* This function frees the resources associated with a CANN buffer, including
* its context.
*
* @param buffer The CANN buffer to free.
*/
GGML_CALL static void ggml_backend_cann_buffer_free_buffer(
ggml_backend_buffer_t buffer) {
ggml_backend_cann_buffer_context* ctx =
(ggml_backend_cann_buffer_context*)buffer->context;
delete ctx;
}
/**
* @brief Retrieve the base pointer of a CANN buffer.
*
* This function returns the base pointer of a CANN buffer, which points to the
* device memory allocated for the buffer.
*
* @param buffer The CANN buffer whose base pointer is to be retrieved.
* @return A pointer to the base of the device memory allocated for the buffer.
*/
GGML_CALL static void* ggml_backend_cann_buffer_get_base(
ggml_backend_buffer_t buffer) {
ggml_backend_cann_buffer_context* ctx =
(ggml_backend_cann_buffer_context*)buffer->context;
return ctx->dev_ptr;
}
/**
* @brief Transform quantized Q4.0 tensor data into a format suitable for CANN
* processing.
*
* This function transforms quantized Q4.0 tensor data into a format suitable
* for CANN processing. It extracts quantization values and scales from the
* source data and prepares them in a format expected by CANN operations.
*
* @param tensor Pointer to the tensor information.
* @param src Pointer to the source data in Q4.0 format.
* @param dst Pointer to the destination buffer where transformed data will be
* stored.
*/
GGML_CALL static void ggml_backend_cann_transform_q4_0(ggml_tensor* tensor,
const void* src,
void* dst) {
int64_t n_elems = ggml_nelements(tensor);
int64_t groups = n_elems / QK4_0;
size_t quant_bytes = n_elems * sizeof(uint8_t) / 2;
uint8_t* quant_offset = (uint8_t*)dst;
uint16_t* scale_offset = (uint16_t*)((char*)dst + quant_bytes);
for (int i = 0; i < groups; i++) {
const block_q4_0* group =
(const block_q4_0*)((const char*)src + i * sizeof(block_q4_0));
*scale_offset = group->d;
scale_offset++;
// 0-15
for (int j = 0; j < QK4_0 / 2; j += 2) {
(*quant_offset) = (group->qs[j] & 0x0F);
(*quant_offset) |= ((group->qs[j + 1] << 4));
quant_offset++;
}
// 16-31
for (int j = 0; j < QK4_0 / 2; j += 2) {
(*quant_offset) = (group->qs[j] >> 4);
(*quant_offset) |= (group->qs[j + 1] & 0xF0);
quant_offset++;
}
}
// put (uint4b_t -8) into int4b_t
for (quant_offset = (uint8_t*)dst;
quant_offset < (uint8_t*)dst + quant_bytes; quant_offset++) {
(*quant_offset) ^= 0x88;
}
}
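// Layout produced above, illustrated for one block (QK4_0 elements): dst packs
// all 4-bit quants first (n_elems / 2 bytes), followed by one 16-bit (fp16)
// scale per block. Each output byte holds two consecutive quants from the same
// half of the block, and the final XOR with 0x88 flips bit 3 of both nibbles,
// i.e. (q ^ 0x8) == (q - 8) mod 16, turning the unsigned 0..15 range into a
// signed int4 value: q = 0 -> -8, q = 8 -> 0, q = 15 -> +7.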
/**
* @brief Transform CANN processed data back into quantized Q4.0 format.
*
* This function transforms CANN processed data back into quantized Q4.0 format.
* It reverses the transformation performed by
* ggml_backend_cann_transform_q4_0(), converting the data back into its
* original quantized form.
*
* @param tensor Pointer to the tensor information.
* @param src Pointer to the source buffer containing transformed data.
* @param dst Pointer to the destination buffer where the Q4.0 formatted data
* will be stored.
*/
GGML_CALL static void ggml_backend_cann_transform_back_q4_0(
const ggml_tensor* tensor, void* src, void* dst) {
int64_t n_elems = ggml_nelements(tensor);
int64_t groups = n_elems / QK4_0;
size_t quant_bytes = n_elems * sizeof(uint8_t) / 2;
uint8_t* quant_offset = (uint8_t*)src;
uint16_t* scale_offset = (uint16_t*)((char*)src + quant_bytes);
for (; quant_offset < (uint8_t*)src + quant_bytes; quant_offset++) {
(*quant_offset) ^= 0x88;
}
quant_offset = (uint8_t*)src;
for (int i = 0; i < groups; i++) {
block_q4_0* group = (block_q4_0*)((char*)dst + i * sizeof(block_q4_0));
group->d = *scale_offset;
scale_offset++;
// 0-15
for (int j = 0; j < QK4_0 / 2; j += 2) {
group->qs[j] = ((*quant_offset) & 0x0F);
group->qs[j + 1] = ((*quant_offset) >> 4);
quant_offset++;
}
// 16-31
for (int j = 0; j < QK4_0 / 2; j += 2) {
group->qs[j] |= ((*quant_offset) << 4);
group->qs[j + 1] |= ((*quant_offset) & 0xF0);
quant_offset++;
}
}
}
/**
* @brief Transform quantized Q8.0 tensor data into a format suitable for CANN
* processing.
*
* This function transforms quantized Q8.0 tensor data into a format suitable
* for CANN processing. It extracts quantization values and scales from the
* source data and prepares them in a format expected by CANN operations.
*
* @param tensor Pointer to the tensor information.
* @param src Pointer to the source data in Q8.0 format.
* @param dst Pointer to the destination buffer where transformed data will be
* stored.
*/
GGML_CALL static void ggml_backend_cann_transform_q8_0(ggml_tensor* tensor,
const void* src,
void* dst) {
int64_t n_elems = ggml_nelements(tensor);
int64_t groups = n_elems / QK8_0;
size_t quant_bytes = n_elems * sizeof(uint8_t);
uint8_t* quant_offset = (uint8_t*)dst;
uint16_t* scale_offset = (uint16_t*)((char*)dst + quant_bytes);
for (int i = 0; i < groups; i++) {
const block_q8_0* group =
(const block_q8_0*)((const char*)src + i * sizeof(block_q8_0));
*scale_offset = group->d;
scale_offset++;
size_t group_quant_size = QK8_0 * sizeof(uint8_t);
memcpy(quant_offset, group->qs, group_quant_size);
quant_offset += group_quant_size;
}
}
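// Resulting layout: one contiguous buffer with all QK8_0-byte quant groups
// first, followed by one 16-bit scale per group, so the destination is
// expected to hold n_elems + groups * sizeof(uint16_t) bytes.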
/**
* @brief Transform CANN processed data back into quantized Q8.0 format.
*
* This function transforms CANN processed data back into quantized Q8.0 format.
* It reverses the transformation performed by
* ggml_backend_cann_transform_q8_0(), converting the data back into its
* original quantized form.
*
* @param tensor Pointer to the tensor information.
* @param src Pointer to the source buffer containing transformed data.
* @param dst Pointer to the destination buffer where the Q8.0 formatted data
* will be stored.
*/
GGML_CALL static void ggml_backend_cann_transform_back_q8_0(
const ggml_tensor* tensor, const void* src, void* dst) {
int64_t n_elems = ggml_nelements(tensor);
int64_t groups = n_elems / QK8_0;
size_t quant_bytes = n_elems * sizeof(uint8_t);
const uint8_t* quant_offset = (const uint8_t*)src;
const uint16_t* scale_offset =
(const uint16_t*)((const char*)src + quant_bytes);
for (int i = 0; i < groups; i++) {
block_q8_0* group = (block_q8_0*)((char*)dst + i * sizeof(block_q8_0));
group->d = *scale_offset;
scale_offset++;
size_t group_quant_size = QK8_0 * sizeof(uint8_t);
memcpy(group->qs, quant_offset, group_quant_size);
quant_offset += group_quant_size;
}
}
/**
* @brief Transform tensor data based on its type for CANN processing.
*
* This function transforms tensor data based on its quantization type for CANN
* processing. It dispatches the transformation based on the tensor's type to
* specialized functions handling Q4.0 and Q8.0 formats.
*
* @param tensor Pointer to the tensor information.
* @param src Pointer to the source data to be transformed.
* @param dst Pointer to the destination buffer where transformed data will be
* stored.
*/
GGML_CALL static void ggml_backend_cann_transform(ggml_tensor* tensor,
const void* src, void* dst) {
switch (tensor->type) {
case GGML_TYPE_Q4_0:
ggml_backend_cann_transform_q4_0(tensor, src, dst);
break;
case GGML_TYPE_Q8_0:
ggml_backend_cann_transform_q8_0(tensor, src, dst);
break;
default:
break;
}
}
/**
* @brief Transform CANN processed data back into tensor data based on its type.
*
* This function transforms CANN processed data back into tensor data based on
* its quantization type for Q4.0 and Q8.0 formats. It dispatches the
* transformation based on the tensor's type to specialized functions.
*
* @param tensor Pointer to the tensor information.
* @param src Pointer to the source data containing CANN processed data.
* @param dst Pointer to the destination buffer where transformed tensor data
* will be stored.
*/
GGML_CALL static void ggml_backend_cann_transform_back(
const ggml_tensor* tensor, void* src, void* dst) {
switch (tensor->type) {
case GGML_TYPE_Q4_0:
ggml_backend_cann_transform_back_q4_0(tensor, src, dst);
break;
case GGML_TYPE_Q8_0:
ggml_backend_cann_transform_back_q8_0(tensor, src, dst);
break;
default:
break;
}
}
/**
* @brief Check if transformation is needed for a given tensor type.
*
* This function checks if transformation is needed for a given tensor type
* to prepare data for CANN processing.
*
* @param type The tensor type to check.
* @return true if transformation is needed, false otherwise.
*/
GGML_CALL static bool need_transform(ggml_type type) {
switch (type) {
case GGML_TYPE_Q4_0:
case GGML_TYPE_Q8_0:
return true;
default:
return false;
}
}
/**
* @brief Initialize a tensor using data from a CANN buffer.
*
* This function initializes a tensor using data from a CANN buffer.
* It handles special cases such as views and quantization.
*
* @param buffer The CANN buffer from which to initialize the tensor.
* @param tensor Pointer to the tensor to be initialized.
*/
GGML_CALL static void ggml_backend_cann_buffer_init_tensor(
ggml_backend_buffer_t buffer, ggml_tensor* tensor) {
if (tensor->view_src != NULL && tensor->view_offs == 0) {
GGML_ASSERT(tensor->view_src->buffer->buft == buffer->buft);
return;
}
// TODO: the CANN backend doesn't support quantized tensors yet. Just leave
// the code here.
if (ggml_is_quantized(tensor->type)) {
// Initialize padding to 0 to avoid possible NaN values
size_t original_size = ggml_nbytes(tensor);
size_t padded_size =
ggml_backend_buft_get_alloc_size(buffer->buft, tensor);
if (padded_size > original_size && tensor->view_src == nullptr) {
size_t memset_size = padded_size - original_size;
ACL_CHECK(aclrtMemset((char*)tensor->data + original_size,
memset_size, 0, memset_size));
}
}
}
// TODO: need handle tensor which has paddings.
/**
* @brief Set tensor data in a CANN buffer.
*
* This function sets tensor data in a CANN buffer, handling transformations
* if needed based on the tensor's type.
*
* @param buffer The CANN buffer where the tensor data will be set.
* @param tensor Pointer to the tensor whose data will be set.
* @param data Pointer to the source data to be copied into the tensor.
 * @param offset Offset in bytes within the tensor data at which to start copying.
* @param size Size of the data to be copied, in bytes.
*/
GGML_CALL static void ggml_backend_cann_buffer_set_tensor(
ggml_backend_buffer_t buffer, ggml_tensor *tensor, const void *data,
size_t offset, size_t size) {
ggml_backend_cann_buffer_context *ctx =
(ggml_backend_cann_buffer_context *)buffer->context;
ggml_cann_set_device(ctx->device);
// TODO: refer to cann (#6017); it uses the thread's default stream.
// For ACL, synchronous functions use this default stream.
// Why aclrtSynchronizeDevice?
if (!need_transform(tensor->type)) {
ACL_CHECK(aclrtMemcpy((char *)tensor->data + offset, size, data, size,
ACL_MEMCPY_HOST_TO_DEVICE));
} else {
void *transform_buffer = malloc(size);
ggml_backend_cann_transform(tensor, data, transform_buffer);
#ifndef NDEBUG
void *check_buffer = malloc(size);
ggml_backend_cann_transform_back(tensor, transform_buffer,
check_buffer);
GGML_ASSERT(memcmp(data, check_buffer, size) == 0);
free(check_buffer);
#endif
ACL_CHECK(aclrtMemcpy((char *)tensor->data + offset, size,
transform_buffer, size,
ACL_MEMCPY_HOST_TO_DEVICE));
free(transform_buffer);
}
}
/**
* @brief Get tensor data from a CANN buffer.
*
* This function retrieves tensor data from a CANN buffer, handling
* transformations if needed based on the tensor's type.
*
* @param buffer The CANN buffer from which to retrieve tensor data.
* @param tensor Pointer to the tensor whose data will be retrieved.
* @param data Pointer to the destination buffer where the tensor data will be
* copied.
 * @param offset Offset in bytes within the tensor data from which to start copying.
* @param size Size of the data to be copied, in bytes.
*/
GGML_CALL static void ggml_backend_cann_buffer_get_tensor(
ggml_backend_buffer_t buffer, const ggml_tensor* tensor, void* data,
size_t offset, size_t size) {
ggml_backend_cann_buffer_context* ctx =
(ggml_backend_cann_buffer_context*)buffer->context;
ggml_cann_set_device(ctx->device);
if (!need_transform(tensor->type)) {
ACL_CHECK(aclrtMemcpy(data, size, (char*)tensor->data + offset, size,
ACL_MEMCPY_DEVICE_TO_HOST));
} else {
void* transform_buffer = malloc(size);
ACL_CHECK(aclrtMemcpy(transform_buffer, size,
(char*)tensor->data + offset, size,
ACL_MEMCPY_DEVICE_TO_HOST));
ggml_backend_cann_transform_back(tensor, transform_buffer, data);
free(transform_buffer);
}
}
/**
* @brief Copy tensor data between CANN buffers if possible.
*
* This function copies tensor data between CANN buffers if the source and
* destination buffers are CANN buffers and they meet the necessary conditions
* (same device or devices can access each other).
*
* @param buffer The destination CANN buffer where the tensor data will be
* copied.
* @param src Pointer to the source tensor whose data will be copied.
* @param dst Pointer to the destination tensor where the data will be copied.
* @return true if the copy operation succeeded, false otherwise.
*/
GGML_CALL static bool ggml_backend_cann_buffer_cpy_tensor(
ggml_backend_buffer_t buffer, const ggml_tensor* src, ggml_tensor* dst) {
if (ggml_backend_buffer_is_cann(src->buffer)) {
ggml_backend_cann_buffer_context* src_ctx =
(ggml_backend_cann_buffer_context*)src->buffer->context;
ggml_backend_cann_buffer_context* dst_ctx =
(ggml_backend_cann_buffer_context*)buffer->context;
size_t memcpy_size = ggml_nbytes(src);
// Same device.
if (src_ctx->device == dst_ctx->device) {
ACL_CHECK(aclrtMemcpy((char*)dst->data, memcpy_size,
(const char*)src->data, memcpy_size,
ACL_MEMCPY_DEVICE_TO_DEVICE));
return true;
} else {
// Different device but can access by peer.
int32_t canAccessPeer = 0;
ACL_CHECK(aclrtDeviceCanAccessPeer(&canAccessPeer, src_ctx->device,
dst_ctx->device));
if (canAccessPeer) {
ggml_cann_set_device(src_ctx->device);
ACL_CHECK(aclrtDeviceEnablePeerAccess(dst_ctx->device, 0));
ACL_CHECK(aclrtMemcpy((char*)dst->data, memcpy_size,
(const char*)src->data, memcpy_size,
ACL_MEMCPY_DEVICE_TO_DEVICE));
return true;
}
}
}
return false;
}
/**
* @brief Clear a CANN buffer by setting all its memory to a specified value.
*
* This function clears a CANN buffer by setting all its memory to a specified
* value.
*
* @param buffer The CANN buffer to be cleared.
* @param value The value to which each byte in the buffer will be set.
*/
GGML_CALL static void ggml_backend_cann_buffer_clear(
ggml_backend_buffer_t buffer, uint8_t value) {
ggml_backend_cann_buffer_context* ctx =
(ggml_backend_cann_buffer_context*)buffer->context;
ggml_cann_set_device(ctx->device);
ACL_CHECK(aclrtMemset(ctx->dev_ptr, buffer->size, value, buffer->size));
}
/**
* @brief Interface for a CANN buffer in the backend.
*
* This structure defines function pointers to operations that can be performed
* on a CANN buffer within the backend.
*/
static ggml_backend_buffer_i ggml_backend_cann_buffer_interface = {
/* .get_name = */ ggml_backend_cann_buffer_get_name,
/* .free_buffer = */ ggml_backend_cann_buffer_free_buffer,
/* .get_base = */ ggml_backend_cann_buffer_get_base,
/* .init_tensor = */ ggml_backend_cann_buffer_init_tensor,
/* .memset_tensor = */ NULL,
/* .set_tensor = */ ggml_backend_cann_buffer_set_tensor,
/* .get_tensor = */ ggml_backend_cann_buffer_get_tensor,
/* .cpy_tensor = */ ggml_backend_cann_buffer_cpy_tensor,
/* .clear = */ ggml_backend_cann_buffer_clear,
/* .reset = */ NULL,
};
// cann buffer type
/**
* @brief Structure representing context information for a specific backend
* buffer type.
*/
struct ggml_backend_cann_buffer_type_context {
int32_t
device; /**< Device identifier associated with the buffer context. */
std::string name; /**< Name associated with the buffer context. */
};
/**
* @brief Retrieves the name associated with a CANN buffer type.
*
 * This function returns the fixed name "CANN" for the CANN buffer type; the
 * buffer type argument itself is unused.
*
* @param buft Pointer to the buffer type context.
* @return Const pointer to the C-style string containing the name.
*/
GGML_CALL static const char* ggml_backend_cann_buffer_type_name(
ggml_backend_buffer_type_t buft) {
return "CANN";
GGML_UNUSED(buft);
}
/**
* @brief Allocates a new CANN buffer of the specified type and size.
*
* This function allocates a new CANN buffer on the specified device with the
* given size.
*
* @param buft Pointer to the buffer type context.
* @param size Size in bytes of the buffer to allocate.
* @return Pointer to the allocated buffer, or nullptr if allocation fails.
*/
GGML_CALL static ggml_backend_buffer_t
ggml_backend_cann_buffer_type_alloc_buffer(ggml_backend_buffer_type_t buft,
size_t size) {
ggml_backend_cann_buffer_type_context* buft_ctx =
(ggml_backend_cann_buffer_type_context*)buft->context;
ggml_cann_set_device(buft_ctx->device);
size = std::max(size, (size_t)1);
void* dev_ptr;
aclError err = aclrtMalloc(&dev_ptr, size, ACL_MEM_MALLOC_HUGE_FIRST);
if (err != ACL_SUCCESS) {
GGML_CANN_LOG_ERROR(
"%s: allocating %.2f MiB on device %d: aclrtMalloc failed: %s\n",
__func__, size / 1024.0 / 1024.0, buft_ctx->device,
aclGetRecentErrMsg());
return nullptr;
}
ggml_backend_cann_buffer_context* ctx =
new ggml_backend_cann_buffer_context(buft_ctx->device, dev_ptr);
return ggml_backend_buffer_init(buft, ggml_backend_cann_buffer_interface,
ctx, size);
}
/**
* @brief Retrieves the memory alignment requirement for CANN buffers of this
* type.
*
* This function returns the alignment requirement in bytes for memory allocated
* by the CANN buffer type.
*
* @param buft Pointer to the buffer type context (unused in this
* implementation).
* @return The alignment requirement in bytes (fixed at 128 bytes for CANN
* buffers).
*/
GGML_CALL static size_t ggml_backend_cann_buffer_type_get_alignment(
ggml_backend_buffer_type_t buft) {
return 128;
GGML_UNUSED(buft);
}
/**
* @brief Calculates the allocation size required for a tensor in a CANN buffer.
*
* Computes the total allocation size needed for storing the tensor's data in a
* CANN buffer, considering any necessary padding or adjustments for quantized
* types.
*
* @param buft Pointer to the buffer type context (unused in this
* implementation).
* @param tensor Pointer to the tensor for which the allocation size is
* calculated.
* @return The total allocation size in bytes required for the tensor in the
* CANN buffer.
*/
GGML_CALL static size_t ggml_backend_cann_buffer_type_get_alloc_size(
ggml_backend_buffer_type_t buft, const ggml_tensor* tensor) {
size_t size = ggml_nbytes(tensor);
int64_t ne0 = tensor->ne[0];
// The last line must be at least 32 bytes, because every single op deals
// with at least 32 bytes.
// TODO: quantized type?
// int64_t line_size = ne0 * ggml_element_size(tensor);
// int64_t line_size_align_32 = (line_size + 31) & ~31;
// size += (line_size_align_32 - line_size);
// TODO: quantized types are not supported yet.
// TODO: consider non-contiguous tensors.
if (ggml_is_quantized(tensor->type)) {
if (ne0 % MATRIX_ROW_PADDING != 0) {
size += ggml_row_size(
tensor->type, MATRIX_ROW_PADDING - ne0 % MATRIX_ROW_PADDING);
}
}
return size;
GGML_UNUSED(buft);
}
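// Padding example for the quantized case above (assuming MATRIX_ROW_PADDING
// is 512, as in other ggml backends, and the standard 32-element / 18-byte
// Q4_0 block): a Q4_0 row with ne0 = 1056 adds
// ggml_row_size(GGML_TYPE_Q4_0, 512 - 1056 % 512) = ggml_row_size(GGML_TYPE_Q4_0, 480)
// = 15 blocks * 18 bytes = 270 extra bytes, so device kernels can always read
// whole blocks past the end of the last row.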
/**
* @brief Interface for managing CANN buffer types in the GGML backend.
*
* Provides function pointers for allocating, querying properties, and managing
* memory for CANN buffer types in the GGML backend.
*/
static ggml_backend_buffer_type_i ggml_backend_cann_buffer_type_interface = {
/* .get_name = */ ggml_backend_cann_buffer_type_name,
/* .alloc_buffer = */ ggml_backend_cann_buffer_type_alloc_buffer,
/* .get_alignment = */ ggml_backend_cann_buffer_type_get_alignment,
/* .get_max_size = */ NULL, // defaults to SIZE_MAX
/* .get_alloc_size = */ ggml_backend_cann_buffer_type_get_alloc_size,
/* .is_host = */ NULL,
};
/**
* @brief Retrieves the CANN buffer type for a specified device.
*
* This function initializes and returns the buffer type interface associated
* with the given device. It ensures thread-safe access using a mutex.
*
* @param device The device index for which to retrieve the buffer type.
* @return A pointer to the buffer type interface for the specified device, or
* nullptr if the device index is out of range.
*/
GGML_CALL ggml_backend_buffer_type_t
ggml_backend_cann_buffer_type(int32_t device) {
static std::mutex mutex;
std::lock_guard<std::mutex> lock(mutex);
if (device >= ggml_backend_cann_get_device_count()) {
return nullptr;
}
static ggml_backend_buffer_type
ggml_backend_cann_buffer_types[GGML_CANN_MAX_DEVICES];
static bool ggml_backend_cann_buffer_type_initialized = false;
if (!ggml_backend_cann_buffer_type_initialized) {
for (int32_t i = 0; i < GGML_CANN_MAX_DEVICES; i++) {
ggml_backend_cann_buffer_types[i] = {
/* .iface = */ ggml_backend_cann_buffer_type_interface,
/* .context = */
new ggml_backend_cann_buffer_type_context{
i, "CANN" + std::to_string(i)},
};
}
ggml_backend_cann_buffer_type_initialized = true;
}
return &ggml_backend_cann_buffer_types[device];
}
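// Minimal usage sketch (hypothetical size): the returned type plugs into the
// generic ggml-backend allocation API, e.g.
//   ggml_backend_buffer_type_t buft = ggml_backend_cann_buffer_type(0);
//   ggml_backend_buffer_t buf = ggml_backend_buft_alloc_buffer(buft, 16 << 20);
//   /* ... place tensors in buf ... */
//   ggml_backend_buffer_free(buf);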
/**
* @brief Computes the forward operation for a given tensor using CANN
* operations.
*
* This function selects the appropriate CANN operation based on the type of
* operation specified in the tensor and performs the computation.
*
* @param ctx The CANN context containing necessary resources and
* configurations.
* @param dst The destination tensor where the result of the computation will be
* stored.
* @return true if the computation was successful; false otherwise.
*/
static bool ggml_cann_compute_forward(ggml_backend_cann_context& ctx,
struct ggml_tensor* dst) {
switch (dst->op) {
case GGML_OP_REPEAT:
ggml_cann_repeat(ctx, dst);
break;
case GGML_OP_GET_ROWS:
ggml_cann_get_rows(ctx, dst);
break;
case GGML_OP_DUP:
ggml_cann_dup(ctx, dst);
break;
case GGML_OP_ADD:
ggml_cann_add(ctx, dst);
break;
case GGML_OP_ACC:
ggml_cann_acc(ctx, dst);
break;
case GGML_OP_MUL:
ggml_cann_mul_div<aclnnMulGetWorkspaceSize, aclnnMul>(ctx, dst);
break;
case GGML_OP_DIV:
ggml_cann_mul_div<aclnnDivGetWorkspaceSize, aclnnDiv>(ctx, dst);
break;
case GGML_OP_UNARY:
switch (ggml_get_unary_op(dst)) {
case GGML_UNARY_OP_GELU:
ggml_cann_activation<aclnnGeluGetWorkspaceSize, aclnnGelu>(
ctx, dst);
break;
case GGML_UNARY_OP_SILU:
ggml_cann_activation<aclnnSiluGetWorkspaceSize, aclnnSilu>(
ctx, dst);
break;
// TODO: Use faster gelu??
case GGML_UNARY_OP_GELU_QUICK:
ggml_cann_activation<aclnnGeluGetWorkspaceSize, aclnnGelu>(
ctx, dst);
break;
case GGML_UNARY_OP_TANH:
ggml_cann_activation<aclnnTanhGetWorkspaceSize, aclnnTanh>(
ctx, dst);
break;
case GGML_UNARY_OP_RELU:
ggml_cann_activation<aclnnReluGetWorkspaceSize, aclnnRelu>(
ctx, dst);
break;
case GGML_UNARY_OP_HARDSIGMOID:
ggml_cann_activation<aclnnHardsigmoidGetWorkspaceSize,
aclnnHardsigmoid>(ctx, dst);
break;
case GGML_UNARY_OP_HARDSWISH:
ggml_cann_activation<aclnnHardswishGetWorkspaceSize,
aclnnHardswish>(ctx, dst);
break;
default:
return false;
}
break;
case GGML_OP_NORM:
ggml_cann_norm(ctx, dst);
break;
case GGML_OP_GROUP_NORM:
ggml_cann_group_norm(ctx, dst);
break;
case GGML_OP_CONCAT:
ggml_cann_concat(ctx, dst);
break;
case GGML_OP_UPSCALE:
ggml_cann_upsample_nearest2d(ctx, dst);
break;
case GGML_OP_PAD:
ggml_cann_pad(ctx, dst);
break;
case GGML_OP_ARANGE:
ggml_cann_arange(ctx, dst);
break;
case GGML_OP_TIMESTEP_EMBEDDING:
ggml_cann_timestep_embedding(ctx, dst);
break;
case GGML_OP_LEAKY_RELU:
ggml_cann_leaky_relu(ctx, dst);
break;
case GGML_OP_RMS_NORM:
ggml_cann_rms_norm(ctx, dst);
break;
case GGML_OP_MUL_MAT:
ggml_cann_mul_mat(ctx, dst);
break;
case GGML_OP_MUL_MAT_ID:
return false;
case GGML_OP_SCALE:
ggml_cann_scale(ctx, dst);
break;
case GGML_OP_SQR:
ggml_cann_sqr(ctx, dst);
break;
case GGML_OP_CLAMP:
ggml_cann_clamp(ctx, dst);
break;
case GGML_OP_CPY:
ggml_cann_cpy(ctx, dst);
break;
case GGML_OP_CONT:
ggml_cann_dup(ctx, dst);
break;
case GGML_OP_NONE:
case GGML_OP_RESHAPE:
case GGML_OP_VIEW:
case GGML_OP_PERMUTE:
case GGML_OP_TRANSPOSE:
break;
case GGML_OP_DIAG_MASK_INF:
ggml_cann_diag_mask(ctx, dst, -INFINITY);
break;
case GGML_OP_SOFT_MAX:
ggml_cann_softmax(ctx, dst);
break;
case GGML_OP_ROPE:
ggml_cann_rope(ctx, dst);
break;
case GGML_OP_IM2COL:
ggml_cann_im2col(ctx, dst);
break;
case GGML_OP_POOL_2D:
ggml_cann_pool2d(ctx, dst);
break;
case GGML_OP_SUM_ROWS:
ggml_cann_sum_rows(ctx, dst);
break;
case GGML_OP_ARGSORT:
ggml_cann_argsort(ctx, dst);
break;
default:
return false;
}
return true;
}
// backend
/**
* @brief Retrieves the name associated with the CANN backend.
*
* This function returns the name assigned to the CANN backend, which is stored
* in the context of the provided backend structure.
*
* @param backend Pointer to the CANN backend structure.
* @return A pointer to a constant string representing the backend name.
*/
GGML_CALL static const char* ggml_backend_cann_name(ggml_backend_t backend) {
ggml_backend_cann_context* cann_ctx =
(ggml_backend_cann_context*)backend->context;
return cann_ctx->name.c_str();
}
/**
* @brief Frees resources associated with the CANN backend.
*
* This function releases resources associated with the CANN backend context
* and resets the device associated with the backend to its initial state.
*
* @param backend Pointer to the CANN backend structure to be freed.
*/
GGML_CALL static void ggml_backend_cann_free(ggml_backend_t backend) {
ggml_backend_cann_context* cann_ctx =
(ggml_backend_cann_context*)backend->context;
ACL_CHECK(aclrtSynchronizeDevice());
ACL_CHECK(aclrtResetDevice(cann_ctx->device));
// Finalize ACL when the last backend is freed.
if (cann_ctx->device == ggml_backend_cann_get_device_count() - 1) {
ACL_CHECK(aclFinalize());
}
delete cann_ctx;
delete backend;
}
/**
* @brief Retrieves the default buffer type associated with the CANN backend.
*
* This function returns the buffer type specific to the device associated
* with the CANN backend. It is used to allocate buffers for computations
* performed by the backend.
*
* @param backend Pointer to the CANN backend structure.
* @return Pointer to the buffer type structure for the CANN backend.
*/
GGML_CALL static ggml_backend_buffer_type_t
ggml_backend_cann_get_default_buffer_type(ggml_backend_t backend) {
ggml_backend_cann_context* cann_ctx =
(ggml_backend_cann_context*)backend->context;
return ggml_backend_cann_buffer_type(cann_ctx->device);
}
/**
* @brief Sets tensor data asynchronously in the CANN backend.
*
* This function asynchronously sets tensor data in the CANN backend. Depending
* on the tensor type, it may perform data transformations before copying data
* to the device.
*
* @param backend Pointer to the CANN backend structure.
* @param tensor Pointer to the tensor structure to set data for.
* @param data Pointer to the host data to copy to the tensor.
 * @param offset Offset in bytes within the tensor data.
* @param size Size of the data to copy in bytes.
*/
GGML_CALL static void ggml_backend_cann_set_tensor_async(ggml_backend_t backend,
ggml_tensor *tensor,
const void *data,
size_t offset,
size_t size) {
ggml_backend_cann_context *cann_ctx =
(ggml_backend_cann_context *)backend->context;
if (!need_transform(tensor->type)) {
ACL_CHECK(aclrtMemcpyAsync((char *)tensor->data + offset, size, data,
size, ACL_MEMCPY_HOST_TO_DEVICE,
cann_ctx->stream()));
} else {
void *transform_buffer = malloc(size);
ggml_backend_cann_transform(tensor, data, transform_buffer);
#ifndef NDEBUG
void *check_buffer = malloc(size);
ggml_backend_cann_transform_back(tensor, transform_buffer,
check_buffer);
GGML_ASSERT(memcmp(data, check_buffer, size) == 0);
free(check_buffer);
#endif
ACL_CHECK(aclrtMemcpyAsync(
(char *)tensor->data + offset, size, transform_buffer, size,
ACL_MEMCPY_HOST_TO_DEVICE, cann_ctx->stream()));
ACL_CHECK(aclrtSynchronizeStream(cann_ctx->stream()));
free(transform_buffer);
}
}
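/**
 * @brief Gets tensor data asynchronously in the CANN backend.
 *
 * This function asynchronously copies tensor data from the device to the host.
 * Depending on the tensor type, it may transform the data back into its
 * original format after the copy completes.
 *
 * @param backend Pointer to the CANN backend structure.
 * @param tensor Pointer to the tensor structure to get data from.
 * @param data Pointer to the host buffer that receives the data.
 * @param offset Offset in bytes within the tensor data.
 * @param size Size of the data to copy in bytes.
 */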
GGML_CALL static void ggml_backend_cann_get_tensor_async(
ggml_backend_t backend, const ggml_tensor *tensor, void *data,
size_t offset, size_t size) {
ggml_backend_cann_context *cann_ctx =
(ggml_backend_cann_context *)backend->context;
ggml_backend_buffer_t buf =
tensor->view_src ? tensor->view_src->buffer : tensor->buffer;
GGML_ASSERT(buf->buft == ggml_backend_cann_buffer_type(cann_ctx->device) &&
"unsupported buffer type");
if (!need_transform(tensor->type)) {
ACL_CHECK(aclrtMemcpyAsync(data, size, (char *)tensor->data + offset,
size, ACL_MEMCPY_DEVICE_TO_HOST,
cann_ctx->stream()));
} else {
void *transform_buffer = malloc(size);
ACL_CHECK(aclrtMemcpyAsync(
transform_buffer, size, (char *)tensor->data + offset, size,
ACL_MEMCPY_DEVICE_TO_HOST, cann_ctx->stream()));
ACL_CHECK(aclrtSynchronizeStream(cann_ctx->stream()));
ggml_backend_cann_transform_back(tensor, transform_buffer, data);
free(transform_buffer);
}
}
/**
* @brief Asynchronously copies tensor data between CANN backends.
*
* This function copies tensor data asynchronously between two CANN backends. It
* checks if both tensors reside in CANN buffers and whether the devices support
* peer-to-peer access for direct copying. If not, it returns false.
*
* @param backend_src Pointer to the source CANN backend structure.
* @param backend_dst Pointer to the destination CANN backend structure.
* @param src Pointer to the source tensor to copy data from.
* @param dst Pointer to the destination tensor to copy data to.
* @return true if the copy operation succeeds, false otherwise.
*/
GGML_CALL static bool ggml_backend_cann_cpy_tensor_async(
ggml_backend_t backend_src, ggml_backend_t backend_dst,
const ggml_tensor* src, ggml_tensor* dst) {
GGML_ASSERT(ggml_backend_is_cann(backend_src) ||
ggml_backend_is_cann(backend_dst));
if (!ggml_backend_buffer_is_cann(src->buffer) ||
!ggml_backend_buffer_is_cann(dst->buffer)) {
return false;
}
ggml_backend_buffer_t buf_src =
src->view_src ? src->view_src->buffer : src->buffer;
ggml_backend_buffer_t buf_dst =
dst->view_src ? dst->view_src->buffer : dst->buffer;
ggml_backend_cann_context* cann_ctx_src =
(ggml_backend_cann_context*)backend_src->context;
ggml_backend_cann_context* cann_ctx_dst =
(ggml_backend_cann_context*)backend_dst->context;
size_t copy_size = ggml_nbytes(dst);
if (backend_src != backend_dst) {
ggml_backend_cann_buffer_context* buf_ctx_src =
(ggml_backend_cann_buffer_context*)buf_src->context;
ggml_backend_cann_buffer_context* buf_ctx_dst =
(ggml_backend_cann_buffer_context*)buf_dst->context;
GGML_ASSERT(cann_ctx_src->device == buf_ctx_src->device);
GGML_ASSERT(cann_ctx_dst->device == buf_ctx_dst->device);
int32_t canAccessPeer = 0;
ACL_CHECK(aclrtDeviceCanAccessPeer(&canAccessPeer, cann_ctx_src->device,
cann_ctx_dst->device));
if (!canAccessPeer) {
return false;
}
// Peer access must be enabled in both directions for aclrtMemcpyAsync between devices.
ggml_cann_set_device(cann_ctx_dst->device);
ACL_CHECK(aclrtDeviceEnablePeerAccess(cann_ctx_src->device, 0));
ggml_cann_set_device(cann_ctx_src->device);
ACL_CHECK(aclrtDeviceEnablePeerAccess(cann_ctx_dst->device, 0));
ACL_CHECK(aclrtMemcpyAsync(dst->data, copy_size, src->data, copy_size,
ACL_MEMCPY_DEVICE_TO_DEVICE,
cann_ctx_src->stream()));
// TODO: workaround; waiting on an event did not work here.
aclrtSynchronizeStream(cann_ctx_src->stream());
} else {
// src and dst are on the same backend
ACL_CHECK(aclrtMemcpyAsync(dst->data, copy_size, src->data, copy_size,
ACL_MEMCPY_DEVICE_TO_DEVICE,
cann_ctx_dst->stream()));
}
return true;
}
/**
* @brief Synchronizes a CANN backend.
*
* This function synchronizes the specified CANN backend by waiting for all
* operations in its associated stream to complete.
*
* @param backend Pointer to the CANN backend structure to synchronize.
*/
GGML_CALL static void ggml_backend_cann_synchronize(ggml_backend_t backend) {
ggml_backend_cann_context* cann_ctx =
(ggml_backend_cann_context*)backend->context;
ggml_cann_set_device(cann_ctx->device);
ACL_CHECK(aclrtSynchronizeStream(cann_ctx->stream()));
}
/**
* @brief Computes a computational graph using a CANN backend.
*
* This function computes the operations defined in the computational graph
* using the specified CANN backend.
*
* @param backend Pointer to the CANN backend structure to use for computation.
* @param cgraph Pointer to the computational graph structure containing nodes
* representing operations to be computed.
* @return enum ggml_status Returns GGML_STATUS_SUCCESS if computation
* completes successfully, otherwise an appropriate error status.
*/
GGML_CALL static enum ggml_status ggml_backend_cann_graph_compute(
ggml_backend_t backend, ggml_cgraph* cgraph) {
ggml_backend_cann_context* cann_ctx =
(ggml_backend_cann_context*)backend->context;
ggml_cann_set_device(cann_ctx->device);
for (int i = 0; i < cgraph->n_nodes; i++) {
ggml_tensor* node = cgraph->nodes[i];
if (ggml_is_empty(node) || node->op == GGML_OP_NONE) {
continue;
}
bool ok = ggml_cann_compute_forward(*cann_ctx, node);
if (!ok) {
GGML_CANN_LOG_ERROR("%s: error: op not supported %s (%s)\n", __func__,
node->name, ggml_op_name(node->op));
}
GGML_ASSERT(ok);
}
return GGML_STATUS_SUCCESS;
}
/**
* @brief Checks if the CANN backend supports a specific operation.
*
* This function checks whether the specified operation is supported by the
* CANN backend.
*
* @param backend Pointer to the CANN backend structure to check support for
* the operation.
* @param op Pointer to the tensor representing the operation to check.
* @return bool Returns true if the operation is supported by the backend,
* otherwise false.
*/
GGML_CALL static bool ggml_backend_cann_supports_op(ggml_backend_t backend,
const ggml_tensor* op) {
switch (op->op) {
case GGML_OP_UNARY:
switch (ggml_get_unary_op(op)) {
case GGML_UNARY_OP_GELU:
case GGML_UNARY_OP_SILU:
case GGML_UNARY_OP_RELU:
case GGML_UNARY_OP_HARDSIGMOID:
case GGML_UNARY_OP_HARDSWISH:
case GGML_UNARY_OP_GELU_QUICK:
case GGML_UNARY_OP_TANH:
return true;
default:
return false;
}
case GGML_OP_MUL_MAT: {
switch (op->src[0]->type) {
case GGML_TYPE_F16:
case GGML_TYPE_F32:
case GGML_TYPE_Q8_0:
// TODO: fix me
// Current groupsize should not be greater than k-1 in
// aclnnWeightQuantBatchMatmulV2GetWorkspaceSize().
case GGML_TYPE_Q4_0:
return true;
default:
return false;
}
}
case GGML_OP_MUL_MAT_ID:
return false;
// embedding
case GGML_OP_GET_ROWS: {
switch (op->src[0]->type) {
case GGML_TYPE_F32:
case GGML_TYPE_F16:
case GGML_TYPE_Q4_0:
case GGML_TYPE_Q8_0:
return true;
default:
return false;
}
} break;
case GGML_OP_CPY: {
switch (op->type) {
case GGML_TYPE_F32:
case GGML_TYPE_F16:
case GGML_TYPE_Q8_0:
case GGML_TYPE_Q4_0:
return true;
default:
return false;
}
}
case GGML_OP_DUP:
case GGML_OP_REPEAT:
case GGML_OP_CONCAT:
case GGML_OP_NONE:
case GGML_OP_RESHAPE:
case GGML_OP_VIEW:
case GGML_OP_PERMUTE:
case GGML_OP_TRANSPOSE:
case GGML_OP_NORM:
case GGML_OP_ADD:
case GGML_OP_MUL:
case GGML_OP_DIV:
case GGML_OP_RMS_NORM:
case GGML_OP_SCALE:
case GGML_OP_SQR:
case GGML_OP_CLAMP:
case GGML_OP_CONT:
case GGML_OP_DIAG_MASK_INF:
case GGML_OP_SOFT_MAX:
case GGML_OP_ROPE:
case GGML_OP_IM2COL:
case GGML_OP_POOL_2D:
case GGML_OP_SUM_ROWS:
case GGML_OP_ARGSORT:
case GGML_OP_ACC:
case GGML_OP_GROUP_NORM:
case GGML_OP_UPSCALE:
case GGML_OP_PAD:
case GGML_OP_ARANGE:
case GGML_OP_TIMESTEP_EMBEDDING:
case GGML_OP_LEAKY_RELU:
return true;
default:
return false;
}
GGML_UNUSED(backend);
}
/**
* @brief Checks if the backend buffer type is associated with the CANN backend.
*
* This function checks whether the provided backend buffer type is associated
* with the CANN backend based on the comparison of its name retrieval function
* pointer.
*
* @param buft Pointer to the backend buffer type to check.
* @return bool Returns true if the buffer type is associated with the CANN
* backend, otherwise false.
*/
static bool ggml_backend_buft_is_cann(ggml_backend_buffer_type_t buft) {
return buft->iface.get_name == ggml_backend_cann_buffer_type_name;
}
/**
* @brief Checks if the CANN backend supports a specific backend buffer type.
*
* This function determines whether the CANN backend supports the given backend
* buffer type by comparing the device context of the backend and buffer type.
 * It returns true if the backend context and the buffer type context refer to
 * the same device.
*
* @param backend Pointer to the CANN backend.
* @param buft Pointer to the backend buffer type to check.
* @return bool Returns true if the CANN backend supports the buffer type,
* otherwise false.
*/
GGML_CALL static bool ggml_backend_cann_supports_buft(
ggml_backend_t backend, ggml_backend_buffer_type_t buft) {
if (ggml_backend_buft_is_cann(buft)) {
ggml_backend_cann_context * cann_ctx =
(ggml_backend_cann_context *)backend->context;
ggml_backend_cann_buffer_type_context * buft_ctx =
(ggml_backend_cann_buffer_type_context *)buft->context;
return buft_ctx->device == cann_ctx->device;
}
return false;
}
/**
* @brief Determines if a tensor operation should be offloaded to the CANN
* backend.
*
* This function checks if a given tensor operation should be offloaded to the
* CANN backend based on the operation type and the size of the tensor. It
* returns true if the second dimension (ne[1]) of the tensor is greater than or
* equal to the minimum batch size and the operation is not GGML_OP_GET_ROWS.
*
* @param backend Pointer to the CANN backend.
* @param op Pointer to the tensor operation to check.
* @return bool Returns true if the operation should be offloaded, otherwise
* false.
*/
GGML_CALL static bool ggml_backend_cann_offload_op(ggml_backend_t backend,
const ggml_tensor* op) {
const int min_batch_size = 32;
GGML_UNUSED(backend);
return op->ne[1] >= min_batch_size && op->op != GGML_OP_GET_ROWS;
}
/**
* @brief Creates a new event for the CANN backend.
*
* This function initializes a new event for the CANN backend by setting the
* device and creating an ACL runtime event. The created event is then wrapped
* in a ggml_backend_event structure and returned.
*
* @param backend Pointer to the CANN backend.
* @return ggml_backend_event_t Returns a pointer to the new event structure.
*/
static ggml_backend_event_t ggml_backend_cann_event_new(
ggml_backend_t backend) {
ggml_backend_cann_context* cann_ctx =
(ggml_backend_cann_context*)backend->context;
ggml_cann_set_device(cann_ctx->device);
aclrtEvent event;
ACL_CHECK(aclrtCreateEvent(&event));
return new ggml_backend_event{
/* .backend = */ backend,
/* .context = */ event,
};
}
/**
* @brief Frees a CANN backend event.
*
* This function destroys the ACL runtime event associated with the given CANN
* backend event and then deletes the event structure itself.
*
* @param event Pointer to the event structure to be freed.
*/
static void ggml_backend_cann_event_free(ggml_backend_event_t event) {
ACL_CHECK(aclrtDestroyEvent((aclrtEvent)event->context));
delete event;
}
/**
* @brief Records an event on the CANN backend stream.
*
* This function records the given event on the ACL runtime stream associated
* with the backend context.
*
* @param event Pointer to the event structure to be recorded.
*/
static void ggml_backend_cann_event_record(ggml_backend_event_t event) {
ggml_backend_cann_context* cann_ctx =
(ggml_backend_cann_context*)event->backend->context;
ACL_CHECK(aclrtRecordEvent((aclrtEvent)event->context, cann_ctx->stream()));
}
/**
* @brief Waits for a recorded event to complete on the CANN backend stream.
*
* This function makes the given backend wait for the event to complete on its
* ACL runtime stream.
*
* @param backend Pointer to the backend structure.
* @param event Pointer to the event structure that the backend needs to wait
* for.
*/
static void ggml_backend_cann_event_wait(ggml_backend_t backend,
ggml_backend_event_t event) {
ggml_backend_cann_context* cann_ctx =
(ggml_backend_cann_context*)backend->context;
if (ggml_backend_is_cann(event->backend)) {
ACL_CHECK(aclrtStreamWaitEvent(cann_ctx->stream(),
(aclrtEvent)event->context));
} else {
GGML_ABORT("fatal error");
}
}
/**
* @brief Synchronizes the given event on the CANN backend.
*
* This function waits for the specified event to complete on the ACL runtime.
*
* @param event Pointer to the event structure to be synchronized.
*/
static void ggml_backend_cann_event_synchronize(ggml_backend_event_t event) {
ACL_CHECK(aclrtSynchronizeEvent((aclrtEvent)event->context));
}
/**
* @brief Structure defining the interface for the CANN backend.
*
* This structure contains function pointers for various operations
* supported by the CANN backend, including name retrieval, memory
* management, tensor operations, synchronization, and event handling.
*/
static ggml_backend_i ggml_backend_cann_interface = {
/* .get_name = */ ggml_backend_cann_name,
/* .free = */ ggml_backend_cann_free,
/* .get_default_buffer_type = */ ggml_backend_cann_get_default_buffer_type,
/* .set_tensor_async = */ ggml_backend_cann_set_tensor_async,
/* .get_tensor_async = */ ggml_backend_cann_get_tensor_async,
/* .cpy_tensor_async = */ ggml_backend_cann_cpy_tensor_async,
/* .synchronize = */ ggml_backend_cann_synchronize,
/* .graph_plan_create = */ NULL,
/* .graph_plan_free = */ NULL,
/* .graph_plan_update = */ NULL,
/* .graph_plan_compute = */ NULL,
/* .graph_compute = */ ggml_backend_cann_graph_compute,
/* .supports_op = */ ggml_backend_cann_supports_op,
/* .supports_buft = */ ggml_backend_cann_supports_buft,
/* .offload_op = */ ggml_backend_cann_offload_op,
/* .event_new = */ ggml_backend_cann_event_new,
/* .event_free = */ ggml_backend_cann_event_free,
/* .event_record = */ ggml_backend_cann_event_record,
/* .event_wait = */ ggml_backend_cann_event_wait,
/* .event_synchronize = */ ggml_backend_cann_event_synchronize,
};
/**
* @brief Return the hardcoded GUID for the CANN backend.
*
* This function returns a static GUID which uniquely identifies the CANN
* backend.
*
* @return A pointer to the static GUID.
*/
static ggml_guid_t ggml_backend_cann_guid() {
static ggml_guid guid = {0xa1, 0x94, 0xaf, 0xac, 0xbd, 0x4f, 0x47, 0x34,
0xbe, 0x1a, 0x9e, 0x71, 0x1f, 0x9e, 0xed, 0x64};
return &guid;
}
GGML_CALL ggml_backend_t ggml_backend_cann_init(int32_t device) {
aclInit(nullptr);
if (device < 0 || device >= ggml_backend_cann_get_device_count()) {
GGML_CANN_LOG_ERROR("%s: error: invalid device %d\n", __func__, device);
return nullptr;
}
ggml_backend_cann_context* ctx = new ggml_backend_cann_context(device);
if (ctx == nullptr) {
GGML_CANN_LOG_ERROR("%s: error: failed to allocate context\n", __func__);
return nullptr;
}
ggml_backend_t cann_backend =
new ggml_backend{/* .guid = */ ggml_backend_cann_guid(),
/* .interface = */ ggml_backend_cann_interface,
/* .context = */ ctx};
return cann_backend;
}
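// Minimal usage sketch: create the backend for device 0, verify it, and
// release it through the generic ggml-backend API.
//   ggml_backend_t backend = ggml_backend_cann_init(0);
//   if (backend != nullptr && ggml_backend_is_cann(backend)) {
//       /* ... allocate buffers and compute graphs on this backend ... */
//       ggml_backend_free(backend);
//   }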
GGML_CALL bool ggml_backend_is_cann(ggml_backend_t backend) {
return backend != NULL &&
ggml_guid_matches(backend->guid, ggml_backend_cann_guid());
}
GGML_CALL int32_t ggml_backend_cann_get_device_count() {
return ggml_cann_info().device_count;
}
GGML_CALL void ggml_backend_cann_get_device_description(
int32_t device, char* description, size_t description_size) {
ggml_cann_set_device(device);
const char* soc_name = aclrtGetSocName();
snprintf(description, description_size, "%s", soc_name);
}
GGML_CALL void ggml_backend_cann_get_device_memory(int32_t device, size_t* free,
size_t* total) {
ggml_cann_set_device(device);
ACL_CHECK(aclrtGetMemInfo(ACL_HBM_MEM, free, total));
}
// backend registry
/**
* @brief Initializes a CANN backend based on the provided parameters.
*
* This function initializes a CANN backend using the device index and then
* initializes the backend using `ggml_backend_cann_init`.
*
* @param params Parameters for initialization (unused in this implementation).
* @param user_data User data containing the device index to initialize the
* backend.
* @return ggml_backend_t The initialized CANN backend.
*/
GGML_CALL static ggml_backend_t ggml_backend_reg_cann_init(const char* params,
void* user_data) {
ggml_backend_t cann_backend =
ggml_backend_cann_init((int)(intptr_t)user_data);
return cann_backend;
GGML_UNUSED(params);
}
extern "C" GGML_CALL int ggml_backend_cann_reg_devices();
/**
* @brief Registers CANN (Ascend) devices as backend options.
*
* This function initializes ACL, retrieves the number of available CANN
* devices, and registers each device as a backend option using
* `ggml_backend_register`. Each device is given a unique name based on
* `GGML_CANN_NAME` followed by its index.
*
* @return int The number of CANN devices registered.
*/
GGML_CALL int ggml_backend_cann_reg_devices() {
uint32_t device_count = ggml_backend_cann_get_device_count();
// initialization
for (uint32_t i = 0; i < device_count; i++) {
char name[128];
snprintf(name, sizeof(name), "CANN%d", i);
ggml_backend_register(name, ggml_backend_reg_cann_init,
ggml_backend_cann_buffer_type(i),
(void*)(intptr_t)i);
}
return device_count;
}