End-to-end split-KV (FlashDecoding-style) for the CK unified attention
kernel. The host launches a single 3D grid with z == num_splits; each
CTA computes its KV-range slice and writes a normalized (o_acc, lse)
partial to FP32 workspaces, which the caller reduces into the final
output (sketched just below).
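For context, the caller-side reduction can be sketched as follows. This
is a minimal host-side sketch: the per-split layout (o_parts, lse_parts)
and the function name are illustrative, not the actual workspace API.

    // Combine normalized per-split partials:
    //   out = sum_s w_s * o_s / sum_s w_s, with w_s = exp(lse_s - max lse).
    // A partial with lse = -inf gets w = 0 and contributes nothing.
    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    std::vector<float> combine_splits(const std::vector<std::vector<float>>& o_parts,
                                      const std::vector<float>& lse_parts,
                                      int head_dim)
    {
        float lse_max = -std::numeric_limits<float>::infinity();
        for(float lse : lse_parts)
            lse_max = std::max(lse_max, lse);

        std::vector<float> out(head_dim, 0.f);
        if(lse_max == -std::numeric_limits<float>::infinity())
            return out; // every split was masked-empty: the row is all zeros

        float sum = 0.f;
        for(int s = 0; s < static_cast<int>(lse_parts.size()); ++s)
        {
            const float w = std::exp(lse_parts[s] - lse_max);
            sum += w;
            for(int d = 0; d < head_dim; ++d)
                out[d] += w * o_parts[s][d];
        }
        for(int d = 0; d < head_dim; ++d)
            out[d] /= sum;
        return out;
    }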
Pipeline changes:
- operator() returns ck_tile::make_tuple(o_acc, lse) instead of just
o_acc. The masked-empty early-exit returns lse = -inf so the downstream
combine weights the partial as zero.
- LSE is built in the natural-log domain from the pipeline's *unscaled*
rowmax: lse = (scale_s / log2(e)) * m + log(l). Previously we used
m / log2(e) + log(l), which dropped the per-head scale and produced
LSE values roughly 1/scale too large (first sketch after this list).
- Fix post-process parity: which SP register is left in the
alu0-done/alu1-pending state at loop exit depends on the parity of
the *iteration count* (= num_total_loop - num_blocks_start), not on
num_total_loop alone. For non-split (num_blocks_start == 0) the two
parities coincide; for splits starting at an odd tile they don't
(second sketch below).
- Fix split-KV page-table offset: num_blocks_start is counted in
kPageBlockSize-sized tiles, but block_tables is indexed in
page_size-sized pages, so shifting block_table_offset by
num_blocks_start reads the wrong pages whenever kPageBlockSize !=
page_size. Replaced with split_token_offset = num_blocks_start *
kPageBlockSize added to logical_token before dividing by page_size, so
the page lookup uses the absolute token position (third sketch below).
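The LSE fix in isolation (first sketch). Per the bullet above, m is the
pipeline's running rowmax with the softmax scale not yet applied; the
wrapper function and parameter names are illustrative.

    #include <cmath>

    // Natural-log-domain LSE from the unscaled rowmax m and the
    // accumulated softmax row sum l.
    inline float make_lse(float m, float l, float scale_s)
    {
        constexpr float log2e = 1.4426950408889634f; // log2(e)
        // before: return m / log2e + std::log(l);   // dropped the per-head scale
        return (scale_s / log2e) * m + std::log(l);
    }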
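The parity fix (second sketch); the surrounding software-pipelined loop
is omitted and the helper name is illustrative.

    // The two-deep pipeline alternates register sets each KV tile, so the
    // set left alu0-done/alu1-pending at exit follows the number of
    // iterations this CTA actually ran.
    inline bool post_process_uses_alt_regs(int num_total_loop, int num_blocks_start)
    {
        const int num_iters = num_total_loop - num_blocks_start;
        // before: return (num_total_loop & 1) != 0; // wrong for odd num_blocks_start
        return (num_iters & 1) != 0;
    }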
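And the page-table fix (third sketch), again with illustrative names
and signature.

    #include <cstdint>

    // Look up the physical page from the absolute token position. Shifting
    // the table offset by num_blocks_start (counted in kPageBlockSize
    // tiles) is only valid when kPageBlockSize == page_size.
    inline int32_t lookup_page(const int32_t* block_tables,
                               int32_t block_table_offset,
                               int32_t num_blocks_start, // in kPageBlockSize tiles
                               int32_t kPageBlockSize,
                               int32_t page_size,
                               int32_t logical_token)
    {
        const int32_t split_token_offset = num_blocks_start * kPageBlockSize;
        const int32_t token = split_token_offset + logical_token; // absolute
        // before: block_tables[block_table_offset + num_blocks_start
        //                      + logical_token / page_size]
        return block_tables[block_table_offset + token / page_size];
    }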
Kernel + dispatcher:
- Drop kargs.i_split; each CTA reads i_split = blockIdx.z.
- GridSize{2D,Decode} now take num_splits and add it as the z-dim
(defaults to 1, so non-split callers see dim3(..., 1, 1)); see the
grid sketch after this list.
- New write path: when num_splits > 1, the kernel skips the user
epilogue and instead writes the FP32 (o_acc, lse) tile pair into
workspace tensors at [head, split, batch_start_token, ...] using
Default2DEpilogue (UseRawStore=true) for o_acc and store_tile for
lse. Host strides come from kargs (offset sketch below).
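The grid change, dispatcher side (a sketch: only the trailing
num_splits parameter and the z-dim placement come from the notes above;
the x/y contents are illustrative).

    #include <hip/hip_runtime.h> // dim3

    // num_splits rides in the z-dimension; the default of 1 keeps
    // non-split callers on dim3(..., 1, 1).
    inline dim3 grid_size_2d(int num_tile_m, int nhead_x_batch, int num_splits = 1)
    {
        return dim3(num_tile_m, nhead_x_batch, num_splits);
    }
    // in-kernel: const int i_split = blockIdx.z; // replaces kargs.i_split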
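And the workspace indexing the write path relies on (offset sketch;
stride names are assumptions, and the actual stores go through
Default2DEpilogue and store_tile as described above).

    #include <cstdint>

    // Element offset of a CTA's (o_acc or lse) tile inside an FP32
    // workspace laid out as [head, split, token, ...]; strides from kargs.
    inline int64_t workspace_tile_offset(int64_t i_head, int64_t i_split,
                                         int64_t batch_start_token,
                                         int64_t stride_head,
                                         int64_t stride_split,
                                         int64_t stride_token)
    {
        return i_head * stride_head + i_split * stride_split +
               batch_start_token * stride_token;
    }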
Co-authored-by: Cursor <cursoragent@cursor.com>
Composable Kernel Tile
concept
ck_tile provides a programming model with templated abstractions to enable users to implement performance-critical kernels for machine learning workloads. It introduces the following basic concepts to help users build their own operators:
- tensor coordinate transformation, the core concept of the layout/index transform abstraction, working at both compile time and run time.
- tile-based programming model, including the tile-level API and the concept of the distributed tensor.
ck_tile is independent of the old ck and is located under /include/ck_tile. You don't need to include anything from the old CK; ck_tile has similar (indeed almost the same) implementations for users to build operators. We will have a transition period to pull everything from the old ck into ck_tile, stay tuned.
component
ck_tile is split into several components including core, host, ops/gemm, ops/fmha... For each component you only need to include a single header (e.g. #include "ck_tile/core.hpp", #include "ck_tile/ops/fmha.hpp") to use the functions/structures inside (different from the old ck).
[core]
ck_tile/core contains all the basic data structures and functions to build kernels; you can include only this header and build your own operators utilizing the basic building blocks it provides.
core/container
- array, stores runtime variables of fixed length (tensor indices, register buffers, etc...)
- tuple, same as std::tuple; holds data of different types, and is one way to achieve multiple buffers.
- sequence, a compile-time integer sequence used to build various internal structures or to describe tile sizes
- other convenient structures built on top of the above three
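A quick sketch of the three basic containers (a sketch only; constructor and accessor spellings may differ slightly between ck_tile versions):

    #include "ck_tile/core.hpp"

    inline void container_example()
    {
        using namespace ck_tile;

        array<index_t, 4> idx{0, 1, 2, 3};     // runtime values, fixed length
        idx[2] = 5;                            // indexable like a C array

        auto bufs = make_tuple(1.0f, 42, idx); // heterogeneous, like std::tuple

        using TileShape = sequence<128, 64>;   // compile-time tile size
        static_assert(TileShape::size() == 2, "");
        (void)bufs;
    }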
core/numeric
- gpu data types like fp16_t, bf16_t, fp8_t... and the conversions between them
- constexpr integers similar to std::integral_constant, used as compile-time integers
- math functions and numeric utilities
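For example (a sketch; type_convert and number are my reading of the ck_tile spellings and should be treated as assumptions):

    #include "ck_tile/core.hpp"

    inline void numeric_example()
    {
        ck_tile::fp16_t h = ck_tile::type_convert<ck_tile::fp16_t>(1.5f); // fp32 -> fp16
        float f = ck_tile::type_convert<float>(h);                        // fp16 -> fp32

        constexpr auto four = ck_tile::number<4>{}; // like std::integral_constant<int, 4>
        static_assert(four.value == 4, "");
        (void)f;
    }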
core/algorithm
- coordinate transformation system, used to build tensor transforms and compile-time indexing. This is the core idea introduced in the old ck to describe how a tensor is built from several basic transform primitives like merge/unmerge/embed etc..., and how we index into an ND tensor that is finally mapped to a 1D memory offset.
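A sketch of the idea (call spellings follow my reading of ck_tile and may differ slightly): view a row-major M x N tensor through a merge transform as a 1D tensor of length M*N.

    #include "ck_tile/core.hpp"

    inline auto merged_view_example()
    {
        using namespace ck_tile;
        constexpr index_t M = 16, N = 8;

        // naive 2D descriptor: lengths (M, N), row-major strides (N, 1)
        auto desc_2d = make_naive_tensor_descriptor(
            make_tuple(number<M>{}, number<N>{}),
            make_tuple(number<N>{}, number<1>{}));

        // merge dims (0, 1) of the input into dim 0 of the output, so a
        // 1D index i lands at memory offset (i / N) * N + (i % N)
        return transform_tensor_descriptor(
            desc_2d,
            make_tuple(make_merge_transform(make_tuple(number<M>{}, number<N>{}))),
            make_tuple(sequence<0, 1>{}),
            make_tuple(sequence<0>{}));
    }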
core/tensor
- tensor descriptor, describing how an ND tensor is built from transforms and mapped to memory
- distributed tensor, describing the storage of a tensor and how a collection of threads collaboratively works on it.
- tile-level API, including load_tile, store_tile, shuffle_tile, slice_tile, etc...
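In device code the tile-level flow looks roughly like this (a sketch: the tile_distribution argument and the optional tensor-view parameters are simplified, so treat the exact signatures as assumptions):

    #include "ck_tile/core.hpp"

    // Copy one 64x64 tile from src to dst through a distributed tensor.
    template <typename TileDist>
    __device__ void copy_tile_example(const float* p_src, float* p_dst,
                                      ck_tile::index_t rows,
                                      ck_tile::index_t cols, TileDist dist)
    {
        using namespace ck_tile;

        auto src = make_naive_tensor_view<address_space_enum::global>(
            p_src, make_tuple(rows, cols), make_tuple(cols, 1));
        auto dst = make_naive_tensor_view<address_space_enum::global>(
            p_dst, make_tuple(rows, cols), make_tuple(cols, 1));

        auto src_win = make_tile_window(
            src, make_tuple(number<64>{}, number<64>{}), {0, 0}, dist);
        auto dst_win = make_tile_window(
            dst, make_tuple(number<64>{}, number<64>{}), {0, 0}, dist);

        auto t = load_tile(src_win); // global memory -> per-thread registers
        store_tile(dst_win, t);      // registers -> global memory
    }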
[host]
ck_tile/host contains all the host-side utilities to launch a kernel, create device buffers, plus some reference implementations. It can be used to create examples (like those under the ck_tile example folder) and simple executables to invoke a kernel, so if you only need ck_tile to build your own device library it's OK not to include this. It is recommended to include only the specific headers you need under this folder to avoid pulling in unwanted ones (e.g., only include ck_tile/host/kernel_launch.hpp), unless you are writing a host executable.
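A host-side sketch of the typical flow (launch_kernel, make_kernel and stream_config reflect my reading of ck_tile/host; the kernel functor and its arguments are placeholders):

    #include "ck_tile/host.hpp"

    // Launch any ck_tile kernel functor once; whether a kernel time is
    // measured and returned depends on the stream_config settings.
    template <typename Kernel, typename Kargs>
    float run_once(Kernel kernel, dim3 grids, dim3 blocks, Kargs kargs)
    {
        return ck_tile::launch_kernel(
            ck_tile::stream_config{},
            ck_tile::make_kernel(kernel, grids, blocks, 0 /*LDS bytes*/, kargs));
    }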
[ops/gemm, ops/fmha, ops/reduce...]
Our implementations of different device operators.
- warp, warp-tile-level operators
- block, block-tile-level operators
- pipeline, implements a customizable tile-level mainloop (or epilogue). By switching the pipeline supplied to a kernel template you get different pipeline optimizations (see the composition sketch below).
- kernel, the template interface for users to instantiate a particular kernel
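To make the layering concrete (a sketch: Default2DEpilogue is a real ck_tile type, referenced earlier in this document, but MyPipeline and MyKernelTemplate are placeholders, and the epilogue header path is an assumption):

    #include "ck_tile/ops/epilogue.hpp" // assumed location of Default2DEpilogue

    // Placeholders showing only the composition pattern: a kernel template
    // is instantiated from a pipeline (tile-level mainloop) plus an epilogue.
    struct MyPipeline {};
    template <typename Pipeline, typename Epilogue>
    struct MyKernelTemplate {};

    template <typename EpilogueProblem>
    using MyKernel =
        MyKernelTemplate<MyPipeline, ck_tile::Default2DEpilogue<EpilogueProblem>>;
    // swapping MyPipeline for another pipeline changes the mainloop
    // optimization without touching the kernel-level interface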
[ops/epilogue]
the epilogue part of our kernels. We may extend this part to let users build their own customized epilogues.
[ref]
reference implementations on CPU or GPU. From this folder, include the specific header you need on demand.
examples
currently we put all ck_tile-related examples under the /example/ck_tile folder. Please check each example's subfolder.