
Batched GEMM-Scale-Softmax-GEMM: Fused Attention

Theory

This example demonstrates the fused attention mechanism used in transformer models, implementing the sequence batched Q×K^T → scaling → softmax → ×V in a single kernel. This pattern is critical for efficient transformer inference and training.

Mathematical Formulation:

  • Attention: O = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V
  • Q: [B, H, N, d_k] queries
  • K: [B, H, N, d_k] keys
  • V: [B, H, N, d_v] values
  • O: [B, H, N, d_v] output
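
Spelled out in the three steps the kernel fuses (S and P are intermediate names introduced here for clarity; they match the step list below):

  S = \frac{QK^T}{\sqrt{d_k}}                      (scaled scores, shape [B, H, N, N])
  P_{ij} = \frac{e^{S_{ij}}}{\sum_k e^{S_{ik}}}    (row-wise softmax over the last axis)
  O = PV                                           (output, shape [B, H, N, d_v])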

Algorithmic Background:

  • Computes Q×K^T, scales by 1/\sqrt{d_k}, applies softmax, then multiplies by V.
  • Uses a numerically stable (max-subtracted) softmax and memory-efficient tiling; see the host-side reference sketch after this list.
  • Used in multi-head attention and transformer blocks.
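
As a concrete illustration of these steps, below is a minimal host-side reference of the fused sequence for a single (batch, head) pair, using the max-subtracted softmax. This is an independent sketch for understanding and verification, not the CK device kernel; the row-major layouts and the function name are assumptions made for the example.

// Minimal host reference: O = softmax(Q K^T / sqrt(d_k)) V for one (batch, head).
// Row-major matrices; N = sequence length, d_k/d_v = head dimensions.
// Illustrative sketch only, not the CK device kernel.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

void attention_reference(const std::vector<float>& Q,  // [N, d_k], row-major
                         const std::vector<float>& K,  // [N, d_k], row-major
                         const std::vector<float>& V,  // [N, d_v], row-major
                         std::vector<float>& O,        // [N, d_v], row-major
                         int N, int d_k, int d_v)
{
    const float scale = 1.0f / std::sqrt(static_cast<float>(d_k));
    std::vector<float> s(N); // one row of scaled scores S = scale * Q K^T
    O.assign(static_cast<std::size_t>(N) * d_v, 0.f);

    for(int i = 0; i < N; ++i)
    {
        // Step 1: scores for row i, tracking the row max for a stable softmax.
        float row_max = -INFINITY;
        for(int j = 0; j < N; ++j)
        {
            float acc = 0.f;
            for(int k = 0; k < d_k; ++k)
                acc += Q[i * d_k + k] * K[j * d_k + k]; // (Q K^T)_{ij}
            s[j]    = acc * scale;
            row_max = std::max(row_max, s[j]);
        }

        // Step 2: numerically stable softmax (subtract the row max before exp).
        float sum = 0.f;
        for(int j = 0; j < N; ++j)
        {
            s[j] = std::exp(s[j] - row_max);
            sum += s[j];
        }

        // Step 3: O row i = softmax row i times V.
        for(int v = 0; v < d_v; ++v)
        {
            float acc = 0.f;
            for(int j = 0; j < N; ++j)
                acc += (s[j] / sum) * V[j * d_v + v];
            O[i * d_v + v] = acc;
        }
    }
}

The --verify=1 flag in the run command below checks the GPU output against a host-side computation; the sketch shows what such a reference needs to compute.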

How to Run

Prerequisites

Follow the instructions in the main Build Guide before building and running this example.

Build and Run

cd composable_kernel/example/32_batched_gemm_scale_softmax_gemm
mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc ..
make -j

# Example run
./batched_gemm_scale_softmax_gemm_xdl --batch=32 --heads=12 --seq_len=512 --head_dim=64 --verify=1 --time=1

Source Code Structure

Directory Layout

example/32_batched_gemm_scale_softmax_gemm/
└── batched_gemm_scale_softmax_gemm_xdl.cpp              # Main example: sets up, runs, and verifies fused attention
include/ck/tensor_operation/gpu/device/
│   └── device_batched_gemm_scale_softmax_gemm.hpp       # Device-level fused attention API
include/ck/tensor_operation/gpu/device/impl/
│   ├── device_batched_attention_impl.hpp                # Attention-specific implementation
│   └── device_online_softmax_impl.hpp                   # Online softmax implementation
include/ck/tensor_operation/gpu/grid/
│   ├── gridwise_batched_gemm_softmax.hpp                # Grid-level fused attention kernel
│   └── gridwise_online_softmax.hpp                      # Grid-level online softmax

Key Classes and Functions

  • DeviceBatchedGemmScaleSoftmaxGemm (in device_batched_gemm_scale_softmax_gemm.hpp):
    Device API for fused attention.
  • gridwise_batched_gemm_softmax (in gridwise_batched_gemm_softmax.hpp):
    Implements the tiled/blocking fused attention kernel.
  • gridwise_online_softmax (in gridwise_online_softmax.hpp):
    Implements the numerically stable, memory-efficient softmax; a sketch of the single-pass (online) recurrence follows this list.
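
The online-softmax idea in a minimal host-side sketch (this is the standard single-pass recurrence that such kernels build on, shown here as an illustration rather than the CK source):

// Single-pass ("online") softmax normalizer: maintain a running maximum m and a
// running sum l of exponentials rescaled to that maximum, so the maximum and the
// normalizer are computed in one streaming pass over the input.
// Illustrative sketch only, not the CK grid-level kernel.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

void online_softmax(const std::vector<float>& x, std::vector<float>& y)
{
    float m = -INFINITY; // running maximum of the inputs seen so far
    float l = 0.f;       // running sum of exp(x_i - m), kept consistent with m
    for(float xi : x)
    {
        const float m_new = std::max(m, xi);
        l = l * std::exp(m - m_new) + std::exp(xi - m_new); // rescale old sum to new max
        m = m_new;
    }
    y.resize(x.size());
    for(std::size_t i = 0; i < x.size(); ++i)
        y[i] = std::exp(x[i] - m) / l; // second pass only normalizes
}

Combined with tiling, this recurrence is what allows the softmax to be fused with the surrounding GEMMs without materializing the full N×N score matrix.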

This example demonstrates how Composable Kernel implements efficient, fused attention for transformer and large language models.