
GEMM with Quantization

Theory

This example demonstrates GEMM (General Matrix Multiplication) with quantized inputs or weights. Quantization reduces memory footprint and computation cost by representing values with lower-precision integer types (e.g., int8), and is commonly used for efficient deep learning inference.

Mathematical Formulation:

  • Quantized GEMM: C = \text{dequant}(A_q) \times \text{dequant}(B_q)
  • A_q, B_q: quantized matrices (e.g., int8)
  • \text{dequant}(x_q) = (x_q - z) \cdot s (scale s, zero-point z)
  • C: output matrix (often in higher precision, e.g., float32 or float16)
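
As a quick worked example (the scale and zero-point values here are chosen arbitrarily for illustration), dequantizing a single int8 value x_q = 52 with s = 0.05 and z = 2 gives

  \text{dequant}(52) = (52 - 2) \cdot 0.05 = 2.5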

Algorithmic Background:

  • Quantized values are dequantized on-the-fly during GEMM computation.
  • Accumulation is performed in higher precision for accuracy.
  • Supports symmetric and asymmetric quantization.
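
To make the higher-precision accumulation and on-the-fly dequantization concrete, below is a minimal host-side reference sketch in C++ (not the CK kernel itself; all names are illustrative). It assumes symmetric per-tensor quantization (zero-point 0), so both scales factor out of the inner product:

#include <cstdint>
#include <iostream>
#include <vector>

// Reference quantized GEMM: C = dequant(A_q) * dequant(B_q), row-major.
// Products are accumulated in int32 and dequantized once per output element.
void reference_gemm_int8(const std::vector<int8_t>& a_q, // M x K
                         const std::vector<int8_t>& b_q, // K x N
                         std::vector<float>& c,          // M x N
                         int M, int N, int K,
                         float scale_a, float scale_b)
{
    for(int m = 0; m < M; ++m)
        for(int n = 0; n < N; ++n)
        {
            int32_t acc = 0; // higher-precision accumulator
            for(int k = 0; k < K; ++k)
                acc += int32_t(a_q[m * K + k]) * int32_t(b_q[k * N + n]);
            // With zero-points of 0, the scales factor out of the sum:
            // sum((a * s_a) * (b * s_b)) = sum(a * b) * s_a * s_b
            c[m * N + n] = float(acc) * scale_a * scale_b;
        }
}

int main()
{
    std::vector<int8_t> a_q{1, 2, 3, 4}; // 2 x 2
    std::vector<int8_t> b_q{5, 6, 7, 8}; // 2 x 2
    std::vector<float> c(4);
    reference_gemm_int8(a_q, b_q, c, 2, 2, 2, 0.1f, 0.2f);
    std::cout << c[0] << ' ' << c[1] << '\n'
              << c[2] << ' ' << c[3] << '\n'; // prints 0.38 0.44 / 0.86 1
}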

How to Run

Prerequisites

Please follow the instructions in the main Build Guide section as a prerequisite to building and running this example.

Build and run

cd composable_kernel/example/14_gemm_quantization
mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc ..
make -j

# Example run
./gemm_quantization_xdl --verify=1 --time=1

Source Code Structure

Directory Layout

example/14_gemm_quantization/
└── gemm_quantization_xdl.cpp           # Main example: sets up, runs, and verifies quantized GEMM
include/ck/tensor_operation/gpu/device/
└── device_gemm_quantized.hpp           # Device-level quantized GEMM API
include/ck/tensor_operation/gpu/device/impl/
└── device_gemm_quantized_impl.hpp      # Implementation
include/ck/tensor_operation/gpu/grid/
└── gridwise_gemm_quantized.hpp         # Grid-level quantized GEMM kernel
include/ck/tensor_operation/gpu/element/
└── quantization_operations.hpp         # Quantization/dequantization utilities

Key Classes and Functions

  • DeviceGemmQuantized (in device_gemm_quantized.hpp):
    Device API for quantized GEMM.
  • gridwise_gemm_quantized (in gridwise_gemm_quantized.hpp):
    Implements the tiled/blocking quantized GEMM kernel.
  • quantization_operations (in quantization_operations.hpp):
    Defines quantization and dequantization functions.
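
As a conceptual illustration of what such element-wise operators look like, here is a hypothetical asymmetric dequantization functor (a sketch only; the actual definitions in quantization_operations.hpp differ):

#include <cstdint>

// Hypothetical dequantize functor in the spirit of CK element-wise operators.
// NOT the actual CK definition; illustrates y = (x_q - z) * s.
struct Dequantize
{
    float scale_;        // s
    int32_t zero_point_; // z

    void operator()(float& y, const int8_t& x) const
    {
        y = static_cast<float>(static_cast<int32_t>(x) - zero_point_) * scale_;
    }
};

Symmetric quantization is the special case zero_point_ == 0, which allows the scales to be applied once after accumulation, as in the reference sketch above.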

This example demonstrates how Composable Kernel supports efficient quantized matrix multiplication for deep learning inference.