GEMM with Double Multiply Operations

This example demonstrates a GEMM followed by two sequential elementwise multiplication operations. This fusion pattern is useful for implementing layers that require matrix multiplication followed by multiple scaling or masking operations, such as certain attention mechanisms or gated neural network architectures.

Mathematical Formulation

The operation performs a matrix multiplication followed by two sequential elementwise multiplications.

  1. GEMM Stage: A standard matrix multiplication. C_{temp1} = A \times B

  2. First Multiplication: Elementwise multiplication with tensor D. C_{temp2} = C_{temp1} \odot D

  3. Second Multiplication: Elementwise multiplication with tensor E. F = C_{temp2} \odot E

The key optimization is that the intermediate tensors C_{temp1} and C_{temp2} are never written to global memory. All operations are fused into the GEMM's epilogue, operating on data held in registers.
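
As a concrete reference, the end-to-end computation can be written as a naive host-side loop (a minimal sketch for verification purposes, assuming row-major float tensors; this is not the device kernel itself):

#include <cstddef>
#include <vector>

// Computes F = (A x B) elementwise-multiplied by D, then by E, without ever
// materializing C_temp1 or C_temp2: both intermediates live only in the
// scalar accumulator, mirroring how the fused kernel keeps them in registers.
void reference_gemm_multiply_multiply(const std::vector<float>& A, // M x K
                                      const std::vector<float>& B, // K x N
                                      const std::vector<float>& D, // M x N
                                      const std::vector<float>& E, // M x N
                                      std::vector<float>& F,       // M x N
                                      std::size_t M, std::size_t N, std::size_t K)
{
    for(std::size_t m = 0; m < M; ++m)
        for(std::size_t n = 0; n < N; ++n)
        {
            float acc = 0.f; // C_temp1(m, n)
            for(std::size_t k = 0; k < K; ++k)
                acc += A[m * K + k] * B[k * N + n];
            acc *= D[m * N + n]; // C_temp2 = C_temp1 (*) D
            acc *= E[m * N + n]; // F = C_temp2 (*) E
            F[m * N + n] = acc;
        }
}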

Algorithmic Strategy: GEMM with Dual-Multiply Epilogue

The implementation uses a tiled GEMM algorithm with a multi-stage fused epilogue that performs two sequential multiplications.

  1. Tiled GEMM Core: The kernel begins with a standard tiled GEMM. A thread block computes a tile of the product A \times B, accumulating the result in registers.

  2. Dual-Multiply Epilogue: Before any data is written to global memory, the following sequence occurs for the tile of data held in registers:

    • Load First Multiplicand: Threads load the corresponding elements of tensor D.
    • First Multiplication: The elementwise multiplication is performed in registers: result *= D.
    • Load Second Multiplicand: Threads load the corresponding elements of tensor E.
    • Second Multiplication: The second elementwise multiplication is performed in registers: result *= E.
    • Store Final Result: The final result F is written to global memory.

This deep fusion replaces three separate kernel launches with one and eliminates the memory traffic needed to write and re-read the intermediate tensors.
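
In code, the per-fragment structure of that epilogue looks roughly like the following (a schematic sketch; `acc`, the coordinate arrays, and the function name are illustrative placeholders, not Composable Kernel identifiers):

#include <cstddef>

// Dual-multiply epilogue applied to one thread's register fragment. `acc`
// arrives holding this thread's elements of C_temp1 = A x B; rows/cols map
// each fragment element to its global (m, n) coordinate.
template <int FragmentSize>
void dual_multiply_epilogue(float (&acc)[FragmentSize],
                            const int (&rows)[FragmentSize],
                            const int (&cols)[FragmentSize],
                            const float* D, const float* E, float* F,
                            std::size_t N)
{
    for(int i = 0; i < FragmentSize; ++i)
    {
        acc[i] *= D[rows[i] * N + cols[i]]; // load D element, multiply in registers
        acc[i] *= E[rows[i] * N + cols[i]]; // load E element, multiply in registers
        F[rows[i] * N + cols[i]] = acc[i];  // single store of the final result F
    }
}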

Source Code Organization

Build and Run

Prerequisites

Ensure the Composable Kernel library is built and installed.

cd /path/to/composable_kernel/build
make -j install
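
If the library was installed to a custom prefix, point the build below at it (the path here is just an example):

export CK_INSTALL_PATH=/path/to/composable_kernel/install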

Build the Example

cd /path/to/composable_kernel/example/65_gemm_multiply_multiply
mkdir build && cd build

cmake \
  -DCMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
  -DCMAKE_PREFIX_PATH="/opt/rocm;${CK_INSTALL_PATH}" \
  ..

make -j

Run the Example

# Run the example with default settings
./gemm_multiply_multiply_xdl

# Run with verification enabled (1), decimal data initialization (2), and kernel timing (1)
./gemm_multiply_multiply_xdl 1 2 1

Applications

This fusion pattern is useful for several types of neural network operations and advanced computational patterns.

  • Multi-Scale Attention: Some attention mechanisms apply multiple scaling factors sequentially, such as learned attention scales followed by positional scaling.
  • Gated Mechanisms: Advanced gating architectures that use multiple multiplicative gates in sequence, such as in some RNN variants or transformer modifications.
  • Feature Modulation: Computer vision models that apply multiple feature modulation operations, such as style-based generators or attention-based feature refinement.
  • Masking Operations: Applying multiple types of masks (e.g., attention mask followed by a dropout mask) in sequence.
  • Custom Activations: Implementing complex activation functions that involve multiple multiplicative terms.
  • Mixture of Experts: Some MoE architectures use multiple routing or gating multiplications in sequence.

Performance Considerations

The performance benefits of this fusion depend on several factors:

  • Memory Bandwidth Savings: Eliminates two full tensor read/write cycles for intermediate results (see the rough estimate after this list)
  • Cache Locality: Maintains data in registers throughout the computation pipeline
  • Instruction Scheduling: Allows better interleaving of compute and memory operations
  • Kernel Launch Overhead: Reduces from three separate kernel launches to one
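
As a rough, illustrative estimate (M = N = 4096 with fp16 storage are assumed values, not this example's defaults): each intermediate tensor occupies 4096 × 4096 × 2 B = 32 MiB, and the unfused sequence writes and re-reads both C_{temp1} and C_{temp2}, so fusion avoids roughly 4 × 32 MiB = 128 MiB of global-memory traffic per GEMM.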

Comparison with Sequential Operations

Approach     Kernel Launches   Memory Bandwidth          Register Pressure   Implementation Complexity
Sequential   3 kernels         3× intermediate storage   Low                 Simple
Fused        1 kernel          No intermediate storage   Medium              Moderate

Extension Possibilities

This pattern can be extended in several ways:

  • More Multiplications: Additional sequential multiplications can be added to the epilogue
  • Mixed Operations: Combine multiplications with additions or other elementwise operations (see the functor sketch after this list)
  • Conditional Operations: Apply multiplications conditionally based on masks or thresholds
  • Broadcasting: Handle different broadcasting patterns for the multiplicand tensors
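
As an illustration of the mixed-operation case, Composable Kernel expresses epilogues as small per-element functors; a hypothetical variant that fuses a multiply and an add could look like this (a sketch in that style, compiled with hipcc; `MultiplyAdd` is not an existing CK operation):

#include <hip/hip_runtime.h>

// Hypothetical epilogue functor: e = (c * d0) + d1. Swapping this body in for
// the dual multiply changes the fused epilogue without touching the GEMM core.
struct MultiplyAdd
{
    template <typename E, typename C, typename D0, typename D1>
    __host__ __device__ constexpr void
    operator()(E& e, const C& c, const D0& d0, const D1& d1) const
    {
        e = static_cast<E>(c * d0 + d1);
    }
};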

This example demonstrates the flexibility of the epilogue fusion approach, showing how multiple sequential operations can be efficiently combined with matrix multiplication.