# Fused Attention Examples
This directory contains client examples demonstrating CK's high-performance fused attention implementations, which are central to modern transformer architectures and large language models.
---
## Theory

**Fused Multi-Head Attention Operation:**

The fused attention mechanism performs the core transformer operation in a single, optimized kernel:

$$
\text{Attention}(Q, K, V) = \text{Softmax}(Q K^T / \sqrt{d_k}) V
$$

**Detailed Mathematical Steps:**

1. **Query-Key Attention Scores**: $S = Q K^T$
2. **Scale**: $S_{\text{scaled}} = S / \sqrt{d_k}$
3. **Softmax**: $A = \text{Softmax}(S_{\text{scaled}})$
4. **Weighted Value Sum**: $\text{Output} = A V$

- Multi-head extension: each head computes attention independently, then the results are concatenated and projected.
- Tensor shapes: Q, K, V, and Output are typically [Batch, Seq_len, Num_heads, Head_dim].

**Algorithmic Background:**

- Fused attention combines two GEMMs and a softmax in a single kernel, minimizing memory traffic.
- Supports bias, masking, and permutation for transformer and LLM workloads.
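To make the four numbered steps concrete, the sketch below is a minimal single-head, FP32 host reference in plain C++. It is illustrative only (shapes and values are assumptions), not the CK implementation: the CK kernels fuse these steps on the GPU without writing the intermediate matrices $S$ or $A$ to global memory.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Host-side reference of Attention(Q, K, V) = Softmax(Q K^T / sqrt(d_k)) V
// for a single head. Row-major layout: Q, K, V, O are [seq_len, head_dim].
int main()
{
    const int seq_len  = 4; // illustrative sequence length
    const int head_dim = 8; // illustrative head dimension d_k

    std::vector<float> Q(seq_len * head_dim, 0.1f);
    std::vector<float> K(seq_len * head_dim, 0.2f);
    std::vector<float> V(seq_len * head_dim, 0.3f);
    std::vector<float> S(seq_len * seq_len);        // attention scores
    std::vector<float> O(seq_len * head_dim, 0.0f); // output

    const float scale = 1.0f / std::sqrt(static_cast<float>(head_dim));

    // Steps 1-2: S = (Q K^T) / sqrt(d_k)
    for(int i = 0; i < seq_len; ++i)
        for(int j = 0; j < seq_len; ++j)
        {
            float acc = 0.0f;
            for(int d = 0; d < head_dim; ++d)
                acc += Q[i * head_dim + d] * K[j * head_dim + d];
            S[i * seq_len + j] = acc * scale;
        }

    // Step 3: row-wise softmax, max-subtracted for numerical stability
    for(int i = 0; i < seq_len; ++i)
    {
        float row_max = S[i * seq_len];
        for(int j = 1; j < seq_len; ++j)
            row_max = std::max(row_max, S[i * seq_len + j]);

        float row_sum = 0.0f;
        for(int j = 0; j < seq_len; ++j)
        {
            S[i * seq_len + j] = std::exp(S[i * seq_len + j] - row_max);
            row_sum += S[i * seq_len + j];
        }
        for(int j = 0; j < seq_len; ++j)
            S[i * seq_len + j] /= row_sum;
    }

    // Step 4: O = A V
    for(int i = 0; i < seq_len; ++i)
        for(int j = 0; j < seq_len; ++j)
            for(int d = 0; d < head_dim; ++d)
                O[i * head_dim + d] += S[i * seq_len + j] * V[j * head_dim + d];

    std::printf("O[0][0] = %f\n", O[0]);
    return 0;
}
```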
---
## How to Run
### Prerequisites
Please follow the instructions in the main [Build Guide](../../README.md#building-ck) section as a prerequisite to building and running this example.
### Build and run
```bash
cd composable_kernel/client_example/08_fused_attention
mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc ..
make -j
# Example run (basic fused attention)
./fused_attention
# Example run (fused attention with bias)
./fused_attention_bias
```
---
## Source Code Structure
### Directory Layout
```
client_example/08_fused_attention/
├── fused_attention.cpp # Main client example: fused attention (Q, K, V)
├── fused_attention_bias.cpp # Fused attention with bias
└── CMakeLists.txt          # Build configuration for the example
```
### Key Functions
- **main()** (in each `.cpp`): sets up the Q, K, and V tensors, configures the attention parameters, launches the fused kernel, and verifies the result.
- **Fused attention kernel invocation**: uses the Composable Kernel device API to launch the fused attention operation, optionally with bias (see the setup sketch after this list).
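The host-side flow around that invocation can be sketched with the HIP runtime alone. Everything below is an illustrative assumption (the tensor sizes, the `HIP_CHECK` helper), and the CK device-op call itself is deliberately elided; `fused_attention.cpp` contains the authoritative invocation.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Illustrative error-checking helper (an assumption, not part of CK).
#define HIP_CHECK(call)                                              \
    do {                                                             \
        hipError_t err_ = (call);                                    \
        if(err_ != hipSuccess)                                       \
        {                                                            \
            std::printf("HIP error: %s\n", hipGetErrorString(err_)); \
            return 1;                                                \
        }                                                            \
    } while(0)

int main()
{
    // Illustrative [Batch, Seq_len, Num_heads, Head_dim] sizes.
    const size_t batch = 2, seq_len = 128, num_heads = 8, head_dim = 64;
    const size_t elems = batch * seq_len * num_heads * head_dim;
    const size_t bytes = elems * sizeof(float);

    std::vector<float> h_q(elems, 0.1f), h_k(elems, 0.2f), h_v(elems, 0.3f);

    float *d_q = nullptr, *d_k = nullptr, *d_v = nullptr, *d_out = nullptr;
    HIP_CHECK(hipMalloc(&d_q, bytes));
    HIP_CHECK(hipMalloc(&d_k, bytes));
    HIP_CHECK(hipMalloc(&d_v, bytes));
    HIP_CHECK(hipMalloc(&d_out, bytes));

    HIP_CHECK(hipMemcpy(d_q, h_q.data(), bytes, hipMemcpyHostToDevice));
    HIP_CHECK(hipMemcpy(d_k, h_k.data(), bytes, hipMemcpyHostToDevice));
    HIP_CHECK(hipMemcpy(d_v, h_v.data(), bytes, hipMemcpyHostToDevice));

    // Elided: create the CK fused-attention device op, build its argument
    // from these buffers and lengths, verify support, and run the invoker.
    // See fused_attention.cpp for the actual device API calls.

    std::vector<float> h_out(elems);
    HIP_CHECK(hipMemcpy(h_out.data(), d_out, bytes, hipMemcpyDeviceToHost));

    HIP_CHECK(hipFree(d_q));
    HIP_CHECK(hipFree(d_k));
    HIP_CHECK(hipFree(d_v));
    HIP_CHECK(hipFree(d_out));
    return 0;
}
```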
---
## Additional Details
- Supports FP16, BF16, FP32, and mixed precision.
- Handles causal and generic masking for autoregressive and variable-length models (see the masking sketch after this list).
- Optimized for memory efficiency (no intermediate attention matrix in global memory).
- Example parameters can be adjusted in the source for different transformer workloads.
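Causal masking, for example, can be pictured as overwriting the strict upper triangle of the score matrix with negative infinity before the softmax step, so those entries exponentiate to zero. A minimal host-side sketch, assuming the same row-major score layout as the Theory reference (illustrative, not the CK masking API):

```cpp
#include <limits>
#include <vector>

// Illustrative causal mask: position i may attend only to positions j <= i,
// so scores for j > i become -inf and contribute exp(-inf) = 0 after softmax.
void apply_causal_mask(std::vector<float>& S, int seq_len)
{
    const float neg_inf = -std::numeric_limits<float>::infinity();
    for(int i = 0; i < seq_len; ++i)
        for(int j = i + 1; j < seq_len; ++j)
            S[i * seq_len + j] = neg_inf;
}
```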
---
## Related Examples
- [01_gemm](../01_gemm/README.md): GEMM for Q×K^T and Attn×V
- [06_softmax](../06_softmax/README.md): Softmax client API usage
- [03_gemm_layernorm](../03_gemm_layernorm/README.md): Fused GEMM + layer normalization
- [07_grouped_convnd_fwd](../07_grouped_convnd_fwd/README.md): Grouped convolution for vision transformers
---
[Back to Client Examples](../README.md)