
Composable Kernel

Methodology

The Composable Kernel (CK) library aims to provide a programming model for writing performance-critical machine learning kernels across multiple architectures, including GPUs and CPUs, through general-purpose kernel languages such as HIP C++.

CK utilizes two concepts to achieve performance portability and code maintainability:

  • A tile-based programming model
  • Algorithm complexity reduction for complex ML operators, using an innovative technique we call "Tensor Coordinate Transformation".


Code Structure

The current CK library is structured into four layers:

  • "Templated Tile Operators" layer
  • "Templated Kernel and Invoker" layer
  • "Instantiated Kernel and Invoker" layer
  • "Client API" layer


Contributors

The list of developers and contributors is here: Contributors

Citation

If you use CK, please use the following citation:

License

CK is released under the MIT license. License File

Build CK

Build docker image

DOCKER_BUILDKIT=1 docker build -t ck:latest -f Dockerfile .

Launch docker

docker run                                     \
-it                                            \
--privileged                                   \
--group-add sudo                               \
-w /root/workspace                             \
-v ${PATH_TO_LOCAL_WORKSPACE}:/root/workspace  \
ck:latest                                      \
/bin/bash

Build CK

mkdir build && cd build

# Specify the target ID(s); the example below is for gfx908 and gfx90a
cmake                                                                                             \
-D CMAKE_PREFIX_PATH=/opt/rocm                                                                    \
-D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc                                                         \
-D CMAKE_CXX_FLAGS="-O3"                                                                          \
-D CMAKE_BUILD_TYPE=Release                                                                       \
-D GPU_TARGETS="gfx908;gfx90a"                                                                    \
..

Build examples and tests

 make -j examples tests
 make test

Instructions for running each individual example are under example

Build ckProfiler

 make -j ckProfiler

Instructions for running ckProfiler are under profiler

Install CK

make install

Using CK as pre-built kernel library

Instructions for using CK as a pre-built kernel library are under client_example

Caveat

Kernel Timing and Verification

CK's own kernel timer warms up the kernel once, then runs it multiple times to obtain the average kernel time. For kernels that use atomic add, this causes the output buffer to be accumulated multiple times, leading to verification failure. To work around this, do not use CK's own timer and verification at the same time. Both the timer and verification in each example and in ckProfiler can be enabled or disabled from the command line.
