===============
CK Hello world
===============

.. meta::
  :description: Composable Kernel documentation and API reference library
  :keywords: composable kernel, CK, ROCm, API, documentation

.. _hello-world:

This tutorial is for engineers dealing with artificial intelligence and machine learning who
would like to optimize pipelines and improve performance using the Composable
Kernel (CK) library. This tutorial provides an introduction to the CK library. You will build
the library and run some examples using a "Hello World" example.

Description
===========

Modern AI technology solves more and more problems in a variety of fields, but crafting fast and
efficient workflows is still challenging. CK can make the AI workflow fast and efficient. CK is a
collection of optimized AI operator kernels with tools to create new kernels. The library has
components required for modern neural network architectures including matrix multiplication,
convolution, contraction, reduction, attention modules, a variety of activation functions, and
fused operators.

CK library acceleration features are based on:

* Layered structure
* Tile-based computation model
* Tensor coordinate transformation
* Hardware acceleration use
* Support of low precision data types including fp16, bf16, int8 and int4

If you need more technical details and benchmarking results read the following
`blog post <https://community.amd.com/t5/instinct-accelerators/amd-composable-kernel-library-efficient-fused-kernels-for-ai/ba-p/553224>`_.

To download the library visit the `composable_kernel repository <https://github.com/ROCmSoftwarePlatform/composable_kernel>`_.

Hardware targets
================

CK library fully supports `gfx908` and `gfx90a` GPU architectures, while only some operators are
supported for `gfx1030` devices. Check your hardware to determine the target GPU architecture.
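
One quick way to determine the architecture is to query the ROCm runtime. This is a sketch that
assumes ROCm (and therefore the ``rocminfo`` tool) is already installed::

    # Print the architecture name of the first GPU agent; rocminfo lists
    # every agent together with its gfx name.
    rocminfo | grep -o -m 1 'gfx[0-9a-f]*'

The printed name (for example ``gfx90a``) is the value to use with the ``GPU_TARGETS`` CMake flag
in the build step below.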

========== =========
GPU Target AMD GPU
========== =========
gfx1030    Radeon PRO V620, W6800, W6800X, W6800X Duo, W6900X, RX 6800, RX 6
========== =========

There are also `cloud options <https://aws.amazon.com/ec2/instance-types/g4/>`_ you can find if
you don't have an AMD GPU at hand.

Build the library
=================

This tutorial is based on the use of docker images as explained in :ref:`docker-hub`. Download a
docker image suitable for your OS and ROCm release, run or start the docker container, and then
resume the tutorial from this point.

.. note::

   You can also `install ROCm <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/>`_
   on your system, clone the `Composable Kernel repository <https://github.com/ROCmSoftwarePlatform/composable_kernel.git>`_
   on GitHub, and use that to build and run the examples using the commands described below.

Both the docker container and GitHub repository include the Composable Kernel library. Navigate to
the library::

    cd composable_kernel/

Create and change to a ``build`` directory::

    mkdir build && cd build

The previous section discussed supported GPU architectures. Once you decide which hardware targets
are needed, run CMake using the ``GPU_TARGETS`` flag::

    cmake \
    -D CMAKE_PREFIX_PATH=/opt/rocm \
    -D BUILD_DEV=OFF \
    -D GPU_TARGETS="gfx908;gfx90a;gfx1030" ..

If everything goes well the CMake command will return::

    -- Configuring done
    -- Generating done
    -- Build files have been written to: "/root/workspace/composable_kernel/build"

Finally, you can build examples and tests::

    make -j examples tests

When complete you should see::

    Scanning dependencies of target tests
    [100%] Built target tests

Run examples and tests
======================

Examples are listed as test cases as well, so you can run all examples and tests with::

    ctest

You can check the list of all tests by running::

    ctest -N
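
If you are only interested in a subset of the test cases, you can filter the listing; this is a
sketch, and the ``gemm`` substring is just an illustration::

    # List only the test cases whose names contain "gemm" (case-insensitive).
    ctest -N | grep -i gemm

ctest can also select tests by regular expression directly, for example ``ctest -N -R gemm``.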

You can also run examples separately as shown in the following example execution::

    ./bin/example_gemm_xdl_fp16 1 1 1

The arguments ``1 1 1`` select the run mode: verify results with CPU, initialize matrices with
integers, and benchmark the kernel execution. You can play around with these parameters and see
how output and execution results change.

If you have a device based on `gfx908` or `gfx90a` architecture, and if the example runs as
expected, you should see something like::

    a_m_k: dim 2, lengths {3840, 4096}, strides {4096, 1}
    b_k_n: dim 2, lengths {4096, 4096}, strides {4096, 1}
    c_m_n: dim 2, lengths {3840, 4096}, strides {4096, 1}
    launch_and_time_kernel: grid_dim {480, 1, 1}, block_dim {256, 1, 1}
    Warm up 1 time
    Start running 10 times...
    Perf: 1.08153 ms, 119.136 TFlops, 89.1972 GB/s, DeviceGemm_Xdl_CShuffle<Default, 256, 256, 128, 32, 8, 2, 32, 32, 4, 2, 8, 4, 1, 2> LoopScheduler: Interwave, PipelineVersion: v1
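
As a sanity check, the reported figures can be reproduced from the problem size and the measured
time. A GEMM multiplying an M x K matrix by a K x N matrix performs 2*M*N*K floating point
operations, and with fp16 data every element of A, B, and C takes 2 bytes. The values below are
taken from the sample output above; treating the kernel's 256 x 128 template parameters as the
per-workgroup output tile is an assumption based on CK naming conventions::

    # M, N, K and the 1.08153 ms timing come from the sample run above.
    awk 'BEGIN {
        M = 3840; N = 4096; K = 4096; t = 1.08153e-3
        printf "%.3f TFlops\n", 2 * M * N * K / t / 1e12
        printf "%.3f GB/s\n", 2 * (M*K + K*N + M*N) / t / 1e9
        # One 256 x 128 output tile per workgroup.
        printf "%d workgroups\n", (M/256) * (N/128)
    }'

This prints ``119.136 TFlops``, ``89.197 GB/s``, and ``480 workgroups``, matching (up to rounding)
the TFlops, GB/s, and ``grid_dim {480, 1, 1}`` figures printed by the example.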

However, running it on a `gfx1030` device should result in the following::

    a_m_k: dim 2, lengths {3840, 4096}, strides {4096, 1}
    b_k_n: dim 2, lengths {4096, 4096}, strides {1, 4096}
    c_m_n: dim 2, lengths {3840, 4096}, strides {4096, 1}
    DeviceGemmXdl<256, 256, 128, 4, 8, 32, 32, 4, 2> NumPrefetch: 1, LoopScheduler: Default, PipelineVersion: v1 does not support this problem

Don't worry, some operators are supported on `gfx1030` architecture, so you can run a
separate example like::

    ./bin/example_gemm_dl_fp16 1 1 1

and it should return something like::

    a_m_k: dim 2, lengths {3840, 4096}, strides {1, 4096}
    b_k_n: dim 2, lengths {4096, 4096}, strides {4096, 1}

.. note::

   A new CMake flag ``DL_KERNELS`` has been added to the latest versions of CK. If you do not see
   the above results when running ``example_gemm_dl_fp16``, you might need to add
   ``-D DL_KERNELS=ON`` to your CMake command to build the operators supported on the `gfx1030`
   architecture.

You can also run a separate test::

    ctest -R test_gemm_fp16

If everything goes well you should see something like::

    100% tests passed, 0 tests failed out of 1

Summary
=======

In this tutorial you took a first look at the Composable Kernel library, built it on your system,
and ran some examples and tests. In the next tutorial you will run kernels with different
configurations to find out the best one for your hardware and task.

P.S.: If you launched a cloud instance for this tutorial, don't forget to switch it off.