Commit Graph

335 Commits

Author SHA1 Message Date
Qianfeng
2caa0a74f0 Misc fixes (#994)
* Use reinterpret_cast to const char* in dumpBufferToFile so it is compatible with both const and non-const input pointers

* Add seed input to GeneratorTensor_4 for normal_distribution generator

* Add GetTypeString() for DeviceElementwiseImpl

* Add HIP_CHECK_ERROR macro

[ROCm/composable_kernel commit: b4fc4d0b8d]
2023-10-19 11:26:04 -05:00
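The dumpBufferToFile fix in #994 is a general C++ pattern: taking the buffer through const void*/const char* lets callers pass both const and non-const pointers. A minimal sketch of that idea, with an assumed signature rather than CK's actual implementation:

```cpp
#include <cstddef>
#include <fstream>

// Minimal sketch (assumed signature, not CK's actual dumpBufferToFile): taking the
// buffer as const void* and reinterpret_cast-ing to const char* lets both const and
// non-const pointers bind at the call site, since any T* converts to const void*.
inline void dumpBufferToFile(const char* file_name, const void* data, std::size_t size_in_bytes)
{
    std::ofstream file(file_name, std::ios::binary);
    file.write(reinterpret_cast<const char*>(data), static_cast<std::streamsize>(size_in_bytes));
}
```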
Bartłomiej Kocot
2d230d0f5c Extend available elementwise operations with conv examples (#995)
* Extend available elementwise operations with conv examples

* Fixes

* Remove not needed convert

* Update CMakeFile and dir name

[ROCm/composable_kernel commit: 82f3a835d5]
2023-10-19 17:23:19 +02:00
rocking
815ed3a1f9 Layernorm and groupnorm: support saving mean and inverse std in forward (#929)
* save mean and inverse std in normalization

* Save mean and inverse std in splitK

* Vector save mean and inv std

* Modify instances for saving mean and std

* simplify the layernorm example

* Save mean and std in groupnorm example

* Save mean and inv std in ckProfiler and test

* Remove compute data type from base class

* Save mean and inv std in client example

* Add changelog

* clang format

* Fix compile error

* Refine naming

* Avoid error in bf16

* revert changelog

[ROCm/composable_kernel commit: 3696fe1c76]
2023-10-19 07:36:29 +08:00
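For context on what #929 saves: the normalization forward pass already computes each row's mean and inverse standard deviation, so storing them lets the backward pass reuse the statistics instead of recomputing them. A minimal host-side sketch of the idea (scale/shift omitted; an illustration, not CK's device kernels):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Forward layernorm that also writes out per-row mean and inverse std.
// Outputs are assumed pre-sized: y has rows*cols elements, the stats have rows elements.
void layernorm_fwd_save_stats(const std::vector<float>& x, std::size_t rows, std::size_t cols,
                              float epsilon, std::vector<float>& y,
                              std::vector<float>& saved_mean, std::vector<float>& saved_inv_std)
{
    for(std::size_t r = 0; r < rows; ++r)
    {
        float mean = 0.f, var = 0.f;
        for(std::size_t c = 0; c < cols; ++c) mean += x[r * cols + c];
        mean /= static_cast<float>(cols);
        for(std::size_t c = 0; c < cols; ++c)
        {
            const float d = x[r * cols + c] - mean;
            var += d * d;
        }
        var /= static_cast<float>(cols);
        const float inv_std = 1.f / std::sqrt(var + epsilon);
        saved_mean[r]    = mean;    // saved so backward does not recompute it
        saved_inv_std[r] = inv_std; // saved so backward does not recompute it
        for(std::size_t c = 0; c < cols; ++c)
            y[r * cols + c] = (x[r * cols + c] - mean) * inv_std;
    }
}
```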
zjing14
5c3f35d358 fixed math-ci error; suppress a warning (#996)
Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 58338bb203]
2023-10-18 16:30:13 -07:00
zjing14
dc94c20258 Clean DTYPES conditions in CMake (#974)
* Add a condition to build fp8 instances

* simplified buffer_load/store

* add bfp8/fp8

* fixed

* remove all f8/bf8 condition include folder

* fixed cmake conditions

* fixed DTYPES=fp16/bfp16

* fix

* fixed buffer_load

* fixed buffer_store

* fix

* clean example cmake files

* fixed ci

* fixed ci

---------

Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: bf435140dc]
2023-10-18 11:14:14 -05:00
zjing14
522bbcb766 Add contraction_multi_abd (#972)
* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

* init commit for contraction_multi_ABD

* add examples

* add examples of multiA and broadcast

* update example

* fixed comments

* Update cmake-ck-dev.sh

* Update cmake-ck-dev.sh

* Add comments into the example

* Update CMakeLists.txt

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 1cc36ba5fb]
2023-10-17 20:17:58 -05:00
zjing14
fc1434e0f2 added ab_elementwise_op support into splitK Gemm (#956)
* add ab_elementwise

* fixed ci

* fixed a merge issue

* fixed pr comments

* fixed a conflict

* remove 61_example

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: bf0addb575]
2023-10-17 09:24:02 -05:00
Bartłomiej Kocot
49f179f755 Add grouped conv bwd weight wmma (#985)
* Add grouped conv bwd weight wmma

* Update README, changelog, profiler

* Minor fixes

* Fix grouped conv bwd wei dl kernel

* Minor fixes

* Minor stylistic fixes

[ROCm/composable_kernel commit: 16d7c4d2f7]
2023-10-17 10:32:26 +02:00
zjing14
4052b33a3f workaround with float (#992)
Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 39430bfdeb]
2023-10-16 15:42:59 -07:00
Rostyslav Geyyer
9d72b08639 Add splitk gemm fp16 @ fp16 with fp8 compute instances (#983)
* Add ComputeType

* Update for compatibility

* Add instances

* Update profiler api

[ROCm/composable_kernel commit: fa753f27ba]
2023-10-13 16:27:11 -05:00
zjing14
1271deb162 add vector_type support into thread_copy_v3r1 (#969)
* add vector_type support into thread_copy_v3r1

* remove unnecessary type_convert

* fixed datatype

* fixed dataType

* changed API with is_packx_invocable

* changed example

* add missing cmake file

* fixed ci

* fixed cmake

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 2ce9b56c64]
2023-10-13 15:11:43 -05:00
zjing14
775a87175c simplified buffer_load/store (#971)
* simplified buffer_load/store

* add bfp8/fp8

* fixed

* fixed buffer_load

* fixed buffer_store

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: f3b02ecfd2]
2023-10-11 20:29:01 -05:00
zjing14
91e1cf6750 Revert "Grouped Gemm with looping over the tiles. (#788)" (#982)
This reverts commit 43fe5037d4ff9d07365e5d3b8f5b31676a8ff9da.

[ROCm/composable_kernel commit: c99323be6e]
2023-10-11 14:27:29 -05:00
Adam Osewski
34b77070f3 Grouped Gemm with looping over the tiles. (#788)
* Introduce LocalBlockToCTileMap.

* Change the signature of the CalculateBottomIndex() function, which now does
not accept any argument. The B2C map, which is already passed as an
argument to the kernel's Run function, now computes the block's local ID
outside the kernel, at the __global__ entry point.
The LocalB2C map stores the local block ID as a member.

* Use LocalBlockToCTile map in device ops.

* First draft of tile loop work distribution.

* Fix typo.

* Simplify kernel arguments.

Calculate descriptors & B2C maps on the device.

* Use looping kernel.

* Fix B2C constructor.

* Fix Navi21 errors.

* Calculate tile start/end in device kernel.

* Change Run API to accept user provided workspace buffer.

* Add new line at EOF.

* Move Gemm KernelArguments to device op interface.

* Remove unused code.

* Update API.

* Launch with a grid size that is the minimum of the occupancy limit and the tile count

* Get back to use constant memory for gemm descriptors.

* Remove unused code.

* Add default virtual method implementation.

* Update comments to conform with doxygen style.

* Fix doc style and unused parameters.

* Add thread cluster lengths to kernel name.

* Remove old splitk impl and replace it with the tile-looping one.

* Modify instances.

* set KPerBlock to 64
* maximize vector load size wherever possible.

* Fix instances cluster lengths.

* Change comment style.

* Use 128b store where possible in instances.

* Update test cases, since KPerBlock has doubled.

* Update output stream operator for Sequence.

* Add pipeline version to GroupedGEMM device op type string.

* Fix pipeline version type logging.

* Fix input tensors type after merge.

* Fix compiler error.

* Fix output stream operator for Pipeline version.

* Store using 128b.

* Set of instances with kpb 32/64

* Limit number of instances

* Remove commented out instances.

* Fix function name.

* Limit the number of instances.

Add pipeline version to the regular instances

* Change thread cluster layout for reading B tensor.

* disabled failed instances

---------

Co-authored-by: Adam Osewski <aosewski@amd.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: a4f72a314a]
2023-10-10 22:21:15 -05:00
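The tile-looping scheme in #788 replaces one-workgroup-per-tile dispatch with a persistent-style loop: the grid size becomes the minimum of the occupancy limit and the total tile count, and each workgroup strides through the flattened tile space of the whole GEMM group. A hedged sketch of that work distribution (descriptor lookup and the GEMM body are placeholders, not the CK kernel):

```cpp
#include <hip/hip_runtime.h>

// Per-GEMM tile range built from a prefix sum of tile counts (assumed layout).
struct GemmTileRange
{
    int tile_begin;
    int tile_end;
};

// Sketch only: each workgroup loops over global tile IDs with stride gridDim.x,
// so one block may process tiles belonging to several GEMMs in the group.
__global__ void grouped_gemm_tile_loop(const GemmTileRange* ranges, int group_count, int total_tiles)
{
    for(int tile = blockIdx.x; tile < total_tiles; tile += gridDim.x)
    {
        // Find which GEMM owns this tile (linear scan; the real kernel may binary-search).
        int g = 0;
        while(g < group_count && tile >= ranges[g].tile_end) { ++g; }
        const int local_tile = tile - ranges[g].tile_begin;
        // local_tile would now be mapped to an (M, N) block coordinate through the
        // block-to-C-tile (B2C) map, and the blockwise GEMM pipeline would run here.
        (void)local_tile;
    }
}
```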
Bartłomiej Kocot
a080f37211 Fix MNKPadding in gridwise_gemm_xdlops_v2r3 (#981)
[ROCm/composable_kernel commit: 98c8071475]
2023-10-10 21:48:07 +02:00
zjing14
0053cbab22 Fixed f8_gemm NaN (#975)
* work around NaN problem by changing output to fp16

* enable f8/bf8 gemm tests on MI200

* workaround f16 to f8 conversion

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: ac9595a9f1]
2023-10-10 10:30:26 -05:00
Illia Silin
3b3782dd07 Revert "Add support for mixed precision in contraction scale and bilinear" (#967)
* Revert "Add support for mixed precision in contraction scale and bilinear (#936)"

This reverts commit f7aff936cb9d02dc8e53a8a3ea8648e1058253a2.

* revert commits #957 and #960

[ROCm/composable_kernel commit: 4daedf8ca5]
2023-10-05 14:58:23 -07:00
zjing14
f60e9faeb1 Grouped conv bwd data with fp16 input and bf8fp8 comp (#962)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming from f8 to fp8

* Update naming

* Format

* Update naming (#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

* added instance of f16_bf8f8 and client example

* fixed mfma selector

* format

---------

Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Rostyslav Geyyer <46627076+geyyer@users.noreply.github.com>
Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 04f93aadb8]
2023-10-04 18:04:27 -05:00
Rostyslav Geyyer
6a3eedbff0 Add conv bwd weight fp16 comp bf8 fp8 op, instances and example (#945)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming from f8 to fp8

* Update naming

* Format

* Update naming (#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 42facfc6b7]
2023-10-04 08:19:08 -05:00
zjing14
ffaff83a2f 3d grouped conv fwd with input/output fp16 and comp fp8 (#931)
* add f8 comp instance

* fixed

* fixed comments

* rename

* fixed dtype

* format

* fixed CI

* fixed ci

* add missing ComputeType

* fixed ci

* fixed

* Update cmake-ck-dev.sh

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: e921e1f08d]
2023-10-03 20:04:26 -05:00
zjing14
33859062bd Fixed contraction issues (#960)
* add missing ComputeType

* fixed

* Update cmake-ck-dev.sh

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: aa46039f2d]
2023-10-03 09:32:44 -05:00
Rostyslav Geyyer
28a1199b62 Add fp8 @ bf8 gemm support and example (#933)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming from f8 to fp8

* Update naming

* Format

[ROCm/composable_kernel commit: bd09b5c538]
2023-10-02 16:39:03 -05:00
zjing14
50c12c6c43 Contraction multi abd (#957)
* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

* init commit for contraction_multi_ABD

* add examples

* add examples of multiA and broadcast

* update example

* fixed comments

* Update cmake-ck-dev.sh

* Update cmake-ck-dev.sh

* Add comments into the example

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 9d58c42103]
2023-10-02 09:18:36 -05:00
Bartlomiej Wroblewski
ce003d6493 Add support for mixed precision in contraction scale and bilinear (#936)
* Extract common functionality to separate files

* Reference contraction: Remove incorrect consts from type_converts

* Reference contraction: Add missing type_convert for dst value

* Reference contraction: Fix incorrect order of B matrix dimensions

* Add support for mixed precision in contraction scale and bilinear

* Move using statements from instances to a common file

* Move using statements from examples to a common file

* Fix the order of B matrix dimensions across examples and profiler

* Fix the computation of error threshold

* Make ComputeDataType an optional argument

* Include possible DataType -> ComputeDataType casting error in the threshold

* Remove commented code

[ROCm/composable_kernel commit: f07485060e]
2023-09-29 10:54:31 -05:00
Bartłomiej Kocot
612cbbdc54 Add grouped conv bwd data wmma (#950)
* Add grouped conv bwd data wmma

* Fix copyrights

* Add instances with smaller NPerBlock

* Update interface test

* Minor stylistic fixes

* Minor stylistic fixes

[ROCm/composable_kernel commit: cb53874002]
2023-09-28 23:10:18 +02:00
Illia Silin
96f752aba9 Fix gemm_splitk test, add hip_check_error after kernel calls in kernel_launch. (#951)
* Added error check after kernel launch (#919)

Co-authored-by: Xiaodong Wang <xdwang@meta.com>
Co-authored-by: Xiaodong Wang <xw285@cornell.edu>

* remove M=0 test cases for test_gemm_splitk

---------

Co-authored-by: Xiaodong Wang <xdwang@meta.com>
Co-authored-by: Xiaodong Wang <xw285@cornell.edu>

[ROCm/composable_kernel commit: bc1108bb3e]
2023-09-27 15:19:33 -07:00
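The hip_check_error call added in #951 surfaces kernel-launch failures that otherwise pass silently, because launches are asynchronous. A minimal sketch of the pattern, assuming a simple abort-on-error helper (CK's own helper may differ):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

// Assumed helper: report and abort on any non-success HIP status.
inline void hip_check_error(hipError_t err)
{
    if(err != hipSuccess)
    {
        std::fprintf(stderr, "HIP error: %s\n", hipGetErrorString(err));
        std::abort();
    }
}

__global__ void dummy_kernel() {}

int main()
{
    dummy_kernel<<<1, 64>>>();
    hip_check_error(hipGetLastError());      // catches invalid launch configurations
    hip_check_error(hipDeviceSynchronize()); // surfaces asynchronous execution errors
    return 0;
}
```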
Bartlomiej Wroblewski
bf38d27453 Handle type conversions to a const datatype (#944)
* Handle type conversions to a const datatype

* Review: Handle X being const data type as well

* Review: Remove typo

[ROCm/composable_kernel commit: f4af5aed8b]
2023-09-27 15:02:42 -05:00
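The const-datatype fix in #944 boils down to stripping cv-qualifiers before dispatching a conversion, so a request to convert to a const data type still finds the plain-type path. A hedged sketch of the idea (not CK's actual type_convert):

```cpp
#include <type_traits>

// Sketch only: remove const/volatile from both the destination type Y and the
// deduced source type X before converting, so Y = const float (or a const X
// argument) behaves exactly like the non-const case.
template <typename Y, typename X>
constexpr std::remove_cv_t<Y> type_convert(const X& x)
{
    using NonConstY = std::remove_cv_t<Y>;
    using NonConstX = std::remove_cv_t<X>;
    return static_cast<NonConstY>(static_cast<NonConstX>(x));
}

// Both forms resolve to the same float-to-int conversion.
static_assert(type_convert<const int>(1.5f) == 1, "conversion to a const datatype works");
static_assert(type_convert<int>(1.5f) == 1, "plain conversion still works");
```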
Bartłomiej Kocot
be5cb244c0 Add column to image kernel (#930)
* Add column to image kernel

* Minor fixes for dtypes and client examples

* Disable tests for disabled dtypes

* Disable add instances functions for disabled data types

* Minor stylistic fixes

* Revert "Disable add instances functions for disabled data types"

This reverts commit 728b869563.

* Instances reduction

* Add comments in device_column_to_image_impl

* Update changelog and Copyrights

* Improve changelog

[ROCm/composable_kernel commit: e2243a4d1e]
2023-09-27 17:19:06 +02:00
zjing14
fb513ac42b Add multiple A/B support (#906)
* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 11676c7e49]
2023-09-26 21:16:23 -05:00
zjing14
3fe6761718 Fixed Gemmv2r3 kpad (#938)
* added kpad support into v2r3

* add generic instances

* fixed comments

* fixed mnk padding

* Update device_batched_gemm_xdl.hpp

* fixed kpad

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 48ba6e8a69]
2023-09-26 18:40:00 -05:00
Bartlomiej Wroblewski
4497a8874f Fix DL GEMM instances with too large vector size (#901)
* Fix vector lengths of DL GEMM instances with padding
* Add checks for correctness of vector lengths in DL GEMM

[ROCm/composable_kernel commit: 63cd459248]
2023-09-18 14:08:23 +02:00
Rostyslav Geyyer
1a7a4a775e Add native conversions fp8<->fp32 (#908)
* Add native conversions

* Add bf8 conversions

[ROCm/composable_kernel commit: f17af2e9ed]
2023-09-17 20:56:27 -05:00
Bartlomiej Kocot
b287234d67 Stylistic improvements for grouped convolution code
Remove unnecessary ignoring

Update test/grouped_convnd_bwd_weight/test_grouped_convnd_bwd_weight.cpp

[ROCm/composable_kernel commit: bc2d0583d3]
2023-09-15 20:03:47 +02:00
zjing14
2d384eaba7 Add fp16/fp8 support into Grouped gemm FixedNK (#874)
* move all arguments into device

* add b2c_tile_map

* add examples

* add SetDeviceKernelArgs

* dedicated fixed_nk solution

* init client api

* add grouped_gemm_bias example

* add an instance

* add instances

* formatting

* fixed cmake

* Update EnableCompilerWarnings.cmake

* Update cmake-ck-dev.sh

* clean; fixed comments

* fixed comment

* add instances for fp32 output

* add instances for fp32 output

* add fp32 out client example

* fixed CI

* init commit for kbatch

* add splitk gridwise

* format

* fixed

* clean deviceop

* clean code

* finish splitk

* fixed instances

* change m_loops to tile_loops

* add setkbatch

* clean code

* add splitK+bias

* add instances

* opt mk_nk instances

* clean examples

* fixed CI

* remove zero

* finished non-zero

* clean

* clean code

* optimized global_barrier

* fixed ci

* fixed CI

* instance and client

* removed AddBias

* format

* fixed CI

* fixed CI

* move 20_grouped_gemm to 21_grouped_gemm

* clean

* formatting

* clean

* clean

* fixed computeType

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: f9d0eddb90]
2023-09-14 21:04:10 -05:00
Bartłomiej Kocot
f4999cd99a Add grouped conv bwd weight dl instances and new layout (#897)
* Add grouped conv bwd weight dl instances and new layout

* Add M and N padding

* Remove todo comment

* Enable grouped conv fwd dl k,c=1 generic instance

* Comment fixes

[ROCm/composable_kernel commit: 475188ca2e]
2023-09-13 10:14:31 -05:00
zjing14
5bb25a9688 fixed fp8 issues (#894)
* fixed fp8 init; and reference gemm

* Update host_tensor_generator.hpp

* fixed convert

* fixed reference gemm

* fixed comments

* fixed comments

* fixed ci

* fixed computeType

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: a66d14edf2]
2023-09-12 22:17:56 -05:00
Rostyslav Geyyer
0752117077 Refactor f8_t, add bf8_t (#792)
* Refactor f8_t to add bf8_t

* Add check_err impl for f8_t

* Update fp8 test

* Format

* Revert the fix

* Update vector_type implementation

* Add bf8 test

* Add bf8, use BitInt types

* Add bf8 conversion methods

* Update type_convert for fp8/bf8

* Add check_err fp8/bf8 support

* Add subnorm fp8 tests

* Add subnorm bf8 tests

* Fix conversion

* Add bf8 cmake bindings

* Add macros to enable build with disabled fp8/bf8

* Remove is_native method

* Update flag combination for mixed precision instances

* Add more flag checks

* Add another flag to a client example

* Add type traits, decouple f8/bf8 casting

* Clean up

* Decouple fp8 and bf8 flags

* Remove more redundant flags

* Remove leftover comments

[ROCm/composable_kernel commit: 62d4af7449]
2023-09-12 17:04:27 -05:00
Bartlomiej Wroblewski
b4064d1401 Add new instances and support for small cases in DPP8 GEMM (#896)
[ROCm/composable_kernel commit: 547dbcfbc2]
2023-09-12 10:05:23 -05:00
Bartlomiej Wroblewski
bf5b711799 Enable DPP8 GEMM on Navi3 (#892)
[ROCm/composable_kernel commit: 8f84a01237]
2023-09-08 11:14:57 -05:00
Haocong WANG
c2866bb432 [Navi3x] Add fp16/int8 wmma conv forward instances (#746)
* fix wmma gemm int8; add grouped conv int8 example

* Add int8 gemm-bilinear instances

* compile sanity check unknown

* Sanity pass + clang-format

* add int8 conv profiler instances

* solve merge conflict

---------

Co-authored-by: zjing14 <zhangjing14@gmail.com>
Co-authored-by: Chao Liu <chao.liu2@amd.com>

[ROCm/composable_kernel commit: 562b4cec48]
2023-09-07 21:59:26 -05:00
Bartlomiej Wroblewski
02f8f707e8 Redesign the DPP8 GEMM kernel to use warp-wise component (#863)
* Redesign the DPP8 GEMM kernel to use warp-wise component

* Review: Improve error messages

* Review: Remove unnecessary empty lines

* Review: Fix M, N per thread names

* Review: Rename mfma_input_type to dpp_input_type

* Review: Fix tensor adaptor; remove unnecessary element

* Review: Remove calls to dpp_gemm's MakeCDescriptor

* Review: Add blockwise doc, change function names to include dimension names

* Review: Remove duplicated code; Move Block2CtileMap alias to the top of the file

* Review: Add __restrict__ keywords

* Review: Use MatrixPadder for padding A, B, C matrices

* Review: Remove hardcoded datatypes

* Review: Change names from FloatX to XDataType

* Review: Introduce AK0 and BK0 instead of a single K0

* Review: Remove construction of dpp_datatypes object

* Review: Rename DppInstrRunner to DppLanegroupGemm

[ROCm/composable_kernel commit: 37a8c1f756]
2023-09-06 11:44:09 -05:00
zjing14
29daafc158 added padding of K into gemm_v2r3 (#887)
* added kpad support into v2r3

* add generic instances

* fixed comments

* fixed mnk padding

* Update device_batched_gemm_xdl.hpp

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 3786bfe1cc]
2023-09-06 10:15:52 -05:00
Illia Silin
8860638f7c fix syntax (#890)
[ROCm/composable_kernel commit: 7dcb14d9d4]
2023-09-05 11:29:44 -07:00
Bartłomiej Kocot
d79b1c5dd0 Add image to column kernel (#867)
* Add image to column kernel

* Add instances, tests, profiler, example

* Add client example

* Several fixes of image to column

* Fix variable name in device_image_to_column_impl

* Several fixes of image to column profiler

* Fix num_btype calculation

* Make new measurements for correct bytes calculation

[ROCm/composable_kernel commit: 0077eeb3be]
2023-09-05 10:11:40 -05:00
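The image-to-column (im2col) kernel in #867 rewrites each receptive field of the input image as one row of a matrix, so the convolution can then run as a GEMM. A minimal host-side sketch of the mapping for a single-channel, stride-1, no-padding case (the CK kernel generalizes this to grouped, strided, and dilated convolutions on the GPU):

```cpp
#include <cstddef>
#include <vector>

// im2col for an H x W image and a KH x KW filter window, stride 1, no padding.
std::vector<float> image_to_column(const std::vector<float>& img, int H, int W, int KH, int KW)
{
    const int OH = H - KH + 1;
    const int OW = W - KW + 1;
    std::vector<float> col(static_cast<std::size_t>(OH) * OW * KH * KW);
    for(int oh = 0; oh < OH; ++oh)
        for(int ow = 0; ow < OW; ++ow)
            for(int kh = 0; kh < KH; ++kh)
                for(int kw = 0; kw < KW; ++kw)
                    // each output row holds one flattened receptive field
                    col[((oh * OW + ow) * KH + kh) * KW + kw] = img[(oh + kh) * W + (ow + kw)];
    return col;
}
```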
Bartłomiej Kocot
748899987a Fix K padding calculation for grouped conv data (#876)
* Fix K padding calculation for grouped conv data

* Restore previous padding for 1x1 specialization

[ROCm/composable_kernel commit: c981f6d033]
2023-09-05 10:07:41 -05:00
zjing14
c79ecbccfb Grouped Gemm with Fixed K and N with SplitK (#818)
* move all arguments into device

* add b2c_tile_map

* add examples

* add SetDeviceKernelArgs

* dedicated fixed_nk solution

* init client api

* add grouped_gemm_bias example

* add an instance

* add instances

* formatting

* fixed cmake

* Update EnableCompilerWarnings.cmake

* Update cmake-ck-dev.sh

* clean; fixed comments

* fixed comment

* add instances for fp32 output

* add instances for fp32 output

* add fp32 out client example

* fixed CI

* init commit for kbatch

* add splitk gridwise

* format

* fixed

* clean deviceop

* clean code

* finish splitk

* fixed instances

* change m_loops to tile_loops

* add setkbatch

* clean code

* add splitK+bias

* add instances

* opt mk_nk instances

* clean examples

* fixed CI

* remove zero

* finished non-zero

* clean

* clean code

* optimized global_barrier

* fixed ci

* fixed CI

* removed AddBias

* format

* fixed CI

* fixed CI

* move 20_grouped_gemm to 21_grouped_gemm

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: f5ec04f091]
2023-08-31 09:22:12 -05:00
rocking
0b07461518 MaxPool & AvgPool bwd instances, test, ckProfiler, client example (#861)
* Add maxpool instances

* Rename index pool to max pool.

* Add maxpool bwd bf16 instances

* Add avg pool bwd instances

* Rename avgpool and maxpool to avg_pool3d and max_pool

* Add bf16 pool fwd instances

* Add max pool bwd to ckProfiler

* Add avg pool3d bwd to ckProfiler

* Add avg pool bwd test

* Fix bug in reference pool fwd (dilation)

* Fix bug in max pool bwd (dilation and initZero)

* Support bf16 compute data type

* Force compute type to be f32, because atomicAdd only supports f32

* Add max pool bwd test

* Rename folder

* Rename pool

* Add max pool bwd client example

* Add avg pool bwd client example

* Add missing workspace

* clang format

* Rename macro

* remove useless header

* remove useless layout

[ROCm/composable_kernel commit: 866377de18]
2023-08-31 21:01:50 +08:00
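One bullet in #861 forces an f32 compute type because atomicAdd only supports f32: the backward pooling passes scatter gradients into possibly overlapping windows, so they rely on 32-bit float atomics even when the tensors are bf16. A hedged illustration of that constraint (not CK code):

```cpp
#include <hip/hip_runtime.h>

// Sketch of a 1D average-pool backward scatter: each output gradient is shared
// equally across its window, and windows may overlap, so accumulation into dx
// uses atomicAdd, which hardware provides for 32-bit floats.
__global__ void avg_pool1d_bwd_scatter(const float* dy, float* dx,
                                       int out_len, int window, int stride)
{
    const int o = blockIdx.x * blockDim.x + threadIdx.x;
    if(o >= out_len) return;
    const float grad = dy[o] / static_cast<float>(window);
    for(int k = 0; k < window; ++k)
        atomicAdd(&dx[o * stride + k], grad); // overlapping windows, hence atomics
}
```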
Illia Silin
ddf8b9ac4c fix gemm_streamk example on mi300 (#875)
[ROCm/composable_kernel commit: bf1912ed3d]
2023-08-30 20:18:38 -07:00
zjing14
f8bcfe60ac add an example of customized type convert - bfp16_rtn (#869)
* add an example of customized bfp16_rtn

* fixed threadwise_copy

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 38ada109ea]
2023-08-29 12:31:24 -05:00
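The bfp16_rtn example in #869 demonstrates plugging a customized conversion into the copy pipeline; round-to-nearest-even fp32-to-bf16 is the usual candidate. A minimal sketch of such a conversion (NaN handling omitted; the function name here is ours, not CK's):

```cpp
#include <cstdint>
#include <cstring>

// fp32 -> bf16 with round-to-nearest-even: bias the low 16 bits by 0x7FFF plus
// the lowest kept mantissa bit, then truncate to the top 16 bits.
inline uint16_t float_to_bf16_rtn(float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    const uint32_t rounding_bias = 0x7FFFu + ((bits >> 16) & 1u);
    return static_cast<uint16_t>((bits + rounding_bias) >> 16);
}
```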
zjing14
90c7e28a8d Fp16/fp8 mixed-precision Gemm with multiply+add fusion (#865)
* add compute_type

* add multiply_add ckProfiler

* add f8_fp16 support

* clean

* clean

* fixed lds size calc

* format

---------

Co-authored-by: Jing Zhang <jizha@amd.com>

[ROCm/composable_kernel commit: 31ea132aa2]
2023-08-28 16:27:32 -05:00