Commit Graph

249 Commits

Author SHA1 Message Date
arai713
3af8c81a72 Transpose 3d (#984)
* added working example for 5D input using 1D kernel

* example with 5D input tensor and 2d kernel - not working: issues with arguments

* added updated version of 3d device op - changed descriptors/dims

* added example file to check kernel

* fixed descriptor and isSupportedArgument stride problem

* added and modified kernel for 3d - updated tids/loop

* adding some more 5d example files

* fixed some issues

* changes made for testing

* working version: fixed error in stride for A, still a bit inefficient

* cleaned up formatting/comments

* updating formatting

* more formatting fixes

* fixing cmake, adding back gpu targets in cmake script

* adding client example

* added instances for client example

* fixed errors in client example

* implemented client ex with device_elementwise.hpp and device_elementwise_3d_impl.hpp

* removed extra files

* minor formatting and naming fixes

* adding test files and profiler

* fixing minor error

* minor fix

* removed unnecessary comments, renamed files

* updated instance list for client example, added different layout example

* removing instances

* fixed error in instance generation

* remove comments

* update profiler and client example tensor layouts

* fixed errors in test/profiler

* updated vector dim access to enable vector load

* updated test/profiler files

* updated example with 1d kernel

* updating profiler

* renamed files

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-11-08 19:45:07 -06:00
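
Editor's note: the transpose op above boils down to reading the input through permuted descriptor strides. A minimal host-side sketch of that idea, assuming packed row-major tensors (names and the permutation convention are illustrative, not the CK descriptor API):

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Compute packed row-major strides for a tensor of the given lengths.
template <std::size_t N>
std::array<std::int64_t, N> packed_strides(const std::array<std::int64_t, N>& lengths)
{
    std::array<std::int64_t, N> strides{};
    std::int64_t s = 1;
    for(std::size_t i = N; i-- > 0;)
    {
        strides[i] = s;
        s *= lengths[i];
    }
    return strides;
}

// Transpose by permutation: output dimension d reads input dimension perm[d].
template <std::size_t N>
void transpose_copy(const std::vector<float>& in,
                    std::vector<float>& out,
                    const std::array<std::int64_t, N>& in_lengths,
                    const std::array<std::size_t, N>& perm)
{
    std::array<std::int64_t, N> out_lengths{};
    for(std::size_t d = 0; d < N; ++d)
        out_lengths[d] = in_lengths[perm[d]];

    const auto in_strides  = packed_strides(in_lengths);
    const auto out_strides = packed_strides(out_lengths);

    const std::int64_t total = std::accumulate(
        in_lengths.begin(), in_lengths.end(), std::int64_t{1}, std::multiplies<>{});
    assert(static_cast<std::int64_t>(in.size()) == total);
    out.assign(total, 0.f);

    for(std::int64_t linear = 0; linear < total; ++linear)
    {
        // Decompose the output linear index into per-dimension indices.
        std::array<std::int64_t, N> out_idx{};
        std::int64_t rest = linear;
        for(std::size_t d = 0; d < N; ++d)
        {
            out_idx[d] = rest / out_strides[d];
            rest %= out_strides[d];
        }
        // Gather from the input through the permuted strides.
        std::int64_t in_offset = 0;
        for(std::size_t d = 0; d < N; ++d)
            in_offset += out_idx[d] * in_strides[perm[d]];
        out[linear] = in[in_offset];
    }
}
```

For a 3D transpose of the last two dimensions, perm would be {0, 2, 1}; a 5D case uses the same function with N = 5.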
rocking
a3d9a2cd42 Layernorm4d (#1022)
* Rename folder

* Add layernorm 4d fwd example

* Rename original layernorm example

* Add layernorm 4d f16 test

* Add layernorm4d_fwd client example

* Support layernorm4D in ckProfiler

* Rename groupnorm to groupnorm fwd in example

* Rename layernorm and group fwd in test

* Rename normalization to normalization_fwd (instances)

* Add fwd to DeviceNormalization

* Rename external api header

* Rename folder, because we can also add bwd in this folder

* Add fwd in layernorm and groupnorm (profiler)

* Fix compile error

---------

Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
2023-11-09 08:34:51 +08:00
zjing14
98fd41f597 Add Gemm instances for performance improvement (#1018)
* improve kpad

* more tuning parameters

* f16_f8_fp16

* cut test time

* add f16_f8_fp16

* add f16_f8_f16

* testing instances for skinny cases

* format

* clean

* add fp16_f8_fp16

* clang-format

* add grouped gemm instances

* fixed profile grouped_gemm

* clean

* clean

* clean

* clean

* clean

* add missing instance func

* fixed interface

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
Co-authored-by: root <root@sh5-1e707-rc06-38.mkm.dcgpu>
2023-11-07 09:09:58 -06:00
Daming Feng
aa0b979887 Add compute type check for convolution instances (#1015)
* add compute type check for fp16 in forward convolution instances

* Add compute type check for default compute types

---------

Co-authored-by: Bartlomiej Kocot <barkocot@amd.com>
2023-11-07 01:33:11 +01:00
Bartlomiej Wroblewski
4ef704d8a6 Add support for mixed precision in contraction scale and bilinear (#973)
* Add support for mixed precision in contraction scale and bilinear (#936)

* Extract common functionality to separate files

* Reference contraction: Remove incorrect consts from type_converts

* Reference contraction: Add missing type_convert for dst value

* Reference contraction: Fix incorrect order of B matrix dimensions

* Add support for mixed precision in contraction scale and bilinear

* Move using statements from instances to a common file

* Move using statements from examples to a common file

* Fix the order of B matrix dimensions across examples and profiler

* Fix the computation of error threshold

* Make ComputeDataType an optional argument

* Include possible DataType -> ComputeDataType casting error in the threshold

* Remove commented code

* Make the ComputeDataType an optional argument in instance

---------

Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
2023-11-02 14:26:33 -07:00
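
Editor's note: two of the bullets above concern the verification threshold, which has to grow with the contraction length and also budget for the DataType -> ComputeDataType cast of the inputs. A hedged sketch of how such a threshold can be composed (the exact formula used in the library may differ):

```cpp
#include <cstdint>

// Hedged sketch: relative error tolerance for a contraction of length k_total
// evaluated in a lower-precision compute type. eps_compute and eps_data are
// the machine epsilons of ComputeDataType and of the storage DataType
// (roughly 2^-10 for fp16, 2^-7 for bf16). Not necessarily CK's exact formula.
double contraction_error_threshold(std::int64_t k_total, double eps_compute, double eps_data)
{
    // Each accumulated product can be off by the rounding of the multiply in
    // the compute type plus up to one rounding per cast input (A and B).
    return static_cast<double>(k_total) * (eps_compute + 2.0 * eps_data);
}
```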
Bartłomiej Kocot
f27ea94ecb Add ScaleAddScaleAddRelu post op for conv fwd (#1006)
* Add ScaleAddScaleAddRelu post op for conv fwd

* Fixes

* Fix instance file name

* Minor fix
2023-11-01 18:31:30 -05:00
Po Yen Chen
db4461c142 Disable the SLP vectorizer to prevent unnecessary wait (#1008)
* Disable the SLP vectorizer to prevent unnecessary wait

* Add comment to the reason of adding flag

* Fix wording
2023-11-01 03:28:36 +08:00
Bartłomiej Kocot
2e824c6d46 Add support for groups in Img2Col/Col2Img (#1007)
* Add support for groups in Img2Col/Col2Img

* Fix interface test

* Fix interface test G to N

* Improve performance

* Change gemm layout to 3d

* Fixes
2023-10-31 10:46:32 +01:00
zjing14
bec84efbb7 Enabled padding for regular gemm (#1004)
* add mnk padding for fp8

* add padding for row_col layout

* added padding for fp32

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-23 16:52:53 -05:00
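
Editor's note: padding here means rounding M, N and K up to multiples of the per-block tile sizes so arbitrary problem shapes map onto fixed tiles, with out-of-range elements predicated off inside the kernel. A minimal sketch, with illustrative tile sizes rather than a real CK configuration:

```cpp
#include <cstdint>

// Round a dimension up to the next multiple of the per-block tile size.
constexpr std::int64_t round_up(std::int64_t x, std::int64_t multiple)
{
    return ((x + multiple - 1) / multiple) * multiple;
}

struct PaddedGemmSizes
{
    std::int64_t m_padded, n_padded, k_padded;
};

// Tile sizes below are placeholders for whatever the chosen instance uses.
PaddedGemmSizes pad_gemm_sizes(std::int64_t m, std::int64_t n, std::int64_t k,
                               std::int64_t m_per_block = 256,
                               std::int64_t n_per_block = 128,
                               std::int64_t k_per_block = 32)
{
    return {round_up(m, m_per_block), round_up(n, n_per_block), round_up(k, k_per_block)};
}
```

With these example tile sizes, pad_gemm_sizes(1000, 1000, 100) yields a padded 1024 x 1024 x 128 problem.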
Bartłomiej Kocot
ac0e006766 Fix cmake dtype check (#989)
* Fix instances dtype check

* Fix source dtype selector for examples and tests

* Sync with new cmakefile changes

* Remove not needed ifdefs

* Remove not needed ifdefs
2023-10-21 22:19:43 +02:00
Qianfeng
b4fc4d0b8d Misc fixes (#994)
* reinterpret_cast to const char* in dumpBufferToFile to be compatible with both const and non-const input pointers

* Add seed input to GeneratorTensor_4 for normal_distribution generator

* Add GetTypeString() for DeviceElementwiseImpl

* Add HIP_CHECK_ERROR macro
2023-10-19 11:26:04 -05:00
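
Editor's note: a common shape for an error-checking macro like the HIP_CHECK_ERROR added above is sketched below; the exact definition in the library may differ.

```cpp
#include <cstdio>
#include <cstdlib>
#include <hip/hip_runtime.h>

// Evaluate a HIP runtime call and abort with file/line context if it failed.
#define HIP_CHECK_ERROR(call)                                              \
    do                                                                     \
    {                                                                      \
        const hipError_t err_ = (call);                                    \
        if(err_ != hipSuccess)                                             \
        {                                                                  \
            std::fprintf(stderr, "HIP error %s at %s:%d\n",                \
                         hipGetErrorString(err_), __FILE__, __LINE__);     \
            std::abort();                                                  \
        }                                                                  \
    } while(0)

// Usage: HIP_CHECK_ERROR(hipMalloc(&ptr, bytes));
```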
Bartłomiej Kocot
82f3a835d5 Extend available elementwise operations with conv examples (#995)
* Extend available elementwise operations with conv examples

* Fixes

* Remove not needed convert

* Update CMakeFile and dir name
2023-10-19 17:23:19 +02:00
Po Yen Chen
deef92d5d0 Avoid force setting ENABLE_PIPELINE_V2_OPT to OFF (#961)
* Avoid force setting ENABLE_PIPELINE_V2_OPT to OFF

* Remove compilation option variable MAX_ILP_OPTS
2023-10-19 23:19:07 +08:00
rocking
3696fe1c76 Layernorm and groupnorm support to save mean and inverse std in forward (#929)
* save mean and inverse std in normalization

* Save mean and inverse std in splitK

* Vector save mean and inv std

* Modify instance for save mean and std

* simplify the layernorm example

* Save mean and std in groupnorm example

* Save mean and inv std in ckProfiler and test

* Remove compute data type from base class

* Save mean and inv std in client example

* Add changelog

* clang format

* Fix compile error

* Refine naming

* Avoid error in bf16

* revert changelog
2023-10-19 07:36:29 +08:00
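
Editor's note: the feature added above is that the normalization forward pass also emits the per-row mean and inverse standard deviation so a backward pass can reuse them instead of recomputing. A host reference sketch of that behaviour (naming is illustrative, not the device-op interface):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Layernorm forward that also saves mean and inverse std per normalized row.
void layernorm_fwd_save_mean_inv_std(const std::vector<float>& x,
                                     const std::vector<float>& gamma,
                                     const std::vector<float>& beta,
                                     std::vector<float>& y,
                                     std::vector<float>& mean,
                                     std::vector<float>& inv_std,
                                     std::size_t rows,
                                     std::size_t cols,
                                     float epsilon = 1e-5f)
{
    y.resize(rows * cols);
    mean.resize(rows);
    inv_std.resize(rows);

    for(std::size_t r = 0; r < rows; ++r)
    {
        float sum = 0.f, sum_sq = 0.f;
        for(std::size_t c = 0; c < cols; ++c)
        {
            const float v = x[r * cols + c];
            sum += v;
            sum_sq += v * v;
        }
        const float mu   = sum / cols;
        const float var  = sum_sq / cols - mu * mu;
        const float rstd = 1.f / std::sqrt(var + epsilon);

        mean[r]    = mu;   // saved for the backward pass
        inv_std[r] = rstd; // saved for the backward pass

        for(std::size_t c = 0; c < cols; ++c)
            y[r * cols + c] = (x[r * cols + c] - mu) * rstd * gamma[c] + beta[c];
    }
}
```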
zjing14
bf435140dc Clean DTYPES conditions in CMake (#974)
* Add a condition to build fp8 instances

* simplified buffer_load/store

* add bfp8/fp8

* fixed

* remove all f8/bf8 condition include folder

* fixed cmake conditions

* fixed DTYPES=fp16/bfp16

* fix

* fixed buffer_load

* fixed buffer_store

* fix

* clean example cmake files

* fixed ci

* fixed CI

---------

Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-18 11:14:14 -05:00
Bartłomiej Kocot
16d7c4d2f7 Add grouped conv bwd weight wmma (#985)
* Add grouped conv bwd weight wmma

* Update README, changelog, profiler

* Minor fixes

* Fix grouped conv bwd wei dl kernel

* Minor fixes

* Minor stylistic fixes
2023-10-17 10:32:26 +02:00
Rostyslav Geyyer
fa753f27ba Add splitk gemm fp16 @ fp16 with fp8 compute instances (#983)
* Add ComputeType

* Update for compatibility

* Add instances

* Update profiler api
2023-10-13 16:27:11 -05:00
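
Editor's note: several commits in this range introduce a separate ComputeType — tensors are stored in one precision (e.g. fp16) but each element is converted to a narrower compute type (e.g. fp8) before the multiply, with accumulation in float. A host reference sketch of that semantics, using plain template parameters rather than CK's fp8 types:

```cpp
#include <cstddef>
#include <vector>

// Reference GEMM with an explicit ComputeDataType: each A/B element is cast to
// the compute type first (where precision is lost when it is narrower than the
// storage type), then accumulated in AccDataType. Not CK's reference op.
template <typename ADataType, typename BDataType, typename ComputeDataType, typename AccDataType>
void reference_gemm_compute_type(const std::vector<ADataType>& a, // M x K, row-major
                                 const std::vector<BDataType>& b, // K x N, row-major
                                 std::vector<AccDataType>& c,     // M x N, row-major
                                 std::size_t M, std::size_t N, std::size_t K)
{
    c.assign(M * N, AccDataType{0});
    for(std::size_t m = 0; m < M; ++m)
        for(std::size_t n = 0; n < N; ++n)
        {
            AccDataType acc{0};
            for(std::size_t k = 0; k < K; ++k)
            {
                const auto a_c = static_cast<ComputeDataType>(a[m * K + k]);
                const auto b_c = static_cast<ComputeDataType>(b[k * N + n]);
                acc += static_cast<AccDataType>(a_c) * static_cast<AccDataType>(b_c);
            }
            c[m * N + n] = acc;
        }
}
```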
zjing14
c99323be6e Revert "Grouped Gemm with looping over the tiles. (#788)" (#982)
This reverts commit a4f72a314a.
2023-10-11 14:27:29 -05:00
Adam Osewski
a4f72a314a Grouped Gemm with looping over the tiles. (#788)
* Introduce LocalBlockToCTileMap.

* Change the signature of CalculateBottomIndex(), which no longer accepts any
argument. The B2C map, which is already passed as an argument to the kernel
Run function, now calculates the block's local id outside the kernel, at the
__global__ entry point. The LocalB2C map stores the local block ID as a member.

* Use LocalBlockToCTile map in device ops.

* First draft of tile loop work distribution.

* Fix typo.

* Simplify kernel arguments.

Calculate descriptors & B2C maps on the device.

* Use looping kernel.

* Fix B2C constructor.

* Fix Navi21 errors.

* Calculate tile start/end in device kernel.

* Change Run API to accept user provided workspace buffer.

* Add new line at EOF.

* Move Gemm KernelArguments to device op interface.

* Remove unused code.

* Update API.

* Launch grid size which is min of occupancy vs tile count

* Get back to use constant memory for gemm descriptors.

* Remove unused code.

* Add default virtual method implementation.

* Update comments to conform with doxygen style.

* Fix doc style and unused parameters.

* Add thread cluster lengths to kernel name.

* Remove old splitk impl and replace it with tile looping one.

* Modify instances.

* set KPerBlock to 64
* maximize vector load size wherever possible.

* Fix instances cluster lengths.

* Change comment style.

* Use 128b store where possible in instances.

* Update test cases, since KPerBlock has doubled.

* Update output stream operator for Sequence.

* Add pipeline version to GroupedGEMM device op type string.

* Fix pipeline version type logging.

* Fix input tensors type after merge.

* Fix compiler error.

* Fix output stream operator for Pipeline version.

* Store using 128b.

* Set of instances with kpb 32/64

* Limit number of instances

* Remove commented out instances.

* Fix function name.

* Limit the number of instances.

Add pipeline version to the regular instances

* Change thread cluster layout for reading B tensor.

* disabled failed instances

---------

Co-authored-by: Adam Osewski <aosewski@amd.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-10 22:21:15 -05:00
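
Editor's note: the kernel described above replaces the old per-group scheme with tile looping — the output tiles of every GEMM in the group are flattened into one range, the grid is capped at min(occupancy-derived block count, total tile count), and each workgroup strides over the range. A CPU model of that distribution (hypothetical names, not the actual kernel):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct GemmDesc
{
    std::int64_t m, n, m_per_block, n_per_block;
};

// Number of output tiles a single GEMM contributes.
std::int64_t tile_count(const GemmDesc& g)
{
    const auto mt = (g.m + g.m_per_block - 1) / g.m_per_block;
    const auto nt = (g.n + g.n_per_block - 1) / g.n_per_block;
    return mt * nt;
}

// For each simulated block id, return the global tile indices it owns.
std::vector<std::vector<std::int64_t>>
distribute_tiles(const std::vector<GemmDesc>& group, std::int64_t max_occupancy_blocks)
{
    std::int64_t total_tiles = 0;
    for(const auto& g : group)
        total_tiles += tile_count(g);

    // Grid size is the min of the occupancy limit and the total tile count.
    const std::int64_t grid_size = std::min<std::int64_t>(max_occupancy_blocks, total_tiles);

    std::vector<std::vector<std::int64_t>> work(grid_size);
    for(std::int64_t block_id = 0; block_id < grid_size; ++block_id)
        for(std::int64_t tile = block_id; tile < total_tiles; tile += grid_size)
            work[block_id].push_back(tile); // a kernel maps tile -> (group, m-tile, n-tile)
    return work;
}
```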
Lauren Wrubleski
5913609168 Replace CMake return from later CMake (#970) 2023-10-05 14:58:58 -07:00
Illia Silin
4daedf8ca5 Revert "Add support for mixed precision in contraction scale and bilinear" (#967)
* Revert "Add support for mixed precision in contraction scale and bilinear (#936)"

This reverts commit f07485060e.

* revert commits #957 and #960
2023-10-05 14:58:23 -07:00
zjing14
04f93aadb8 Grouped conv bwd data with fp16 input and bf8fp8 comp (#962)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format

* Update naming (#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

* added instance of f16_bf8f8, and client example

* fixed mfma selector

* format

---------

Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Rostyslav Geyyer <46627076+geyyer@users.noreply.github.com>
Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-04 18:04:27 -05:00
Rostyslav Geyyer
42facfc6b7 Add conv bwd weight fp16 comp bf8 fp8 op, instances and example (#945)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format

* Update naming (#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-04 08:19:08 -05:00
zjing14
e921e1f08d 3d grouped conv fwd with input/output fp16 and comp fp8 (#931)
* add f8 comp instance

* fixed

* fixed comments

* rename

* fixed dtype

* format

* fixed CI

* fixed ci

* add missing ComputeType

* fixed CI

* fixed

* Update cmake-ck-dev.sh

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-03 20:04:26 -05:00
zjing14
f477fca436 add generic instances (#947)
Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-03 09:32:28 -05:00
Rostyslav Geyyer
bd09b5c538 Add fp8 @ bf8 gemm support and example (#933)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format
2023-10-02 16:39:03 -05:00
Bartlomiej Wroblewski
f07485060e Add support for mixed precision in contraction scale and bilinear (#936)
* Extract common functionality to separate files

* Reference contraction: Remove incorrect consts from type_converts

* Reference contraction: Add missing type_convert for dst value

* Reference contraction: Fix incorrect order of B matrix dimensions

* Add support for mixed precision in contraction scale and bilinear

* Move using statements from instances to a common file

* Move using statements from examples to a common file

* Fix the order of B matrix dimensions across examples and profiler

* Fix the computation of error threshold

* Make ComputeDataType an optional argument

* Include possible DataType -> ComputeDataType casting error in the threshold

* Remove commented code
2023-09-29 10:54:31 -05:00
Bartłomiej Kocot
cb53874002 Add grouped conv bwd data wmma (#950)
* Add grouped conv bwd data wmma

* Fix copyrights

* Add instances with smaller NPerBlock

* Update interface test

* Minor stylistic fixes

* Minor stylistic fixes
2023-09-28 23:10:18 +02:00
Bartłomiej Kocot
e2243a4d1e Add column to image kernel (#930)
* Add column to image kernel

* Minor fixes for dtypes and client examples

* Disable tests for disabled dtypes

* Disable add instances functions for disabled data types

* Minor stylistic fixes

* Revert "Disable add instances functions for disabled data types"

This reverts commit 728b869563.

* Instances reduction

* Add comments in device_column_to_image_impl

* Update changelog and Copyrights

* Improve changelog
2023-09-27 17:19:06 +02:00
Rostyslav Geyyer
94bfa50256 Add fp8 gemm instances (#920)
* Add fp8 gemm instances

* Update instance naming
2023-09-26 14:59:33 -05:00
Illia Silin
2ea75bd6d7 Resolve some data type issues and cmake policy. (#940)
* split the types in gemm_bilinear instances, add condition to cmake policy

* fix syntax

* split the data types in batchnorm examples

* fix the batchnorm_bwd test

* fix types in the batchnorm_bwd test
2023-09-26 08:39:11 -07:00
Bartłomiej Kocot
c95538325b Add 3d grouped conv fwd wmma instances (#935)
* Add 3d grouped conv fwd wmma instances

* Refactor fwd conv tests

* Split wmma instances for each specialization

* Minor stylistic fixes
2023-09-23 18:56:31 +02:00
Rostyslav Geyyer
ede64ae9db Update naming (#937) 2023-09-22 10:08:45 -05:00
Illia Silin
bba085d2b5 Refactoring cmake files to build data types separately. (#932)
* refactor cmake files for the tests

* refactor cmake files for examples

* fix cmake for gemm example

* fix the cmake file for all examples

* add splitting by data types in gemm_splitk instance header

* rename test to reflect only dl instances are used

* clean up CI workspace, update cmake for instances

* change the jenkinsfile syntax

* build all instances except DL on gfx11

* move workspace cleanup after stages

* clean up workspace after every stage

* isolate data types in grouped_conv_fwd header

* isolate dl instances for grouped_conv2d_fwd

* fix syntax

* fix cmake and batchnorm instances

* fix typo

* fix reduction instances

* fix grouped_conv headers

* fix syntax

* replace parsing logic for instances, replace bfp16 with bf16

* fix the client examples build

* clean up DTYPES from instances cmake files

* update the parsing logic in cmake files

* make an exception for reduction kernels

* update few remaining cmake files to handle DTYPES

* fix syntax

* fix cmake conflicts

* replace f8 with fp8 test name

* resolve conflicts for dpp instances
2023-09-20 22:15:56 -07:00
Bartlomiej Wroblewski
63cd459248 Fix DL GEMM instances with too large vector size (#901)
* Fix vector lengths of DL GEMM instances with padding
* Add checks for correctness of vector lengths in DL GEMM
2023-09-18 14:08:23 +02:00
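
Editor's note: the fix above adds guards so an instance's vector load/store length is only used when it actually divides the contiguous dimension of the problem; the commit message suggests padded instances could otherwise claim a vector length that is too large. A sketch of such a guard, assuming the unpadded length is what must be checked (the real IsSupportedArgument does considerably more):

```cpp
#include <cstdint>

// A vectorized access along the contiguous dimension is only valid when the
// vector length evenly divides that dimension's raw (unpadded) length, e.g. an
// 8-wide fp16 load requires the row length to be a multiple of 8.
bool vector_access_supported(std::int64_t contiguous_dim_length_unpadded,
                             std::int64_t scalar_per_vector)
{
    if(scalar_per_vector <= 0)
        return false;
    return contiguous_dim_length_unpadded % scalar_per_vector == 0;
}
```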
Bartlomiej Kocot
bc2d0583d3 Stylistic improvements for grouped convolution code
Remove unnecessary ignoring

Update test/grouped_convnd_bwd_weight/test_grouped_convnd_bwd_weight.cpp
2023-09-15 20:03:47 +02:00
zjing14
f9d0eddb90 Add fp16/fp8 support into Grouped gemm FixedNK (#874)
* move all arguments into device

* add b2c_tile_map

* add examples

* add SetDeviceKernelArgs

* dedicated fixed_nk solution

* init client api

* add grouped_gemm_bias example

* add an instance

* add instances

* formatting

* fixed cmake

* Update EnableCompilerWarnings.cmake

* Update cmake-ck-dev.sh

* clean; fixed comments

* fixed comment

* add instances for fp32 output

* add instances for fp32 output

* add fp32 out client example

* fixed CI

* init commit for kbatch

* add splitk gridwise

* format

* fixed

* clean deviceop

* clean code

* finish splitk

* fixed instances

* change m_loops to tile_loops

* add setkbatch

* clean code

* add splitK+bias

* add instances

* opt mk_nk instances

* clean examples

* fixed CI

* remove zero

* finished non-zero

* clean

* clean code

* optimized global_barrier

* fixed ci

* fixed CI

* instance and client

* removed AddBias

* format

* fixed CI

* fixed CI

* move 20_grouped_gemm to 21_grouped_gemm

* clean

* formatting

* clean

* clean

* fixed computeType

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-09-14 21:04:10 -05:00
Bartłomiej Kocot
475188ca2e Add grouped conv bwd weight dl instances and new layout (#897)
* Add grouped conv bwd weight dl instances and new layout

* Add M and N padding

* Remove todo comment

* Enable grouped conv fwd dl k,c=1 generic instance

* Comment fixes
2023-09-13 10:14:31 -05:00
zjing14
a66d14edf2 fixed fp8 issues (#894)
* fixed fp8 init and reference gemm

* Update host_tensor_generator.hpp

* fixed convert

* fixed reference gemm

* fixed comments

* fixed comments

* fixed ci

* fixed computeType

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-09-12 22:17:56 -05:00
Rostyslav Geyyer
62d4af7449 Refactor f8_t, add bf8_t (#792)
* Refactor f8_t to add bf8_t

* Add check_err impl for f8_t

* Update fp8 test

* Format

* Revert the fix

* Update vector_type implementation

* Add bf8 test

* Add bf8, use BitInt types

* Add bf8 conversion methods

* Update type_convert for fp8/bf8

* Add check_err fp8/bf8 support

* Add subnorm fp8 tests

* Add subnorm bf8 tests

* Fix conversion

* Add bf8 cmake bindings

* Add macros to enable build with disabled fp8/bf8

* Remove is_native method

* Update flag combination for mixed precision instances

* Add more flag checks

* Add another flag to a client example

* Add type traits, decouple f8/bf8 casting

* Clean up

* Decouple fp8 and bf8 flags

* Remove more redundant flags

* Remove leftover comments
2023-09-12 17:04:27 -05:00
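
Editor's note: with 8-bit float formats (fp8 e4m3 keeps 3 mantissa bits, bf8 e5m2 only 2), the check_err support added above needs much looser tolerances than fp16/fp32. A generic sketch of a tolerance-based check (not the library's actual check_err signature or defaults):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// An element passes if it is within atol OR within rtol of the reference.
// Tolerances are chosen from the result type's precision; fp8/bf8 need far
// looser values than fp16.
bool check_err(const std::vector<float>& out,
               const std::vector<float>& ref,
               double rtol,
               double atol)
{
    if(out.size() != ref.size())
        return false;
    std::size_t failures = 0;
    for(std::size_t i = 0; i < out.size(); ++i)
    {
        const double diff = std::fabs(double(out[i]) - double(ref[i]));
        if(diff > atol && diff > rtol * std::fabs(double(ref[i])))
            ++failures;
    }
    return failures == 0;
}

// e.g. check_err(out, ref, /*rtol=*/0.125, /*atol=*/0.0625) for fp8 results,
// versus check_err(out, ref, 1e-3, 1e-3) for fp16 (values are illustrative).
```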
Bartlomiej Wroblewski
547dbcfbc2 Add new instances and support for small cases in DPP8 GEMM (#896) 2023-09-12 10:05:23 -05:00
Haocong WANG
562b4cec48 [Navi3x] Add fp16/int8 wmma conv forward instances (#746)
* fix wmma gemm int8; add grouped conv int8 example

* Add int8 gemm-bilinear instances

* compile sanity check unknown

* Sanity pass + clang-format

* add int8 conv profiler instances

* solve merge conflict

---------

Co-authored-by: zjing14 <zhangjing14@gmail.com>
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2023-09-07 21:59:26 -05:00
Bartlomiej Wroblewski
37a8c1f756 Redesign the DPP8 GEMM kernel to use warp-wise component (#863)
* Redesign the DPP8 GEMM kernel to use warp-wise component

* Review: Improve error messages

* Review: Remove unnecessary empty lines

* Review: Fix M, N per thread names

* Review: Rename mfma_input_type to dpp_input_type

* Review: Fix tensor adaptor; remove unnecessary element

* Review: Remove calls to dpp_gemm's MakeCDescriptor

* Review: Add blockwise doc, change function names to include dimension names

* Review: Remove duplicated code; Move Block2CtileMap alias to the top of the file

* Review: Add __restrict__ keywords

* Review: Use MatrixPadder for padding A, B, C matrices

* Review: Remove hardcoded datatypes

* Review: Change names from FloatX to XDataType

* Review: Introduce AK0 and BK0 instead of a single K0

* Review: Remove construction of dpp_datatypes object

* Review: Rename DppInstrRunner to DppLanegroupGemm
2023-09-06 11:44:09 -05:00
zjing14
3786bfe1cc added padding of K into gemm_v2r3 (#887)
* added kpad support into v2r3

* add generic instances

* fixed comments

* fixed mnk padding

* Update device_batched_gemm_xdl.hpp

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-09-06 10:15:52 -05:00
zjing14
a61b8b785e Fixed fp8 gemm (#882)
* add generic instances; fixed init with fp8

* fixed comment

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-09-06 09:59:20 -05:00
Bartłomiej Kocot
0077eeb3be Add image to column kernel (#867)
* Add image to column kernel

* Add instances, tests, profiler, example

* Add client example

* Several fixes of image to column

* Fix variable name in device_image_to_column_impl

* Several fixes of image to column profiler

* Fix num_btype calculation

* Make new measurements for correct bytes calculation
2023-09-05 10:11:40 -05:00
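
Editor's note: image-to-column rewrites a convolution input so each output pixel's receptive field becomes one row of a matrix, turning the convolution into a GEMM against the flattened filters. A minimal 2D NCHW host sketch (the CK kernel is grouped, N-dimensional, and supports more layouts and dilation):

```cpp
#include <cstddef>
#include <vector>

// Every output pixel (ho, wo) gets a row containing its C*Y*X receptive field,
// with zero padding outside the image. Layout and naming are illustrative.
void image_to_column(const std::vector<float>& image, // N*C*H*W
                     std::vector<float>& columns,     // (N*Ho*Wo) x (C*Y*X)
                     std::size_t N, std::size_t C, std::size_t H, std::size_t W,
                     std::size_t Y, std::size_t X, // filter size
                     std::size_t stride_h, std::size_t stride_w,
                     std::size_t pad_h, std::size_t pad_w)
{
    const std::size_t Ho = (H + 2 * pad_h - Y) / stride_h + 1;
    const std::size_t Wo = (W + 2 * pad_w - X) / stride_w + 1;
    columns.assign(N * Ho * Wo * C * Y * X, 0.f);

    for(std::size_t n = 0; n < N; ++n)
        for(std::size_t ho = 0; ho < Ho; ++ho)
            for(std::size_t wo = 0; wo < Wo; ++wo)
            {
                const std::size_t row = (n * Ho + ho) * Wo + wo;
                for(std::size_t c = 0; c < C; ++c)
                    for(std::size_t y = 0; y < Y; ++y)
                        for(std::size_t x = 0; x < X; ++x)
                        {
                            const std::ptrdiff_t hi =
                                std::ptrdiff_t(ho * stride_h + y) - std::ptrdiff_t(pad_h);
                            const std::ptrdiff_t wi =
                                std::ptrdiff_t(wo * stride_w + x) - std::ptrdiff_t(pad_w);
                            const std::size_t col = (c * Y + y) * X + x;
                            if(hi >= 0 && hi < std::ptrdiff_t(H) && wi >= 0 && wi < std::ptrdiff_t(W))
                                columns[row * (C * Y * X) + col] =
                                    image[((n * C + c) * H + hi) * W + wi];
                        }
            }
}
```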
Bartłomiej Kocot
0c9a1d25b3 Add nhwgc dl generic instances for grouped conv fwd (#879) 2023-09-05 10:07:56 -05:00
zjing14
f5ec04f091 Grouped Gemm with Fixed K and N with SplitK (#818)
* move all arguments into device

* add b2c_tile_map

* add examples

* add SetDeviceKernelArgs

* dedicated fixed_nk solution

* init client api

* add grouped_gemm_bias example

* add an instance

* add instances

* formatting

* fixed cmake

* Update EnableCompilerWarnings.cmake

* Update cmake-ck-dev.sh

* clean; fixed comments

* fixed comment

* add instances for fp32 output

* add instances for fp32 output

* add fp32 out client example

* fixed CI

* init commit for kbatch

* add splitk gridwise

* format

* fixed

* clean deviceop

* clean code

* finish splitk

* fixed instances

* change m_loops to tile_loops

* add setkbatch

* clean code

* add splitK+bias

* add instances

* opt mk_nk instances

* clean examples

* fixed CI

* remove zero

* finished non-zero

* clean

* clean code

* optimized global_barrier

* fixed ci

* fixed CI

* removed AddBias

* format

* fixed CI

* fixed CI

* move 20_grouped_gemm to 21_grouped_gemm

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-08-31 09:22:12 -05:00
rocking
866377de18 MaxPool & AvgPool bwd instances, test, ckProfiler, client example (#861)
* Add maxpool instances

* Rename index pool to max pool.

* Add maxpool bwd bf16 instances

* Add avg pool bwd instances

* Rename avgpool and maxpool to avg_pool3d and max_pool

* Add bf16 pool fwd instances

* Add max pool bwd to ckProfiler

* Add avg pool3d bwd to ckProfiler

* Add avg pool bwd test

* Fix bug of reference pool fwd (dilation)

* Fix bug of max pool bwd (dilation and initZero)

* Support bf16 compute data type

* Force compute type to be f32, because atomicAdd only supports f32

* Add max pool bwd test

* Rename folder

* Rename pool

* Add max pool bwd client example

* Add avg pool bwd client example

* Add missing workspace

* clang format

* Rename macro

* remove useless header

* remove useless layout
2023-08-31 21:01:50 +08:00
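
Editor's note: max-pool backward here relies on the argmax indices saved by the forward pass — each output gradient is scattered back to the input element that produced the max, via atomicAdd on the GPU (hence the bullet forcing an f32 compute type). A host sketch of that scatter (illustrative, not the device-op interface):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Scatter each output gradient to the flat input index that won the max.
// On the GPU this uses atomicAdd; on the host a plain += suffices.
void max_pool_bwd(const std::vector<float>& dy,           // output gradients
                  const std::vector<std::int32_t>& index, // argmax saved by forward, same size as dy
                  std::vector<float>& dx,                 // input gradients
                  std::size_t input_size)
{
    dx.assign(input_size, 0.f);
    for(std::size_t i = 0; i < dy.size(); ++i)
        dx[static_cast<std::size_t>(index[i])] += dy[i]; // atomicAdd on device
}
```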
Bartłomiej Kocot
9e86ebd62d Add number of error when fail (#868) 2023-08-30 10:33:11 -05:00