Commit Graph

155 Commits

Author SHA1 Message Date
raramakr
e699dbd8a3 SWDEV-439954 - Use hard coded filename rather than using the macro __FILE__ for debug prints. (#1123)
* SWDEV-439954 - Use hard coded filename rather than using the macro __FILE__ for debug prints.

The hiptensor library uses the header files from CK. A hard-coded ROCm path was getting embedded into the hiptensor library because the header files contained the macro __FILE__. Replace the macro with the file name (see the sketch after this entry).

* fix syntax

---------

Co-authored-by: illsilin <Illia.Silin@amd.com>
2024-01-09 08:21:47 -08:00
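A minimal sketch of the kind of change described above (the macro and header names are illustrative, not taken from the PR): a debug-print macro that expands __FILE__ bakes the absolute path of the header at the build site into every binary that includes it, while a hard-coded file name keeps the print useful without leaking the build tree.

```cpp
#include <cstdio>

// Before: __FILE__ expands to the header's full path at the build site,
// e.g. /opt/rocm/include/ck/..., and that string gets embedded in the binary.
// #define CK_DEBUG_LOG(msg) std::printf("[%s:%d] %s\n", __FILE__, __LINE__, msg)

// After: a hard-coded, path-free file name (hypothetical header name).
#define CK_DEBUG_LOG(msg) std::printf("[%s:%d] %s\n", "gridwise_gemm.hpp", __LINE__, msg)

int main() { CK_DEBUG_LOG("debug print without an embedded path"); }
```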
rocking
a69aa2a11a layernorm and groupnorm backward data (#1083)
* rename folder

* Add type string

* Remove typo

* Add deviceOp to backward x

* Add comment to describe the behavior of backward normalization

* Add kernel function, prepare to implement

* implement generic kernel

* Check vector size

* Add sweep once pipeline for small reduce size

* Fix bug of KRaw_ error

* Fix bug of dx stride

* sanity check for mean and rstd

* backward x for groupnorm

* Add bwd x instance

* add layernorm 2d bwd gamma beta instances

* Change save mean var type from f32 to f16 in f16 mode

* Change the example to f16

* Add groupnorm bwd gamma beta instance

* Add groupnorm bwd x instance

* Fix naming

* Add layernorm bwd x ckprofiler

* Add groupnorm bwd x profiler

* clang format

* Rename bwd x to bwd data

* Fix bug of verification in profiler

* Add test of layernorm and groupnorm bwd data

* Add missing cmake

* Add layernorm2d bwd data

* rename fwd example

* Add groupnorm client example

* Fix typo: replace Invarient with Invariant

* Add checking before running the best instance
2023-12-19 04:23:11 +08:00
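For context, the backward-data ("bwd x") computation these commits implement corresponds to the standard layernorm input gradient; the formula below is the textbook form, not quoted from the PR. With mean $\mu$, inverse standard deviation $r = 1/\sqrt{\sigma^2+\epsilon}$, normalized value $\hat{x}_i = (x_i-\mu)\,r$, and $g_i = \gamma_i\,\partial L/\partial y_i$:

```latex
\frac{\partial L}{\partial x_i}
  = r \left( g_i
      - \frac{1}{N}\sum_{j=1}^{N} g_j
      - \hat{x}_i \, \frac{1}{N}\sum_{j=1}^{N} g_j \hat{x}_j \right)
```

Saving $\mu$ and $r$ ("mean and rstd") in the forward pass lets these kernels skip recomputing both reductions.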
Bartlomiej Wroblewski
89ee47460b Fix IsSupported check in the contraction op (#1066)
The current implementation of the IsSupported method in the contraction ops misses many cases in which ScalarPerVector cannot actually be used to read A, B, or D, or to write E.

This PR extends both the regular and multiABD contraction ops with improved checks, and adds new instances with smaller ScalarPerVector values to cover the cases the existing instances cannot handle (see the sketch after this entry).
2023-12-11 17:12:32 +01:00
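A minimal sketch, under assumed semantics, of the rule the PR tightens: reading or writing ScalarPerVector elements at a time is only legal when the vectorized dimension is contiguous and its length is divisible by the vector width.

```cpp
// Not CK's actual IsSupported code; a stand-alone illustration of the rule.
bool IsVectorAccessSupported(long stride_fastest,   // stride of the vectorized dim
                             long length_fastest,   // length of the vectorized dim
                             int scalar_per_vector) // elements per load/store
{
    if(scalar_per_vector == 1)
        return true; // scalar access is always fine
    if(stride_fastest != 1)
        return false; // vector access needs contiguous elements
    return length_fastest % scalar_per_vector == 0; // no partial vector at the edge
}
```

The smaller-ScalarPerVector instances exist so that shapes failing this kind of test still have a valid (if slower) instance to fall back on.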
Bartłomiej Kocot
f836984891 Support broadcast for bias in grouped conv fwd (#1081)
* Support broadcast for bias in grouped conv fwd

* Fix comment

* Comment fixes

* Remove GK layout
2023-12-08 11:07:42 +01:00
Bartlomiej Wroblewski
bc4bf9bd03 Add support for double buffering in direct load GEMM kernel (#1052)
This PR introduces support for double buffering in LDS for GEMM kernels that use direct load instructions.

Direct loads now use inline asm instead of intrinsics: using the intrinsics makes the compiler add extra waitcnt instructions, which breaks the possible load/compute overlap in the double-buffered case.

Using inline asm in turn requires sched_barrier to make sure the compiler cannot incorrectly reschedule instructions, since it no longer knows the data dependencies between the global->LDS and LDS->register accesses (see the sketch after this entry).
2023-12-03 23:08:47 +01:00
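A schematic of the double-buffered main loop, stated as an assumption about the mechanics rather than the actual kernel: while the math consumes one LDS buffer, the next tile is loaded directly from global memory into the other, and a scheduling barrier pins the ordering the compiler can no longer infer. The helper functions are hypothetical stand-ins.

```cpp
#include <hip/hip_runtime.h>

constexpr int kTileElems = 1024;

// Hypothetical stand-ins for the direct-load and math stages.
__device__ void direct_load_to_lds(float* lds_dst, const float* g_src) { /* global -> LDS */ }
__device__ void compute_on_tile(const float* lds_src) { /* LDS -> registers -> MFMA */ }

__global__ void gemm_main_loop(const float* gmem, int num_tiles)
{
    __shared__ float lds_buf[2][kTileElems];
    int cur = 0;
    direct_load_to_lds(lds_buf[cur], gmem); // prologue: fill buffer 0
    for(int t = 0; t < num_tiles; ++t)
    {
        const int nxt = cur ^ 1;
        if(t + 1 < num_tiles) // prefetch the next tile into the other buffer
            direct_load_to_lds(lds_buf[nxt], gmem + (t + 1) * kTileElems);
        // With inline-asm loads the compiler does not see the dependencies,
        // so stop it from rescheduling across this point (builtin available
        // in recent ROCm clang).
        __builtin_amdgcn_sched_barrier(0);
        __syncthreads();
        compute_on_tile(lds_buf[cur]); // math overlaps the in-flight load
        cur = nxt;
    }
}
```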
zjing14
49df1dc595 Fixed GroupedGemmFixedNK with hipGraph (#1065)
* fixed examples; add async_mem_set

* add stream to all deviceOp using SetWorkspace

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-11-30 15:09:27 -06:00
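A sketch of why the stream plumbing matters under hipGraph (the function shape is assumed for illustration): synchronous calls like hipMemset are not legal during stream capture, so any workspace initialization a device op performs must go through the async API on the caller's stream.

```cpp
#include <hip/hip_runtime.h>

// Hypothetical workspace setup: zero-fill asynchronously so the operation is
// recorded into the graph during stream capture instead of executing eagerly.
hipError_t SetWorkSpace(void* p_workspace, size_t size_bytes, hipStream_t stream)
{
    return hipMemsetAsync(p_workspace, 0, size_bytes, stream);
}
```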
arai713
a2969aa8b6 Disable transpose device op for MI300 (#1050)
* added working example for 5D input using 1D kernel

* example with 5D input tensor and 2d kernel - not working: issues with arguments

* added updated version of 3d device op - changed descriptors/dims

* added example file to check kernel

* fixed descriptor and isSupportedArgument stride problem

* added and modified kernel for 3d - updated tids/loop

* adding some more 5d example files

* fixed some issues

* changes made for testing

* working version: fixed error in stride for A, still a bit inefficient

* cleaned up formatting/comments

* updating formatting

* more formatting fixes

* fixing cmake, adding back gpu targets in cmake script

* adding client example

* added instances for client example

* fixed errors in client example

* implemented client ex with device_elementwise.hpp and device_elementwise_3d_impl.hpp

* removed extra files

* minor formatting and naming fixes

* adding test files and profiler

* fixing minor error

* minor fix

* removed unnecessary comments, renamed files

* updated instance list for client example, added different layout example

* removing instances

* fixed error in instance generation

* remove comments

* update profiler and client example tensor layouts

* fixed errors in test/profiler

* updated vector dim access to enable vector load

* updated test/profiler files

* updated example with 1d kernel

* updating profiler

* renamed files

* disabled device op for MI300

* skip elementwise_permute_2d on gfx94x

* Update CMakeLists.txt

* fixing CMake - disabling some GPU targets

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
Co-authored-by: Jing Zhang <jizhan@amd.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
2023-11-29 11:36:40 -06:00
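One plausible way (an assumption, not the PR's exact code) to disable a device op on MI300-class hardware is to reject gfx94x in the supported-argument check, using the runtime arch name:

```cpp
#include <hip/hip_runtime.h>
#include <cstring>

// Returns true on gfx940/941/942 (MI300 family); gcnArchName looks like
// "gfx942:sramecc+:xnack-".
bool IsGfx94x()
{
    int dev = 0;
    hipDeviceProp_t props{};
    if(hipGetDevice(&dev) != hipSuccess ||
       hipGetDeviceProperties(&props, dev) != hipSuccess)
        return false;
    return std::strncmp(props.gcnArchName, "gfx94", 5) == 0;
}
// A device op's IsSupportedArgument() can then simply return !IsGfx94x().
```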
Bartlomiej Wroblewski
627054b941 Add basic support for direct loads from global to LDS (#999)
* Add basic support for direct loads from global to LDS

* Clean the code and comments

* Add support for fp16

* Add comments

* Add check for thread cluster lengths

* Align non-direct-load fp16 example

* Small fixes

* Extend IsSupported to check for supported GPU gens

* Build examples only on the supported HW

* Do not throw when instance not supported in 04 example

* Review: Apply review suggestions

* Review: small fix

* Review: small fix
2023-11-25 13:35:22 +01:00
Bartłomiej Kocot
1fefd82ed8 Log CDEBlockTransferScalarPerVector_NPerBlock in conv fwd multiD xdl (#1042)
* Log CDEBlockTransferScalarPerVector_NPerBlock in conv_fwd_multi_d_xdl implementation

* Log CDEBlockTransferScalarPerVector_NPerBlock in conv fwd multiD xdl
2023-11-15 17:31:50 +01:00
Bartłomiej Kocot
3ef3102fc5 Fix check for conv Fwd Filter1x1Pad0 (#1040)
* Fix check for conv Fwd Filter1x1Pad0

* Fix check for conv Fwd Filter1x1Pad0
2023-11-15 17:28:33 +01:00
Bartłomiej Kocot
f2398f612d Introduce multiABD api and deprecate multiD (#1035)
* Introduce multiABD api and deprecate multiD

* Replace multiD with multiABD

* Mark structures as deprecated

* Change doxygen deprecated to note to avoid warnings
2023-11-14 17:00:40 +01:00
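The shape of the deprecation might look like the following (a sketch under assumed names, not CK's actual declarations): the multi-ABD interface takes tuples of A and B data types, and the old multi-D name survives as a deprecated alias in which A and B become one-element tuples.

```cpp
#include <tuple>

// New interface: tuples of A and B data types (multi-ABD).
template <typename AsDataType, typename BsDataType, typename DsDataType, typename EDataType>
struct DeviceContractionMultipleABD
{
    // ...
};

// Deprecated multi-D name: single A and B wrapped into one-element tuples.
template <typename ADataType, typename BDataType, typename DsDataType, typename EDataType>
using DeviceContractionMultipleD [[deprecated("use DeviceContractionMultipleABD")]] =
    DeviceContractionMultipleABD<std::tuple<ADataType>,
                                 std::tuple<BDataType>,
                                 DsDataType,
                                 EDataType>;
```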
arai713
454cf7bd1f Hip tensor permute (#1002)
* adding files for F32 example

* adding functioning implementation with scalar multiplication and unary operator support

* added fp 16 type check in unary square

* updating scalar multiplication as an operator

* functioning version with scalar operator

* changing strides for col major

* updated column major implementation

* working column major implementation

* cleaned up comments, rearranged/renamed files
2023-11-13 11:15:48 -06:00
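The "scalar multiplication as an operator" bullets suggest an elementwise functor of roughly this shape (a sketch with assumed names):

```cpp
#include <hip/hip_runtime.h>

// Elementwise scale operator: y = alpha * x, usable on host and device.
struct Scale
{
    __host__ __device__ explicit Scale(float alpha) : alpha_(alpha) {}

    __host__ __device__ void operator()(float& y, const float& x) const
    {
        y = alpha_ * x;
    }

    float alpha_;
};
```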
Bartłomiej Kocot
49e52bb357 Support multi AB for grouped conv fwd xdl (#1027)
* Support multi AB for grouped conv fwd xdl

* Add instances

* Add client example

* Add example

* Add interface test

* Minor fixes

Minor fixes

Minor fixes

* Comment fixes

* Fixes

* Reference fix

* Test xdl fixes

* Improve multi_ab interface test
2023-11-10 15:54:44 +01:00
rocking
1db7560365 Backward of gamma and beta for layernorm and groupnorm (#1013)
* Add layernorm backward reference code

* Add groupnorm backward reference code

* Add example

* clang format

* Fix bug of reference layernorm and groupnorm

* Fix naming

* Refine naming

* Add device op for normalization bwd gamma and beta

* Refine template parameter

* Add bwd gamma & beta of kernel

* 1. Add groupnorm example
2. Refine layernorm naming

* Narrow down the static check for performance

* Refine variable name
2023-11-10 18:02:03 +08:00
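For context, the gamma/beta gradients these kernels produce are the standard reductions over the outer (batch) dimension; written here in textbook form rather than quoted from the PR, with $\hat{x}_{ij}$ the normalized input:

```latex
\frac{\partial L}{\partial \gamma_j} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_{ij}}\,\hat{x}_{ij},
\qquad
\frac{\partial L}{\partial \beta_j} = \sum_{i=1}^{N} \frac{\partial L}{\partial y_{ij}}
```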
arai713
3af8c81a72 Transpose 3d (#984)
* added working example for 5D input using 1D kernel

* example with 5D input tensor and 2d kernel - not working: issues with arguments

* added updated version of 3d device op - changed descriptors/dims

* added example file to check kernel

* fixed descriptor and isSupportedArgument stride problem

* added and modified kernel for 3d - updated tids/loop

* adding some more 5d example files

* fixed some issues

* changes made for testing

* working version: fixed error in stride for A, still a bit inefficient

* cleaned up formatting/comments

* updating formatting

* more formatting fixes

* fixing cmake, adding back gpu targets in cmake script

* adding client example

* added instances for client example

* fixed errors in client example

* implemented client ex with device_elementwise.hpp and device_elementwise_3d_impl.hpp

* removed extra files

* minor formatting and naming fixes

* adding test files and profiler

* fixing minor error

* minor fix

* removed unnecessary comments, renamed files

* updated instance list for client example, added different layout example

* removing instances

* fixed error in instance generation

* remove comments

* update profiler and client example tensor layouts

* fixed errors in test/profiler

* updated vector dim access to enable vector load

* updated test/profiler files

* updated example with 1d kernel

* updating profiler

* renamed files

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-11-08 19:45:07 -06:00
rocking
a3d9a2cd42 Layernorm4d (#1022)
* Rename folder

* Add layernorm 4d fwd example

* Rename original layernorm example

* Add layernorm 4d f16 test

* Add layernorm4d_fwd client example

* Support layernorm4D in ckProfiler

* Rename groupnorm to groupnorm fwd in example

* Rename layernorm and group fwd in test

* Rename normalization to normalization_fwd (instances)

* Add fwd to DeviceNormalization

* Rename external api header

* Rename folder, because we can also add bwd in this folder

* Add fwd in layernorm and groupnorm (profiler)

* Fix compile error

---------

Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
2023-11-09 08:34:51 +08:00
Illia Silin
ce52621123 Support fp64 contraction on gfx94x. (#1029)
* enable contraction fp64 on gfx94*

* fix the logic
2023-11-08 15:03:18 -08:00
zjing14
98fd41f597 Add Gemm instances for performance improvement (#1018)
* improve kpad

* more tuning parameters

* f16_f8_fp16

* cut test time

* add f16_f8_fp16

* add f16_f8_f16

* testing instances for skinny cases

* format

* clean

* add fp16_f8_fp16

* clang-format

* add grouped gemm instances

* fixed profile grouped_gemm

* clean

* clean

* clean

* clean

* clean

* add missing instance func

* fixed interface

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
Co-authored-by: root <root@sh5-1e707-rc06-38.mkm.dcgpu>
2023-11-07 09:09:58 -06:00
Bartlomiej Wroblewski
4ef704d8a6 Add support for mixed precision in contraction scale and bilinear (#973)
* Add support for mixed precision in contraction scale and bilinear (#936)

* Extract common functionality to separate files

* Reference contraction: Remove incorrect consts from type_converts

* Reference contraction: Add missing type_convert for dst value

* Reference contraction: Fix incorrect order of B matrix dimensions

* Add support for mixed precision in contraction scale and bilinear

* Move using statements from instances to a common file

* Move using statements from examples to a common file

* Fix the order of B matrix dimensions across examples and profiler

* Fix the computation of error threshold

* Make ComputeDataType an optional argument

* Include possible DataType -> ComputeDataType casting error in the threshold

* Remove commented code

* Make the ComputeDataType an optional argument in instance

---------

Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
2023-11-02 14:26:33 -07:00
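A minimal sketch of the mixed-precision idea (assumed semantics): operands are converted to ComputeDataType before the multiply-accumulate, so the verification threshold must also budget for the extra DataType -> ComputeDataType rounding step.

```cpp
// Illustrative only; CK's reference contraction is more general.
template <typename DataType, typename ComputeDataType, typename AccDataType>
AccDataType mul_acc(AccDataType acc, DataType a, DataType b)
{
    // Cast inputs to the compute type first, then accumulate in AccDataType.
    return acc + static_cast<AccDataType>(static_cast<ComputeDataType>(a)) *
                     static_cast<AccDataType>(static_cast<ComputeDataType>(b));
}
```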
Po Yen Chen
675b69782b Enable gfx941 & gfx942 support for DeviceGemmXdl<> device op (#1017)
* Enable gfx942 support for DeviceGemmXdl<> device op

* Enable gfx941 support for DeviceGemmXdl<> device op
2023-11-01 03:17:31 +08:00
Bartłomiej Kocot
2e824c6d46 Add support for groups in Img2Col/Col2Img (#1007)
* Add support for groups in Img2Col/Col2Img

* Fix interface test

* Fix interface test G to N

* Improve performance

* Change gemm layout to 3d

* Fixes
2023-10-31 10:46:32 +01:00
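As background for the grouped Img2Col, a host-side sketch of what grouping means for the lowering (layouts and names assumed): each group receives its own column matrix, so a grouped convolution decomposes into G independent GEMMs. Shown for a 1-D convolution to keep it short.

```cpp
#include <vector>

// img is in NWC layout with G*C channels; dst[g] is row-major (N*Wo) x (X*C),
// i.e. one GEMM operand per group. X = filter width, stride = conv stride.
std::vector<std::vector<float>> im2col_grouped(const std::vector<float>& img,
                                               int N, int W, int G, int C,
                                               int X, int stride)
{
    const int Wo = (W - X) / stride + 1; // output width (no padding/dilation)
    std::vector<std::vector<float>> dst(G, std::vector<float>(size_t(N) * Wo * X * C));
    for(int g = 0; g < G; ++g)
        for(int n = 0; n < N; ++n)
            for(int wo = 0; wo < Wo; ++wo)
                for(int x = 0; x < X; ++x)
                    for(int c = 0; c < C; ++c)
                    {
                        const int wi = wo * stride + x; // input position
                        dst[g][((size_t(n) * Wo + wo) * X + x) * C + c] =
                            img[(size_t(n) * W + wi) * G * C + g * C + c];
                    }
    return dst;
}
```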
Qianfeng
b4fc4d0b8d Misc fixes (#994)
* reinterpret_cast to const char* in dumpBufferToFile to be compatible with both const and non-const input pointers

* Add seed input to GeneratorTensor_4 for normal_distribution generator

* Add GetTypeString() for DeviceElementwiseImpl

* Add HIP_CHECK_ERROR macro
2023-10-19 11:26:04 -05:00
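Two of these fixes are small enough to sketch directly (shapes assumed, not the exact CK code): the const char* cast makes the dump helper accept both const and non-const buffers, and the macro turns any failing HIP call into an exception.

```cpp
#include <fstream>
#include <stdexcept>
#include <string>
#include <hip/hip_runtime.h>

// reinterpret_cast to const char* works for const and non-const inputs alike.
inline void dumpBufferToFile(const std::string& path, const void* p, std::size_t bytes)
{
    std::ofstream os(path, std::ios::binary);
    os.write(reinterpret_cast<const char*>(p), static_cast<std::streamsize>(bytes));
}

// Fail fast on any HIP API error.
#define HIP_CHECK_ERROR(call)                                  \
    do                                                         \
    {                                                          \
        hipError_t err_ = (call);                              \
        if(err_ != hipSuccess)                                 \
            throw std::runtime_error(hipGetErrorString(err_)); \
    } while(0)
```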
rocking
3696fe1c76 Layernorm and groupnorm support to save mean and inverse std in forward (#929)
* save mean and inverse std in normalization

* Save mean and inverse std in splitK

* Vector save mean and inv std

* Modify instance for save mean and std

* simplify the layernorm example

* Save mean and std in groupnorm example

* Save mean and inv std in ckProfiler and test

* Remove compute data type from base class

* Save mean and inv std in client example

* Add changelog

* clang format

* Fix compile error

* Refine naming

* Avoid error in bf16

* revert changelog
2023-10-19 07:36:29 +08:00
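For context, the quantities being saved are the per-row statistics of the standard normalization forward pass (textbook form, not quoted from the PR):

```latex
\mu = \frac{1}{N}\sum_{i=1}^{N} x_i,\qquad
r = \frac{1}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i-\mu)^2 + \epsilon}},\qquad
y_i = \gamma_i\,(x_i-\mu)\,r + \beta_i
```

Persisting $\mu$ (mean) and $r$ (inverse std) in the forward kernel spares the backward kernels from redoing both reductions.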
zjing14
58338bb203 fixed math-ci error; suppress a warning (#996)
Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-18 16:30:13 -07:00
zjing14
1cc36ba5fb Add contraction_multi_abd (#972)
* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

* init commit for contraction_multi_ABD

* add examples

* add examples of multiA and broadcast

* update example

* fixed comments

* Update cmake-ck-dev.sh

* Update cmake-ck-dev.sh

* Add comments into the example

* Update CMakeLists.txt

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-17 20:17:58 -05:00
zjing14
bf0addb575 added ab_elementwise_op support into splitK Gemm (#956)
* add ab_elementwise

* fixed ci

* fixed a merge issue

* fixed pr comments

* fixed a conflict

* remove 61_example

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-17 09:24:02 -05:00
Bartłomiej Kocot
16d7c4d2f7 Add grouped conv bwd weight wmma (#985)
* Add grouped conv bwd weight wmma

* Update README, changelog, profiler

* Minor fixes

* Fix grouped conv bwd wei dl kernel

* Minor fixes

* Minor stylistic fixes
2023-10-17 10:32:26 +02:00
Rostyslav Geyyer
fa753f27ba Add splitk gemm fp16 @ fp16 with fp8 compute instances (#983)
* Add ComputeType

* Update for compatibility

* Add instances

* Update profiler api
2023-10-13 16:27:11 -05:00
zjing14
c99323be6e Revert "Grouped Gemm with looping over the tiles. (#788)" (#982)
This reverts commit a4f72a314a.
2023-10-11 14:27:29 -05:00
Adam Osewski
a4f72a314a Grouped Gemm with looping over the tiles. (#788)
* Introduce LocalBlockToCTileMap.

* Change the signature of the CalculateBottomIndex() function, which now
takes no arguments. The B2C map, which is already passed as an argument to
the kernel's Run function, computes the block's local id outside, at the
kernel entry point (__global__ function). The LocalB2C map stores the local
block ID as a member.

* Use LocalBlockToCTile map in device ops.

* First draft of tile loop work distribution.

* Fix typo.

* Simplify kernel arguments.

Calculate descriptors & B2C maps on the device.

* Use looping kernel.

* Fix B2C constructor.

* Fix Navi21 errors.

* Calculate tile start/end in device kernel.

* Change Run API to accept user provided workspace buffer.

* Add new line at EOF.

* Move Gemm KernelArguments to device op interface.

* Remove unused code.

* Update API.

* Launch grid size which is min of occupancy vs tile count

* Get back to use constant memory for gemm descriptors.

* Remove unused code.

* Add default virtual method implementation.

* Update comments to conform with doxygen style.

* Fix doc style and unused parameters.

* Add thread cluster lengths to kernel name.

* Remove old splitk impl and replace it with tile looping one.

* Modify instances.

* set KPerBlock to 64
* maximize the vector load size wherever possible.

* Fix instances cluster lengths.

* Change comment style.

* Use 128b store where possible in instances.

* Update test cases, since KPerBlock has doubled.

* Update output stream operator for Sequence.

* Add pipeline version to GroupedGEMM device op type string.

* Fix pipeline version type logging.

* Fix input tensors type after merge.

* Fix compiler error.

* Fix output stream operator for Pipeline version.

* Store using 128b.

* Set of instances with kpb 32/64

* Limit number of instances

* Remove commented out instances.

* Fix function name.

* Limit the number of instances.

Add pipeline version to the regular instances

* Change thr cluster layout for reading B tensor.

* disabled failed instances

---------

Co-authored-by: Adam Osewski <aosewski@amd.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-10 22:21:15 -05:00
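The tile-looping distribution described above can be sketched as follows (a schematic under assumed names, not the CK kernel): instead of launching one workgroup per output tile, the launch is capped at the occupancy limit and each workgroup strides through the global tile list.

```cpp
#include <hip/hip_runtime.h>
#include <algorithm>

__global__ void grouped_gemm_tile_loop(int tile_count)
{
    // Grid-stride loop over all tiles of all GEMMs in the group.
    for(int tile = blockIdx.x; tile < tile_count; tile += gridDim.x)
    {
        // map `tile` back to (gemm index, m-block, n-block) via the B2C map,
        // then run the usual blockwise GEMM body for that tile
    }
}

// Host side: grid size is the smaller of what the HW can keep resident and
// the actual amount of work (max_resident_blocks is a queried/assumed value).
int launch_grid_size(int max_resident_blocks, int tile_count)
{
    return std::min(max_resident_blocks, tile_count);
}
```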
Illia Silin
4daedf8ca5 Revert "Add support for mixed precision in contraction scale and bilinear" (#967)
* Revert "Add support for mixed precision in contraction scale and bilinear (#936)"

This reverts commit f07485060e.

* revert commits #957 and #960
2023-10-05 14:58:23 -07:00
zjing14
04f93aadb8 Grouped conv bwd data with fp16 input and bf8fp8 comp (#962)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format

* Update naming (#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

* added instance of f16_bf8f8, and client example

* fixed mfma selector

* format

---------

Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Rostyslav Geyyer <46627076+geyyer@users.noreply.github.com>
Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-04 18:04:27 -05:00
Rostyslav Geyyer
42facfc6b7 Add conv bwd weight fp16 comp bf8 fp8 op, instances and example (#945)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format

* Update naming (#937)

* Add a client example

* Add computetypes to device and gridwise ops

* Add instances, update instance factory

* Format

* Fix a flag

* Add ckProfiler mode

* Fix typos

* Add an example

* Add bf8 generator

* add bf8 mfma; fixed type_convert for bf8

* move verification ahead of timing

* Update reference calculation

* Fix reference

* Narrow down float init range

* Fix bf8 bf8 mfma

* Add bf8 @ fp8 mfma

* Update example

* Update instances

* Update profiler api

* Update for compatibility

* Format

* Remove extra example

* Clean up

* workaround convert

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-04 08:19:08 -05:00
zjing14
e921e1f08d 3d grouped conv fwd with input/output fp16 and comp fp8 (#931)
* add f8 comp instance

* fixed

* fixed comments

* rename

* fixed dtype

* format

* fixed CI

* fixed ci

* add missing ComputeType

* fixed CI

* fixed

* Update cmake-ck-dev.sh

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-03 20:04:26 -05:00
Rostyslav Geyyer
bd09b5c538 Add fp8 @ bf8 gemm support and example (#933)
* Add f8 bf8 gemm example

* Add element-wise ops

* Add intrinsics

* Update reference calculation

* Add an additional type option for xdlops gemm

* Fix build process

* Add bf8 to buffer addressing

* Update blockwise op, split typeA and typeB

* Update for compatibility

* Update naming to f8->fp8

* Update naming

* Format
2023-10-02 16:39:03 -05:00
zjing14
9d58c42103 Contraction multi abd (#957)
* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

* init commit for contraction_multi_ABD

* add examples

* add examples of multiA and broadcast

* update example

* fixed comments

* Update cmake-ck-dev.sh

* Update cmake-ck-dev.sh

* Add comments into the example

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-10-02 09:18:36 -05:00
Bartlomiej Wroblewski
f07485060e Add support for mixed precision in contraction scale and bilinear (#936)
* Extract common functionality to separate files

* Reference contraction: Remove incorrect consts from type_converts

* Reference contraction: Add missing type_convert for dst value

* Reference contraction: Fix incorrect order of B matrix dimensions

* Add support for mixed precision in contraction scale and bilinear

* Move using statements from instances to a common file

* Move using statements from examples to a common file

* Fix the order of B matrix dimensions across examples and profiler

* Fix the computation of error threshold

* Make ComputeDataType an optional argument

* Include possible DataType -> ComputeDataType casting error in the threshold

* Remove commented code
2023-09-29 10:54:31 -05:00
Bartłomiej Kocot
cb53874002 Add grouped conv bwd data wmma (#950)
* Add grouped conv bwd data wmma

* Fix copyrights

* Add instances with smaller NPerBlock

* Update interface test

* Minor stylistic fixes

* Minor stylistic fixes
2023-09-28 23:10:18 +02:00
Bartłomiej Kocot
e2243a4d1e Add column to image kernel (#930)
* Add column to image kernel

* Minor fixes for dtypes and client examples

* Disable tests for disabled dtypes

* Disable add instances functions for disabled data types

* Minor stylistic fixes

* Revert "Disable add instances functions for disabled data types"

This reverts commit 728b869563.

* Instances reduction

* Add comments in device_column_to_image_impl

* Update changelog and Copyrights

* Improve changelog
2023-09-27 17:19:06 +02:00
zjing14
11676c7e49 Add multiple A/B support (#906)
* add gridwise_multi_abd

* move element_op into RunRead

* merge element_wise op with data read

* add multiABD example

* allow packed elementwise_op

* changed example

* clean

* clean

* add is_detected

* fix

* minor fix

* add scaleAdd_vec4 example

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-09-26 21:16:23 -05:00
Bartlomiej Wroblewski
63cd459248 Fix DL GEMM instances with too large vector size (#901)
* Fix vector lengths of DL GEMM instances with padding
* Add checks for correctness of vector lengths in DL GEMM
2023-09-18 14:08:23 +02:00
Bartlomiej Kocot
bc2d0583d3 Stylistic improvements for grouped convolution code
Remove unnecessary ignoring

Update test/grouped_convnd_bwd_weight/test_grouped_convnd_bwd_weight.cpp
2023-09-15 20:03:47 +02:00
zjing14
f9d0eddb90 Add fp16/fp8 support into Grouped gemm FixedNK (#874)
* move all arguments into device

* add b2c_tile_map

* add examples

* add SetDeviceKernelArgs

* dedicated fixed_nk solution

* init client api

* add grouped_gemm_bias example

* add a instance

* add instances

* formatting

* fixed cmake

* Update EnableCompilerWarnings.cmake

* Update cmake-ck-dev.sh

* clean; fixed comments

* fixed comment

* add instances for fp32 output

* add instances for fp32 output

* add fp32 out client example

* fixed CI

* init commit for kbatch

* add splitk gridwise

* format

* fixed

* clean deviceop

* clean code

* finish splitk

* fixed instances

* change m_loops to tile_loops

* add setkbatch

* clean code

* add splitK+bias

* add instances

* opt mk_nk instances

* clean examples

* fixed CI

* remove zero

* finished non-zero

* clean

* clean code

* optimized global_barrier

* fixed ci

* fixed CI

* instance and client

* removed AddBias

* format

* fixed CI

* fixed CI

* move 20_grouped_gemm to 21_grouped_gemm

* clean

* formatting

* clean

* clean

* fixed computeType

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-09-14 21:04:10 -05:00
Bartłomiej Kocot
475188ca2e Add grouped conv bwd weight dl instances and new layout (#897)
* Add grouped conv bwd weight dl instances and new layout

* Add M and N padding

* Remove todo comment

* Enable grouped conv fwd dl k,c=1 generic instance

* Comment fixes
2023-09-13 10:14:31 -05:00
zjing14
a66d14edf2 fixed fp8 issues (#894)
* fixed fp8 init and reference gemm

* Update host_tensor_generator.hpp

* fixed convert

* fixed reference gemm

* fixed comments

* fixed comments

* fixed ci

* fixed computeType

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-09-12 22:17:56 -05:00
Bartlomiej Wroblewski
8f84a01237 Enable DPP8 GEMM on Navi3 (#892) 2023-09-08 11:14:57 -05:00
Haocong WANG
562b4cec48 [Navi3x] Add fp16/int8 wmma conv forward instances (#746)
* fix wmma gemm int8; add grouped conv int8 example

* Add int8 gemm-bilinear instances

* compile sanity check unknown

* Sanity pass + clang-format

* add int8 conv profiler instances

* solve merge conflict

---------

Co-authored-by: zjing14 <zhangjing14@gmail.com>
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2023-09-07 21:59:26 -05:00
Bartlomiej Wroblewski
37a8c1f756 Redesign the DPP8 GEMM kernel to use warp-wise component (#863)
* Redesign the DPP8 GEMM kernel to use warp-wise component

* Review: Improve error messages

* Review: Remove unnecessary empty lines

* Review: Fix M, N per thread names

* Review: Rename mfma_input_type to dpp_input_type

* Review: Fix tensor adaptor; remove unnecessary element

* Review: Remove calls to dpp_gemm's MakeCDescriptor

* Review: Add blockwise doc, change function names to include dimension names

* Review: Remove duplicated code; Move Block2CtileMap alias to the top of the file

* Review: Add __restrict__ keywords

* Review: Use MatrixPadder for padding A, B, C matrices

* Review: Remove hardcoded datatypes

* Review: Change names from FloatX to XDataType

* Review: Introduce AK0 and BK0 instead of a single K0

* Review: Remove construction of dpp_datatypes object

* Review: Rename DppInstrRunner to DppLanegroupGemm
2023-09-06 11:44:09 -05:00
zjing14
3786bfe1cc added padding of K into gemm_v2r3 (#887)
* added kpad support into v2r3

* add generic instances

* fixed comments

* fixed mnk padding

* Update device_batched_gemm_xdl.hpp

---------

Co-authored-by: Jing Zhang <jizha@amd.com>
2023-09-06 10:15:52 -05:00
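K padding here can be understood as rounding the reduction length up to a multiple of KPerBlock (a sketch of the arithmetic, assuming padded elements read back as zeros and therefore leave the GEMM result unchanged):

```cpp
// Round K up so the main loop always processes whole KPerBlock chunks.
constexpr long integer_divide_ceil(long a, long b) { return (a + b - 1) / b; }

constexpr long padded_k(long K, long KPerBlock)
{
    return integer_divide_ceil(K, KPerBlock) * KPerBlock;
}

static_assert(padded_k(100, 64) == 128, "100 pads up to two 64-element chunks");
```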
Illia Silin
7dcb14d9d4 fix syntax (#890) 2023-09-05 11:29:44 -07:00