* parse examples inside the add_example_executable function
* fix the example 64 cmake file
* add xdl flag to the gemm_bias_softmax_gemm_permute example
* add filtering of tests based on architecture type
* enable test_grouped_gemm for gfx9 only
* enable test_transpose only for gfx9
* only link test_transpose if it gets built
* split the gemm instances by architectures
* split gemm_bilinear,grouped_conv_bwd_weight instances by targets
* split instances by architecture
* split grouped_conv instances by architecture
* fix clang format
* fix the if-else logic in group_conv headers
* small fix for grouped convolution instances
* fix the grouped conv bwd weight dl instances
* fix client examples
* only enable client examples 3 and 4 on gfx9
* set the gfx9 macro
* make sure the architecture macros are set by cmake
* use separate set of xdl/wmma flags for host code
* simplify the main cmake file
* add conv_fwd_bf8 instance declaration
* split up splitk-gemm instances
* clean up some unused variables
* split the mk_kn_mn interwave splitk-gemm instances
* split up f16_f16_f16 mk_nk_mn splitk gemm instances
* fix clang format
* fix function names
* fix typo
* split up the 2 largest fp16*fp8 splitk gemm instances
* get rid of unused variables
* split up the largest splitk-gemm fp8*fp16 instance file
* split up the instances for xdl fp8 gemms
* split the f16 and i8 headers for wmma convolution instances
* fix cppcheck errors, first pass
* fix format
* fix returned value in examples
* add macro definitions for cppcheck
* fix the profile_gemm logic
* update the gemm profiler logic
* add more definitions to cppcheck, fix a couple more errors
* replace runtime error with message in device function
* fix a couple of int4 issues
* no return for fill function
* fix errors in data_types.hpp
* fix format
* fix a few remaining errors
* fix errors in data_types.hpp
* fix last couple of errors in data_types.hpp
* added working example for 5D input using 1D kernel
* example with 5D input tensor and 2d kernel - not working: issues with arguments
* added updated version of 3d device op - changed descriptors/dims
* added example file to check kernel
* fixed descriptor and isSupportedArgument stride problem
* added and modified kernel for 3d - updated tids/loop
* adding some more 5d example files
* fixed some issues
* changes made for testing
* working version: fixed error in stride for A, still a bit inefficient
* cleaned up formatting/comments
* updating formatting
* more formatting fixes
* fixing cmake, adding back gpu targets in cmake script
* adding client example
* added instances for client example
* fixed errors in client example
* implemented client ex with device_elementwise.hpp and device_elementwise_3d_impl.hpp
* removed extra files
* minor formatting and naming fixes
* adding test files and profiler
* fixing minor error
* minor fix
* removed unnecessary comments, renamed files
* updated instance list for client example, added different layout example
* removing instances
* fixed error in instance generation
* remove comments
* update profiler and client example tensor layouts
* fixed errors in test/profiler
* updated vector dim access to enable vector load
* updated test/profiler files
* updated example with 1d kernel
* updating profiler
* renamed files
* disabled device op for MI300
* skip elementwise_permute_2d on gfx94x
* Update CMakeLists.txt
* fixing CMake - disabling some GPU targets
* added transpose profiler to CMake
* fixed transpose profiler errors
* fixed instances for tests/profiler
* cleaned up code in transpose profiler source code
* added some comments, updated copyright
* made function arguments const where possible
---------
Co-authored-by: Jing Zhang <jizha@amd.com>
Co-authored-by: Jing Zhang <jizhan@amd.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
* enable compilation of INSTANCES_ONLY for Windows
* suppress ROCMChecks warnings on GoogleTests
* suppress -Wfloat-equal warning on GoogleTests
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* adding files for F32 example
* adding functioning implementation with scalar multiplication and unary operator support
* added fp 16 type check in unary square
* updating scalar multiplication as an operator
* functioning version with scalar operator
* changing strides for col major
* updated column major implementation
* working column major implementation
* cleaned up comments, rearranged/renamed files
* small edits to 3d transpose profiler
* adding test/profiler/instance files for hipTensor permute unit test
* added more test instances
* cleaned up errors, randomized input tensor, added more instances
* turned off time printouts
* removed conflicting transpose profiler
* rearranged some files
* rename folder
* Add type string
* Remove typo
* Add deviceOp to backward x
* Add comment to describe the behavior of backward normalization
* Add kernel function, prepare to implement
* implement generic kernel
* Check vector size
* Add sweep once pipeline for small reduce size
* Fix bug of KRaw_ error
* Fix bug of dx stride
* sanity check for mean and rstd
* backward x for groupnorm
* Add bwd x instance
* add layernorm 2d bwd gamma beta instances
* Change saved mean/var type from f32 to f16 in f16 mode
* Change the example to f16
* Add groupnorm bwd gamma beta instance
* Add groupnorm bwd x instance
* Fix naming
* Add layernorm bwd x ckprofiler
* Add groupnorm bwd x profiler
* clang format
* Rename bwd x to bwd data
* Fix bug of verification in profiler
* Add test of layernorm and groupnorm bwd data
* Add missing cmake
* Add layernorm2d bwd data
* rename fwd example
* Add groupnorm client example
* Fix typo: replace Invarient with Invariant
* Add checking before running the best instance
This PR optimizes the fp16 instances of the direct-load GEMM kernel introduced in #999 and #1052.
The performance of the new instances was measured on a CDNA2 GPU and compared against the performance of the best non-direct-load GEMM instances, using 76 different GEMM problems.
On average, this change improves the performance of the tested problems by 47%; for cases known to be latency-bound, the speedup is around 126%.
* disabling some fp8 gemm instances to reduce build time
* disable fp8 gemm instances to reduce build time
* remove the unused variable
* build fp8 gemm default and padded instances separately
* fix include paths
The current implementation of the IsSupported method in the contraction ops misses many cases in which ScalarPerVector cannot actually be used to read A, B, or D, or to write E.
This PR extends both the regular and multiABD contraction ops with improved checks, and adds new instances with smaller ScalarPerVector values to cover problems that the existing instances cannot handle.
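A minimal sketch of the kind of per-tensor check this implies, assuming a simple stride/length representation (the function name and signature are illustrative, not CK's actual interface):

```cpp
#include <cstdint>

// A vectorized access of ScalarPerVector elements along a dimension is only
// valid if that dimension is contiguous (stride 1) and its length is divisible
// by the vector width; ScalarPerVector == 1 is always safe.
bool CanApplyScalarPerVector(std::int64_t length, std::int64_t stride,
                             std::int64_t scalar_per_vector)
{
    if(scalar_per_vector == 1)
        return true;
    return stride == 1 && length % scalar_per_vector == 0;
}
```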
This PR introduces support for double buffering in LDS in the GEMM kernels that use direct-load instructions.
Direct loads now use inline asm instead of intrinsics: with intrinsics, the compiler adds extra waitcnt instructions, which breaks the possible load/compute overlap in the double-buffering case.
Inline asm in turn requires sched_barrier to ensure that the compiler cannot incorrectly reschedule instructions, since it does not know the data dependencies between the global->LDS and LDS->registers transfers.
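As an illustration only, a conceptual sketch of such a ping-pong pipeline, with hypothetical copy/compute functors and heavily simplified synchronization (the real kernel uses finer-grained waitcnt values so that math on one LDS buffer overlaps the loads into the other):

```cpp
#include <hip/hip_runtime.h>

template <typename DirectCopy, typename Compute>
__device__ void gemm_main_loop(int num_tiles, DirectCopy copy_tile, Compute compute_tile)
{
    int cur = 0;
    copy_tile(/*tile=*/0, /*lds_buf=*/cur); // prefetch tile 0 straight into LDS

    for(int tile = 0; tile < num_tiles; ++tile)
    {
        const int nxt = cur ^ 1;

        __builtin_amdgcn_s_waitcnt(0); // make sure tile data has landed in LDS
        __syncthreads();

        if(tile + 1 < num_tiles)
            copy_tile(tile + 1, nxt); // direct loads for the next tile,
                                      // overlapping with the math below

        // The inline-asm loads carry no data dependencies the compiler can
        // see, so pin the instruction order explicitly.
        __builtin_amdgcn_sched_barrier(0);

        compute_tile(cur); // LDS -> registers + MFMA on the current tile
        cur = nxt;
    }
}
```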
* split the static library into several
* update lib paths and fix client example
* do not use device_mha_operations for client examples
* use appropriate libs to link to client examples
* remove the gpu/transpose path from the list
* try fixing client examples 3, 4, 9
* add necessary libs for client examples
* fix the layernorm client example
* fix the client examples 23 and 24
* fix typo
* add interface library and refresh clang format
* Add basic support for direct loads from global to LDS
* Clean the code and comments
* Add support for fp16
* Add comments
* Add check for thread cluster lengths
* Align non-direct-load fp16 example
* Small fixes
* Extend IsSupported to check for supported GPU gens
* Build examples only on the supported HW
* Do not throw when instance not supported in 04 example
* Review: Apply review suggestions
* Review: small fix
* Review: small fix
* Introduce multiABD api and deprecate multiD
* Replace multiD with multiABD
* Mark structures as deprecated
* Change doxygen 'deprecated' to 'note' to avoid warnings
* Support multi AB for grouped conv fwd xdl
* Add instances
* Add client example
* Add example
* Add interface test
* Minor fixes
* Comment fixes
* Fixes
* Reference fix
* Test xdl fixes
* Improve multi_ab interface test
---------
Co-authored-by: Jing Zhang <jizha@amd.com>
* Rename folder
* Add layernorm 4d fwd example
* Rename original layernorm example
* Add layernorm 4d f16 test
* Add layernorm4d_fwd client example
* Support layernorm4D in ckProfiler
* Rename groupnorm to groupnorm fwd in example
* Rename layernorm and groupnorm fwd in test
* Rename normalization to normalization_fwd (instances)
* Add fwd to DeviceNormalization
* Rename external api header
* Rename folder, because we can also add bwd in this folder
* Add fwd in layernorm and groupnorm (profiler)
* Fix compile error
---------
Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
* add compute type check for fp16 in forward convolution instances
* Add compute type check for default compute types
---------
Co-authored-by: Bartlomiej Kocot <barkocot@amd.com>
* Add support for mixed precision in contraction scale and bilinear (#936)
* Extract common functionality to separate files
* Reference contraction: Remove incorrect consts from type_converts
* Reference contraction: Add missing type_convert for dst value
* Reference contraction: Fix incorrect order of B matrix dimensions
* Add support for mixed precision in contraction scale and bilinear
* Move using statements from instances to a common file
* Move using statements from examples to a common file
* Fix the order of B matrix dimensions across examples and profiler
* Fix the computation of error threshold
* Make ComputeDataType an optional argument
* Include possible DataType -> ComputeDataType casting error in the threshold
* Remove commented code
* Make the ComputeDataType an optional argument in instance
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* Fix instances dtype check
* Fix source dtypes selector for examples and tests
* Sync with new cmakefile changes
* Remove not needed ifdefs
* Remove not needed ifdefs
* reinterpret_cast to const char* in dumpBufferToFile to be compatible with both const and non-const input pointers (sketched after this list)
* Add seed input to GeneratorTensor_4 for normal_distribution generator (sketched after this list)
* Add GetTypeString() for DeviceElementwiseImpl
* Add HIP_CHECK_ERROR macro (sketched after this list)
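For the dumpBufferToFile bullet above, a hedged sketch of the cast in context; the signature here is an assumption:

```cpp
#include <cstddef>
#include <fstream>

// Assumed signature. Taking const void* accepts both const and non-const
// buffers; the reinterpret_cast to const char* then satisfies ofstream::write.
void dumpBufferToFile(const char* fileName, const void* data, std::size_t nbytes)
{
    std::ofstream file(fileName, std::ios::binary);
    file.write(reinterpret_cast<const char*>(data),
               static_cast<std::streamsize>(nbytes));
}
```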
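For the GeneratorTensor_4 bullet, the seed plumbing plausibly looks like this (a hypothetical sketch; the real CK interface may differ):

```cpp
#include <random>

template <typename T>
struct GeneratorTensor_4
{
    std::mt19937 engine;                  // seeded once, so runs are reproducible
    std::normal_distribution<float> dist;

    GeneratorTensor_4(float mean, float stddev, unsigned seed)
        : engine(seed), dist(mean, stddev)
    {
    }

    template <typename... Is>
    T operator()(Is...) // index arguments ignored; values are i.i.d. normal
    {
        return static_cast<T>(dist(engine));
    }
};
```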
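And for HIP_CHECK_ERROR, a typical shape for such a macro, shown as an assumption rather than the exact definition added here:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

// Evaluate a HIP call once; print a readable diagnostic and abort on failure.
#define HIP_CHECK_ERROR(call)                                            \
    do                                                                   \
    {                                                                    \
        const hipError_t err_ = (call);                                  \
        if(err_ != hipSuccess)                                           \
        {                                                                \
            std::fprintf(stderr, "HIP error '%s' at %s:%d\n",            \
                         hipGetErrorString(err_), __FILE__, __LINE__);   \
            std::exit(EXIT_FAILURE);                                     \
        }                                                                \
    } while(0)

// Usage: HIP_CHECK_ERROR(hipMalloc(&ptr, nbytes));
```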