* fix cppcheck errors, first pass
* fix format
* fix returned value in examples
* add macro definitions for cppcheck
* fix the profile_gemm logic
* update the gemm profiler logic
* add more definitions to cppcheck, fix a couple more errors
* replace runtime error with message in device function
* fix a couple of int4 issues
* no return for fill function
* fix errors in data_types.hpp
* fix format
* fix few remaining errors
* fix errors in data_types.hpp
* fix last couple of errors in data_types.hpp
* add cppcheck to the CK CI
* fix the path to CK source for cppcheck
* fix the path to CK source for cppcheck one more time
* fix the path to CK source for cppcheck third time
* change the path to ck_cppcheck.log
* install latest cppcheck from source
* fix bug in ck.hpp and use 20 threads for cppcheck
* create a switch to turn cppcheck on and off in CI
* SWDEV-439954 - Use a hard-coded filename rather than the macro __FILE__ for debug prints.
The hipTensor library uses header files from CK. Because those headers used the macro __FILE__, a hard-coded ROCm path was being embedded into the hipTensor library. Replace the macro with the filename.
* fix syntax
---------
Co-authored-by: illsilin <Illia.Silin@amd.com>
* doc reorg and edits
* Update wrapper.rst with changes from PR #1098
* Update docs/dockerhub.rst
Co-authored-by: Bartlomiej Wroblewski <bwroblewski10@gmail.com>
* Update docs/index.rst
Co-authored-by: Bartlomiej Wroblewski <bwroblewski10@gmail.com>
* Update docs/what-is-ck.rst
Co-authored-by: Bartlomiej Wroblewski <bwroblewski10@gmail.com>
* Update docs/what-is-ck.rst
Restored to 4 bullets, with additional text for wrapper.
Co-authored-by: Bartlomiej Wroblewski <bwroblewski10@gmail.com>
* Update docs/Contributors_Guide.rst
Co-authored-by: Lisa <lisajdelaney@gmail.com>
* Update API_Reference_Guide.rst
using sentence case for title
* updated index structure per Lisa
* separate docker hub and tutorial
---------
Co-authored-by: Bartlomiej Wroblewski <bwroblewski10@gmail.com>
Co-authored-by: Lisa <lisajdelaney@gmail.com>
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* add docker for rocm6.0.1 rc1
* modify the path to clang for test compilers in CI
* fix the hipcc/clang path for test compilers in CI
* fix the dockerfile for older rocm versions
* added working example for 5D input using 1D kernel
* example with 5D input tensor and 2d kernel - not working: issues with arguments
* added updated version of 3d device op - changed descriptors/dims
* added example file to check kernel
* fixed descriptor and isSupportedArgument stride problem
* added and modified kernel for 3d - updated tids/loop
* adding some more 5d example files
* fixed some issues
* changes made for testing
* working version: fixed error in stride for A, still a bit inefficient
* cleaned up formatting/comments
* updating formatting
* more formatting fixes
* fixing cmake, adding back gpu targets in cmake script
* adding client example
* added instances for client example
* fixed errors in client example
* implemented client ex with device_elementwise.hpp and device_elementwise_3d_impl.hpp
* removed extra files
* minor formatting and naming fixes
* adding test files and profiler
* fixing minor error
* minor fix
* removed unnecessary comments, renamed files
* updated instance list for client example, added different layout example
* removing instances
* fixed error in instance generation
* remove comments
* update profiler and client example tensor layouts
* fixed errors in test/profiler
* updated vector dim access to enable vector load
* updated test/profiler files
* updated example with 1d kernel
* updating profiler
* renamed files
* disabled device op for MI300
* skip elementwise_permute_2d on gfx94x
* Update CMakeLists.txt
* fixing CMake - disabling some GPU targets
* added transpose profiler to CMake
* fixed transpose profiler errors
* fixed instances for tests/profiler
* cleaned up code in transpose profiler source code
* added some comments, updated copyright
* made function arguments const where possible
---------
Co-authored-by: Jing Zhang <jizha@amd.com>
Co-authored-by: Jing Zhang <jizhan@amd.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
* enable compilation of INSTANCES_ONLY for Windows
* suppress ROCMChecks warnings on GoogleTests
* suppress -Wfloat-equal warning on GoogleTests
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* adding files for F32 example
* adding functioning implementation with scalar multiplication and unary operator support
* added fp16 type check in unary square
* updating scalar multiplication as an operator
* functioning version with scalar operator
* changing strides for col major
* updated column major implementation
* working column major implementation
* cleaned up comments, rearranged/renamed files
* small edits to 3d transpose profiler
* adding test/profiler/instance files for hipTensor permute unit test
* added more test instances
* cleaned up errors, randomized input tensor, added more instances
* turned off time printouts
* removed conflicting transpose profiler
* rearranged some files
* rename folder
* Add type string
* Remove typo
* Add deviceOp to backward x
* Add comment to describe the behavior of backward normalization
* Add kernel function, prepare to implement
* implement generic kernel
* Check vector size
* Add sweep once pipeline for small reduce size
* Fix bug of KRaw_ error
* Fix bug of dx stride
* sanity check for mean and rstd
* backward x for groupnorm
* Add bwd x instance
* add layernorm 2d bwd gamma beta instances
* Change save mean var type from f32 to f16 in f16 mode
* Change the example to f16
* Add groupnorm bwd gamma beta instance
* Add groupnorm bwd x instance
* Fix naming
* Add layernorm bwd x ckprofiler
* Add groupnorm bwd x profiler
* clang format
* Rename bwd x to bwd data
* Fix bug of verification in profiler
* Add test of layernorm and groupnorm bwd data
* Add missing cmake
* Add layernorm2d bwd data
* rename fwd example
* Add groupnorm client example
* Fix typo. replace Invarient with Invariant
* Add checking before running the best instance
This PR optimizes the fp16 instances of the direct-load GEMM kernel introduced in #999 and #1052.
We measured the performance of the new instances on a CDNA2 GPU and compared it against the best non-direct-load GEMM instances, using 76 different GEMM problems.
On average, this change improves the performance of the tested problems by 47%. For cases known to be latency-bound, the speedup is around 126%.
Copied from the llvm-project LLVM_PARALLEL_*_JOBS options.
Concurrent linking can break the build, as can having too many
compile jobs for the available memory. These options allow the user
to fine-tune the build to fit within their machine's memory
constraints.
An example use on Linux is:
COMPILE_JOBS=`cat /proc/cpuinfo | grep -m 1 'cpu cores' | awk '{ print $4 }'`
if [ "${COMPILE_JOBS}x" = "x" ]; then
  COMPILE_JOBS=1
fi
BUILD_MEM=4
MEM_KB=`cat /proc/meminfo | grep MemTotal | awk '{ print $2 }'`
MEM_MB=`expr ${MEM_KB} / 1024`
MEM_GB=`expr ${MEM_MB} / 1024`
COMPILE_JOBS_MEM=`expr 1 + ${MEM_GB} / ${BUILD_MEM}`
if [ "$COMPILE_JOBS_MEM" -lt "$COMPILE_JOBS" ]; then
  COMPILE_JOBS=$COMPILE_JOBS_MEM
fi
LINK_MEM=32
LINK_JOBS=`expr 1 + ${MEM_GB} / ${LINK_MEM}`
cmake -G Ninja -DCK_PARALLEL_LINK_JOBS=$LINK_JOBS \
      -DCK_PARALLEL_COMPILE_JOBS=$COMPILE_JOBS
Signed-off-by: Tom Rix <trix@redhat.com>
* disabling some fp8 gemm instances to reduce build time
* disable fp8 gemm instances to reduce build time
* remove the unused variable
* build fp8 gemm default and padded instances separately
* fix include paths
The current implementation of the IsSupported method in the contraction ops does not cover many of the cases in which ScalarPerVector cannot actually be used to read A, B, or D, or to write E.
This PR extends both the regular and multiABD contraction ops with improved checks, and also adds new instances with smaller ScalarPerVector values to cover problems that the existing instances do not support.