Chao Liu
7b1ec41e5b
refactor
2021-08-06 20:50:01 +00:00
Chao Liu
49c33aaea7
refactor
2021-08-06 19:59:53 +00:00
Chao Liu
54b3e73d17
rename
2021-08-06 18:07:15 +00:00
Chao Liu
f6edda6119
Merge pull request #3 from ROCmSoftwarePlatform/format
Update to clang-format-10
2021-07-30 17:16:23 -05:00
Chao Liu
82fae390fb
update to clang-format-10
2021-07-30 16:37:00 -05:00
Chao Liu
bd27ed6c38
Merge pull request #2 from asroy/master
Update readme
2021-07-28 09:43:56 -05:00
Chao Liu
85a1429301
Update README.md
2021-07-28 09:41:38 -05:00
Chao Liu
56f93c6f33
Update README.md
2021-07-28 09:40:44 -05:00
Chao Liu
594f1dbe96
Merge pull request #1 from ROCmSoftwarePlatform/some_fix_210727
fix building issue
2021-07-27 13:19:11 -05:00
Chao Liu
6a1bc5939c
fix building
2021-07-27 13:12:43 -05:00
Chao Liu
f63a23acb1
[MIOpen Downstream] Initial MIOpen integration (#52)
* update online kernel wrapper: bundle all descriptors in a tuple
* change __CONSTANT__ to CONSTANT
* rename
* adding tuning
* added IsValidCompileParameter
* reorganize
* adding tunable for fp16 and int8
* fix kernel compile warning and bug fixes
* suppress warning about cast CONSTANT (address space 4) pointer
* fix building issue
2021-07-27 00:02:27 -05:00
Chao Liu
1264925422
reorganize files to prepare for MIOpen integration (#51)
* change olc cmake
* adding online compile to fwd-v4r5r2
* update scripts
* rename fwd-v4r5r2 to fwd-v6r1
* clean up
2021-07-18 00:43:05 -05:00
zjing14
fbdf4332c7
Add xdlops v4r4r4 into online compilation (#48)
* init for v4r4 xdlops olc
* refactor wrap
* init impl of v4r4 nchw xdlops olc
* tuning
* test perf
* fixed v4r4 nhwc
* tuned v4r4 nhwc
* use gridwise_gemm_xdlops_v2r3
* swap a/b
* add pointer support into offline v2r3
* debugging v4r4r4 transform for olc
* change timer of olc
* refactor v4r4 xdlops nchw olc
* remove transform fun in v4r4 xdlops nhwc olc
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2021-07-16 23:27:08 -05:00
Chao Liu
0a72e4df94
Change initialization method of tensor for iGEMM (#49)
* change init method
2021-07-16 22:55:01 -05:00
Chao Liu
58a8057011
default iterator hack for blockwise copy (#47)
2021-07-16 08:57:15 -05:00
Chao Liu
1c1b56fe61
fix bug: config for ThreadwiseDynamicTensorSliceTransfer_v2 (#46)
2021-07-09 10:13:20 -05:00
Chao Liu
4682d070a6
Create README.md (#45)
* Create README.md
2021-07-08 13:32:29 -05:00
Chao Liu
aafb5eb187
Tweak (#44)
* tweak
2021-07-08 12:17:43 -05:00
Chao Liu
2f82cfb190
Update default launch bounds (#43)
* update default launch bounds
2021-07-08 11:26:57 -05:00
Chao Liu
81c942cd7e
Deprecate static kernel (#42)
* deprecate static kernels
2021-07-08 10:40:00 -05:00
Chao Liu
b8b2d0a6d1
DL GEMM fp32/fp16/int8 (#41)
* add threadwise copy that copies a tensor in one operation; added kpack to DL GEMM
* add kpack into fwd v4r5 nchw fp32
2021-07-04 22:50:29 -05:00
Chao Liu
11ec07e9d1
fix complaint about divide by zero (#40)
2021-07-01 16:50:57 -05:00
zjing14
3835318cc3
xdlops_v4r4_fwd fp32/fp16 (#34)
* create files for xdlops
* working on blockwise_gemm_xdlops
* add KReduction
* add m/n repeats
* add 2x2 pipeline
* added 128x128 wavegemm
* use StaticBuffer of vector_type
* break vector type to blk_size
* add kpack into xdlops_gemm and blockwise_gemm
* abroadcast only
* add fp32 mfma instructions
* adding fp16 mfma
* pack half4_t
* rename kperwave to kpack
* add 32x32x8fp16
* add fp16 mfma
* clean code
* clean code
* V4r4 xdlops kpack (#35)
* add kpack with incorrect results
* bug fix for make_dynamic_naive_tensor_descriptor_aligned_v2
* add 1x1 kernel
* add gridwise_gemm_v2 - single_buffer
* enabled dwordx4 for fp16
Co-authored-by: Chao Liu <chao.liu2@amd.com>
* refactor fwd-v4r4-xdlops
* add v4r4-nhwc-xdlops
* improve some perf of nhwc and nchw by tuning parameters, and change scheduling in gridwise-gemm loop
* tweak scheduling in gridwise gemm
* add v4r3 with a single output copy
* init commit: output with slice win
* adding sliceWin
* add multiple repeats pattern
* starting adding bwd-v4r1-xdlops
* use tuple as SrcBuffer
* adding bwd-data v4r1 nhwc xdlops
* fix bug in make_dynamic_naive_tensor_descriptor_aligned_v2()
* fix bug in host bwd-data conv
* initial implementation of bwd-data v4r1 nhwc xdlops
* add launch bound flags
* enable launch bound
* add m/nrepeat=4
* tweak bwd-data v4r1 nhwc xdlops
* added bwd-data v4r1 nhwc xlops with output A and weight B
* add fwd-v4r4 nhwc xdlops, A input, B weight, C output
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2021-07-01 14:33:00 -05:00
Qianfeng
1685048a67
Add online compilation for dynamic kernels (#37)
* Add online-compiling facility
* Synchronize from fwd-v4r5 and implement host interfaces to call conv-fwd v4r4/v4r5 using on-line compiling method
* Tiny adjustment to time reporting
* Use object assignment to replace explicit bytes copying in the first kernel of v4r4/v4r5
* Use single thread to assign descriptor object to device memory
* Adjust to the workload assignment of the two kernels of v4r4 (experimental)
* Revert "Adjust to the workload assignment of the two kernels of v4r4 (experimental)"
This reverts commit eb38461456bb0c82b6c0d32cdd616e181907e20c.
* Update to make constexpr for generating descriptor types in kernel 2 of dynamic conv-fwd v4r4
* Update to dynamic conv-fwd v4r4 online-compiling
* Update to dynamic conv-fwd v4r5 online-compiling (result not accurate)
* Tiny update to driver/CMakeLists.txt
* clang-format
* Tiny comments change
* Add env OLC_DUMP_SAVE_TMP_DIR to support saving of temporary dir
* Fwd v4r5 olc perf (#39)
* added hip-clang flags that fix perf issue of online compilation
* fix bug for olc fwd-v4r5-nchw
* Move constexpr and type reference statements out of the function body in conv-fwd v4r4/v4r5 kernel wrapper
* Remove printing in hip_build_utils.cpp
* Update to root CMakeLists.txt
* Revert "Move constexpr and type reference statements out of the function body in conv-fwd v4r4/v4r5 kernel wrapper"
This reverts commit 3d2c5d8ecdd8298b72d127110500ed5b38d9835c.
Co-authored-by: Chao Liu <chao.liu2@amd.com>
Co-authored-by: Chao Liu <lc.roy86@gmail.com>
Co-authored-by: root <root@dc-smc-18.amd.com>
2021-06-24 08:34:19 -05:00
Chao Liu
d2315b0dfc
pass-by-void-pointer for gridwise_dynamic_gemm_v1r2 (#38)
* pass-by-void-pointer for gridwise_dynamic_gemm_v1r2
* use pass-by-value by default
2021-06-19 13:43:45 -05:00
Chao Liu
30072aec37
Restructure gridwise and blockwise GEMM, add tensor contraction and FWD-v4r5 (#36)
* experimenting magic number division
* overhauling fwd-v4r4 to clearly reflect transformation graph
* added fwd-v4r5
* bug fix for make_dynamic_naive_tensor_descriptor_aligned_v2
* bug fix and added sanity-check in transform_dynamic_tensor_descriptor
* added conv_driver_v2
2021-06-09 23:53:08 -05:00
Chao Liu
71d6b19d18
reorganize some files (#33)
2021-05-12 14:15:38 -05:00
Chao Liu
78b987fbd6
Use DynamicBuffer instead of raw pointer (#32)
* Use DynamicBuffer to hold raw pointer (to global and LDS memory)
* add workaround for compiler issue (inefficient ISA) of ds_write for int8x4, int8x8, int8x16
2021-05-12 13:10:42 -05:00
Chao Liu
01055d95d9
No raw index calculation (#31)
* Replace most raw index calculation to coordinate transformation
* Overhaul blockwise and threadwise GEMM
* Overhaul driver for gridwise GEMM kernel
Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-05-11 00:09:25 -05:00
Chao Liu
d075adf126
Use Tuple and vector_type instead of Array for holding tensor data (#30)
* replacing array with tuple and vector for tensor data
2021-04-28 13:10:33 -05:00
Chao Liu
e4790c250c
Overhaul vector_type and use real vector for int8x4_t instead of aliasing from int32_t (#29)
* overhaul vector_type, make int8x4_t real vector instead of aliasing from int32_t
2021-04-12 23:48:43 -05:00
Chao Liu
3bf52e60c5
Initial implementation of magic number division and "Merge" transformation that uses it (#28)
* initial implementation for magic number division and DynamicMerge_v2_magic_division that uses it
* turn off DynamicMerge_v2_magic_division, which uses magic number division, by default
2021-04-12 21:32:55 -05:00
zjing14
792a20fa5b
Hybrid direct + implicit GEMM forward convolution NCHWc v5r1 (#25)
* Hybrid direct + implicit GEMM forward convolution NCHWc v5r1. Input tensor bypasses LDS. Supports fp32/fp16/int8
2021-04-07 16:47:29 -05:00
Chao Liu
d2217f3040
Fix performance issue when passing tensor descriptor from host to kernel by void pointers (#27)
* use address_space(4) in kernel signature to fix performance issue when passing tensor descriptor from host to kernel by (void) pointers
* remove passing by pointer* option (only use pass by value or void*)
2021-04-06 17:49:57 -05:00
zjing14
6a5ea49309
bug fix for buffer resource setting (#26)
2021-04-06 16:59:52 -05:00
Chao Liu
fcbb978828
Dynamic tensor descriptor (#24)
* support dynamic tensor descriptor
* use buffer load OOB feature for padding case
* add navi support
* add int8x4 inference kernel
Co-authored-by: Chao Liu <chao@ixt-rack-81.local.lan>
Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-03-25 13:51:11 -05:00
Chao Liu
bbcb67d0aa
Bwd Data NHWC (#22)
* fix buffer_store bug
* remove obsolete kernels
* add bwd-data-v5r1-nhwc
2020-08-06 12:22:11 -05:00
Chao Liu
ac62d13ecd
Improve buffer address for out-of-bound check (#21)
* Use buffer load built-in OOB check; buffer size is limited to 2 GB.
* buffer APIs use combined wave and thread offset
* use uint32_t for addr shift in buffer addressing
2020-07-29 18:04:09 -05:00
Chao Liu
5c7cec1115
Code clean up (#20)
* tuning parameters
* testing on v100
* add fp16
* remove deprecated tensor descriptor
* sync with miopen
* update build script
Co-authored-by: Jing Zhang <jizhan@amd.com>
2020-06-23 20:31:27 -05:00
Chao Liu
7d09790a0a
MIOpen integration (#15)
* renaming
2020-02-18 10:42:18 -06:00
Chao Liu
1a66e35b6f
MIOpen integration (#13)
* update for miopen integration: cosmetic refactor
2020-02-17 09:53:20 -06:00
Chao Liu
3406a1148a
Update for recent MIOpen integration (#11)
* update for MIOpen integration
2020-01-27 15:29:33 -06:00
Chao Liu
c5da0377fb
Added bwd data v3r1 v4r1, tweaking v1 (#10)
* Added bwd data v3r1: breaking down compute into a series of load balanced GEMM, and launch in a single kernel
* Added bwd data v4r1: like v3r1, but launch GEMMs in multiple kernels
* Tweaked v1r1 and v1r2 (atomic) on AMD GPU
2020-01-20 10:20:03 -06:00
Chao Liu
e2b4c5b469
update implicit GEMM forward v4r4 to use gridwise gemm (#9)
* updated fwd v4r4 to use gridwise gemm
* updated gridwise gemm api calls in bwd-data v1r1 and v2r1
2019-12-05 12:36:36 -06:00
Chao Liu
19a93dac05
fixed faulty padding API calls (#8)
2019-12-03 01:46:44 -06:00
Chao Liu
8f5f64960e
backward data (#7)
* enabled atomic add in tensor copy
* added gridwise GEMM
* added backward data conv using GEMM + atomic
* added backward data conv using GEMM, no atomic
2019-12-03 01:16:12 -06:00
Chao Liu
31ded4ac4b
remove dead file (#6)
2019-11-04 17:13:38 -06:00
Chao Liu
562e1e2767
MIOpen integration: recent bug fixes from MIOpen (#5)
2019-11-04 16:51:12 -06:00
Chao Liu
52c3fe05be
Refactor for MIOpen integration (#4)
Refactor so that multi-index transformation and padding support can be brought into MIOpen
2019-10-11 11:37:31 -05:00
Chao Liu
9aaeacc82b
Merge pull request #3 from asroy/clean_up
enable type conversion in ThreadwiseGenericTensorSliceCopy_v2r1 and BlockwiseGenericTensorSliceCopy_v2
2019-09-30 15:14:02 -05:00