zjing14
0a66c54e95
fixed multiple-definition issue of bfp16/fp32 conversion function when building ckProfiler (#51)
...
* fixed bfloat16 issues
* refactor type_convert
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2021-11-16 15:44:17 -06:00
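The multiple-definition error fixed above is the classic one-definition-rule pitfall: a conversion function defined (not just declared) in a header gets one external-linkage definition per translation unit, and the link of ckProfiler fails. Marking the definition `inline` resolves it. A minimal sketch with hypothetical function names (bf16 reinterprets the upper 16 bits of an IEEE-754 float32), not CK's actual code:

```cpp
#include <cstdint>
#include <cstring>

// Defined in a header: without `inline`, every TU that includes this header
// emits an external-linkage definition, and the linker reports
// "multiple definition of bfloat16_to_float".
inline float bfloat16_to_float(uint16_t x)
{
    // bf16 occupies the upper 16 bits of a float32
    uint32_t bits = static_cast<uint32_t>(x) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

inline uint16_t float_to_bfloat16(float f)
{
    // truncating conversion (round-toward-zero); production code may
    // prefer round-to-nearest-even
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return static_cast<uint16_t>(bits >> 16);
}
```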
Jing Zhang
89e1ebd4d5
updated bfloat16_to_float
2021-11-16 18:01:25 +00:00
zjing14
3737bb039a
Add bfp16/int8 support into XDL GEMM operator (#50)
...
* init StaticBufferV2
* clean
* adopt old output stage for staticBufferV2
* clean
* remove hack
* clean
* clean
* add parameters
* clean code
* move c_buffer alloc into blockwise gemm
* add adaptors for m/n_thread_data_on_grid
* tweak gemm
* adjust blockwise_gemm_xdlops
* tweak
* update conv
* update script
* adding bwd 1x1
* update script
* adding 1x1 bwd
* debugging bwd 1x1 failure
* update script
* update script
* test
* test v100
* add bf16_1k
* clang-format
* clean
* add bfp16 for gfx908
* add verification
* clean up
* clean code
* restore bf16
* clean
* add bfp16 support into gemm_driver
* apply new generator to other drivers
* add int8 support
* clean
* clean
* clean
* clean
Co-authored-by: Chao Liu <chao.liu2@amd.com>
Co-authored-by: Chao Liu <lc.roy86@gmail.com>
Co-authored-by: root <root@hayabusa6111.amd.com>
2021-11-15 10:24:39 -06:00
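For the int8 path added above, the defining property is int8 inputs with int32 accumulation, so products cannot overflow the accumulator. A host-side reference sketch of what such a GEMM computes (hypothetical helper, not CK's device code):

```cpp
#include <cstdint>
#include <vector>

// Host reference for an int8 GEMM: C[m][n] = sum_k A[m][k] * B[k][n],
// with A (MxK), B (KxN), C (MxN) all row-major and int32 accumulation.
std::vector<int32_t> gemm_int8_ref(const std::vector<int8_t>& a,
                                   const std::vector<int8_t>& b,
                                   int M, int N, int K)
{
    std::vector<int32_t> c(M * N, 0);
    for(int m = 0; m < M; ++m)
        for(int n = 0; n < N; ++n)
        {
            int32_t acc = 0; // accumulate in int32, not int8
            for(int k = 0; k < K; ++k)
                acc += int32_t(a[m * K + k]) * int32_t(b[k * N + n]);
            c[m * N + n] = acc;
        }
    return c;
}
```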
Chao Liu
b491ebf384
FP16 data in-register transpose (#41)
...
* start fixing 16bit data packing
* adding StaticTensor
* adding StaticTensor
* adding StaticTensor
* add missing constexpr
* adding static tensor
* adding static tensor
* adding transpose
* add inline asm for transpose 2x2 of half_t
* add general transpose_vectors(), but have unnecessary register initialization using v_mov
* fix unnecessary register initialization in transpose_vector by using more pass-by-reference
* add hardcoded logic for NHWC wrw
* improve asm for v_pack
* make ThreadwiseTensorSliceTransfer_v3r2 support any tensor
* tweak
* reorganize file
2021-11-15 10:05:58 -06:00
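The 2x2 half_t transpose above packs the low 16-bit halves of two registers into one word and the high halves into another, which gfx9's `v_pack` instructions do in a single operation each. A portable bit-twiddling sketch of the same shuffle on 32-bit words (each holding two packed 16-bit elements), offered only as an illustration of the data movement, not the actual inline asm:

```cpp
#include <cstdint>
#include <utility>

// Each 32-bit word holds two packed 16-bit elements: [hi | lo].
// Given rows a = [a1|a0] and b = [b1|b0], the transpose produces
// [b0|a0] and [b1|a1] -- the column-major view of the 2x2 tile.
inline std::pair<uint32_t, uint32_t> transpose2x2(uint32_t a, uint32_t b)
{
    uint32_t lo = (a & 0x0000FFFFu) | (b << 16);  // pack low halves
    uint32_t hi = (a >> 16) | (b & 0xFFFF0000u);  // pack high halves
    return {lo, hi};
}
```

Doing this in-register avoids the scratch/LDS round trip a byte-wise transpose would need, which is the point of the inline-asm version.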
Chao Liu
e823d518cb
ckProfiler and device-level XDL GEMM operator (#48)
...
* add DeviceGemmXdl
* update script
* fix naming issue
* fix comment
* output HostTensorDescriptor
* rename
* padded GEMM for fwd v4r4r4 nhwc
* refactor
* refactor
* refactor
* adding ckProfiler
* adding ckProfiler
* refactor
* fix tuning parameter bug
* add more gemm instances
* add more fp16 GEMM instances
* fix profiler driver
* fix bug in tuning parameter
* add fp32 gemm instances
* small fix
* refactor
* rename
* refactor gemm profiler; adding DeviceConv and conv profiler
* refactor
* fix
* add conv profiler
* refactor
* adding more GEMM and Conv instance
* Create README.md
Add build instruction for ckProfiler
* Create README.md
Add Readme for gemm_xdl example
* Update README.md
Remove build instruction from top most folder
* Update README.md
* clean up
2021-11-14 11:28:32 -06:00
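The profiler pattern introduced above amounts to: register many tuned device-op instances for one problem, run each, and report the fastest. A hypothetical host-side sketch of that loop (names and timing method are illustrative assumptions, not ckProfiler's actual interfaces):

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <vector>

// Hypothetical stand-in for a device-op instance: a name plus a callable
// that launches the kernel for a fixed problem size.
struct GemmInstance
{
    std::string name;
    std::function<void()> run;
};

// Run every instance and return the name of the fastest one.
inline std::string profile_best(const std::vector<GemmInstance>& instances)
{
    std::string best;
    double best_ms = 1e30;
    for(const auto& inst : instances)
    {
        auto t0 = std::chrono::steady_clock::now();
        inst.run();
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        if(ms < best_ms)
        {
            best_ms = ms;
            best    = inst.name;
        }
    }
    return best;
}
```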
ltqin
fd49ff8080
add nchw atomic, nhwc and nhwc atomic methods for backward weight (#30)
...
* add new algorithm from v4r4r2
* fix program-once issue
* add split-k function
* redefine code
* add a matrix unmerge
* add b matrix unmerge k0
* pass a and b to gridwise gemm
* nhwc init
* no hacks and vector load
* add hacks
* modify some parameters
* fix tuning parameters for fp32
* fix tuning parameters for fp16
* start change gridwise k split
* init ok
* remove a/b matrix k0mk1 desc in grid
* rewrite gridsize calculation
* add kbatch to CalculateBottomIndex
* remove some unused functions
* add clear-data function before calling kernel
* out hacks
* in hacks
* rename device convolution file and function name
* modify kBatch value
* fix some tuning code
* start from v4r4 nhwc
* nhwc atomic is able to run
* just for fp32
* enable nchw atomic
* tweak
* tweak
* re-arrange gridwise gemm hot loop for wrw
* add wrw v4r5
* v4r4r5 fp16
* v4r4r4 fp16
* v4r4r2 fp16
* V4R4R4XDLNHWC fp16
* V4R4R2XDLATOMICNCHW fp16
* adjust for fp16
* input gridsize
* change kbatch to gridsize
* testing wrw
* clean up
* k_batch to gridsize
* fix bug
* wrw v4r4r4 kbatch change to grid size
* wrw v4r4r2 kbatch change to grid size
* after merge, change gridwise gemm v2r4
* change MakeCBlockClusterAdaptor
* other method use new gridwise gemm
* clean up
* change pad method to make_right_pad_transform
* kbatch out from transform function
* clean up and fix bug
* fix bug
* use function type to reduce template parameters
* use auto to replace defined function type
* clean up
Co-authored-by: ltqin <letaoqin@amd.com>
Co-authored-by: Chao Liu <chao.liu2@amd.com>
Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-10-19 18:42:34 -05:00
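The split-K ("atomic") backward-weight variants above partition the K dimension into kBatch slices, have each slice compute a partial GEMM, and combine the partials into C with atomic adds, which is also why a clear-data step is needed before the kernel runs. A serial host sketch of the decomposition (illustrative, not the device kernel):

```cpp
#include <vector>

// Split-K reference: each of the kBatch slices accumulates its K range into C.
// On the GPU the `+=` is an atomic add, and C must be zeroed beforehand.
std::vector<float> gemm_splitk_ref(const std::vector<float>& a, // MxK row-major
                                   const std::vector<float>& b, // KxN row-major
                                   int M, int N, int K, int kBatch)
{
    std::vector<float> c(M * N, 0.0f); // cleared before accumulation
    int kPerBatch = (K + kBatch - 1) / kBatch;
    for(int kb = 0; kb < kBatch; ++kb)
    {
        int k0 = kb * kPerBatch;
        int k1 = (k0 + kPerBatch < K) ? k0 + kPerBatch : K;
        for(int m = 0; m < M; ++m)
            for(int n = 0; n < N; ++n)
                for(int k = k0; k < k1; ++k)
                    c[m * N + n] += a[m * K + k] * b[k * N + n]; // atomic on GPU
    }
    return c;
}
```

The result is independent of kBatch up to floating-point summation order, which is why the atomic variants need verification against a single-pass reference.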
Chao Liu
b3e8d57d51
Tweak GEMM kernel (#38)
...
* add parameters
* tweak gemm
* tweak
* update conv
* update script
* adding bwd 1x1
* update script
* adding 1x1 bwd
* debugging bwd 1x1 failure
* update script
* update script
* test
* test v100
* clean up
2021-10-06 11:12:36 -05:00
Chao Liu
19613902b5
GEMM driver and kernel (#29)
...
* add gemm driver
* tweak
* add gemm kernel: mk_kn_mn and km_kn_mn
* tweak
* add GEMM km_nk_mn
* fix comment
2021-09-05 12:41:28 -05:00
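The layout names above encode the storage of A, B, and C: mk_kn_mn means A is MxK, B is KxN, C is MxN (all row-major), while km_kn_mn stores A transposed as KxM. A host sketch of the two A layouts (reference math only, not the kernel):

```cpp
#include <vector>

// "mk_kn_mn": A is M-by-K, B is K-by-N, C is M-by-N, all row-major.
std::vector<float> gemm_mk_kn_mn(const std::vector<float>& a,
                                 const std::vector<float>& b,
                                 int M, int N, int K)
{
    std::vector<float> c(M * N, 0.0f);
    for(int m = 0; m < M; ++m)
        for(int n = 0; n < N; ++n)
            for(int k = 0; k < K; ++k)
                c[m * N + n] += a[m * K + k] * b[k * N + n];
    return c;
}

// "km_kn_mn": A is stored K-by-M (transposed), so A(m,k) lives at a[k*M + m].
std::vector<float> gemm_km_kn_mn(const std::vector<float>& a,
                                 const std::vector<float>& b,
                                 int M, int N, int K)
{
    std::vector<float> c(M * N, 0.0f);
    for(int m = 0; m < M; ++m)
        for(int n = 0; n < N; ++n)
            for(int k = 0; k < K; ++k)
                c[m * N + n] += a[k * M + m] * b[k * N + n];
    return c;
}
```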
ltqin
627d8ef35a
Backward weight v4r4r2 with xdlops (#18)
...
* start
* modify transform
* modify device convolution
* modify host
* added host conv bwd and wrw
* remove bwd, separate wrw
* clean
* hack k to zero
* out log
* fixed
* fixed
* change to (out in wei)
* input hack
* hack to out
* format
* fix by comments
* change wei hacks (wei transform has not been merged)
* fix program once issue
* fix review comment
* fix vector load issue
* tweak
Co-authored-by: ltqin <letaoqin@amd.com>
Co-authored-by: Jing Zhang <jizhan@amd.com>
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2021-08-30 22:49:17 -05:00
zjing14
ba6f79a75e
Added host_conv_wrw for verification (#15)
...
* added host conv wrw
2021-08-19 01:00:41 -05:00
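A host-side wrw (weight-gradient) reference like the one added above correlates the output gradient with the input: dW[k][c][y][x] = sum over n, ho, wo of dOut[n][k][ho][wo] * In[n][c][ho+y][wo+x]. A minimal NCHW sketch assuming stride 1 and no padding or dilation (the commit's actual signature is not shown here):

```cpp
#include <vector>

// Naive convolution backward-weight reference, NCHW, stride 1, no pad:
//   dW[k][c][y][x] = sum_{n,ho,wo} dOut[n][k][ho][wo] * In[n][c][ho+y][wo+x]
std::vector<float> host_conv_wrw(const std::vector<float>& in,   // N,C,Hi,Wi
                                 const std::vector<float>& dout, // N,K,Ho,Wo
                                 int N, int C, int Hi, int Wi,
                                 int K, int Y, int X)
{
    int Ho = Hi - Y + 1, Wo = Wi - X + 1;
    std::vector<float> dw(K * C * Y * X, 0.0f);
    for(int k = 0; k < K; ++k)
        for(int c = 0; c < C; ++c)
            for(int y = 0; y < Y; ++y)
                for(int x = 0; x < X; ++x)
                {
                    float acc = 0.0f;
                    for(int n = 0; n < N; ++n)
                        for(int ho = 0; ho < Ho; ++ho)
                            for(int wo = 0; wo < Wo; ++wo)
                                acc += dout[((n * K + k) * Ho + ho) * Wo + wo] *
                                       in[((n * C + c) * Hi + ho + y) * Wi + wo + x];
                    dw[((k * C + c) * Y + y) * X + x] = acc;
                }
    return dw;
}
```

Slow but obviously correct, which is the point of a verification reference.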
Chao Liu
c03045ce2d
rename
2021-08-10 23:45:36 +00:00
Chao Liu
2c48039d0e
fix kernel filename
2021-08-10 22:15:23 +00:00
Chao Liu
4f566c6221
vector/scalar pointer cast: use c-style pointer cast instead of reinterpret_cast
2021-08-10 05:55:20 +00:00
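The cast change above concerns reinterpreting a scalar pointer as a vector-type pointer. The commit does not state why reinterpret_cast was avoided; the sketch below only illustrates the resulting c-style syntax, using the GCC/Clang `vector_size` attribute as a stand-in for CK's vector types:

```cpp
// Illustrative only: float4_t here is a generic 4-float vector type, not
// necessarily the type CK uses.
using float4_t = float __attribute__((vector_size(16)));

inline float sum4(const float* p)
{
    // c-style pointer cast from scalar to vector pointer, as in the commit
    // title; p must satisfy float4_t's alignment requirement
    float4_t v = *(const float4_t*)p;
    return v[0] + v[1] + v[2] + v[3];
}
```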
Chao Liu
76f3131939
tidy
2021-08-09 18:49:59 -05:00
Chao Liu
d18428901e
tidy
2021-08-09 18:20:02 -05:00
Chao Liu
f885c131d8
tidy
2021-08-09 22:13:47 +00:00
Chao Liu
56fc0842b3
tidy
2021-08-09 19:27:49 +00:00
Chao Liu
82fae390fb
update to clang-format-10
2021-07-30 16:37:00 -05:00
Chao Liu
1264925422
reorganize files to prepare for MIOpen integration (#51)
...
* change olc cmake
* adding online compile to fwd-v4r5r2
* update scripts
* rename fwd-v4r5r2 to fwd-v6r1
* clean up
2021-07-18 00:43:05 -05:00