Chao Liu
41cdd3801a
GEMM/Conv+BiasAdd+ReLU+Add (#55)
* gemm+activation
* move C pointwise operation into threadwise copy
* add pointwise operation to A/B matrix
* update ckProfiler
* adding bias add
* adding bias add
* adding bias add
* added bias add; worked around compiler issues
* clean up
* clean up
* Update README.md
* Update README.md
* Update README.md
* clean up
* add conv_xdl example
* adding conv_xdl_bias_relu_add example
* add conv+bias+relu+add, but has register spill issue
* tweak
* tweak
* refactor
* Update README.md
update readme for example/2_gemm_xdl_bias_relu_add
* clean up
* Update README.md
update readme for example/3_conv_xdl
* Update README.md
2021-12-02 20:07:37 -06:00
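The fused epilogue added in this PR applies bias add, ReLU, and a residual add to each output element during the threadwise copy out of registers. A minimal scalar sketch of that pointwise op (the name `bias_relu_add` is illustrative, not the kernel's actual API):

```cpp
#include <algorithm>

// Per-element epilogue for the fused GEMM/Conv:
// out = ReLU(gemm_acc + bias) + residual
inline float bias_relu_add(float acc, float bias, float residual)
{
    return std::max(acc + bias, 0.0f) + residual;
}
```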
zjing14
970fa3e92e
v5r1 fusion kernels for inference (#49)
* init
* refactor for 1x1
* rename e0_e1
* add e1 with bugs
* debug
* fixed
* fixed e1
* add timer
* improve threadwise gemm with dot2
* add e2
* tuning
* separate c2
* add nhwc
* restore nchwc
* clean
* opt
* fixed; tuning
* add BGlobalMoveSliceWindowStepHacks{}
* tuning
* repeat running
* adjust
* merge v5r1 nchwc
* add adaptors
* split k0 k1 in c_thread_grid
* split h and w
* remove v5r1 nhwc
* clean for pr
* remove host_conv_add
* clean code
* clean
* add dynamic support
* static mode
* test static
* add conv+add fusion
* fixed validation
* naming fix
* use activ_enum
* make static
* refactor conv_add for InMem::add
* add bias
* add conv_out
* add configurable makeddesc
* add maxpool fusion
* add maxpool host for validation
* enable static desc
* conv-only use v5r1_add
* test
* test
* for binary dumps
* fixed incorrect results due to typo
* clean
* debugging maxpool
* workaround with offset trick
* clean code
* modularize ops of fusion
* add gridwise_gemm_v3
* create separate fusion function
* enable dynamic mode of conv and conv+resize_add
* add dynamic mode of maxpool
* add pass by pointer
* add activ_type as arguments
* merge develop
* clean
* reset config to old default
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2021-11-18 08:34:07 -06:00
zjing14
0a66c54e95
fixed multiple-definition issue of bfp16/fp32 conversion functions when building ckProfiler (#51)
* fixed bfloat16 issues
* refactor type_convert
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2021-11-16 15:44:17 -06:00
Jing Zhang
89e1ebd4d5
updated bfloat16_to_float
2021-11-16 18:01:25 +00:00
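The `bfloat16_to_float` conversion touched by these commits can be sketched as below: bfloat16 is simply the top 16 bits of an IEEE-754 float, so conversion is a shift plus bit copy. Marking the helpers `inline` is one common fix for the multiple-definition issue when a header is included from several translation units. Names and the truncating (non-rounding) float-to-bf16 direction are assumptions, not the library's exact implementation.

```cpp
#include <cstdint>
#include <cstring>

// bf16 -> f32: widen the 16 stored bits into the high half of a float.
inline float bfloat16_to_float(uint16_t x)
{
    uint32_t bits = uint32_t(x) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

// f32 -> bf16 by truncation (round-to-nearest-even is also common).
inline uint16_t float_to_bfloat16(float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return uint16_t(bits >> 16);
}
```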
zjing14
3737bb039a
Add bfp16/int8 support into XDL GEMM operator (#50)
* init StaticBufferV2
* clean
* adopt old output stage for staticBufferV2
* clean
* remove hack
* clean
* clean
* add parameters
* clean code
* move c_buffer alloc into blockwise gemm
* add adaptors for m/n_thread_data_on_grid
* tweak gemm
* adjust blockwise_gemm_xdlops
* tweak
* update conv
* update script
* adding bwd 1x1
* update script
* adding 1x1 bwd
* debugging bwd 1x1 failure
* update script
* update script
* test
* test v100
* add bf16_1k
* clang-format
* clean
* add bfp16 for gfx908
* add verification
* clean up
* clean code
* restore bf16
* clean
* add bfp16 support into gemm_driver
* apply new generator to other drivers
* add int8 support
* clean
* clean
* clean
* clean
Co-authored-by: Chao Liu <chao.liu2@amd.com>
Co-authored-by: Chao Liu <lc.roy86@gmail.com>
Co-authored-by: root <root@hayabusa6111.amd.com>
2021-11-15 10:24:39 -06:00
Chao Liu
b491ebf384
FP16 data in-register transpose (#41)
* start fixing 16bit data packing
* adding StaticTensor
* adding StaticTensor
* adding StaticTensor
* add missing constexpr
* adding static tensor
* adding static tensor
* adding transpose
* add inline asm for transpose 2x2 of half_t
* add general transpose_vectors(), but have unnecessary register initialization using v_mov
* fix unnecessary register initialization in transpose_vector by using more pass-by-reference
* add hardcoded logic for NHWC wrw
* improve asm for v_pack
* make ThreadwiseTensorSliceTransfer_v3r2 support any tensor
* tweak
* reorganize file
2021-11-15 10:05:58 -06:00
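The 2x2 half_t transpose that the inline asm implements with `v_pack` instructions can be sketched in portable scalar code: each 32-bit register packs two fp16 values, and the transpose swaps the cross terms of two such registers. A hypothetical sketch of the bit manipulation, not the actual `transpose_vectors()` implementation:

```cpp
#include <cstdint>

// Each 32-bit word packs two 16-bit halves: lo | (hi << 16).
// Transpose {x=(x0,x1), y=(y0,y1)} into {(x0,y0), (x1,y1)},
// the scalar equivalent of the v_pack-based 2x2 transpose.
inline void transpose2x2(uint32_t x, uint32_t y, uint32_t& r0, uint32_t& r1)
{
    r0 = (x & 0xFFFFu) | (y << 16);     // (x0, y0)
    r1 = (x >> 16) | (y & 0xFFFF0000u); // (x1, y1)
}
```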
Chao Liu
e823d518cb
ckProfiler and device-level XDL GEMM operator (#48)
* add DeviceGemmXdl
* update script
* fix naming issue
* fix comment
* output HostTensorDescriptor
* rename
* padded GEMM for fwd v4r4r4 nhwc
* refactor
* refactor
* refactor
* adding ckProfiler
* adding ckProfiler
* refactor
* fix tuning parameter bug
* add more gemm instances
* add more fp16 GEMM instances
* fix profiler driver
* fix bug in tuning parameter
* add fp32 gemm instances
* small fix
* refactor
* rename
* refactor gemm profiler; adding DeviceConv and conv profiler
* refactor
* fix
* add conv profiler
* refactor
* adding more GEMM and Conv instance
* Create README.md
Add build instruction for ckProfiler
* Create README.md
Add Readme for gemm_xdl example
* Update README.md
Remove build instruction from top most folder
* Update README.md
* clean up
2021-11-14 11:28:32 -06:00
ltqin
fd49ff8080
add NCHW atomic, NHWC, and NHWC atomic methods for backward weight (#30)
* add new algorithm from v4r4r2
* fix program once issue
* add split-k function
* redefine code
* add a matrix unmerge
* add b matrix unmerge k0
* transfer a and b to gridwise gemm
* nhwc init
* no hacks and vector load
* add hacks
* modify some parameter
* fix tuning parameter for fp32
* fix tuning parameter for fp16
* start change gridwise k split
* init ok
* remove a/b matrix k0mk1 desc in grid
* rewrite calculate gridsize
* add kbatch to CalculateBottomIndex
* remove some unused functions
* add clear data function before call kernel
* out hacks
* in hacks
* rename device convolution file and function name
* modify kBatch value
* fix some tuning code
* start from v4r4 nhwc
* nhwc atomic is able to run
* just for fp32
* enable nchw atomic
* tweak
* tweak
* re-arrange gridwise gemm hot loop for wrw
* add wrw v4r5
* v4r4r5 fp16
* v4r4r4 fp16
* v4r4r2 fp16
* V4R4R4XDLNHWC fp16
* V4R4R2XDLATOMICNCHW fp16
* adjust for fp16
* input gridsize
* change kbatch to gridsize
* testing wrw
* clean up
* k_batch to gridsize
* fix bug
* wrw v4r4r4 kbatch change to grid size
* wrw v4r4r2 kbatch change to grid size
* after merge, change gridwise gemm v2r4
* change MakeCBlockClusterAdaptor
* other method use new gridwise gemm
* clean up
* change pad method to make_right_pad_transform
* kbatch out from transform function
* clean up and fix bug
* fix bug
* use function type to reduce template parameters
* use auto to replace defined function type
* clean up
Co-authored-by: ltqin <letaoqin@amd.com>
Co-authored-by: Chao Liu <chao.liu2@amd.com>
Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-10-19 18:42:34 -05:00
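The split-K idea behind the atomic backward-weight kernels above: partition the GEMM K dimension into kBatch chunks and let each chunk accumulate its partial result into a pre-cleared output (hence the "add clear data function before call kernel" step); on the GPU that accumulation is an atomic add across workgroups. A sequential scalar sketch with illustrative names:

```cpp
#include <algorithm>

// Split-K dot product: each of kBatch chunks computes a partial sum,
// then accumulates into the output. In the kernel this accumulation
// is an atomicAdd, so the output must be zeroed beforehand.
inline float splitk_dot(const float* a, const float* b, int K, int kBatch)
{
    float c = 0.0f; // models the pre-cleared output buffer
    int chunk = (K + kBatch - 1) / kBatch;
    for(int kb = 0; kb < kBatch; ++kb)
    {
        float partial = 0.0f;
        for(int k = kb * chunk; k < std::min(K, (kb + 1) * chunk); ++k)
            partial += a[k] * b[k];
        c += partial; // atomicAdd in the real kernel
    }
    return c;
}
```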
Chao Liu
b3e8d57d51
Tweak GEMM kernel (#38)
* add parameters
* tweak gemm
* tweak
* update conv
* update script
* adding bwd 1x1
* update script
* adding 1x1 bwd
* debugging bwd 1x1 failure
* update script
* update script
* test
* test v100
* clean up
2021-10-06 11:12:36 -05:00
Chao Liu
19613902b5
GEMM driver and kernel (#29)
* add gemm driver
* tweak
* add gemm kernel: mk_kn_mn and km_kn_mn
* tweak
* add GEMM km_nk_mn
* fix comment
2021-09-05 12:41:28 -05:00
ltqin
627d8ef35a
Backward weight v4r4r2 with xdlops (#18)
* start
* modify transform
* modify device convolution
* modify host
* added host conv bwd and wrw
* remove bwd, separate wrw
* clean
* hacall k to zero
* out log
* fixed
* fixed
* change to (out in wei)
* input hack
* hack to out
* format
* fix by comments
* change wei hacks(wei transform has not merge)
* fix program once issue
* fix review comment
* fix vector load issue
* tweak
Co-authored-by: ltqin <letaoqin@amd.com>
Co-authored-by: Jing Zhang <jizhan@amd.com>
Co-authored-by: Chao Liu <chao.liu2@amd.com>
2021-08-30 22:49:17 -05:00
zjing14
ba6f79a75e
Added host_conv_wrw for verification (#15)
* added host conv wrw
2021-08-19 01:00:41 -05:00
Chao Liu
c03045ce2d
rename
2021-08-10 23:45:36 +00:00
Chao Liu
2c48039d0e
fix kernel filename
2021-08-10 22:15:23 +00:00
Chao Liu
4f566c6221
vector/scalar pointer cast: use C-style pointer cast instead of reinterpret_cast
2021-08-10 05:55:20 +00:00
Chao Liu
76f3131939
tidy
2021-08-09 18:49:59 -05:00
Chao Liu
d18428901e
tidy
2021-08-09 18:20:02 -05:00
Chao Liu
f885c131d8
tidy
2021-08-09 22:13:47 +00:00
Chao Liu
56fc0842b3
tidy
2021-08-09 19:27:49 +00:00
Chao Liu
82fae390fb
update to clang-format-10
2021-07-30 16:37:00 -05:00
Chao Liu
1264925422
reorganize files to prepare for MIOpen integration (#51)
* change olc cmake
* adding online compile to fwd-v4r5r2
* update scripts
* rename fwd-v4r5r2 to fwd-v6r1
* clean up
2021-07-18 00:43:05 -05:00