Commit Graph

20 Commits

Author SHA1 Message Date
Chao Liu
b8b2d0a6d1 DL GEMM fp32/fp16/int8 (#41)
* add threadwise copy that copies a tensor in a single copy operation; add kpack to DL GEMM

* add kpack into fwd v4r5 nchw fp32
2021-07-04 22:50:29 -05:00
Chao Liu
30072aec37 Restructure gridwise and blockwise GEMM, add tensor contraction and FWD-v4r5 (#36)
* experimenting with magic number division

* overhauling fwd-v4r4 to clearly reflect transformation graph

* added fwd-v4r5

* bug fix for make_dynamic_naive_tensor_descriptor_aligned_v2

* bug fix and added sanity-check in transform_dynamic_tensor_descriptor

* added conv_driver_v2
2021-06-09 23:53:08 -05:00
Chao Liu
78b987fbd6 Use DynamicBuffer instead of raw pointer (#32)
* Use DynamicBuffer to hold raw pointer (to global and LDS memory)

* add workaround for compiler issue (inefficient ISA) of ds_write for int8x4, int8x8, int8x16
2021-05-12 13:10:42 -05:00
Chao Liu
01055d95d9 No raw index calculation (#31)
* Replace most raw index calculation to coordinate transformation
* Overhaul blockwise and threadwise GEMM
* Overhaul driver for gridwise GEMM kernel

Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-05-11 00:09:25 -05:00
Chao Liu
d075adf126 Use Tuple and vector_type instead of Array for holding tensor data (#30)
* replacing array with tuple and vector for tensor data
2021-04-28 13:10:33 -05:00
Chao Liu
3bf52e60c5 Initial implementation of magic number division and "Merge" transformation that uses it (#28)
* initial implementation for magic number division and DynamicMerge_v2_magic_division that uses it

* turn off DynamicMerge_v2_magic_division, which uses magic number division, by default
2021-04-12 21:32:55 -05:00
Chao Liu
fcbb978828 Dynamic tensor descriptor (#24)
* support dynamic tensor descriptor

* use buffer load OOB feature for padding case

* add navi support

* add int8x4 inference kernel

Co-authored-by: Chao Liu <chao@ixt-rack-81.local.lan>
Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-03-25 13:51:11 -05:00
Chao Liu
bbcb67d0aa Bwd Data NHWC (#22)
* fix buffer_store bug
* remove obsolete kernels
* add bwd-data-v5r1-nhwc
2020-08-06 12:22:11 -05:00
Chao Liu
5c7cec1115 Code clean up (#20)
* tuning parameters

* testing on v100

* add fp16

* remove deprecated tensor descriptor

* sync with miopen

* update build script

Co-authored-by: Jing Zhang <jizhan@amd.com>
2020-06-23 20:31:27 -05:00
Chao Liu
8f5f64960e backward data (#7)
* enabled atomic add in tensor copy
* added gridwise GEMM
* added backward data conv using GEMM + atomic
* added backward data conv using GEMM, no atomic
2019-12-03 01:16:12 -06:00
Chao Liu
562e1e2767 MIOpen integration: recent bug fixes from MIOpen (#5) 2019-11-04 16:51:12 -06:00
Chao Liu
52c3fe05be Refactor for MIOpen integration (#4)
Refactor so that multi-index transformation and padding support can be brought into MIOpen
2019-10-11 11:37:31 -05:00
Chao Liu
6c2c50b020 explicitly separate the offset into compile-time, block-invariant, and per-thread components; experimenting 2019-09-22 03:17:41 -05:00
Chao Liu
184c6e7d37 nvidia build 2019-09-20 21:45:03 -05:00
Chao Liu
ca42e9101d adding merge transform 2019-09-10 01:53:49 -05:00
Chao Liu
7a7fe16086 more utility code 2019-09-09 00:29:33 -05:00
Chao Liu
bd44e6390d adding dimension transformation 2019-09-02 00:21:00 -05:00
Chao Liu
1f2cfcebb3 fixed amd build 2019-06-19 18:51:19 -05:00
Chao Liu
21f7e9f103 refactor 2019-06-19 17:43:56 -05:00
Chao Liu
1566b31736 reorganized files 2019-06-13 15:12:12 -05:00