Commit Graph

382 Commits

Author SHA1 Message Date
Chao Liu
30072aec37 Restructure gridwise and blockwise GEMM, add tensor contraction and FWD-v4r5 (#36)
* experimenting magic number division

* overhauling fwd-v4r4 to clearly reflect transformation graph

* added fwd-v4r5

* bug fix for make_dynamic_naive_tensor_descriptor_aligned_v2

* bug fix and added sanity-check in transform_dynamic_tensor_descriptor

* added conv_driver_v2
2021-06-09 23:53:08 -05:00
Chao Liu
71d6b19d18 reorganize some files (#33) 2021-05-12 14:15:38 -05:00
Chao Liu
78b987fbd6 Use DynamicBuffer instead of raw pointer (#32)
* Use DynamicBuffer to hold raw pointer (to global and LDS memory)

* add workaround for compiler issue (inefficient ISA) of ds_write for int8x4, int8x8, int8x16
2021-05-12 13:10:42 -05:00
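The DynamicBuffer change in #32 wraps the raw global/LDS pointer in a typed buffer object. A minimal CPU-side sketch of the idea — the names and members below are illustrative, not the actual composable_kernel API, and the real class also encodes the address space:

```cpp
#include <cstddef>

// Illustrative sketch of a typed buffer wrapper over a raw pointer,
// in the spirit of replacing "T*" with a DynamicBuffer-like type.
// Only typed element access is modeled here.
template <typename T>
struct DynamicBufferSketch
{
    T* p_data_;
    std::size_t element_count_;

    // typed element access instead of raw pointer arithmetic at call sites
    T Get(std::size_t i) const { return p_data_[i]; }
    void Set(std::size_t i, T x) { p_data_[i] = x; }

    std::size_t GetElementCount() const { return element_count_; }
};

template <typename T>
DynamicBufferSketch<T> make_dynamic_buffer_sketch(T* p, std::size_t n)
{
    return DynamicBufferSketch<T>{p, n};
}
```

Centralizing access behind a wrapper like this is also what makes workarounds such as the int8x4/int8x8/int8x16 ds_write fix in this commit possible in one place instead of at every call site.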
Chao Liu
01055d95d9 No raw index calculation (#31)
* Replace most raw index calculations with coordinate transformations
* Overhaul blockwise and threadwise GEMM
* Overhaul driver for gridwise GEMM kernel

Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-05-11 00:09:25 -05:00
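The "no raw index calculation" change in #31 moves offset arithmetic into coordinate-transform objects. A sketch of the core idea behind a "Merge"-style transform — class and method names here are illustrative, not the library's actual API:

```cpp
#include <array>
#include <cstddef>

// Instead of hand-writing offset = n*CHW + c*HW + h*W + w at every use
// site, a transform object maps between a multi-index and a linear
// index in one place.
template <std::size_t N>
struct MergeTransformSketch
{
    std::array<std::size_t, N> lengths_;

    // multi-index (row-major) -> linear index
    std::size_t CalculateLowerIndex(const std::array<std::size_t, N>& idx) const
    {
        std::size_t offset = 0;
        for (std::size_t d = 0; d < N; ++d)
            offset = offset * lengths_[d] + idx[d];
        return offset;
    }

    // linear index -> multi-index (the inverse direction)
    std::array<std::size_t, N> CalculateUpperIndex(std::size_t offset) const
    {
        std::array<std::size_t, N> idx{};
        for (std::size_t d = N; d-- > 0;)
        {
            idx[d] = offset % lengths_[d];
            offset /= lengths_[d];
        }
        return idx;
    }
};
```

The inverse direction is where the integer division and modulo show up, which is what the later magic-number-division commits optimize.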
Chao Liu
d075adf126 Use Tuple and vector_type instead of Array for holding tensor data (#30)
* replacing array with tuple and vector for tensor data
2021-04-28 13:10:33 -05:00
Chao Liu
e4790c250c Overhaul vector_type and use real vector for int8x4_t instead of aliasing from int32_t (#29)
* overhaul vector_type, make int8x4_t real vector instead of aliasing from int32_t
2021-04-12 23:48:43 -05:00
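Commit #29 makes int8x4_t a genuine 4-lane vector rather than an alias of int32_t. With the aliasing approach, per-lane arithmetic required manual shift-and-mask bit fiddling; a real vector type gives element access and lane-wise operations directly. A sketch using the GCC/Clang vector extension (the repository's actual vector_type is more elaborate):

```cpp
#include <cstdint>

// A real 4-lane int8 vector via the vector_size compiler extension
// (GCC/Clang), shown as an illustration of the idea in #29.
typedef int8_t int8x4_t __attribute__((vector_size(4)));

// lane-wise add: operates on all 4 int8 lanes at once, no bit fiddling
inline int8x4_t add_int8x4(int8x4_t a, int8x4_t b) { return a + b; }
```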
Chao Liu
3bf52e60c5 Initial implementation of magic number division and the "Merge" transformation that uses it (#28)
* initial implementation for magic number division and DynamicMerge_v2_magic_division that uses it

* turn off DynamicMerge_v2_magic_division, which uses magic number division, by default
2021-04-12 21:32:55 -05:00
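Magic number division (introduced here and revisited in #36) replaces runtime integer division — slow on GPUs — with a multiply and shift by precomputed constants. A sketch of the standard Granlund–Montgomery-style scheme; this is the general technique, not necessarily the exact formulation used in the repository:

```cpp
#include <cstdint>

// For a fixed divisor d, precompute (magic, shift) so that at runtime
//     n / d == ((uint64(n) * magic >> 32) + n) >> shift
// costs one multiply, one add and two shifts. Valid for 1 <= d <= 2^31.
struct MagicDiv
{
    uint32_t magic;
    uint32_t shift;
};

inline MagicDiv make_magic_div(uint32_t d)
{
    uint32_t shift = 0;
    while ((uint64_t{1} << shift) < d)
        ++shift; // shift = ceil(log2(d))

    // m = ceil(2^(32+shift) / d) lies in [2^32, 2^33); store low 32 bits
    uint64_t m = ((uint64_t{1} << (32 + shift)) + d - 1) / d;
    return MagicDiv{static_cast<uint32_t>(m), shift};
}

inline uint32_t magic_div(uint32_t n, MagicDiv md)
{
    // high 32 bits of n * (m mod 2^32); adding n restores the n * 2^32 term
    uint64_t t = (uint64_t{n} * md.magic) >> 32;
    return static_cast<uint32_t>((t + n) >> md.shift);
}
```

The precompute step runs on the host; only the multiply-add-shift remains in the kernel's inner loop, which is why the Merge transform's div/mod becomes cheap.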
zjing14
792a20fa5b Hybrid direct + implicit GEMM forward convolution NCHWc v5r1 (#25)
* Hybrid direct + implicit GEMM forward convolution NCHWc v5r1. The input tensor bypasses LDS. Supports fp32/fp16/int8.
2021-04-07 16:47:29 -05:00
Chao Liu
d2217f3040 Fix performance issue when passing tensor descriptor from host to kernel by void pointers (#27)
* use address_space(4) in kernel signature to fix performance issue when passing tensor descriptor from host to kernel by (void) pointers

* remove the pass-by-pointer option (keep only pass-by-value and pass-by-void*)
2021-04-06 17:49:57 -05:00
zjing14
6a5ea49309 bug fix for buffer resource setting (#26) 2021-04-06 16:59:52 -05:00
Chao Liu
fcbb978828 Dynamic tensor descriptor (#24)
* support dynamic tensor descriptor

* use buffer load OOB feature for padding case

* add navi support

* add int8x4 inference kernel

Co-authored-by: Chao Liu <chao@ixt-rack-81.local.lan>
Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-03-25 13:51:11 -05:00
Chao Liu
bbcb67d0aa Bwd Data NHWC (#22)
* fix buffer_store bug
* remove obsolete kernels
* add bwd-data-v5r1-nhwc
2020-08-06 12:22:11 -05:00
Chao Liu
ac62d13ecd Improve buffer address for out of bound check (#21)
* Use buffer load's built-in OOB check; buffer size is limited to 2 GB.
* buffer APIs use combined wave and thread offset
* use uint32_t for addr shift in buffer addressing
2020-07-29 18:04:09 -05:00
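Both this commit and #24 lean on the hardware buffer instructions' out-of-bounds behavior — a buffer_load whose offset falls outside the buffer resource returns 0 instead of faulting — to implement zero-padding without branches in the inner loop. A CPU-side model of that semantic for illustration (on the GPU the check is done by the buffer resource descriptor in hardware):

```cpp
#include <cstddef>

// Model of the buffer_load OOB semantic: a load outside the buffer
// yields 0, which implements zero-padding "for free". Here the check
// is an explicit comparison; on the GPU it is done by hardware.
template <typename T>
T buffer_load_model(const T* p_buffer, std::size_t buffer_size, std::size_t offset)
{
    return offset < buffer_size ? p_buffer[offset] : T{0};
}
```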
Chao Liu
5c7cec1115 Code clean up (#20)
* tuning parameters

* testing on v100

* add fp16

* remove deprecated tensor descriptor

* sync with miopen

* update build script

Co-authored-by: Jing Zhang <jizhan@amd.com>
2020-06-23 20:31:27 -05:00
Chao Liu
7d09790a0a MIOpen integration (#15)
* renaming
2020-02-18 10:42:18 -06:00
Chao Liu
1a66e35b6f MIOpen integration (#13)
* update for miopen integration: cosmetic refactor
2020-02-17 09:53:20 -06:00
Chao Liu
3406a1148a Update for recent MIOpen integration (#11)
* update for MIOpen integration
2020-01-27 15:29:33 -06:00
Chao Liu
c5da0377fb Added bwd data v3r1 v4r1, tweaking v1 (#10)
* Added bwd data v3r1: breaks the computation down into a series of load-balanced GEMMs, launched in a single kernel
* Added bwd data v4r1: like v3r1, but launches the GEMMs in multiple kernels
* Tweaked v1r1 and v1r2 (atomic) on AMD GPU
2020-01-20 10:20:03 -06:00
Chao Liu
e2b4c5b469 update implicit GEMM forward v4r4 to use gridwise gemm (#9)
* updated fwd v4r4 to use gridwise gemm
* updated gridwise gemm api calls in bwd-data v1r1 and v2r1
2019-12-05 12:36:36 -06:00
Chao Liu
19a93dac05 fixed faulty padding API calls (#8) 2019-12-03 01:46:44 -06:00
Chao Liu
8f5f64960e backward data (#7)
* enabled atomic add in tensor copy
* added gridwise GEMM
* added backward data conv using GEMM + atomic
* added backward data conv using GEMM, no atomic
2019-12-03 01:16:12 -06:00
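In the "GEMM + atomic" backward-data variant above, several GEMMs write overlapping regions of the input gradient, so each partial result is added with an atomic read-modify-write instead of a plain store. A portable CPU analogue of the GPU's atomic float add, sketched with a compare-exchange loop (the actual kernels use hardware atomics):

```cpp
#include <atomic>

// Accumulate a partial GEMM result atomically: read the current value,
// try to swap in (current + value), retry if another writer interleaved.
inline void atomic_add_float(std::atomic<float>& target, float value)
{
    float old_val = target.load();
    while (!target.compare_exchange_weak(old_val, old_val + value))
    {
        // old_val was refreshed by compare_exchange_weak; retry
    }
}
```

The non-atomic variant in the same commit avoids this by restructuring the work so no two GEMMs touch the same output element.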
Chao Liu
31ded4ac4b remove dead file (#6) 2019-11-04 17:13:38 -06:00
Chao Liu
562e1e2767 MIOpen integration: recent bug fixes from MIOpen (#5) 2019-11-04 16:51:12 -06:00
Chao Liu
52c3fe05be Refactor for MIOpen integration (#4)
Refactor so that multi-index transformation and padding support can be brought into MIOpen
2019-10-11 11:37:31 -05:00
Chao Liu
9aaeacc82b Merge pull request #3 from asroy/clean_up
enable type conversion in ThreadwiseGenericTensorSliceCopy_v2r1 and BlockwiseGenericTensorSliceCopy_v2
2019-09-30 15:14:02 -05:00
Chao Liu
cf21818455 enable type conversion in blockwise copy v2 and threadwise copy v2r1 2019-09-30 15:11:05 -05:00
Chao Liu
012d3a071b tweaking 2019-09-27 16:38:11 -05:00
Chao Liu
14315b72f3 tweaking 2019-09-27 15:24:27 -05:00
Chao Liu
ebe38f3d48 debugging 2019-09-27 11:31:01 -05:00
Chao Liu
9b280cc50d remove dead code 2019-09-27 02:00:59 -05:00
Chao Liu
98a2cfcc84 nvidia build 2019-09-27 00:15:05 -05:00
Chao Liu
00089cd6e5 clean up 2019-09-26 21:39:28 -05:00
Chao Liu
b12bbceebc clean up 2019-09-26 14:59:19 -05:00
Chao Liu
51a9fa1dbd removing dependency on old tensor descriptor 2019-09-26 11:49:05 -05:00
Chao Liu
0f52c4c0e4 added type conversion in threadwise and blockwise copy 2019-09-26 00:00:25 -05:00
Chao Liu
b3d4595f5a added type conversion in threadwise and blockwise copy 2019-09-25 23:38:26 -05:00
Chao Liu
3cb2a7d09f removing old implementation of tensor descriptor 2019-09-25 22:43:34 -05:00
Chao Liu
39d92e7dfd removing old implementation of tensor descriptor 2019-09-25 22:24:06 -05:00
Chao Liu
012b525377 clean up 2019-09-25 03:28:53 -05:00
Chao Liu
e1ae8f18f7 added GetLinearDimensionMask 2019-09-25 02:52:41 -05:00
Chao Liu
4f4aba4872 adding GetLinearDimensionMask() 2019-09-24 23:59:47 -05:00
Chao Liu
545d930568 refactor 2019-09-24 18:06:05 -05:00
Chao Liu
37f4e2b6d8 nvidia build 2019-09-22 03:23:19 -05:00
Chao Liu
6c2c50b020 done: explicitly separate offset component into compile-time, block-invariant and per-thread components. Experimenting 2019-09-22 03:17:41 -05:00
Chao Liu
51884fc214 WIP: explicitly separate offset component into compile-time, block-invariant and per-thread components 2019-09-21 22:53:03 -05:00
Chao Liu
740da00aa2 refactor 2019-09-20 21:45:20 -05:00
Chao Liu
184c6e7d37 nvidia build 2019-09-20 21:45:03 -05:00
Chao Liu
f00c138145 adding logic to judge linear dimension 2019-09-20 20:43:13 -05:00
Chao Liu
bf7e7d62a8 refactor 2019-09-19 23:44:23 -05:00
Chao Liu
b6e1c52a80 use buffer_load buffer_store intrinsic 2019-09-19 15:39:07 -05:00