Commit Graph

75 Commits

Author SHA1 Message Date
Chao Liu
01055d95d9 No raw index calculation (#31)
* Replace most raw index calculations with coordinate transformations
* Overhaul blockwise and threadwise GEMM
* Overhaul driver for gridwise GEMM kernel

Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-05-11 00:09:25 -05:00
Chao Liu
d075adf126 Use Tuple and vector_type instead of Array for holding tensor data (#30)
* replacing array with tuple and vector for tensor data
2021-04-28 13:10:33 -05:00
Chao Liu
e4790c250c Overhaul vector_type and use real vector for int8x4_t instead of aliasing from int32_t (#29)
* overhaul vector_type, make int8x4_t real vector instead of aliasing from int32_t
2021-04-12 23:48:43 -05:00
zjing14
792a20fa5b Hybrid direct + implicit GEMM forward convolution NCHWc v5r1 (#25)
* Hybrid direct + implicit GEMM forward convolution NCHWc v5r1. The input tensor bypasses LDS. Supports fp32/fp16/int8
2021-04-07 16:47:29 -05:00
Chao Liu
fcbb978828 Dynamic tensor descriptor (#24)
* support dynamic tensor descriptor

* use buffer load OOB feature for padding case

* add navi support

* add int8x4 inference kernel

Co-authored-by: Chao Liu <chao@ixt-rack-81.local.lan>
Co-authored-by: Jing Zhang <jizhan@amd.com>
2021-03-25 13:51:11 -05:00
Chao Liu
bbcb67d0aa Bwd Data NHWC (#22)
* fix buffer_store bug
* remove obsolete kernels
* add bwd-data-v5r1-nhwc
2020-08-06 12:22:11 -05:00
Chao Liu
5c7cec1115 Code clean up (#20)
* tuning parameters

* testing on v100

* add fp16

* remove deprecated tensor descriptor

* sync with MIOpen

* update build script

Co-authored-by: Jing Zhang <jizhan@amd.com>
2020-06-23 20:31:27 -05:00
Chao Liu
1a66e35b6f MIOpen integration (#13)
* update for MIOpen integration: cosmetic refactor
2020-02-17 09:53:20 -06:00
Chao Liu
3406a1148a Update for recent MIOpen integration (#11)
* update for MIOpen integration
2020-01-27 15:29:33 -06:00
Chao Liu
c5da0377fb Added bwd data v3r1 v4r1, tweaking v1 (#10)
* Added bwd data v3r1: breaks the computation down into a series of load-balanced GEMMs launched in a single kernel
* Added bwd data v4r1: like v3r1, but launches the GEMMs in multiple kernels
* Tweaked v1r1 and v1r2 (atomic) on AMD GPU
2020-01-20 10:20:03 -06:00
Chao Liu
e2b4c5b469 update implicit GEMM forward v4r4 to use gridwise gemm (#9)
* updated fwd v4r4 to use gridwise gemm
* updated gridwise gemm api calls in bwd-data v1r1 and v2r1
2019-12-05 12:36:36 -06:00
Chao Liu
8f5f64960e backward data (#7)
* enabled atomic add in tensor copy
* added gridwise GEMM
* added backward data conv using GEMM + atomic
* added backward data conv using GEMM, no atomic
2019-12-03 01:16:12 -06:00
Chao Liu
52c3fe05be Refactor for MIOpen integration (#4)
Refactor so that multi-index transformation and padding support can be brought into MIOpen
2019-10-11 11:37:31 -05:00
Chao Liu
9b280cc50d remove dead code 2019-09-27 02:00:59 -05:00
Chao Liu
b12bbceebc clean up 2019-09-26 14:59:19 -05:00
Chao Liu
51a9fa1dbd removing dependency on old tensor descriptor 2019-09-26 11:49:05 -05:00
Chao Liu
39d92e7dfd removing old implementation of tensor descriptor 2019-09-25 22:24:06 -05:00
Chao Liu
545d930568 refactor 2019-09-24 18:06:05 -05:00
Chao Liu
51884fc214 WIP: explicitly separate offset into compile-time, block-invariant, and per-thread components 2019-09-21 22:53:03 -05:00
Chao Liu
bf7e7d62a8 refactor 2019-09-19 23:44:23 -05:00
Chao Liu
b6e1c52a80 use buffer_load and buffer_store intrinsics 2019-09-19 15:39:07 -05:00
Chao Liu
86cc678f18 add global_load and buffer_load inline asm 2019-09-18 15:41:55 -05:00
Chao Liu
5b7a18c506 experimenting with global and buffer load/store 2019-09-18 02:05:42 -05:00
Chao Liu
c7a6545ec4 experimenting with global and buffer load/store 2019-09-18 01:37:28 -05:00
Chao Liu
9f46cdf5fa experimenting with global and buffer load/store 2019-09-18 00:15:57 -05:00
Chao Liu
f58bf38445 enable hip compiler flag: -amdgpu-enable-global-sgpr-addr 2019-09-17 17:34:39 -05:00
Chao Liu
f7be86b9e4 refactor 2019-09-16 22:47:55 -05:00
Chao Liu
bf97542846 add lds double buffer to nchw padded v4r1 and v4r4 2019-09-15 16:58:16 -05:00
Chao Liu
2c93b3057d initial implementation for nchw v4r4 padding 2019-09-15 16:31:54 -05:00
Chao Liu
53094f7fae clean up 2019-09-15 12:13:58 -05:00
Chao Liu
bd7a230006 clean up 2019-09-12 14:55:46 -05:00
Chao Liu
724e984bff enabling padding for chwn format 2019-09-11 01:13:13 -05:00
Chao Liu
7a7fe16086 more utility code 2019-09-09 00:29:33 -05:00
Chao Liu
625838def0 added tuple 2019-09-06 18:07:56 -05:00
Chao Liu
86ceded98b Merge remote-tracking branch 'origin/master' into add_padding 2019-08-15 13:48:45 -05:00
Chao Liu
0979fb4af9 clean up 2019-08-15 13:21:51 -05:00
Chao Liu
4fb81e008c adding padding to implicit gemm v1r3 2019-08-14 10:55:34 -05:00
Chao Liu
40836ab926 add back some code 2019-08-13 12:21:38 -05:00
Chao Liu
8bdaba51f8 clean up 2019-08-13 00:37:23 -05:00
Chao Liu
fab2f10a55 clean up 2019-08-12 15:48:35 -05:00
Chao Liu
1c4ef23cff cleaning up 2019-08-09 22:48:28 -05:00
Chao Liu
4908fe3fdc tweak on amd 2019-08-08 12:14:06 -05:00
Chao Liu
a9b2b1dcd7 added ThreadwiseGenericTensorSliceCopy_v2r1 2019-08-08 02:42:52 -05:00
Chao Liu
bc9ea646f8 use ford/for instead of static_ford/static_for in threadwise copy; somehow register spill is greatly reduced on AMD 2019-08-07 19:09:13 -05:00
Chao Liu
5636576f9b bug fix in ford: forgot to reorder lengths 2019-08-07 18:27:10 -05:00
Chao Liu
9d99a58072 adding ThreadwiseGenericTensorSliceCopy_v1r2 2019-08-07 16:51:14 -05:00
Chao Liu
1b3c2e4035 reworked ThreadwiseGenericTensorSliceCopy_v1 2019-08-07 00:52:13 -05:00
Chao Liu
fdcfae3a62 reimplement threadwise copy 2019-08-06 17:41:58 -05:00
Chao Liu
adc1008836 tweak 2019-08-03 15:05:25 -05:00
Chao Liu
c2d246696f added implicit gemm v4r4 and double buffer 2019-08-03 00:19:19 -05:00