Commit Graph

82 Commits

Author SHA1 Message Date
Chao Liu
459dc6cf2f Deprecate static kernel (#42)
* deprecate static kernels

[ROCm/composable_kernel commit: 81c942cd7e]
2021-07-08 10:40:00 -05:00
Chao Liu
0d7baf0e50 DL GEMM fp32/fp16/int8 (#41)
* add threadwise copy that copies a tensor in a single copy operation; add kpack to DL GEMM

* add kpack into fwd v4r5 nchw fp32

[ROCm/composable_kernel commit: b8b2d0a6d1]
2021-07-04 22:50:29 -05:00
zjing14
2331d228e2 xdlops_v4r4_fwd fp32/fp16 (#34)
* create files for xdlops

* working on blockwise_gemm_xdlops

* add KReduction

* add m/n repeats

* add 2x2 pipeline

* added 128x128 wavegemm

* use StaticBuffer of vector_type

* break vector type to blk_size

* add kpack into xdlops_gemm and blockwise_gemm

* abroadcast only

* add fp32 mfma instructions

* adding fp16 mfma

* pack half4_t

* rename kperwave to kpack

* add 32x32x8fp16

* add fp16 mfma

* clean code

* clean code

* V4r4 xdlops kpack (#35)

* add kpack with incorrect results

* bug fix for make_dynamic_naive_tensor_descriptor_aligned_v2

* add 1x1 kernel

* add gridwise_gemm_v2 - single_buffer

* enabled dwordx4 for fp16

Co-authored-by: Chao Liu <chao.liu2@amd.com>

* refactor fwd-v4r4-xdlops

* add v4r4-nhwc-xdlop

* improve perf of nhwc and nchw by tuning parameters, and change scheduling in gridwise-gemm loop

* tweak scheduling in gridwise gemm

* add v4r3 with a single output copy

* init commit: output with slice win

* adding sliceWin

* add multiple repeats pattern

* start adding bwd-v4r1-xdlops

* use tuple as SrcBuffer

* adding bwd-data v4r1 nhwc xdlops

* fix bug in make_dynamic_naive_tensor_descriptor_aligned_v2()

* fix bug in host bwd-data conv

* initial implementation of bwd-data v4r1 nhwc xdlops

* add launch bound flags

* enable launch bound

* add m/nrepeat=4

* tweak bwd-data v4r1 nhwc xdlops

* added bwd-data v4r1 nhwc xdlops with output A and weight B

* add fwd-v4r4 nhwc xdlops, A input, B weight, C output

Co-authored-by: Chao Liu <chao.liu2@amd.com>

[ROCm/composable_kernel commit: 3835318cc3]
2021-07-01 14:33:00 -05:00
Qianfeng
0d278b8cc8 Add online compilation for dynamic kernels (#37)
* Add online-compiling facility

* Synchronize from fwd-v4r5 and implement host interfaces to call conv-fwd v4r4/v4r5 using the online compiling method

* Tiny adjustment to time reporting

* Use object assignment to replace explicit bytes copying in the first kernel of v4r4/v4r5

* Use single thread to assign descriptor object to device memory

* Adjust to the workload assignment of the two kernels of v4r4 (experimental)

* Revert "Adjust to the workload assignment of the two kernels of v4r4 (experimental)"

This reverts commit eb38461456bb0c82b6c0d32cdd616e181907e20c.

* Update to make constexpr for generating descriptor types in kernel 2 of dynamic conv-fwd v4r4

* Update to dynamic conv-fwd v4r4 online-compiling

* Update to dynamic conv-fwd v4r5 online-compiling (result not accurate)

* Tiny update to driver/CMakeLists.txt

* clang-format

* Tiny comments change

* Add env OLC_DUMP_SAVE_TMP_DIR to support saving of temporary dir

* Fwd v4r5 olc perf (#39)

* added hip-clang flags that fix perf issue of online compilation

* fix bug for olc fwd-v4r5-nchw

* Move constexpr and type reference statements out of the function body in conv-fwd v4r4/v4r5 kernel wrapper

* Remove printing in hip_build_utils.cpp

* Update to root CMakeLists.txt

* Revert "Move constexpr and type reference statements out of the function body in conv-fwd v4r4/v4r5 kernel wrapper"

This reverts commit 3d2c5d8ecdd8298b72d127110500ed5b38d9835c.

Co-authored-by: Chao Liu <chao.liu2@amd.com>
Co-authored-by: Chao Liu <lc.roy86@gmail.com>
Co-authored-by: root <root@dc-smc-18.amd.com>

[ROCm/composable_kernel commit: 1685048a67]
2021-06-24 08:34:19 -05:00
Chao Liu
c55129e8f5 Restructure gridwise and blockwise GEMM, add tensor contraction and FWD-v4r5 (#36)
* experimenting magic number division

* overhauling fwd-v4r4 to clearly reflect transformation graph

* added fwd-v4r5

* bug fix for make_dynamic_naive_tensor_descriptor_aligned_v2

* bug fix and added sanity-check in transform_dynamic_tensor_descriptor

* added conv_driver_v2

[ROCm/composable_kernel commit: 30072aec37]
2021-06-09 23:53:08 -05:00
Chao Liu
d87338c208 reorganize some files (#33)
[ROCm/composable_kernel commit: 71d6b19d18]
2021-05-12 14:15:38 -05:00
Chao Liu
0ac401f4f3 Use DynamicBuffer instead of raw pointer (#32)
* Use DynamicBuffer to hold raw pointer (to global and LDS memory)

* add workaround for compiler issue (inefficient ISA) of ds_write for int8x4, int8x8, int8x16

[ROCm/composable_kernel commit: 78b987fbd6]
2021-05-12 13:10:42 -05:00
Chao Liu
e100ee5732 No raw index calculation (#31)
* Replace most raw index calculation to coordinate transformation
* Overhaul blockwise and threadwise GEMM
* Overhaul driver for gridwise GEMM kernel

Co-authored-by: Jing Zhang <jizhan@amd.com>

[ROCm/composable_kernel commit: 01055d95d9]
2021-05-11 00:09:25 -05:00
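The idea behind replacing raw index calculation with coordinate transformation can be sketched generically (an illustrative Python sketch under assumed conventions, not composable_kernel's actual API): an implicit-GEMM forward convolution never materializes an im2col buffer; instead, a GEMM coordinate (k, m) is mapped on the fly back to an NCHW input coordinate through a chain of divide/modulo transforms.

```python
# Illustrative sketch (not composable_kernel's API): implicit-GEMM forward
# convolution as a coordinate transformation.  GEMM reduction dim K = C*Y*X,
# GEMM output dim M = N*Ho*Wo; no im2col buffer is ever built.

def gemm_to_input_coord(k, m, C, Y, X, Ho, Wo, stride=1):
    """Map GEMM coords (k, m) to input coords (n, c, hi, wi).

    Assumes an un-padded, un-dilated convolution with a Y-by-X filter,
    stride `stride`, and Ho-by-Wo output.
    """
    # Decompose the reduction index k into (c, y, x).
    c, rem = divmod(k, Y * X)
    y, x = divmod(rem, X)
    # Decompose the output index m into (n, ho, wo).
    n, rem = divmod(m, Ho * Wo)
    ho, wo = divmod(rem, Wo)
    # The transform: input pixel that feeds output (ho, wo) via filter tap (y, x).
    hi = ho * stride + y
    wi = wo * stride + x
    return n, c, hi, wi
```

Expressing the mapping as composable transforms (rather than ad-hoc index arithmetic scattered through the kernel) is what lets the same GEMM pipeline serve many layouts.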
Chao Liu
c67332b930 Use Tuple and vector_type instead of Array for holding tensor data (#30)
* replacing array with tuple and vector for tensor data

[ROCm/composable_kernel commit: d075adf126]
2021-04-28 13:10:33 -05:00
Chao Liu
2501a44530 Overhaul vector_type and use real vector for int8x4_t instead of aliasing from int32_t (#29)
* overhaul vector_type, make int8x4_t real vector instead of aliasing from int32_t

[ROCm/composable_kernel commit: e4790c250c]
2021-04-12 23:48:43 -05:00
zjing14
2457224dc9 Hybrid direct + implicit GEMM forward convolution NCHWc v5r1 (#25)
* Hybrid direct + implicit GEMM forward convolution NCHWc v5r1. Input tensor bypasses LDS. Supports fp32/fp16/int8

[ROCm/composable_kernel commit: 792a20fa5b]
2021-04-07 16:47:29 -05:00
Chao Liu
e2753e68bd Dynamic tensor descriptor (#24)
* support dynamic tensor descriptor

* use buffer load OOB feature for padding case

* add navi support

* add int8x4 inference kernel

Co-authored-by: Chao Liu <chao@ixt-rack-81.local.lan>
Co-authored-by: Jing Zhang <jizhan@amd.com>

[ROCm/composable_kernel commit: fcbb978828]
2021-03-25 13:51:11 -05:00
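The "buffer load OOB feature for padding" mentioned above can be sketched as follows (an illustrative emulation, an assumption rather than the repository's code): AMD buffer loads can return 0 for reads outside a buffer's valid range instead of faulting, so zero-padding a convolution input reduces to computing possibly out-of-range coordinates and letting the hardware supply the zeros, with no boundary branches in the inner loop.

```python
# Illustrative sketch (assumption, not composable_kernel code): emulate the
# out-of-bounds behavior of an AMD buffer load, then use it to implement a
# zero-padded 1-D convolution without explicit boundary checks.

def buffer_load(data, idx, oob_value=0.0):
    """Emulate a hardware buffer load: out-of-range reads yield `oob_value`."""
    return data[idx] if 0 <= idx < len(data) else oob_value

def conv1d_padded(x, w, pad):
    """1-D convolution with zero padding; boundaries handled by buffer_load."""
    out_len = len(x) + 2 * pad - len(w) + 1
    return [
        sum(buffer_load(x, o - pad + t) * w[t] for t in range(len(w)))
        for o in range(out_len)
    ]
```

On real hardware the branch in `buffer_load` is free: the range check is part of the buffer descriptor, which is why this style pays off inside tight kernels.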
Chao Liu
821c1ff9b9 Bwd Data NHWC (#22)
* fix buffer_store bug
* remove obsolete kernels
* add bwd-data-v5r1-nhwc 

[ROCm/composable_kernel commit: bbcb67d0aa]
2020-08-06 12:22:11 -05:00
Chao Liu
a3c89131fa Code clean up (#20)
* tuning parameters

* testing on v100

* add fp16

* remove deprecated tensor descriptor

* sync with miopen

* update build script

Co-authored-by: Jing Zhang <jizhan@amd.com>

[ROCm/composable_kernel commit: 5c7cec1115]
2020-06-23 20:31:27 -05:00
Chao Liu
ed1eafcec8 MIOpen integration (#13)
* update for miopen integration: cosmetic refactor


[ROCm/composable_kernel commit: 1a66e35b6f]
2020-02-17 09:53:20 -06:00
Chao Liu
bd24dfbea7 Update for recent MIOpen integration (#11)
* update for MIOpen integration


[ROCm/composable_kernel commit: 3406a1148a]
2020-01-27 15:29:33 -06:00
Chao Liu
7c9100b53f Added bwd data v3r1 v4r1, tweaking v1 (#10)
* Added bwd data v3r1: breaking down compute into a series of load-balanced GEMMs, launched in a single kernel
* Added bwd data v4r1: like v3r1, but launches GEMMs in multiple kernels
* Tweaked v1r1 and v1r2 (atomic) on AMD GPU

[ROCm/composable_kernel commit: c5da0377fb]
2020-01-20 10:20:03 -06:00
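The "series of load-balanced GEMMs" decomposition above can be sketched generically (an illustrative sketch under assumed conventions, not the repository's code): for a strided backward-data convolution, filter taps (y, x) are grouped by their residue modulo the stride; every tap in a group touches the same regularly strided subset of output-gradient pixels, so each group becomes one dense, evenly sized GEMM.

```python
# Illustrative sketch (assumption, not composable_kernel code): split the
# filter taps of a strided backward-data convolution into residue groups,
# each of which maps to one load-balanced GEMM.

def split_filter_taps(Y, X, stride_h, stride_w):
    """Group filter taps (y, x) by (y % stride_h, x % stride_w)."""
    groups = {}
    for y in range(Y):
        for x in range(X):
            groups.setdefault((y % stride_h, x % stride_w), []).append((y, x))
    return groups
```

For a 3x3 filter with stride 2 this yields 4 residue groups, i.e. four GEMMs, which v3r1 launches from a single kernel and v4r1 launches as multiple kernels.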
Chao Liu
24f7d66609 update implicit GEMM forward v4r4 to use gridwise gemm (#9)
* updated fwd v4r4 to use gridwise gemm
* updated gridwise gemm api calls in bwd-data v1r1 and v2r1

[ROCm/composable_kernel commit: e2b4c5b469]
2019-12-05 12:36:36 -06:00
Chao Liu
4414e495ed backward data (#7)
* enabled atomic add in tensor copy
* added gridwise GEMM
* added backward data conv using GEMM + atomic
* added backward data conv using GEMM, no atomic


[ROCm/composable_kernel commit: 8f5f64960e]
2019-12-03 01:16:12 -06:00
Chao Liu
a9e6c3340c Refactor for MIOpen integration (#4)
Refactor so that multi-index transformation and padding support can be brought into MIOpen

[ROCm/composable_kernel commit: 52c3fe05be]
2019-10-11 11:37:31 -05:00
Chao Liu
a87fa81015 remove dead code
[ROCm/composable_kernel commit: 9b280cc50d]
2019-09-27 02:00:59 -05:00
Chao Liu
c5e4a623f9 clean up
[ROCm/composable_kernel commit: b12bbceebc]
2019-09-26 14:59:19 -05:00
Chao Liu
bd8f263b70 removing dependency on old tensor descriptor
[ROCm/composable_kernel commit: 51a9fa1dbd]
2019-09-26 11:49:05 -05:00
Chao Liu
313eb881cd removing old implementation of tensor descriptor
[ROCm/composable_kernel commit: 39d92e7dfd]
2019-09-25 22:24:06 -05:00
Chao Liu
ca661e1f52 refactor
[ROCm/composable_kernel commit: 545d930568]
2019-09-24 18:06:05 -05:00
Chao Liu
c3a1be3865 WIP: explicitly separate the offset into compile-time, block-invariant, and per-thread components
[ROCm/composable_kernel commit: 51884fc214]
2019-09-21 22:53:03 -05:00
Chao Liu
b9722daae4 refactor
[ROCm/composable_kernel commit: bf7e7d62a8]
2019-09-19 23:44:23 -05:00
Chao Liu
056e48a3ac use buffer_load buffer_store intrinsic
[ROCm/composable_kernel commit: b6e1c52a80]
2019-09-19 15:39:07 -05:00
Chao Liu
67f93cb50c add global_load and buffer_load inline asm
[ROCm/composable_kernel commit: 86cc678f18]
2019-09-18 15:41:55 -05:00
Chao Liu
6c5f82174b experimenting global and buffer load/store
[ROCm/composable_kernel commit: 5b7a18c506]
2019-09-18 02:05:42 -05:00
Chao Liu
741a647405 experimenting global and buffer load/store
[ROCm/composable_kernel commit: c7a6545ec4]
2019-09-18 01:37:28 -05:00
Chao Liu
03740b5c4a experimenting global and buffer load/store
[ROCm/composable_kernel commit: 9f46cdf5fa]
2019-09-18 00:15:57 -05:00
Chao Liu
0342a42f47 enable hip compiler flag: -amdgpu-enable-global-sgpr-addr
[ROCm/composable_kernel commit: f58bf38445]
2019-09-17 17:34:39 -05:00
Chao Liu
2b26fb76ca refactor
[ROCm/composable_kernel commit: f7be86b9e4]
2019-09-16 22:47:55 -05:00
Chao Liu
a4edaf2ae3 add LDS double buffer to nchw padded v4r1 and v4r4
[ROCm/composable_kernel commit: bf97542846]
2019-09-15 16:58:16 -05:00
Chao Liu
e6285a3b55 initial implementation for nchw v4r4 padding
[ROCm/composable_kernel commit: 2c93b3057d]
2019-09-15 16:31:54 -05:00
Chao Liu
89dc09f929 clean up
[ROCm/composable_kernel commit: 53094f7fae]
2019-09-15 12:13:58 -05:00
Chao Liu
79e13d9ded clean up
[ROCm/composable_kernel commit: bd7a230006]
2019-09-12 14:55:46 -05:00
Chao Liu
9a8d9e0e40 enabling padding for chwn format
[ROCm/composable_kernel commit: 724e984bff]
2019-09-11 01:13:13 -05:00
Chao Liu
399be319a2 more utility code
[ROCm/composable_kernel commit: 7a7fe16086]
2019-09-09 00:29:33 -05:00
Chao Liu
b0f3708397 added tuple
[ROCm/composable_kernel commit: 625838def0]
2019-09-06 18:07:56 -05:00
Chao Liu
eb6a36d393 Merge remote-tracking branch 'origin/master' into add_padding
[ROCm/composable_kernel commit: 86ceded98b]
2019-08-15 13:48:45 -05:00
Chao Liu
c8e088723f clean up
[ROCm/composable_kernel commit: 0979fb4af9]
2019-08-15 13:21:51 -05:00
Chao Liu
3b234309c9 adding padding to implicit gemm v1r3
[ROCm/composable_kernel commit: 4fb81e008c]
2019-08-14 10:55:34 -05:00
Chao Liu
15a2fc9029 add back some code
[ROCm/composable_kernel commit: 40836ab926]
2019-08-13 12:21:38 -05:00
Chao Liu
18bc57cd93 clean up
[ROCm/composable_kernel commit: 8bdaba51f8]
2019-08-13 00:37:23 -05:00
Chao Liu
d54e146f28 clean up
[ROCm/composable_kernel commit: fab2f10a55]
2019-08-12 15:48:35 -05:00
Chao Liu
301348f208 cleaning up
[ROCm/composable_kernel commit: 1c4ef23cff]
2019-08-09 22:48:28 -05:00
Chao Liu
807d22de0b tweak on amd
[ROCm/composable_kernel commit: 4908fe3fdc]
2019-08-08 12:14:06 -05:00
Chao Liu
7b5f9bebc6 added ThreadwiseGenericTensorSliceCopy_v2r1
[ROCm/composable_kernel commit: a9b2b1dcd7]
2019-08-08 02:42:52 -05:00