example of conv bwd weight 1d/2d/3d fp32/fp16/bf16 xdl (#244)

* enable examples of 1D/3D conv bwd weight

* make the bf16 kernel avoid atomic add

* use the new gridwise GEMM for convnd bwd weight

Co-authored-by: Chao Liu <chao.liu2@amd.com>
Authored by Shaojie WANG on 2022-05-21 06:20:10 +08:00; committed by GitHub.
parent 44943e0e21
commit ac543313bf
6 changed files with 1759 additions and 47 deletions


@@ -0,0 +1,17 @@
#pragma once

namespace ck {
namespace tensor_operation {
namespace device {

enum struct ConvolutionBackwardWeightSpecialization
{
    Default,
    Filter1x1Stride1Pad0,
    Filter1x1Pad0,
    OddC,
};

} // namespace device
} // namespace tensor_operation
} // namespace ck