Fix example_grouped_gemm_multiple_d_xdl_fp16 on gfx950 (#2203)

* Fix example_grouped_gemm_multiple_d_xdl_fp16 on gfx950

* Run clang format
Author: jefyang1
Date: 2025-05-19 14:25:50 -07:00
Committed by: GitHub
Parent: 6342f6b5e8
Commit: b8b12bb81e

@@ -141,8 +141,8 @@ bool run_grouped_gemm(const ProblemSize& problem_size, const ExecutionConfig& co
     a_tensors_device.reserve(group_count);
     b_tensors_device.reserve(group_count);
-    d_tensors_device.reserve(group_count);
     c_tensors_device.reserve(group_count);
+    d_tensors_device.resize(group_count); // resize: allocate and update vector size
     std::size_t flop = 0, num_btype = 0;