[rocm-libraries] ROCm/rocm-libraries#6302 (commit 8d419e8)

CK: Remove 41 commented-out dead code blocks (~200 lines) (#6302)

Depends on #6300

## Summary

Remove 41 commented-out code blocks across 33 files in Composable
Kernel, totaling ~200 lines.

Identified using an automated dead code scanning skill (`ck-dead-code`)
with a calibrated two-stage pipeline:
1. **Pre-filter**: A keyword-based scan found 1,338 `//`-commented blocks.
Calibrated heuristics (trained on a 50-sample expert classification)
reduced these to 89 high-confidence candidates, a 93% noise reduction.
2. **Expert triage**: LLM expert classified each block in context as
CODE_REMOVE, CODE_KEEP, or NOT_CODE.
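A minimal sketch of what such a pre-filter stage could look like. This is illustrative only: the function names, the regex of code hints, and the 0.5 score threshold are assumptions for the example, not the actual `ck-dead-code` implementation.

```python
import re

# Hypothetical code-likeness hints; the real skill uses calibrated heuristics.
CODE_HINTS = re.compile(r"[;{}]|::|return\b|if\s*\(|#include")

def comment_blocks(lines):
    """Group consecutive //-comment lines into (start_line, stripped_lines) blocks."""
    block, start = [], None
    for i, line in enumerate(lines, 1):
        stripped = line.strip()
        if stripped.startswith("//"):
            if start is None:
                start = i
            block.append(stripped[2:].strip())
        else:
            if block:
                yield start, block
            block, start = [], None
    if block:
        yield start, block

def looks_like_code(block):
    """Crude pre-filter: flag the block if at least half its lines look like code."""
    hits = sum(bool(CODE_HINTS.search(line)) for line in block)
    return hits / max(len(block), 1) >= 0.5

src = '''\
int main() {
    // if (x % 2 != 0) {
    //     return 1;
    // }
    // This line explains the algorithm in plain prose.
    return 0;
}
'''.splitlines()

candidates = [(ln, b) for ln, b in comment_blocks(src) if looks_like_code(b)]
# The single comment block starting at line 2 scores 3/4 and survives the filter;
# it would then go to the second-stage expert triage.
```

Only blocks that pass this cheap filter are handed to the (much more expensive) LLM triage stage, which is what makes the 93% noise reduction worthwhile.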

| Classification | Count |
|---------------|-------|
| Removed (this PR) | 41 |
| Kept (debug helpers, alt configs, reference impls) | 32 |
| Not code (false positives) | 16 |

Removed blocks include: superseded implementations, old test data,
abandoned stubs, unreachable code, and buggy dead code.

---
Author: Aviral Goel, 2026-04-10 15:18:02 +00:00
Committed by: assistant-librarian[bot]
Parent: 4d0bbe5d17 · Commit: e0dfe58d66
82 changed files with 22 additions and 2883 deletions


```diff
@@ -339,16 +339,6 @@ struct GroupedFlatmmKernel : FlatmmKernel<TilePartitioner_, FlatmmPipeline_, Epi
     {
         return hostArgs;
     }
-    // CK_TILE_HOST static constexpr auto
-    // MakeKernelArgs(const ContiguousGroupedFlatmmHostArgs& hostArgs)
-    // {
-    //     return hostArgs;
-    // }
-    // CK_TILE_HOST static constexpr auto
-    // MakeKernelArgs(const MaskedGroupedFlatmmHostArgs& hostArgs)
-    // {
-    //     return hostArgs;
-    // }
     template <class ScaleM = FlatmmScalePointer<-1>,
               class ScaleN = FlatmmScalePointer<-1>,
```


```diff
@@ -483,13 +483,6 @@ struct MoeFlatmmKernel
         if constexpr(std::is_same_v<BLayout, tensor_layout::gemm::RowMajor>)
         {
-            // if(kargs.N % TilePartitioner::NPerBlock != 0 && FlatmmPipeline::kPadN == false)
-            // {
-            //     std::cerr << "Can't support N that is not a multiple of NPerBlock"
-            //                  " without padding!"
-            //               << std::endl;
-            //     return false;
-            // }
             if(kargs.N % FlatmmPipeline::GetVectorSizeB() != 0)
             {
                 std::cerr << "N is not a multiple of vector load size for B tensor!" << std::endl;
```