[rocm-libraries] ROCm/rocm-libraries#4292 (commit b7f1367)

Enable group mode (varlen) kernel generation for PyTorch integration (#4292)

## Proposed changes

This PR enables group mode (variable-length attention) kernel generation
for PyTorch's CK SDPA backend.

## Checklist

Please put an `x` into the boxes that apply. You can also fill these out
after creating the PR. If you're not sure, please don't hesitate to ask.

- [X] I have added tests relevant to the introduced functionality, and
the unit tests are passing locally
- [ ] I have added the test to REGRESSION_TESTS list defined at the top
of CMakeLists.txt in tests/CMakeLists.txt, **IF** the test takes more
than 30 seconds to run.
- [ ] I have added inline documentation that helps the maintainers
understand the motivation
- [ ] I have removed the stale documentation which is no longer relevant
after this pull request
- [ ] (If this change is user-facing) I have added release notes which
provide the end users with a brief summary of the improvement from this
pull request
- [X] I have run `clang-format` on all changed files
- [ ] Any dependent changes have been merged

## Discussion

The change is minimal (a single-line deletion) but enables a significant
feature: variable-length attention support for ROCm users via PyTorch's
`torch.nn.attention.varlen` API.
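For context on what "group mode" buys us, here is a minimal sketch (plain Python, illustrative names only, not the PyTorch or CK API) of the packed layout that variable-length attention uses: instead of padding every sequence in a batch to the longest length, tokens are concatenated into one buffer and a cumulative sequence-length array marks each sequence's boundaries.

```python
# Illustrative sketch of varlen ("group mode") packing; names are
# assumptions for this example, not real API identifiers.
from itertools import accumulate

seq_lens = [3, 5, 2]                     # three ragged sequences
cu_seqlens = [0, *accumulate(seq_lens)]  # cumulative offsets: [0, 3, 8, 10]

# A padded "batch mode" layout would need 3 * max(seq_lens) = 15 token
# slots; the packed "group mode" layout needs only sum(seq_lens) = 10.
packed_tokens = list(range(sum(seq_lens)))

# Each sequence is recovered from the packed buffer via its offsets.
sequences = [packed_tokens[cu_seqlens[i]:cu_seqlens[i + 1]]
             for i in range(len(seq_lens))]
assert [len(s) for s in sequences] == seq_lens
```

The kernel-side benefit is that no compute is spent on padding tokens; each work-group attends over exactly one sequence's slice.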
This commit is contained in:
Chinmay Dattanand Kuchinad
2026-02-09 20:59:55 +00:00
committed by assistant-librarian[bot]
parent ea6363ad78
commit 0cafa68b6f


@@ -1219,7 +1219,6 @@ def get_product(receipt: int) -> Product:
     cond &= kernel_ctx.pipeline.F_vlayout == "row"
     cond &= kernel_ctx.pipeline.F_bias in ["no", "bias"]
     cond &= kernel_ctx.pipeline.F_qscale == "no"
-    cond &= problem_ctx.mode == "batch"
     cond &= kernel_ctx.pipeline.F_skip == "f"
     cond &= kernel_ctx.pipeline.F_logits == "f"
     return cond
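A runnable sketch of the filter's behavior, assuming simplified stand-ins for the real `kernel_ctx`/`problem_ctx` objects: with the `mode == "batch"` check deleted, the predicate accepts both batch-mode and group-mode problems, so group-mode (varlen) kernels are generated too.

```python
# Simplified stand-in for the kernel generator's filter predicate;
# the Pipeline fields mirror the conditions in the diff above, but
# this class and function are illustrative, not the real codebase.
from dataclasses import dataclass

@dataclass
class Pipeline:
    F_vlayout: str = "row"
    F_bias: str = "no"
    F_qscale: str = "no"
    F_skip: str = "f"
    F_logits: str = "f"

def kernel_enabled(pipeline: Pipeline, mode: str) -> bool:
    cond = pipeline.F_vlayout == "row"
    cond &= pipeline.F_bias in ["no", "bias"]
    cond &= pipeline.F_qscale == "no"
    # Deleted by this PR:  cond &= mode == "batch"
    cond &= pipeline.F_skip == "f"
    cond &= pipeline.F_logits == "f"
    return cond

# Both problem modes now pass the filter.
assert kernel_enabled(Pipeline(), "batch")
assert kernel_enabled(Pipeline(), "group")
```

Before the deletion, only the `"batch"` call would have returned `True`.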