Enable group mode (varlen) kernel generation for PyTorch integration (#4292)

## Proposed changes

This PR enables group mode (variable-length attention) kernel generation
for PyTorch's CK SDPA backend.

## Checklist

Please put an `x` into the boxes that apply. You can also fill these out
after creating the PR. If you're not sure, please don't hesitate to ask.

- [X] I have added tests relevant to the introduced functionality, and
the unit tests are passing locally
- [ ] I have added the test to the REGRESSION_TESTS list defined at the top
of tests/CMakeLists.txt, **IF** the test takes more than 30 seconds to run.
- [ ] I have added inline documentation which helps the maintainers
understand the motivation
- [ ] I have removed the stale documentation which is no longer relevant
after this pull request
- [ ] (If this change is user-facing) I have added release notes which
provide the end users with a brief summary of the improvement from this
pull request
- [X] I have run `clang-format` on all changed files
- [ ] Any dependent changes have been merged

## Discussion

The change is minimal (a single-line deletion) but enables a significant
feature: variable-length attention support for ROCm users via PyTorch's
`torch.nn.attention.varlen` API.

---
🔁 Imported from
[ROCm/composable_kernel#3553](https://github.com/ROCm/composable_kernel/pull/3553)
🧑‍💻 Originally authored by @chinmaydk99

Co-authored-by: Chinmay_Kuchinad <ChinmayDattanand.Kuchinad@amd.com>
Commit fdb1a08e6f (parent 784a03af29), authored by assistant-librarian[bot],
committed by GitHub, 2026-02-09 20:58:57 +00:00.

```diff
@@ -1219,7 +1219,6 @@ def get_product(receipt: int) -> Product:
     cond &= kernel_ctx.pipeline.F_vlayout == "row"
     cond &= kernel_ctx.pipeline.F_bias in ["no", "bias"]
     cond &= kernel_ctx.pipeline.F_qscale == "no"
-    cond &= problem_ctx.mode == "batch"
     cond &= kernel_ctx.pipeline.F_skip == "f"
     cond &= kernel_ctx.pipeline.F_logits == "f"
     return cond
```