* update docker tag for gfx950 ci build
* update compiler path for gfx950 ci build
* suppress compiler path override for gfx950
* clean up
[ROCm/composable_kernel commit: f5d1e3fa48]
Without this change, the log shows DataType = unknown:
```sh
Run Flatmm kernel with DataType = unknown M =1280 N =16384 K =1024 StrideA =1024 StrideB =1024 StrideC =16384 : 0.228837 ms, 187.687 TFlops, 341.374 GB/s,
```
After this change:
```sh
Run Flatmm kernel with DataType = bf16 M =1280 N =16384 K =1024 StrideA =1024 StrideB =1024 StrideC =16384 : 0.227029 ms, 189.181 TFlops, 344.092 GB/s,
```
[ROCm/composable_kernel commit: 6b09f0823e]
* fix bug in loops that need to use local tokens for computation
* support extra chain local_token
* update
* update
* refine some main
* update
* support dispatch_policy
* fix example 15
[ROCm/composable_kernel commit: cfe211cc60]
* [draft] Add pk_fp4 and test
* Add hw conversion for fp4
* Refine test code and pk_fp4 constructor.
* fix test indent
* modify according to comment.
* fix clang-format
* modify according to comments.
---------
Co-authored-by: asleepzzz <hanwen.chang@amd.com>
[ROCm/composable_kernel commit: 141bf2d54d]
* Initial commit
* Adding new tile partitioner to flatmm
* intermediate changes
* debugging kernels
* Updating flatmm example to universal gemm example
* updated flatmm kernel to run via gemmKernel
* update universal gemm to incorporate flatmm
* debug
* Fix flatmm call
* Fixing other kernels and tests for API changes
* clang formatted
* fixing gemm tests
* added test for flatmm and simplify kernel arguments
* adding flatmm test
* fix test for flatmm
* simplify gemm kernel with flatmm
* remove flatmm related files
* addressing review comments and code clean up
* resolving empty file
* resolving empty file
* clang formatted
* addressing review comments
* enable persistent kernel for flatmm
* reverted the removed files for flatmm
* reverted the removed files for flatmm
* changed flatmm to weightPReshuffle; removed the _1 added in the flatmm example
* some more renames
* clang formatted
[ROCm/composable_kernel commit: d239b91fd5]
* add template for fp16 atomic add
* add template for unsigned short atomic add
* use atomicCAS in atomic add for fp16 and unsigned short
* revert to atomic add using casting
[ROCm/composable_kernel commit: 1b66f3f4a3]
* mask support ratio for y axis
* format code
* add notes for param y_ratio
* fix comment errors
* support template and mdiv for ratio mask
* refactor y-ratio mask constructor
* optimize coordinate calculation
* add SimplifiedRatioAttentionMask
[ROCm/composable_kernel commit: d814fefe18]
* Wrap tile size mapping as class method
* Wrap pipeline generation as class method
* Add constraint as kernel dispatching criteria
* Support multiple tile sizes for a (hdim, hdim_v) combination
* Use smaller tile size if CU utilization is low
* Use integer as the key of the tile size map
* Fix type error
* Simply override parent class method return value
* Add attribute to eliminate warning
* Allow using environment variables to turn on/off custom factory
* Unify param naming style
* Add missing HIP runtime include directive
* Fix os.environ.get() usage
[ROCm/composable_kernel commit: ad9863fe05]
* Adding ninja log JSON conversion utility
* Updating to match old ninjatracing
* Updating Jenkins to use new ninjatracing
* Ensuring v7 works
* Removing old ninjatracing from dockerfile
[ROCm/composable_kernel commit: e391b025a0]
* add template for fp16 atomic add
* add template for unsigned short atomic add
* use atomicCAS in atomic add for fp16 and unsigned short
[ROCm/composable_kernel commit: 112b47e885]
* add prefetching physical block id for pagedkv
* start add pagedkv prefill
* rename pipeline
* add kernel for pagedkv
* add an init version pagedkv prefill
* fix redefine issue
* add struct BlockFmhaFwdPagedKVPipelineProblem and fmha_fwd_pagedkv_args
* generate dispatch code
* add body generating code
* compilation passes
* remove dropout from pagedkv
* set lse to false in generating code
* start changing qr kernel to pagedkv
* init version of kernel with pagedkv
* change names of file that are generated
* change host validation for pagedkv prefill
* using iglp to change blockgemm
* add kernel files to op head file
* show parameters
* rewrite print-parameter function
* add fwd
* remove default parameter of GridSize
* format
* fix nhead issue and add seqlen_k_ptr to batch mode
* format code
* remove no-longer used code
* format
* fix some comments
---------
Co-authored-by: ltqin <letaoqin@amd.com>
Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
[ROCm/composable_kernel commit: 9f4c5d7372]
* support slice cross p
* fix some bugs in y_len
* more case
* fix a bug when R exists
* support -1 to hint end of current length
* format
* change commit
[ROCm/composable_kernel commit: a8742f7e31]
* Always force output clearing
* don't run set-zero for residual
---------
Co-authored-by: Bartlomiej Kocot <barkocot@amd.com>
[ROCm/composable_kernel commit: 3d70c638d1]
* works on mi300
* fix(profiler): add error message for unsupported type/layout
* refactor(preshuffle.inc): add type aliases for code readability
[ROCm/composable_kernel commit: 36df1cbd0a]
* fix(precommit_install): script now installs packages in virtual env
* fix(precommit_install): installs packages in virtual env
* feat(precommit): added ruff for python linting and formatting
* feat(precommit): added ruff for python linting and formatting
* feat(precommit): run ruff when py files are committed
* feat(precommit): run remod.py when ck_tile is modified
* add empty line at the end
* style(precommit.yaml): remove empty line
---------
Co-authored-by: Max Podkorytov <4273004+tenpercent@users.noreply.github.com>
[ROCm/composable_kernel commit: e9036a8fc2]
* Testing assignment of param fix
* Removing redundant changes
* Adding back unit test runs
* Ensuring Jenkins changes work on develop - to be reverted
* Revert "Ensuring Jenkins changes work on develop - to be reverted"
This reverts commit cf1cab4a43.
[ROCm/composable_kernel commit: 2fa9270a25]
1. When the conv spec is 1x1, stride 1, pad 0, an NCHW input is equivalent to matrix A in column-major layout, so only a minor change in the conv transformer is needed to support it.
2. When the output is NKHW, it is equivalent to matrix C in column-major layout; we need to swap A and B to get the best performance.
3. Add a new instance, device_grouped_conv_fwd_xdl_f16_nchw_instances, for NCHW.
[ROCm/composable_kernel commit: 1749c0409e]