Dave, Harsh 7c6c04a457 More optimizations in 6x8m DGEMM SUP Kernel using prefetching (#34)
* Enhance Prefetching in 6x8m DGEMM SUP Kernel for Improved Performance

This update optimizes the DGEMM kernel by implementing well-suited prefetching techniques.

Key changes include:

- **Prefetching Strategy**:
  - Introduced prefetching instructions to load matrix data into cache ahead of computation.
  - Prefetching for matrix A is driven by the k-loop, targeting columns just ahead of the ones currently being loaded and computed.
  - Prefetching for matrix B follows the same approach, targeting rows just ahead of the ones currently being loaded and computed.

- **Unrolling Optimization**:
  - Increased the unroll factor of the k-loop from 4 to 8, allowing for more efficient prefetching of matrices A and B.
  - This adjustment enhances data locality and reduces the overhead associated with loop control.

- **Performance Improvements**:
  - Reduced memory access latency by ensuring data is preloaded into cache.
  - Enhanced computational throughput by minimizing stalls due to memory access delays.
  - Improved overall efficiency of matrix multiplication operations.

These enhancements lead to faster DGEMM computations, leveraging improved cache utilization and loop unrolling to boost overall performance.
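The prefetching strategy above can be sketched in plain C using GCC/Clang's `__builtin_prefetch` builtin as a portable stand-in for the kernel's assembly prefetch instructions. The scalar loop body, the `PF_DIST` distance, and the function name are illustrative assumptions, not the actual vectorized kernel:

```c
#include <stddef.h>

/* Scalar model of a 6x8 DGEMM micro-kernel: C(6x8) += A(6xk) * B(kx8).
 * A is stored one 6-element column per k-iteration, B one 8-element row
 * per k-iteration. PF_DIST models how far ahead of the current iteration
 * we prefetch (the real kernel tunes this per target). */
#define MR 6
#define NR 8
#define PF_DIST 8 /* illustrative prefetch distance */

void dgemm_6x8_sketch(size_t k, const double *a, const double *b, double *c)
{
    for (size_t p = 0; p < k; ++p) {
        /* Prefetch the A column and B row PF_DIST iterations ahead so the
         * data is already in cache when the multiply-adds need it. */
        __builtin_prefetch(a + (p + PF_DIST) * MR, 0, 3);
        __builtin_prefetch(b + (p + PF_DIST) * NR, 0, 3);

        /* Rank-1 update of the 6x8 tile for this k-iteration. */
        for (int i = 0; i < MR; ++i)
            for (int j = 0; j < NR; ++j)
                c[i * NR + j] += a[p * MR + i] * b[p * NR + j];
    }
}
```

Prefetch instructions never fault, so issuing them for addresses past the end of the operands near the loop's tail is harmless; this is what lets the kernel prefetch unconditionally inside the hot loop.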

AMD-Internal: [CPUPL-6435]

* Added a 4x k-loop unroll path alongside the 8x unroll

* Added descriptive comments explaining prefetch strategy

* More optimizations in 6x8m DGEMM SUP Kernel using prefetching

- Restructured main loop with 8× and 4× unrolling (k_iter_8, k_iter_4, k_left) for deeper pipeline utilization.
- Introduced forward prefetching for A and future B rows to better align with unrolled access patterns.
- Interleaved alpha scaling with FMA to compute alpha*AB + C more efficiently.

These enhancements lead to faster DGEMM computations, leveraging improved cache utilization and loop unrolling
to boost overall performance.

AMD-Internal: [CPUPL-6435]

---------

Co-authored-by: Harsh Dave <harsdave@amd.com>
Co-authored-by: Varaganti, Kiran <Kiran.Varaganti@amd.com>
2025-07-01 15:02:50 +05:30

AOCL-BLAS library

AOCL-BLAS is AMD's optimized version of BLAS targeted for AMD EPYC and Ryzen CPUs. It is a fork of BLIS (https://github.com/flame/blis), which is developed by members of the Science of High-Performance Computing (SHPC) group in the Institute for Computational Engineering and Sciences at The University of Texas at Austin, along with other collaborators (including AMD). All known features and functionalities of BLIS are retained and supported in the AOCL-BLAS library, and AOCL-BLAS is regularly updated with improvements from the upstream repository.

AOCL-BLAS is optimized with the SSE2, AVX2, and AVX512 instruction sets, which are enabled for the target Zen architecture via the dynamic dispatch feature. All prominent Level 3, Level 2, and Level 1 APIs are designed and optimized with specific paths targeting different size spectrums, e.g., Small, Medium, and Large sizes. These algorithms are customized to exploit the architectural improvements of the target platform.

For detailed instructions on how to configure, build, install, and link against AOCL-BLAS on AMD CPUs, please refer to the AOCL User Guide located on AMD developer portal.

The upstream repository (https://github.com/flame/blis) contains further information on BLIS, including background information on BLIS design, usage examples, and a complete BLIS API reference.

AOCL-BLAS is developed and maintained by AMD. You can contact us by email at toolchainsupport@amd.com, or raise any issue/suggestion on the GitHub repository at https://github.com/amd/blis/issues.
