Commit Graph

590 Commits

Harsh Dave
e437469a99 Optimized AVX2 DGEMM SUP edge kernels
- For edge kernels, which handle the corner cases and especially the
cases where only a small amount of computation is done, executing
FMAs efficiently becomes crucial.

- In the previous implementation, the edge kernels used the same limited
set of vector registers to hold the FMA results, which indirectly creates
a dependency on the previous FMA completing before the CPU can issue a new FMA.

- This commit addresses the issue by also using the other vector registers
that are available to hold FMA results.

- That way, FMA results are held in two sets of vector registers, so that
a subsequent FMA does not have to wait for the previous FMA to complete.

- At the end of the unrolled K loop, these two sets of vector registers are
added together to store the correct result in the intended vector registers.
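
The dependency-breaking idea can be illustrated with a plain-C sketch (a hypothetical scalar analogue of the kernel's register scheme, not the actual assembly; the function name is invented):

```c
#include <stddef.h>

/* Hypothetical scalar analogue of the dual-accumulator scheme:
 * two independent partial sums break the FMA dependency chain,
 * letting the CPU issue back-to-back FMAs into alternate registers. */
double dot_dual_accum( const double *a, const double *b, size_t k )
{
    double acc0 = 0.0, acc1 = 0.0;  /* two register "sets" */
    size_t i = 0;
    for ( ; i + 1 < k; i += 2 )
    {
        acc0 += a[i]   * b[i];      /* FMA into set 0 */
        acc1 += a[i+1] * b[i+1];    /* FMA into set 1, independent of set 0 */
    }
    if ( i < k ) acc0 += a[i] * b[i];
    return acc0 + acc1;             /* combine once, after the unrolled loop */
}
```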

AMD-Internal: [CPUPL-3574]
Change-Id: I48fa9e29b6650a785321097b9feeddc3326e3c54
2023-09-22 03:43:47 -04:00
Edward Smyth
bb4c158e63 Merge commit 'b683d01b' into amd-main
* commit 'b683d01b':
  Use extra #undef when including ba/ex API headers.
  Minor preprocessor/header cleanup.
  Fixed typo in cpp guard in bli_util_ft.h.
  Defined eqsc, eqv, eqm to test object equality.
  Defined setijv, getijv to set/get vector elements.
  Minor API breakage in bli_pack API.
  Add err_t* "return" parameter to malloc functions.
  Always stay initialized after BLAS compat calls.
  Renamed membrk files/vars/functions to pba.
  Switch allocator mutexes to static initialization.

AMD-Internal: [CPUPL-2698]
Change-Id: Ied2ca8619f144d4b8a7123ac45a1be0dda3875df
2023-08-21 07:01:38 -04:00
Harihara Sudhan S
278ca71706 Fixes for GEMV Functionality Issues
- Added a call to DSETV in DSCALV. When DSCALV is invoked by
  DGEMV, the SCAL function is expected to SET the vector to
  zero when alpha is 0. This change is done to ensure BLAS
  compatibility of DGEMV.
- Fixed a bug in DGEMV var 1. Reverted changes in DGEMV var
  1 to remove the packing and dispatch logic.
- CMake now builds with the _amd files for unf_var2 of GEMV.
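
Why SCAL must branch to a SET when alpha is 0 can be shown with a small sketch (a hypothetical helper, not the BLIS code): 0 * NaN is still NaN, so plain multiplication would not clear a vector containing NaN/Inf.

```c
#include <math.h>
#include <stddef.h>

/* Hypothetical sketch: a BLAS-compatible SCAL must branch to a SET
 * when alpha == 0, because plain multiplication leaves NaN/Inf
 * residue (0 * NaN == NaN) while BLAS semantics require exact zeros. */
void scalv_blas_compat( double alpha, double *x, size_t n )
{
    if ( alpha == 0.0 )
    {
        for ( size_t i = 0; i < n; ++i ) x[i] = 0.0;  /* SETV path */
    }
    else
    {
        for ( size_t i = 0; i < n; ++i ) x[i] *= alpha;
    }
}
```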

AMD-Internal: [CPUPL-3772]
Change-Id: I0d60c9e1025a3a56419d6ae47ded509d50e5eade
2023-08-14 13:54:48 +05:30
Harihara Sudhan S
03fa660792 Optimized xGEMV for non-unit stride X vector
- In GEMV variant 1, the input matrix A is row-major. The X vector
  has to have unit stride if the operation is to be vectorized.
- In cases where the X vector has non-unit stride, vectorization of the
  GEMV operation inside the kernel is ensured by packing the input X
  vector into a temporary unit-stride buffer. Currently, the
  packing is done using SCAL2V.
- In the case of DGEMV, the X vector is scaled by alpha as part of
  packing. In CGEMV and ZGEMV, alpha is passed as 1 while packing.
- The temporary buffer is released once the GEMV operation
  is complete.
- In DGEMV variant 1, moved the problem decomposition for the Zen
  architecture to the DOTXF kernel.
- Removed the flag-check-based kernel dispatch logic from DGEMV. Now,
  kernels will be picked from the context for non-AVX machines. For
  AVX machines, the kernel(s) to be dispatched is(are) assigned to
  the function pointer in the unf_var layer.
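
A minimal sketch of the packing step, assuming a SCAL2V-style copy loop (the function name and allocation strategy are illustrative, not the BLIS implementation):

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch of the packing step: gather a strided x into a
 * unit-stride buffer, scaling by alpha on the way (a SCAL2V-style
 * copy), so the GEMV kernel can use vector loads on the packed data. */
double *pack_x_unit_stride( double alpha, const double *x,
                            size_t n, size_t incx )
{
    double *xbuf = malloc( n * sizeof( *xbuf ) );
    if ( xbuf == NULL ) return NULL;
    for ( size_t i = 0; i < n; ++i )
        xbuf[i] = alpha * x[i * incx];  /* unit-stride, pre-scaled copy */
    return xbuf;                        /* caller frees after GEMV */
}
```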

AMD-Internal: [CPUPL-3475]
Change-Id: Icd9fd91eccd831f1fcb9fbf0037fcbbc2e34268e
2023-08-08 01:01:22 -04:00
Edward Smyth
c445f192d5 BLIS: Missing clobbers (batch 6)
More missing clobbers in skx and zen4 kernels, missed in
previous commits.

AMD-Internal: [CPUPL-3521]
Change-Id: I838240f0539af4bf977a10d20302a40c34710858
2023-08-07 10:52:23 -04:00
Harihara Sudhan S
3be43d264f Optimized xGEMV for non-unit stride Y vector
- In variant 2 of GEMV, the A matrix is column-major. The Y vector has
  to have unit stride if the operation is to be vectorized.
- In cases where the Y vector has non-unit stride, vectorization of the
  GEMV operation inside the kernel is ensured by packing the
  input Y vector into a temporary unit-stride buffer. As part of
  the packing, Y is scaled by beta to reduce the number of times the Y
  vector has to be loaded.
- After performing the GEMV operation, the results in the temporary
  buffer are copied to the original buffer and the temporary one is
  released.
- In DGEMV var 2, moved the problem decomposition for the Zen
  architecture to the AXPYF kernel.
- Removed the flag-check-based kernel dispatch logic from DGEMV. Now,
  kernels will be picked from the context for non-AVX machines. For
  AVX machines, the kernel(s) to be dispatched is(are) assigned to
  the function pointer in the unf_var layer.

AMD-Internal: [CPUPL-3485]
Change-Id: I7b2efb00a9fa9abca65abca07ee80f38229bf654
2023-08-07 08:12:44 -04:00
Harsh Dave
5bdf5e2aaa Optimized AVX2 DGEMM SUP and small edge kernels.
- Redesigned the edge kernels to use masked load-store
  instructions for handling corner cases.

- Masked load-store instruction macros are added:
  vmovdqu / VMOVDQU for setting up the mask;
  vmaskmovpd / VMASKMOVPD for the masked loads and stores.

- The following edge kernels are added for 6x8m dgemm sup.
  n-left edge kernels
  - bli_dgemmsup_rv_haswell_asm_6x7m
  - bli_dgemmsup_rv_haswell_asm_6x5m
  - bli_dgemmsup_rv_haswell_asm_6x3m

  m-left edge kernels
  - bli_dgemmsup_rv_haswell_asm_5x7
  - bli_dgemmsup_rv_haswell_asm_4x7
  - bli_dgemmsup_rv_haswell_asm_3x7
  - bli_dgemmsup_rv_haswell_asm_2x7
  - bli_dgemmsup_rv_haswell_asm_1x7

  - bli_dgemmsup_rv_haswell_asm_5x5
  - bli_dgemmsup_rv_haswell_asm_4x5
  - bli_dgemmsup_rv_haswell_asm_3x5
  - bli_dgemmsup_rv_haswell_asm_2x5
  - bli_dgemmsup_rv_haswell_asm_1x5

  - bli_dgemmsup_rv_haswell_asm_5x3
  - bli_dgemmsup_rv_haswell_asm_4x3
  - bli_dgemmsup_rv_haswell_asm_3x3
  - bli_dgemmsup_rv_haswell_asm_2x3
  - bli_dgemmsup_rv_haswell_asm_1x3

- For 16x3 dgemm_small, the m_left computation is handled
  with masked load-store instructions to avoid the overhead
  of conditional checks for edge cases.

- This improves performance by reducing branching overhead
  and by being more cache friendly.
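
What the masked loads buy the edge kernels can be modeled in scalar C (an illustrative model of vmaskmovpd semantics, not the intrinsic itself):

```c
#include <stddef.h>

/* Hypothetical scalar model of a masked vector load (vmaskmovpd):
 * lanes below n_left are loaded, remaining lanes read as zero, so an
 * edge kernel can use full-width arithmetic without touching memory
 * past the matrix boundary. */
#define VLEN 4  /* a 256-bit ymm register holds 4 doubles */

void masked_load_pd( double *dst, const double *src, size_t n_left )
{
    for ( size_t lane = 0; lane < VLEN; ++lane )
        dst[lane] = ( lane < n_left ) ? src[lane] : 0.0;
}
```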

AMD-Internal: [CPUPL-3574]

Change-Id: I976d6a9209d2a1a02b2830d03d21d200a5aad173
2023-08-07 07:30:50 -04:00
Vignesh Balasubramanian
758ec3b5ca ZGEMM optimizations for cases with k = 1
- Implemented bli_zgemm_4x4_avx2_k1_nn( ... ) kernel to replace
  bli_zgemm_4x6_avx2_k1_nn( ... ) kernel in the BLAS layer of
  ZGEMM. The kernel is built for handling the GEMM computation
  with inputs having k = 1, and the transpose values for A and
  B as N.

- The kernel dimension has been changed from 4x6 to 4x4,
  for the following reasons:

  - The 1xNR block of B in the n-loop can be reused over multiple
    MRx1 blocks of A in the m-loop during computation. The same
    applies to the fringe cases.

  - Every 1xNR block of B was scaled with alpha and stored in
    registers before traversing in the m-dimension. A similar change
    was done for the fringe cases in the n-dimension.

  - These registers should not be modified during compute, hence
    the kernel dimension was changed from 4x6 to 4x4.

- The check for early exit (with regard to the BLAS mandate) has been
  removed, since it is already present in the BLAS layer.

- The check for parallel ZGEMM has been moved post the redirection to
  this kernel, since the kernel is single-threaded.

- The bli_kernels_zen.h file was updated with the new kernel signature.
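
The k = 1 case reduces to a rank-1 (outer product) update, which is why the 1xNR block of B can be scaled once and reused; a hypothetical scalar reference (not the optimized kernel) might look like:

```c
#include <complex.h>
#include <stddef.h>

/* Hypothetical sketch of why k = 1 is special: C = alpha*A*B + beta*C
 * degenerates to a rank-1 update, so each alpha-scaled element of the
 * 1xN row of B can be held in registers and reused across every MRx1
 * column block of A. */
void zgemm_k1_nn( size_t m, size_t n,
                  double complex alpha,
                  const double complex *a,    /* m x 1 */
                  const double complex *b,    /* 1 x n */
                  double complex beta,
                  double complex *c, size_t ldc )  /* column-major */
{
    for ( size_t j = 0; j < n; ++j )
    {
        double complex bj = alpha * b[j];  /* scaled once, reused per column */
        for ( size_t i = 0; i < m; ++i )
            c[i + j*ldc] = a[i] * bj + beta * c[i + j*ldc];
    }
}
```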

AMD-Internal: [CPUPL-3622]
Change-Id: Iaf03b00d5075dd74cc412290d77a401986ba0bea
2023-08-07 15:10:08 +05:30
Harihara Sudhan S
c97471dce0 Added AVX512 ZDSCALV kernel
- Added an AVX512-based kernel for ZDSCAL. This will be dispatched from
  the BLAS layer on machines that have AVX512 support.
- In the AVX2 kernel for ZDSCALV, vectorized the fringe computation
  using SSE instructions.
- Removed the negative-incx handling checks from the BLIS implementation
  layer of ZDSCAL, as BLAS expects an early return for incx <= 0.

AMD-Internal: [CPUPL-3648]
Change-Id: I820808e3158036502b78b703f5f7faa799e5f7d9
2023-08-06 01:51:47 -04:00
Harihara Sudhan S
b126c9943b ZSCALV kernel optimization
- The ZSCALV kernel now uses fmaddsub intrinsics instead of a mul
  followed by addsub intrinsics.
- Removed the negative-incx handling checks from the BLAS implementation
  layer, as BLAS expects an early return for incx <= 0.
- Moved all exception handling in the kernel to the BLAS implementation
  layer.
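
The fmaddsub pattern can be modeled per complex element (a scalar sketch of the lane-wise subtract/add behavior, not the intrinsic code):

```c
/* Hypothetical scalar model of the fmaddsub-based complex scaling:
 * with the products [ar*xr, ar*xi] and cross terms [ai*xi, ai*xr],
 * a single vfmaddsub (multiply, then subtract in even lanes and add
 * in odd lanes) produces [ar*xr - ai*xi, ar*xi + ai*xr] in one
 * instruction, replacing the previous mul + addsub pair. */
void zscal_fmaddsub_model( double ar, double ai, double *x /* [re, im] */ )
{
    double even = ar * x[0] - ai * x[1];  /* lane 0: fused mul-subtract */
    double odd  = ar * x[1] + ai * x[0];  /* lane 1: fused mul-add */
    x[0] = even;
    x[1] = odd;
}
```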

AMD-Internal: [SWLCSG-2224]
Change-Id: I03b968d21ca5128cb78ddcef5acfd5e579b22674
2023-08-04 06:57:18 -04:00
Eleni Vlachopoulou
9c613c4c03 Windows CMake bugfix in object libraries for shared library option
Define BLIS_IS_BUILDING_LIBRARY when BUILD_SHARED_LIBS=ON for the object libraries created in the kernels/ directory.
The macro definition was not propagated from the high-level CMake, so it needs to be defined explicitly for the object libraries.

AMD-Internal: [CPUPL-3241]
Change-Id: Ifc5243861eb94670e7581367ef4bc7467c664d52
2023-05-24 17:30:16 +05:30
Edward Smyth
dea5fe4d12 BLIS: Missing clobbers (batch 5)
Add missing clobbers for AVX512 mask registers k0-k7
in zen4 kernels.

AMD-Internal: [CPUPL-3456]
Change-Id: I5f28c725d7af1466df4db4cdfa2d456bbc6ab36d
2023-05-23 15:40:29 -04:00
Edward Smyth
a3adfb68cf BLIS: Missing clobbers (batch 4)
Add missing clobbers in haswell (sup) kernels.

AMD-Internal: [CPUPL-3456]
Change-Id: I19fa97b85f75c8b8fe15d31b13768f937cc5e4cc
2023-05-23 14:57:08 -04:00
Edward Smyth
03965a4f07 BLIS: Missing clobbers (batch 3)
Add missing clobbers in haswell (non-sup) kernels.

AMD-Internal: [CPUPL-3456]
Change-Id: I68f6ad0c01557fcde73b1775d250d48b5162c521
2023-05-23 14:37:31 -04:00
Edward Smyth
e960141fe2 BLIS: Missing clobbers (batch 2)
Add missing clobbers in other zen4 kernels.

AMD-Internal: [CPUPL-3456]
Change-Id: I5cceb44fe100e03269cfe21d8c4c0d2171b921c3
2023-05-23 13:12:20 -04:00
Edward Smyth
ea2eea5097 BLIS: Missing clobbers (batch 1)
Add missing clobbers in first batch of assembly kernels:
- zen3 bli_gemmsup*
- bli_zgemm_zen4_asm_12x4
- bli_gemmsup_rv_haswell_asm_sMx6

AMD-Internal: [CPUPL-3456]
Change-Id: I33c321043a197b2b885cfd6cd589532fc633a6a1
2023-05-23 11:51:18 -04:00
Mangala V
5f5bc24989 Bug fix: AVX2 code being invoked on non-avx2 machine for ZGEMM API
Prevented the AVX2-based bli_zgemm_ref_k1_nn code from being called on
unsupported systems.
Renamed the function bli_zgemm_ref_k1_nn to bli_zgemm_4x6_avx2_k1_nn().
Renamed the function bli_dgemm_ref_k1_nn to bli_dgemm_8x6_avx2_k1_nn().

Thanks to Kiran Varaganti <Kiran.Varaganti@amd.com>
for identifying and helping to fix the issue.

AMD-Internal: [CPUPL-3352]
Change-Id: I02530ab197ed84c96cbad4f7dd56eedca0109c35
2023-05-21 23:13:46 +05:30
eashdash
2c4f032e0f Fix for lack of BF16 instruction when compiled with GCC-11
GCC 11 and below support AVX512-BF16; however, they do not support
all of the bf16 instructions required.

For the bf16 downscale APIs, when beta scaling is done, the C output
elements must be upconverted from BF16 to float for the
beta scaling operation.

For this bf16-to-float upconversion, _mm512_cvtpbh_ps is used.
This intrinsic, however, is not supported by GCC 11 and below
(it is supported from GCC 12 onwards), which leads to compilation
failures because _mm512_cvtpbh_ps is not recognized.

To fix this, we use a pair of instructions instead:
1. register containing bf16 type
   __m256bh a1
2. Convert bf16 to float with shift left ops
   __m512 float_a1 = (__m512)
   (_mm512_sllv_epi32
   (_mm512_cvtepi16_epi32 ((__m256i) a1), _mm512_set1_epi32 (16)));
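
The workaround relies on bfloat16 being the upper half of an IEEE-754 float; a scalar model of the widen-and-shift sequence (illustrative plain C, not the AVX512 code) is:

```c
#include <stdint.h>
#include <string.h>

/* Scalar model of the workaround: a bfloat16 value is the upper 16
 * bits of an IEEE-754 float, so widening to 32 bits and shifting left
 * by 16 (what _mm512_cvtepi16_epi32 + _mm512_sllv_epi32 do per lane)
 * reconstructs the float without needing _mm512_cvtpbh_ps. */
float bf16_to_f32( uint16_t bf16 )
{
    uint32_t bits = (uint32_t)bf16 << 16;  /* low mantissa bits become 0 */
    float f;
    memcpy( &f, &bits, sizeof( f ) );      /* bit-cast, no conversion */
    return f;
}
```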

AMD-Internal: [CPUPL-3454]
Change-Id: Ie4a9f04881c59ced088608633774b27f22b4ab8e
2023-05-19 10:15:08 +00:00
eashdash
061a68ff0d BF16 Downscale and Performance fix for bf16 API
This change contains the following:

1. Downscale optimization fix
   a. Similar to downscale optimizations made for s32 and s16 gemm,
      the following optimizations are done to improve the downscale
      performance for BF16 gemm
   b. The store to temporary float buffer can be avoided when k < KC
      since intermediate accumulation will not be required for the
      pc loop (only 1 iteration). The downscaled values (bf16) are
      written directly to the output C matrix.
   c. Within the micro-kernel when beta != 0, the bf16 data from the
      original C output matrix is loaded to a register, converted to
      float and beta scaling is applied on it at register level.
      This eliminates the requirement of previous design of copying the
      bf16 value to the temporary float buffer inside jc loop.

2. Alpha scaling
   a. Alpha scaling (multiply instruction) by default was resulting in
      performance regression when k dimension is small and alpha=1 in
      bf16 micro-kernels.
   b. Alpha scaling is now only done when alpha != 1.

3. K Fringe optimization
   a. Previously memcpy was used for K fringe case to load elements
      from A matrix in the microkernels
   b. Now, masked stores are used to store the downscaled and
      non-downscaled outputs without the need to use
      memcpy functions

4. N LT-16 fringe optimization
   a. Previously memcpy was used for N LT 16 fringe case in the
      microkernelsfor storing the downscaled and non-downscaled output.
   b. Now, masked stores are used to store the downscaled and
      non-downscaled outputs of BF16 without the need to use
      memcpy functions

5. Framework updates to avoid unnecessary pack buffer allocation
   a. The default allocation of the temporary pack buffer is removed
      and the pack buffer is now only allocated if k > KC.

AMD-Internal: [CPUPL-3437]
Change-Id: I71ff862e7d250559409a12a3533678c7a7951044
2023-05-18 10:02:56 -04:00
Shubham Sharma
26e120ea25 Fixed diagonal packing for C/Z TRSM small
- In C/Z TRSM small, packing was not handled properly in the
  unit-diagonal case.
- Diagonal elements were still being read even for a
  unit diagonal.
- This caused a "Conditional jump or move depends on
  uninitialised value" error during Valgrind tests.
- To fix this, diagonal elements are no longer read
  when the diagonal is unit.

AMD-Internal: [CPUPL-3406]
Change-Id: If3d6965299998a83d87f3a032f654fc7f8c43d4e
2023-05-18 07:57:21 -04:00
Eleni Vlachopoulou
1a7f60ff5b Update CMake system to use object libraries for haswell, skx and zen4.
- AVX2 and AVX512 flags are set up locally for each object library that requires them.
- Default ENABLE_SIMD_FLAGS value is set to none and for AVX2 option the corresponding compiler flag is set globally.
- To be able to build the zen4 codepath when ENABLE_SIMD_FLAGS=AVX2, the compiler option is cleared by removing the definition before building the corresponding object library.

AMD-Internal: [CPUPL-3241]
Change-Id: Ia570e60f06c4c72b7c58f4c9ca73bac4c060ae73
2023-05-12 10:04:16 -04:00
Harsh Dave
30b931ae60 Fixed compilation error due to inconsistent compiler behavior towards AVX512 zero masking instruction syntax
- The code used the whitespace variant of the AVX512 mask instruction
syntax, but some compilers accept the whitespace variant and some
don't - to be safe, the whitespace is removed.

- The whitespace variant of the masked instruction "vmovupd    (%rax,%r8,1),%zmm8{%k2} {z}" is replaced with
  "vmovupd    (%rax,%r8,1),%zmm8{%k2}{z}" to resolve the compilation failure.

- Thanks to Shubham Sharma <shubham.sharma3@amd.com> for identifying the issue.

AMD-Internal: [CPUPL-1963]

Change-Id: I290589132e8cce25cab0d1e4c195a7dd0a014937
2023-05-12 06:16:15 -04:00
mkadavil
b167e47091 LPGEMM frame and micro-kernel updates to fix gcc9.4 compilation issue.
-Micro-kernel: Some AVX512 intrinsics (e.g. _mm512_loadu_epi32) were
introduced in later versions of gcc (>10) in addition to the already
existing masked intrinsics (e.g. _mm512_mask_loadu_epi32). In order to
support compilation using gcc 9.4, either the masked intrinsic or another
gcc 9.4-compatible intrinsic needs to be used (e.g. _mm512_loadu_si512)
in the LPGEMM Zen4 micro-kernels.
-Frame: The BF16 LPGEMM APIs (aocl_gemm_bf16bf16f32obf16/bf16bf16f32of32)
need to be disabled if the aocl_gemm (LPGEMM) addon is compiled using gcc
9.4. BF16 intrinsics are not supported in gcc 9.4, so the micro-kernels
for BF16 LPGEMM are excluded from compilation based on the GNUC macro.

AMD-Internal: [CPUPL-3396]
Change-Id: I096b05cdceea77e3e7fec18a5e41feccdf47f0e7
2023-05-11 18:00:18 +05:30
Mangala V
7739a3fbfe Bug fix for 4xk AVX512 packing kernel
A few tests failed on Windows as some registers were not added to
the clobber list.

Added the following registers to the clobber list:
In function bli_zpackm_zen4_asm_12xk : ZMM12-ZMM15
In function bli_zpackm_zen4_asm_4xk : ZMM4-ZMM7

AMD-Internal: [CPUPL-3253]

Change-Id: I3e42130bf1a3b48717c4b437179ae3f116e5cf1d
2023-05-05 04:15:25 +05:30
Eleni Vlachopoulou
bf26b8ffbc Removing /arch:AVX2 flag from high-level CMake
- Previously, this flag was set as a default in the high-level CMakeLists.txt, which means it was used to build everything: all files and all subdirectories, including ref_kernels and testsuite. Also, all files were added as target sources for this project and compiled with the same flags.
- Now, we create object files using the sources in the kernels/ directory and add the AVX2 flag explicitly to those object files. So now only those files will have this flag, and it should not be used to compile ref_kernels, etc.
- This is a quick solution to enable runs on non-AVX2 machines.

AMD-Internal: [CPUPL-3241]
Change-Id: Id569b26ffeea40eaa36ab4465b0c52b6446d7650
2023-04-28 09:22:13 -04:00
Edward Smyth
7e50ba669b Code cleanup: No newline at end of file
Some text files were missing a newline at the end of the file.
One has been added to each.

Also correct file format of windows/tests/inputs.yaml, which
was missed in commit 0f0277e104

AMD-Internal: [CPUPL-2870]
Change-Id: Icb83a4a27033dc0ff325cb84a1cf399e953ec549
2023-04-21 10:02:48 -04:00
Edward Smyth
0f0277e104 Code cleanup: dos2unix file conversion
Source and other files in some directories were a mixture of
Unix and DOS file formats. Convert all relevant files to Unix
format for consistency. Some Windows-specific files remain in
DOS format.

AMD-Internal: [CPUPL-2870]
Change-Id: Ic9a0fddb2dba6dc8bcf0ad9b3cc93774a46caeeb
2023-04-21 08:41:16 -04:00
eashdash
a72fff2be9 Added NEW LPGEMM TYPE- s8s8s16os16 and s8s8s16os8
1. New LPGEMM types - s8s8s16os16 and s8s8s16os8 - are added.
2. New interface, frame and kernel files are added.
3. Frame- and kernel-level files are added and modified for s8s8s16.
4. The s8s8s16 type involves design changes for 2 operations -
   Pack B and Mat Mul.
5. Pack B kernel routines pack the B matrix for the s16 FMA and
   compute the sum of every column of the B matrix, to implement the
   s8s8s16 operation using the s16 FMA instructions.
6. Mat Mul kernel files compute the GEMM output using the s16 FMA.
   Here the A matrix elements are converted from int8 to uint8 (the
   s16 FMA works with A matrix type uint8 only) by adding 128 to
   every A matrix element.
7. Post GEMM computation, additional operations are performed on the
   accumulated outputs to get the correct results:
   Final C = C - ( (sum of column of B matrix) * 128 )
   This is done to compensate for the addition of 128 to every
   A matrix element.
8. With this change, two new APIs are introduced in LPGEMM -
   s8s8s16os16 and s8s8s16os8.
9. All previously added post-ops are supported on s8s8s16os16/os8 also.
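
The bias-and-compensate arithmetic can be checked with a scalar model (a hypothetical function, not the LPGEMM kernel):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical scalar model of the s8s8 trick: the s16 FMA path only
 * accepts unsigned A, so A is biased by +128 (int8 -> uint8), and the
 * bias is removed afterwards with C -= 128 * (column sums of B). */
int32_t s8s8_dot_via_u8( const int8_t *a, const int8_t *b, size_t k )
{
    int32_t acc = 0, bcolsum = 0;
    for ( size_t i = 0; i < k; ++i )
    {
        uint8_t au = (uint8_t)( a[i] + 128 );  /* biased A element */
        acc     += (int32_t)au * b[i];         /* what the u8*s8 FMA computes */
        bcolsum += b[i];                       /* computed during Pack B */
    }
    return acc - 128 * bcolsum;                /* compensation step */
}
```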

AMD-Internal: [CPUPL-3234]
Change-Id: I3cc23e3dcf27f215151dda7c8db29b3a7505f05c
2023-04-21 05:30:38 -04:00
mkadavil
3572baa9d3 aocl_softmax_f32 api's for softmax computation as part of lpgemm.
-Softmax is often used as the last activation function in a neural
network - softmax(xi) = exp(xi)/(exp(x0) + exp(x1) + ... + exp(xn))).
This step happens after the final low precision gemm computation,
and it helps to have the softmax functionality that can be invoked
as part of the lpgemm workflow. In order to support this, a new api,
aocl_softmax_f32 is introduced as part of aocl_gemm. This api
computes element-wise softmax of a matrix/vector of floats. This api
invokes ISA specific vectorized micro-kernels (vectorized only when
incx=1), and a cntx based mechanism (similar to lpgemm_cntx) is used
to dispatch to the appropriate kernel.

AMD-Internal: [CPUPL-3247]
Change-Id: If15880360947435985fa87b6436e475571e4684a
2023-04-21 05:26:08 -04:00
Edward Smyth
6835205ba8 Code cleanup: spelling corrections
Corrections for spelling and other mistakes in code comments
and doc files.

AMD-Internal: [CPUPL-2870]
Change-Id: Ifbb5df7df2d6312fe73e06ee6d41c00b16c593ce
2023-04-19 12:44:56 -04:00
mkadavil
99d10c3f88 Low precision gemm u8s8s16 downscale optimization.
-Similar to downscale optimizations made for u8s8s32 gemm, the following
optimizations are made to improve the downscale performance for u8s8s16
gemm:
a. The store to the temporary s16 buffer can be avoided when k < KC since
intermediate accumulation will not be required for the pc loop (only 1
iteration). The downscaled values (s8) are written directly to the
output C matrix.
b. Within the micro-kernel when beta != 0, the s8 data from the original
C output matrix is loaded to a register, converted to s16 and beta
scaling applied on it. The previous design of copying the s8 value to
the s16 temporary buffer inside jc loop and using the same in beta
scaling is removed.
-Alpha scaling (multiply instruction) by default was resulting in
performance regression when k dimension is small and alpha=1 in s16
micro-kernels. Alpha scaling is now only done when alpha != 1.

AMD-Internal: [CPUPL-3237]
Change-Id: If25f9d1de8b9b8ffbe1bd7bce3b7b0b5094e51ef
2023-04-19 06:40:06 -04:00
eashdash
462f9e0012 Added Custom Clip post-op support for u8s8s32os32/os8 and s8s8s32os32/os8
1. Custom Clip is a post-op which is used to clip the
   accumulated GEMM output within a certain range.
2. This post-op is implemented for u8s8s32os32/os8 and
   s8s8s32os32/os8 LPGEMM types.
3. Changes are done at the microkernel level for these
   2 APIs to support Clip Post-Op

AMD-Internal: [CPUPL-3207]
Change-Id: I8b4da5807de6a93711b0ae9343970c55192f75d4
2023-04-18 15:21:27 -04:00
Meghana Vankadari
42d05a5aa0 DGEMM: Added decision logic to choose between sup vs native for zen4 architecture
Details:
- Added a new function for choosing between SUP and
  native implementation for a given size.
- This function pointer is stored in cntx for zen4 config.
- Divided the total combinations of sizes into 3 categories:
  - One dimension is small
  - Two dimensions are small
  - All dimensions are small
- Added different threshold conditions for each of the
  categories.

AMD-Internal: [CPUPL-2755]
Change-Id: Iae4bf96bb7c9bf9f68fd909fb757d7fe13bc6caf
2023-04-17 13:08:34 -04:00
mkadavil
e23765010d aocl_gelu_<tanh|erf>_f32 api's for gelu computation as part of lpgemm.
-Currently in aocl_gemm, gelu (both tanh- and erf-based) computation is
only supported as a post-op as part of a low precision gemm api call
(done at the micro-kernel level). However, gelu computation alone,
without gemm, is required in certain cases for users of aocl_gemm.
-In order to support this, two new apis - aocl_gelu_tanh_f32 and
aocl_gelu_erf_f32 - are introduced as part of aocl_gemm. These apis
compute element-wise gelu_tanh and gelu_erf respectively of a matrix/
vector of floats. Both apis invoke ISA-specific vectorized micro-
kernels (vectorized only when incx=1), and a cntx-based mechanism
(similar to lpgemm_cntx) is used to dispatch to the appropriate kernel.

AMD-Internal: [CPUPL-3218]
Change-Id: Ifebbaf5566d7462288a9a67f479104268b0cc704
2023-04-17 05:15:56 -04:00
eashdash
12c97021a1 Added New Post-Op - Custom Clipping for LPGEMM and SGEMM
1. Custom Clip is an element-wise post-op which is used to
   clip the accumulated GEMM output within a certain range.
2. The Clip Post-Op is used in downscaled and non-downscaled
   LPGEMM APIs and SGEMM.
3. Changes are done at frame and microkernel level to implement
   this post-op.
4. Different versions are implemented - AVX-512, AVX-2, SSE-2
   to enable custom clipping for various LPGEMM types and SGEMM

AMD-Internal: [CPUPL-3207]
Change-Id: I71c60be69e5a0dc47ca9336d58181c097b9aa0c6
2023-04-17 04:38:20 -04:00
Aayush Kumar
71272ab574 Fixed compiler warnings for GCC 12 and AOCC 4.0
- Set the variables to zero to avoid the compiler warning
  (-Wmaybe-uninitialized) in bli_dgemm_ref_k1.c,
  bli_gemm_small.c, bli_trsm_small.c, bli_zgemm_ref_k1.c and
  bli_trsm_small_AVX512.c

- Changed the datatype from dim_t to siz_t for i,k,j
  in bli_hemv_unf_var1_amd.c and bli_hemv_unf_var3_amd.c to
  avoid the compiler warning (-Waggressive-loop-optimizations)

AMD-Internal: [CPUPL-2870]

Change-Id: Ib2bc050fa47cb8a280d719283ab4539c70e19d03
2023-04-14 13:29:17 +00:00
Harihara Sudhan S
32bbd96652 Moving AOCL Dynamic logic from the BLIS implementation layer
Threading related changes
--------------------------

- Created function bli_nthreads_l1 that dispatches the AOCL dynamic
  logic for an L1 function based on the kernel ID and input datatypes.
- bli_nthreads_l1 gets the number of threads to be launched from the
  rntm variable.
- Added aocl_'ker?'_dynamic function for DAXPYV, DSCALV, ZDSCALV and
  DDOTV. This function contains the AOCL dynamic logic for the
  respective kernels.
- Added handling for cases when number of elements (n) is less than
  number of threads spawned (nt) in AOCL dynamic.
- Added function bli_thread_vector_partition that calculates the
  amount of work the calling thread is supposed to perform on a
  vector.
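
The partitioning such a helper performs can be sketched as follows (a hypothetical signature, not bli_thread_vector_partition itself):

```c
#include <stddef.h>

/* Hypothetical sketch of per-thread work partitioning: n elements are
 * split across nt threads, with the first (n % nt) threads taking one
 * extra element, so the partition is as balanced as possible. */
void vector_partition( size_t n, size_t nt, size_t tid,
                       size_t *start, size_t *len )
{
    size_t base = n / nt;
    size_t rem  = n % nt;
    *len   = base + ( tid < rem ? 1 : 0 );
    *start = tid * base + ( tid < rem ? tid : rem );
}
```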

Interface changes
-----------------

- In the BLIS implementation layer of DSCALV, ZDSCALV and AXPYV, added
  logic to pick the kernel based on architecture ID and removed the
  AVX2 flag check.
- Modified the function signature of ZDSCALV. Alpha is passed as dcomplex
  and only the real part of the alpha passed is used inside the kernel.
  This change was done to facilitate kernel dispatch based on arch ID.
- Added the n <= 0 BLAS exception in the BLAS layer of DAXPYV and DDOTV.
  Without this, multithreaded code might crash because of the
  minimum-work calculation.

Misc
-----

- Removed unused variables from ZSCAL2V and AXPYV kernels.

AMD-Internal: [CPUPL-3095]
Change-Id: I4fc7ef53d21f2d86846e86d88ed853deb8fe59e9
2023-04-14 02:05:38 -04:00
Harihara Sudhan S
15bd0f9646 Added AVX512 based double and float AXPYV
- Added AVX512 based double and float AXPYV which will be used in
  Zen4 context.
- Added n <= 0 check and alpha == 0 check to the BLAS layer of
  SAXPY.
- Modified BLAS framework of float AXPYV to remove flag check and
  pick kernels based on architecture ID.
- AVX512 kernel is disabled for other Zen configurations using
  BLIS_KERNELS_ZEN4 macro.

AMD-Internal: [CPUPL-2793]
Change-Id: Ie6a0976c2cfcf81ae5125f5f9aad14477d4ebbd1
2023-04-14 01:06:57 -04:00
mkadavil
5e510727a9 Masked load/store to replace copy macros in u8s8s32 micro-kernels.
-As part of an earlier optimization, the memcpy function calls in the k
fringe ((k % 4) != 0 case, to utilize the vpdpbusd instruction) and n
fringe (n < 16 - beta scale and C store) were replaced with copy macros
specifically optimized for fewer than 4 and 16 elements each. However,
upon further analysis it was observed that masked load/broadcast and
masked store performed better on average than the copy macros. The copy
macros contained more if conditions, which resulted in more branching
and thus resulting in perf variations. It was also noted that code
generation varied a lot based on the compilers when using the copy
macros due to the extra conditional code.
-As part of this change, the copy macros are completely replaced with
masked load/broadcast/store. Performance was observed to be better and
less prone to variations for the k fringe and n fringe (< 16) cases.

AMD-Internal: [CPUPL-3173]
Change-Id: I73e6e65302ecf02e1397541b4a32b2a536f19503
2023-04-13 09:17:26 -04:00
Aayush Kumar
6ad387c2aa Added DTRSM Small Path AVX512 based LUNN/LLTN Variant Kernels
- 8x8 kernels are used for DTRSM SMALL
- Matrix A (a10) is packed for the GEMM operations.
- The packed matrix A will be re-used in all the column blocks
  along the N dimension.
- Diagonal elements of the A matrix are packed (a11) for the
  TRSM operations.
- Implemented fringe cases with below block sizes
   8x8, 8x4, 8x3, 8x2, 8x1
   4x8, 4x4, 4x3, 4x2, 4x1
   3x8, 3x4, 3x3, 3x2, 3x1
   2x8, 2x4, 2x3, 2x2, 2x1
   1x8, 1x4, 1x3, 1x2, 1x1

AMD-Internal: [CPUPL-2745]

Change-Id: I5bb57501f6d3783eb654e375d63901467dd14734
2023-04-13 01:44:31 -04:00
Harihara Sudhan S
6b8f4744a4 Added AVX512 based double and float DOTV
- Added AVX512 based double and float DOTV which will be used in
  Zen4 context.
- Added n <= 0 check to the BLAS layer of SDOTV.
- Modified BLAS framework of float DOTV to remove flag check and
  pick kernels based on architecture ID.
- AVX512 kernel is disabled for other Zen configurations using
  BLIS_KERNELS_ZEN4 macro.

AMD-Internal: [CPUPL-2800]
Change-Id: I550fbcbb17d6d887b9ecbea23237dc806b208702
2023-04-12 12:36:52 +05:30
Harihara Sudhan S
be7fb342c1 Added AVX512 based double and float SCALV
- Added AVX512 based double and float SCALV which will be used in
  Zen4 context.
- Added incx <= 0 check and alpha == 1 check to the BLAS layer of
  SSCAL.
- Modified BLAS framework of float SCAL to remove flag check and
  pick kernels based on architecture ID.
- AVX512 kernel is disabled for other Zen configurations using
  BLIS_KERNELS_ZEN4 macro.

AMD-Internal: [CPUPL-2766],[CPUPL-2765]
Change-Id: I4cdd93c9adbfbf8f7632730b8606ddcf70edd1dc
2023-04-11 14:41:56 +05:30
Shubham
cc25cff864 Added AVX512 flag for d24xk pack kernel for windows
- On Windows, the 24xk kernel was compiled without the AVX512 flag,
  which causes out-of-bounds writes for DTRSM.
- To fix this, the AVX512 flag has been added to the CMakeLists.txt
  file for the 24xk kernel.

AMD-Internal: [CPUPL-3186]
Change-Id: I0314dea88302fc4964a303853a4b9b719ecd8064
2023-04-09 22:38:33 +05:30
Aayush Kumar
8c537b0cd5 Added DTRSM Small Path AVX512 based LLNN/LUTN Variant Kernels
- 8x8 kernels are used for DTRSM SMALL
- Implemented fringe cases with below block sizes
   8x8, 8x4, 8x3, 8x2, 8x1
   4x8, 4x4, 4x3, 4x2, 4x1
   3x8, 3x4, 3x3, 3x2, 3x1
   2x8, 2x4, 2x3, 2x2, 2x1
   1x8, 1x4, 1x3, 1x2, 1x1

AMD-Internal: [CPUPL-2745]

Change-Id: I58d28912bddbaadb404052c0f3449ebbe3c97b68
2023-04-07 08:50:28 +00:00
Edward Smyth
1885540c5a Code cleanup: compiler warning fixes
Modify code to correct some warning messages from GCC 12.2 or
AOCC 4.0:
- Increase size of nbuf in blastest/f2c/endfile.c
- Remove unused variables in kernels/zen/1/bli_scal2v_zen_int.c
  and kernels/zen/1/bli_axpyv_zen_int10.c
- Remove extraneous parentheses in frame/compat/bla_trsm_amd.c
  and kernels/zen4/3/bli_zgemm_zen4_asm_12x4.c
- Add __attribute__ ((unused)) to several variables in
  frame/1m/packm/bli_packm_struc_cxk.c and
  frame/1m/packm/bli_packm_struc_cxk_md.c

AMD-Internal: [CPUPL-2870]
Change-Id: I595e46f0a3d737beb393c3ab531717565220b10d
2023-04-06 06:56:09 -04:00
vignbala
775ce1f13c Implemented AVX-512 based 12x4 m-variant SUP kernels for ZGEMM
- Implemented 12x4m column-preferential SUP kernels (main and fringe
  cases). The main kernel dimension is 12x4, and the associated fringe
  kernel dimensions are: 12x3m, 12x2m, 12x1m
                          8x4, 8x3, 8x2, 8x1
                          4x4, 4x3, 4x2, 4x1
                          2x4, 2x3, 2x2, 2x1.

- Included in-register transposition support for C, thus extending
  the storage scheme support to CCC, CCR, RCC and RCR inside the
  milli-kernel.

- Integrated conditional packing of A into the SUP front end for the
  dcomplex datatype. This redirects the RRC and CRC storage schemes
  onto the preceding set of SUP kernels through storage scheme
  transformation (RCC and CCC respectively).

- Updated the zen4 context file with the new set of SUP kernels, to
  get enabled appropriately. Furthermore, the context file was updated
  with the AVX-2 dotxv signatures for dcomplex datatype. This redirects
  the fringe cases of type 1x? to the pre-existing AVX-2 GEMV routines.

- Added C prefetching onto L2-cache, and an unroll factor of 4 for the
  k loop in all the kernels.

- Work in progress to include conjugate support and input spectrum
  extension for the AVX-512 SUP kernels. The current thresholds in the
  zen4 context are the same as the zen3 thresholds for ZGEMM SUP.

AMD-Internal: [CPUPL-3122]

Change-Id: If40bc4409c6eb188765329508cf1f24c0eb12d1e
2023-04-06 04:49:15 -04:00
Edward Smyth
1ac03e64b5 BLIS cpuid tidy and bugfix.
Improvements to BLIS cpuid functionality:
- Tidy names of avx support test functions, especially rename
  bli_cpuid_is_avx_supported() to bli_cpuid_is_avx2fma3_supported()
  to more accurately describe what it tests.
- Fix bug in frame/base/bli_check.c related to changes in commit
  6861fcae91

AMD-Internal: [CPUPL-3031]
Change-Id: Iacd8fb0ffbd45288e536fc6314660709055ea2d5
2023-04-03 08:46:37 -04:00
mkadavil
27a9e2a0ff u8s8s32 fringe kernel optimizations.
-The n fringe micro kernels use only a few zmm registers for computing
the output (eg: 6x16 uses 6 zmm registers for output as opposed to 24
used in 6x64). This leaves a lot of wasted registers that, if utilized,
can help increase the MR dimension and thus improve the reuse of the
registers loaded with B. Based on this idea, the existing n fringe
kernels are modified (6x16 -> 12x16, 6x32 -> 9x32). It is to be noted
that the maximum number of registers is not used, since that results in
cache-inefficient code due to the increase in MR and thus more
broadcasts required from the unpacked A matrix.
-Compiler flag updates for AOCC build to generate loops with 64 byte
alignment. This has been observed to improve performance slightly when
k dimension is small.

AMD-Internal: [CPUPL-3173]
Change-Id: I199ce75ef71d994ffe0067dac1ed804dce1742ca
2023-04-03 05:35:18 -05:00
eashdash
bd8cd763ff Added NEW LPGEMM TYPE- S8S8S32/S8
1. New LPGEMM type - S8S8S32/S8 is added.
2. New interface, frame and kernel files are added.
3. Frame and kernel files added/modified for S8S8S32/S8 have
   2 operations - Pack B and Mat Mul
4. Pack B kernel routines to pack B matrix for VNNI and compute the sum
   of every column of B matrix to implement the S8S8S32 operation using
   the VNNI instructions.
5. Mat Mul Kernel files to compute the GEMM output using the VNNI.
   Here the A matrix elements are converted from int8 to uint8 (VNNI
   works with A matrix type uint8 only).
6. Post GEMM computation, additional operations are performed on the
   accumulated outputs to get the correct results.
7. With this change, two new LPGEMM APIs are introduced in LPGEMM -
   s8s8s32os32 and s8s8s32os8.
8. All previously added post-ops are supported on S8S8S32/S8 also.

AMD-Internal: [CPUPL-3154]
Change-Id: Ib18f82bde557ea4a815a63adc7870c4234bfb9d3
2023-03-31 05:44:54 -04:00
Mangala V
245fdf072c AVX-512 based col-preferred kernels for ZGEMM in native path
- Kernel block size is 12x4
- Updated the zen4 config to enable these kernels in zen4 path.
- Tuned MC,KC,NC for better performance for m/n/k size > 500
- Updated CMakeLists.txt with ZGEMM kernels for windows build.

Kernel supports:
1. Preload and prebroadcast of A and B
2. Prefetch of the C matrix
3. The K loop is subdivided into multiple loops to maintain distance between C prefetches.
4. Special case when the alpha/beta imaginary component is zero
5. Row/Col/General stride of Matrix C

AMD-Internal: [CPUPL-2998]
Change-Id: I62e3c352d475b1add3f43270805fbcee00e2e440
2023-03-28 23:05:06 -04:00