- For edge kernels, which handle the corner cases, and especially for
cases where there is very little computation to be done, executing
FMAs efficiently becomes crucial.
- In the previous implementation, edge kernels used the same limited
set of vector registers to hold FMA results, which indirectly created
a dependency on the previous FMA completing before the CPU could
issue a new FMA.
- This commit addresses the issue by using other available vector
registers to hold FMA results.
- That way, FMA results are held in two sets of vector registers, so
that a subsequent FMA does not have to wait for the previous FMA to
complete.
- At the end of the unrolled K loop, these two sets of vector registers
are added together to store the correct result in the intended vector
registers.
AMD-Internal: [CPUPL-3574]
Change-Id: I48fa9e29b6650a785321097b9feeddc3326e3c54
* commit 'b683d01b':
Use extra #undef when including ba/ex API headers.
Minor preprocessor/header cleanup.
Fixed typo in cpp guard in bli_util_ft.h.
Defined eqsc, eqv, eqm to test object equality.
Defined setijv, getijv to set/get vector elements.
Minor API breakage in bli_pack API.
Add err_t* "return" parameter to malloc functions.
Always stay initialized after BLAS compat calls.
Renamed membrk files/vars/functions to pba.
Switch allocator mutexes to static initialization.
AMD-Internal: [CPUPL-2698]
Change-Id: Ied2ca8619f144d4b8a7123ac45a1be0dda3875df
- Added a call to dsetv in dscalv. When DSCALV is invoked by
DGEMV, the SCAL function is expected to SET the vector to
zero when alpha is 0. This change is done to ensure BLAS
compatibility of DGEMV.
- Fixed a bug in DGEMV var 1. Reverted changes in DGEMV var
1 to remove packing and dispatch logic.
- CMAKE now builds with _amd files for unf_var2 of GEMV.
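The alpha == 0 behavior described above can be sketched as follows; dscalv_sketch is a hypothetical name, and the real path dispatches to the dsetv kernel instead of an inline loop:

```c
#include <math.h>
#include <stddef.h>

/* Hypothetical sketch of the dscalv behavior: when alpha == 0,
 * branch to a set-to-zero loop instead of multiplying, so stale
 * NaN/Inf values in the vector cannot leak through (0 * NaN == NaN). */
void dscalv_sketch(double alpha, double *x, size_t n)
{
    if (alpha == 0.0)
    {
        for (size_t i = 0; i < n; ++i)
            x[i] = 0.0;          /* setv: overwrite, never multiply */
        return;
    }
    for (size_t i = 0; i < n; ++i)
        x[i] *= alpha;           /* ordinary scaling path */
}
```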
AMD-Internal: [CPUPL-3772]
Change-Id: I0d60c9e1025a3a56419d6ae47ded509d50e5eade
- In GEMV variant 1, the input matrix A is row major. The X vector
has to be of unit stride if the operation is to be vectorized.
- In cases where the X vector is non-unit stride, vectorization of the
GEMV operation inside the kernel is ensured by packing the input X
vector into a temporary buffer with unit stride. Currently, the
packing is done using SCAL2V.
- In the case of DGEMV, the X vector is scaled by alpha as part of
packing. In CGEMV and ZGEMV, alpha is passed as 1 while packing.
- The temporary buffer created is released once the GEMV operation
is complete.
- In DGEMV variant 1, moved problem decomposition for Zen architecture
to the DOTXF kernel.
- Removed flag-check-based kernel dispatch logic from DGEMV. Now,
kernels will be picked from the context for non-AVX machines. For
AVX machines, the kernel(s) to be dispatched are assigned to
the function pointer in the unf_var layer.
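The packing step can be sketched as below; scal2v_pack is a hypothetical stand-in for the SCAL2V kernel (y := alpha * x with independent strides):

```c
#include <stddef.h>

/* Hypothetical sketch of the packing step: copy a strided x vector
 * into a unit-stride buffer, scaling by alpha on the way. Per the
 * commit, CGEMV/ZGEMV pass alpha = 1 here, while DGEMV folds the
 * real alpha into the pack. */
void scal2v_pack(double alpha, const double *x, size_t incx,
                 double *x_packed, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        x_packed[i] = alpha * x[i * incx];  /* gather to unit stride */
}
```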
AMD-Internal: [CPUPL-3475]
Change-Id: Icd9fd91eccd831f1fcb9fbf0037fcbbc2e34268e
More missing clobbers in skx and zen4 kernels, missed in
previous commits.
AMD-Internal: [CPUPL-3521]
Change-Id: I838240f0539af4bf977a10d20302a40c34710858
- In variant 2 of GEMV, the A matrix is column major. The Y vector has
to be of unit stride if the operation is to be vectorized.
- In cases where the Y vector is non-unit stride, vectorization of the
GEMV operation inside the kernel is ensured by packing the
input Y vector into a temporary buffer with unit stride. As part of
the packing, Y is scaled by beta to reduce the number of times the Y
vector has to be loaded.
- After performing the GEMV operation, the results in the temporary
buffer are copied to the original buffer and the temporary one is
released.
- In DGEMV var 2, moved problem decomposition for Zen architecture
to the AXPYF kernel.
- Removed flag-check-based kernel dispatch logic from DGEMV. Now,
kernels will be picked from the context for non-AVX machines. For
AVX machines, the kernel(s) to be dispatched are assigned to
the function pointer in the unf_var layer.
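The pack-with-beta and copy-back flow above can be sketched as follows; both function names are hypothetical, and the real kernels operate on vector registers rather than element loops:

```c
#include <stddef.h>

/* Hypothetical sketch of the var2 flow: scale strided y by beta while
 * packing to unit stride (so y is loaded only once), run the
 * unit-stride kernel on the buffer, then scatter results back. */
void pack_y_beta(double beta, const double *y, size_t incy,
                 double *y_packed, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y_packed[i] = beta * y[i * incy];  /* fold beta into the pack */
}

void unpack_y(const double *y_packed, double *y, size_t incy, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i * incy] = y_packed[i];         /* copy results back out */
}
```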
AMD-Internal: [CPUPL-3485]
Change-Id: I7b2efb00a9fa9abca65abca07ee80f38229bf654
- Implemented bli_zgemm_4x4_avx2_k1_nn( ... ) kernel to replace
bli_zgemm_4x6_avx2_k1_nn( ... ) kernel in the BLAS layer of
ZGEMM. The kernel is built for handling the GEMM computation
with inputs having k = 1, and the transpose values for A and
B as N.
- The kernel dimension has been changed from 4x6 to 4x4
for the following reasons:
- The 1xNR block of B in the n-loop can be reused over multiple
MRx1 blocks of A in the m-loop during computation. Similar
analogy exists for the fringe cases.
- Every 1xNR block of B was scaled with alpha and stored in
registers before traversing in the m-dimension. Similar change
was done for fringe cases in n-dimension.
- These registers should not be modified during compute, hence
the kernel dimension was changed from 4x6 to 4x4.
- The check for early exit (with regard to the BLAS mandate) has been
removed, since it is already present in the BLAS layer.
- The check for parallel ZGEMM has been moved post the redirection to
this kernel, since the kernel is single-threaded.
- The bli_kernels_zen.h file was updated with the new kernel signature.
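The register reuse that motivated the 4x4 shape can be sketched in scalar C; the function name, the NR constant, and the array layout are illustrative, not the actual kernel:

```c
#include <stddef.h>

#define NR 4  /* illustrative register block width, as in the 4x4 kernel */

/* Hypothetical sketch of the k = 1 reuse scheme: scale the 1xNR block
 * of B by alpha once, keep it (in registers, in the real kernel), and
 * reuse it unchanged for every MRx1 block of A in the m loop. */
void gemm_k1_sketch(size_t m, double alpha,
                    const double *a,       /* m x 1 column of A  */
                    const double *b,       /* 1 x NR row of B    */
                    double *c, size_t ldc) /* m x NR block of C  */
{
    double b_alpha[NR];
    for (size_t j = 0; j < NR; ++j)
        b_alpha[j] = alpha * b[j];         /* scaled once, reused below */
    for (size_t i = 0; i < m; ++i)
        for (size_t j = 0; j < NR; ++j)
            c[i * ldc + j] += a[i] * b_alpha[j];  /* b_alpha untouched */
}
```

Keeping the scaled B block pinned is what forces the narrower 4x4 shape: those registers must stay unmodified throughout the m loop.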
AMD-Internal: [CPUPL-3622]
Change-Id: Iaf03b00d5075dd74cc412290d77a401986ba0bea
- Added AVX512-based kernel for ZDSCAL. This will be dispatched from
the BLAS layer for machines that have AVX512 flags.
- In AVX2 kernel for ZDSCALV, vectorized fringe compute using SSE
instructions.
- Removed the negative incx handling checks from the blis_impli layer
of ZDSCAL as BLAS expects early return for incx <= 0.
AMD-Internal: [CPUPL-3648]
Change-Id: I820808e3158036502b78b703f5f7faa799e5f7d9
- The ZSCALV kernel now uses fmaddsub intrinsics instead of mul
followed by addsub intrinsics.
- Removed the negative incx handling checks from the BLAS impli
layer as BLAS expects early return for incx <= 0.
- Moved all exceptions in the kernel to the BLAS impli layer.
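Per element, the complex scaling the kernel vectorizes reduces to the scalar arithmetic below; the alternating subtract (real lane) and add (imaginary lane) is exactly the pattern fmaddsub fuses into one instruction. The type and function names are illustrative:

```c
/* Hypothetical scalar view of the pattern the kernel vectorizes:
 * a complex scale x := alpha * x expands to a subtract in the real
 * part and an add in the imaginary part of each element. */
typedef struct { double re, im; } dcomplex_t;

dcomplex_t zscal_one(dcomplex_t alpha, dcomplex_t x)
{
    dcomplex_t r;
    r.re = alpha.re * x.re - alpha.im * x.im;  /* fmsub lane */
    r.im = alpha.re * x.im + alpha.im * x.re;  /* fmadd lane */
    return r;
}
```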
AMD-Internal: [SWLCSG-2224]
Change-Id: I03b968d21ca5128cb78ddcef5acfd5e579b22674
Defining BLIS_IS_BUILDING_LIBRARY if BUILD_SHARED_LIBS=ON for the object libraries created in the kernels/ directory.
The macro definition was not propagated from the high-level CMake, so we need to define it explicitly for the object libraries.
AMD-Internal: [CPUPL-3241]
Change-Id: Ifc5243861eb94670e7581367ef4bc7467c664d52
Prevented calling the avx2-based bli_zgemm_ref_k1_nn code on
unsupported systems.
Changed the name of the function bli_zgemm_ref_k1_nn to bli_zgemm_4x6_avx2_k1_nn().
Changed the name of the function bli_dgemm_ref_k1_nn to bli_dgemm_8x6_avx2_k1_nn().
Thanks to Kiran Varaganti <Kiran.Varaganti@amd.com>
for identifying and helping to fix the issue.
AMD-Internal: [CPUPL-3352]
Change-Id: I02530ab197ed84c96cbad4f7dd56eedca0109c35
GCC 11 and below support AVX512-BF16.
However, they do not support all the bf16 instructions required.
For bf16 downscale APIs, when beta scaling is done, C output
elements must be upscaled from bf16 type to float type for
the beta scaling operation.
For this upscaling of bf16 to float, _mm512_cvtpbh_ps is used.
This, however, is not supported by GCC 11 and below
(but is supported on GCC 12 onwards).
The lack of this instruction support in GCC 11 and below leads to
compilation issues, with this instruction (_mm512_cvtpbh_ps)
not being recognized.
To fix this, we use a sequence of instructions instead:
1. Keep the bf16 data in a register:
   __m256bh a1;
2. Convert bf16 to float with a shift-left op:
   __m512 float_a1 = (__m512)(_mm512_sllv_epi32(
       _mm512_cvtepi16_epi32((__m256i) a1), _mm512_set1_epi32(16)));
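A portable scalar model of the same conversion, assuming the usual bfloat16 layout (a bf16 value is the upper 16 bits of an IEEE-754 float):

```c
#include <stdint.h>
#include <string.h>

/* Scalar model of the workaround: widen the bf16 bits, shift them
 * into the high half of a 32-bit word, and reinterpret as float.
 * The vector code performs the same widen-and-shift with
 * _mm512_cvtepi16_epi32 and _mm512_sllv_epi32. */
float bf16_to_float(uint16_t bf16)
{
    uint32_t bits = (uint32_t)bf16 << 16;  /* move into the high half */
    float f;
    memcpy(&f, &bits, sizeof f);           /* bit-level reinterpret */
    return f;
}
```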
AMD-Internal: [CPUPL-3454]
Change-Id: Ie4a9f04881c59ced088608633774b27f22b4ab8e
This change contains the following:
1. Downscale optimization fix
a. Similar to downscale optimizations made for s32 and s16 gemm,
the following optimizations are done to improve the downscale
performance for BF16 gemm
b. The store to temporary float buffer can be avoided when k < KC
since intermediate accumulation will not be required for the
pc loop (only 1 iteration). The downscaled values (bf16) are
written directly to the output C matrix.
c. Within the micro-kernel when beta != 0, the bf16 data from the
original C output matrix is loaded to a register, converted to
float and beta scaling is applied on it at register level.
This eliminates the previous design's requirement of copying the
bf16 value to the temporary float buffer inside the jc loop.
2. Alpha scaling
a. Alpha scaling (multiply instruction) by default was resulting in
performance regression when k dimension is small and alpha=1 in
bf16 micro-kernels.
b. Alpha scaling is now only done when alpha != 1.
3. K Fringe optimization
a. Previously memcpy was used for K fringe case to load elements
from A matrix in the microkernels
b. Now, masked stores are used to store the downscaled and
non-downscaled outputs without the need to use
memcpy functions
4. N LT-16 fringe optimization
a. Previously, memcpy was used for the N LT 16 fringe case in the
microkernels for storing the downscaled and non-downscaled output.
b. Now, masked stores are used to store the downscaled and
non-downscaled outputs of BF16 without the need to use
memcpy functions
5. Framework updates to avoid unnecessary pack buffer allocation
a. The default allocation of the temporary pack buffer is removed
and the pack buffer is now only allocated if k > KC.
AMD-Internal: [CPUPL-3437]
Change-Id: I71ff862e7d250559409a12a3533678c7a7951044
- In C/Z TRSM small, packing in the case of a unit diagonal
was not handled properly.
- Diagonal elements were still being read even in the case of
a unit diagonal.
- This causes a "Conditional jump or move depends on
uninitialised value" error during valgrind tests.
- To fix this, diagonal elements should not be read
in the case of a unit diagonal.
AMD-Internal: [CPUPL-3406]
Change-Id: If3d6965299998a83d87f3a032f654fc7f8c43d4e
- AVX2 and AVX512 flags are set up locally for each object library that requires them.
- Default ENABLE_SIMD_FLAGS value is set to none and for AVX2 option the corresponding compiler flag is set globally.
- To be able to build zen4 codepath when ENABLE_SIMD_FLAGS=AVX2, the compiler option is removed by removing the definition before building the corresponding object library.
AMD-Internal: [CPUPL-3241]
Change-Id: Ia570e60f06c4c72b7c58f4c9ca73bac4c060ae73
- The code used the whitespace variant of an AVX512 mask instruction,
but some compilers accept the whitespace variant and some don't; to
be safe, the whitespace was removed.
- The whitespace variant of the masked instruction
"vmovupd (%rax,%r8,1),%zmm8{%k2} {z}" is replaced with
"vmovupd (%rax,%r8,1),%zmm8{%k2}{z}" to resolve the compilation
failure.
- Thanks to Shubham Sharma <shubham.sharma3@amd.com> for identifying
the issue.
AMD-Internal: [CPUPL-1963]
Change-Id: I290589132e8cce25cab0d1e4c195a7dd0a014937
- Micro-kernel: Some AVX512 intrinsics (e.g. _mm512_loadu_epi32) were
introduced in later versions of gcc (>10) in addition to the already
existing masked intrinsics (e.g. _mm512_mask_loadu_epi32). In order to
support compilation using gcc 9.4, either the masked intrinsic or
another gcc 9.4 compatible intrinsic (e.g. _mm512_loadu_si512) needs
to be used in the LPGEMM Zen4 micro-kernels.
- Frame: The BF16 LPGEMM APIs (aocl_gemm_bf16bf16f32obf16/bf16bf16f32of32)
need to be disabled if the aocl_gemm (LPGEMM) addon is compiled using
gcc 9.4. BF16 intrinsics are not supported in gcc 9.4, so the
micro-kernels for BF16 LPGEMM are excluded from compilation based on
the GNUC macro.
AMD-Internal: [CPUPL-3396]
Change-Id: I096b05cdceea77e3e7fec18a5e41feccdf47f0e7
A few tests failed on Windows as some registers were not added to the
clobber list.
Updated the below registers in the clobber list:
In function bli_zpackm_zen4_asm_12xk : ZMM12-ZMM15
In function bli_zpackm_zen4_asm_4xk : ZMM4-ZMM7
AMD-Internal: [CPUPL-3253]
Change-Id: I3e42130bf1a3b48717c4b437179ae3f116e5cf1d
- Previously, this flag was set as a default at the high-level CMakeLists.txt, which means that this flag was used to build everything, all files and all subdirectories, including ref_kernels and the testsuite. Also, all files were added as target sources for this project and compiled with the same flags.
- Now, we create object files using the source in kernels/ directory and add to the object files the AVX2 flag explicitly. So, now only those files will have this flag and it should not be used to compile ref_kernels, etc.
- This is a quick solution to enable runs on non-AVX2 machines.
AMD-Internal: [CPUPL-3241]
Change-Id: Id569b26ffeea40eaa36ab4465b0c52b6446d7650
Some text files were missing a newline at the end of the file.
One has been added.
Also correct file format of windows/tests/inputs.yaml, which
was missed in commit 0f0277e104
AMD-Internal: [CPUPL-2870]
Change-Id: Icb83a4a27033dc0ff325cb84a1cf399e953ec549
Source and other files in some directories were a mixture of
Unix and DOS file formats. Convert all relevant files to Unix
format for consistency. Some Windows-specific files remain in
DOS format.
AMD-Internal: [CPUPL-2870]
Change-Id: Ic9a0fddb2dba6dc8bcf0ad9b3cc93774a46caeeb
1. New LPGEMM types - s8s8s16os16 and s8s8s16os8 - are added.
2. New interface, frame and kernel files are added.
3. Frame and kernel level files are added and modified for s8s8s16.
4. The s8s8s16 type involves design changes for 2 operations -
Pack B and Mat Mul.
5. Pack B kernel routines pack the B matrix for s16 FMA and compute the
sum of every column of the B matrix to implement the s8s8s16 operation
using the s16 FMA instructions.
6. Mat Mul kernel files compute the GEMM output using s16 FMA.
Here the A matrix elements are converted from int8 to uint8 (s16 FMA
works with A matrix type uint8 only) by adding an extra 128 to
every A matrix element.
7. Post GEMM computation, additional operations are performed on the
accumulated outputs to get the correct results:
Final C = C - ( (sum of column of B matrix) * 128 )
This is done to compensate for the addition of the extra 128 to every
A matrix element.
8. With this change, two new APIs are introduced in LPGEMM -
s8s8s16os16 and s8s8s16os8.
9. All previously added post-ops are supported on s8s8os16/os8 also.
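The compensation arithmetic (Final C = C - colsum(B) * 128) can be checked in scalar form: sum((a+128)*b) - 128*sum(b) == sum(a*b). The function below is an illustrative model, not the actual s16 FMA kernel:

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar model: shift signed A to unsigned (a + 128) so the
 * u8-only FMA can run, accumulate, then subtract 128 times the
 * column sum of B (computed while packing B in the real kernels)
 * to restore the signed result. */
int32_t s8s8_dot_via_u8(const int8_t *a, const int8_t *b, size_t k)
{
    int32_t acc = 0, col_sum_b = 0;
    for (size_t i = 0; i < k; ++i)
    {
        uint8_t a_u8 = (uint8_t)(a[i] + 128);  /* shift into u8 range */
        acc += (int32_t)a_u8 * b[i];           /* u8 x s8 FMA model */
        col_sum_b += b[i];                     /* column sum of B */
    }
    return acc - 128 * col_sum_b;              /* compensation step */
}
```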
AMD-Internal: [CPUPL-3234]
Change-Id: I3cc23e3dcf27f215151dda7c8db29b3a7505f05c
- Softmax is often used as the last activation function in a neural
network: softmax(xi) = exp(xi)/(exp(x0) + exp(x1) + ... + exp(xn)).
This step happens after the final low precision gemm computation,
and it helps to have softmax functionality that can be invoked
as part of the lpgemm workflow. To support this, a new API,
aocl_softmax_f32, is introduced as part of aocl_gemm. This API
computes the element-wise softmax of a matrix/vector of floats. It
invokes ISA-specific vectorized micro-kernels (vectorized only when
incx=1), and a cntx based mechanism (similar to lpgemm_cntx) is used
to dispatch to the appropriate kernel.
AMD-Internal: [CPUPL-3247]
Change-Id: If15880360947435985fa87b6436e475571e4684a
Corrections for spelling and other mistakes in code comments
and doc files.
AMD-Internal: [CPUPL-2870]
Change-Id: Ifbb5df7df2d6312fe73e06ee6d41c00b16c593ce
-Similar to downscale optimizations made for u8s8s32 gemm, the following
optimizations are made to improve the downscale performance for u8s8s16
gemm:
a. The store to the temporary s16 buffer can be avoided when k < KC
since intermediate accumulation will not be required for the pc loop
(only 1 iteration). The downscaled values (s8) are written directly
to the output C matrix.
b. Within the micro-kernel when beta != 0, the s8 data from the original
C output matrix is loaded to a register, converted to s16 and beta
scaling applied on it. The previous design of copying the s8 value to
the s16 temporary buffer inside jc loop and using the same in beta
scaling is removed.
-Alpha scaling (multiply instruction) by default was resulting in
performance regression when k dimension is small and alpha=1 in s16
micro-kernels. Alpha scaling is now only done when alpha != 1.
AMD-Internal: [CPUPL-3237]
Change-Id: If25f9d1de8b9b8ffbe1bd7bce3b7b0b5094e51ef
1. Custom Clip is a post-op which is used to clip the
accumulated GEMM output within a certain range.
2. This post-op is implemented for u8s8s32os32/os8 and
s8s8s32os32/os8 LPGEMM types.
3. Changes are done at the microkernel level for these
2 APIs to support Clip Post-Op
AMD-Internal: [CPUPL-3207]
Change-Id: I8b4da5807de6a93711b0ae9343970c55192f75d4
Details:
- Added a new function for choosing between SUP and
native implementation for a given size.
- This function pointer is stored in cntx for zen4 config.
- Divided total combinations of sizes into 3 categories:
- One dimension is small
- Two dimensions are small
- All dimensions are small
- Added different threshold conditions for each of the
categories.
AMD-Internal: [CPUPL-2755]
Change-Id: Iae4bf96bb7c9bf9f68fd909fb757d7fe13bc6caf
- Currently in aocl_gemm, gelu (both tanh- and erf-based) computation
is only supported as a post-op as part of a low precision gemm API call
(done at the micro-kernel level). However, gelu computation alone,
without gemm, is required in certain cases by users of aocl_gemm.
- To support this, two new APIs - aocl_gelu_tanh_f32 and
aocl_gelu_erf_f32 - are introduced as part of aocl_gemm. These APIs
compute element-wise gelu_tanh and gelu_erf, respectively, of a matrix/
vector of floats. Both APIs invoke ISA-specific vectorized micro-
kernels (vectorized only when incx=1), and a cntx based mechanism
(similar to lpgemm_cntx) is used to dispatch to the appropriate kernel.
AMD-Internal: [CPUPL-3218]
Change-Id: Ifebbaf5566d7462288a9a67f479104268b0cc704
1. Custom Clip is an element-wise post-op which is used to
clip the accumulated GEMM output within a certain range.
2. The Clip Post-Op is used in downscaled and non-downscaled
LPGEMM APIs and SGEMM.
3. Changes are done at frame and microkernel level to implement
this post-op.
4. Different versions are implemented - AVX-512, AVX-2, SSE-2
to enable custom clipping for various LPGEMM types and SGEMM
AMD-Internal: [CPUPL-3207]
Change-Id: I71c60be69e5a0dc47ca9336d58181c097b9aa0c6
- Set the variables to zero to avoid the compiler warning
(-Wmaybe-uninitialized) in bli_dgemm_ref_k1.c,
bli_gemm_small.c, bli_trsm_small.c, bli_zgemm_ref_k1.c and
bli_trsm_small_AVX512.c
- Changed the datatype from dim_t to siz_t for i,k,j
in bli_hemv_unf_var1_amd.c and bli_hemv_unf_var3_amd.c to
avoid the compiler warning (-Waggressive-loop-optimizations)
AMD-Internal: [CPUPL-2870]
Change-Id: Ib2bc050fa47cb8a280d719283ab4539c70e19d03
Threading related changes
--------------------------
- Created function bli_nthreads_l1 that dispatches the AOCL dynamic
logic for an L1 function based on the kernel ID and input datatypes.
- bli_nthreads_l1 gets the number of threads to be launched from the
rntm variable.
- Added aocl_'ker?'_dynamic function for DAXPYV, DSCALV, ZDSCALV and
DDOTV. This function contains the AOCL dynamic logic for the
respective kernels.
- Added handling for cases when number of elements (n) is less than
number of threads spawned (nt) in AOCL dynamic.
- Added function bli_thread_vector_partition that calculates the
amount of work the calling thread is supposed to perform on a
vector.
Interface changes
-----------------
- In BLIS impli layer of DSCALV, ZDSCALV and AXPYV, added logic to pick
kernel based on architecture ID and removed AVX2 flag check.
- Modified function signature of ZDSCALV. Alpha is passed as dcomplex
and only the real part of the alpha passed is used inside the kernel.
The change was done to facilitate kernel dispatch based on arch ID.
- Added an n <= 0 BLAS exception in the BLAS layer of DAXPYV and DDOTV.
Without this, multithreaded code might crash because of the minimum
work calculation.
Misc
-----
- Removed unused variables from ZSCAL2V and AXPYV kernels.
AMD-Internal: [CPUPL-3095]
Change-Id: I4fc7ef53d21f2d86846e86d88ed853deb8fe59e9
- Added AVX512 based double and float AXPYV which will be used in
Zen4 context.
- Added n <= 0 check and alpha == 0 check to the BLAS layer of
SAXPY.
- Modified BLAS framework of float AXPYV to remove flag check and
pick kernels based on architecture ID.
- AVX512 kernel is disabled for other Zen configurations using
BLIS_KERNELS_ZEN4 macro.
AMD-Internal: [CPUPL-2793]
Change-Id: Ie6a0976c2cfcf81ae5125f5f9aad14477d4ebbd1
-As part of an earlier optimization, the memcpy function call in k
fringe ((k % 4) != 0 case, to utilize vpdpbusd instruction) and n fringe
(n < 16 - beta scale and C store) were replaced with copy macros
specifically optimized for less than 4 and 16 elements each. However
upon further analysis it was observed that masked load/broadcast and
masked store performed better on average than the copy macros. The copy
macros contained more if conditions, which resulted in more branching
and thus in perf variations. It was also noted that code
generation varied a lot across compilers when using the copy
macros due to the extra conditional code.
-As part of this change, the copy macros are completely replaced with
masked load/broadcast/store. Performance was observed to be better and
less prone to variations for the k fringe and n fringe (< 16) cases.
AMD-Internal: [CPUPL-3173]
Change-Id: I73e6e65302ecf02e1397541b4a32b2a536f19503
- 8x8 kernels are used for DTRSM SMALL
- Matrix A (a10) is packed for GEMM operations.
- The packed matrix A will be re-used in all the column blocks
along the N dimension.
- Diagonal elements of the A matrix are packed (a11) for
TRSM operations.
- Implemented fringe cases with the below block sizes:
8x8, 8x4, 8x3, 8x2, 8x1
4x8, 4x4, 4x3, 4x2, 4x1
3x8, 3x4, 3x3, 3x2, 3x1
2x8, 2x4, 2x3, 2x2, 2x1
1x8, 1x4, 1x3, 1x2, 1x1
AMD-Internal: [CPUPL-2745]
Change-Id: I5bb57501f6d3783eb654e375d63901467dd14734
- Added AVX512 based double and float DOTV which will be used in
Zen4 context.
- Added n <= 0 check to the BLAS layer of SDOTV.
- Modified BLAS framework of float DOTV to remove flag check and
pick kernels based on architecture ID.
- AVX512 kernel is disabled for other Zen configurations using
BLIS_KERNELS_ZEN4 macro.
AMD-Internal: [CPUPL-2800]
Change-Id: I550fbcbb17d6d887b9ecbea23237dc806b208702
- Added AVX512 based double and float SCALV which will be used in
Zen4 context.
- Added incx <= 0 check and alpha == 1 check to the BLAS layer of
SSCAL.
- Modified BLAS framework of float SCAL to remove flag check and
pick kernels based on architecture ID.
- AVX512 kernel is disabled for other Zen configurations using
BLIS_KERNELS_ZEN4 macro.
AMD-Internal: [CPUPL-2766],[CPUPL-2765]
Change-Id: I4cdd93c9adbfbf8f7632730b8606ddcf70edd1dc
- On Windows, the 24xk kernel is compiled without the AVX512 flag,
which causes out-of-bounds writes for DTRSM.
- To fix this, the AVX512 flag has been added to the CMakeLists.txt
file for the 24xk kernel.
AMD-Internal: [CPUPL-3186]
Change-Id: I0314dea88302fc4964a303853a4b9b719ecd8064
Modify code to correct some warning messages from GCC 12.2 or
AOCC 4.0:
- Increase size of nbuf in blastest/f2c/endfile.c
- Remove unused variables in kernels/zen/1/bli_scal2v_zen_int.c
and kernels/zen/1/bli_axpyv_zen_int10.c
- Remove extraneous parentheses in frame/compat/bla_trsm_amd.c
and kernels/zen4/3/bli_zgemm_zen4_asm_12x4.c
- Add __attribute__ ((unused)) to several variables in
frame/1m/packm/bli_packm_struc_cxk.c and
frame/1m/packm/bli_packm_struc_cxk_md.c
AMD-Internal: [CPUPL-2870]
Change-Id: I595e46f0a3d737beb393c3ab531717565220b10d
- Implemented 12x4m column-preferential SUP kernels (main and fringe
cases). The main kernel dimension is 12x4, and the associated fringe
kernel dimensions are: 12x3m, 12x2m, 12x1m
8x4, 8x3, 8x2, 8x1
4x4, 4x3, 4x2, 4x1
2x4, 2x3, 2x2, 2x1.
- Included in-register transposition support for C, thus extending
the storage scheme support to CCC, CCR, RCC and RCR inside the
milli-kernel.
- Integrated conditional packing of A onto the SUP front end for
dcomplex datatype. This redirects RRC and CRC storage schemes
onto the preceding set of SUP kernels through storage scheme
transformation (RCC and CCC respectively).
- Updated the zen4 context file to enable the new set of SUP kernels
appropriately. Furthermore, the context file was updated
with the AVX-2 dotxv signatures for the dcomplex datatype. This
redirects the fringe cases of type 1x? to the pre-existing AVX-2
GEMV routines.
- Added C prefetching onto L2-cache, and an unroll factor of 4 for the
k loop in all the kernels.
- Work is in progress to include conjugate support and input spectrum
extension for the AVX-512 SUP kernels. The current thresholds in the
zen4 context are the same as the zen3 thresholds for ZGEMM SUP.
AMD-Internal: [CPUPL-3122]
Change-Id: If40bc4409c6eb188765329508cf1f24c0eb12d1e
Improvements to BLIS cpuid functionality:
- Tidy names of avx support test functions, especially rename
bli_cpuid_is_avx_supported() to bli_cpuid_is_avx2fma3_supported()
to more accurately describe what it tests.
- Fix bug in frame/base/bli_check.c related to changes in commit
6861fcae91
AMD-Internal: [CPUPL-3031]
Change-Id: Iacd8fb0ffbd45288e536fc6314660709055ea2d5
- The n fringe micro-kernels use only a few zmm registers for computing
the output (e.g. 6x16 uses 6 zmm registers for output as opposed to 24
used in 6x64). This results in a lot of wasted registers that, if
utilized, can help increase the MR dimension and thus improve the
reuse of registers loaded with B. Based on this concept, the existing
n fringe kernels are modified (6x16 -> 12x16, 6x32 -> 9x32). It is to
be noted that the maximum number of registers is not used, since that
results in cache-inefficient code due to the increase in MR and thus
more broadcasts required from the unpacked A matrix.
-Compiler flag updates for AOCC build to generate loops with 64 byte
alignment. This has been observed to improve performance slightly when
k dimension is small.
AMD-Internal: [CPUPL-3173]
Change-Id: I199ce75ef71d994ffe0067dac1ed804dce1742ca
1. New LPGEMM type - S8S8S32/S8 is added.
2. New interface, frame and kernel files are added.
3. The frame and kernel files added/modified for S8S8S32/S8 cover
2 operations - Pack B and Mat Mul.
4. Pack B kernel routines pack the B matrix for VNNI and compute the sum
of every column of the B matrix to implement the S8S8S32 operation
using the VNNI instructions.
5. Mat Mul kernel files compute the GEMM output using the VNNI.
Here the A matrix elements are converted from int8 to uint8 (VNNI
works with A matrix type uint8 only).
6. Post GEMM computation, additional operations are performed on the
accumulated outputs to get the correct results.
7. With this change, two new LPGEMM APIs are introduced in LPGEMM -
s8s8s32os32 and s8s8s32os8.
8. All previously added post-ops are supported on S8S8S32/S8 also.
AMD-Internal: [CPUPL-3154]
Change-Id: Ib18f82bde557ea4a815a63adc7870c4234bfb9d3
- Kernel block size is 12x4
- Updated the zen4 config to enable these kernels in zen4 path.
- Tuned MC, KC, NC for better performance for m/n/k sizes > 500
- Updated CMakeLists.txt with ZGEMM kernels for windows build.
Kernel supports:
1. Preload and prebroadcast of A and B
2. Prefetch of the C matrix
3. The K loop is subdivided into multiple loops to maintain distance between C prefetches.
4. Special case when the alpha/beta imaginary component is zero
5. Row/col/general stride of matrix C
AMD-Internal: [CPUPL-2998]
Change-Id: I62e3c352d475b1add3f43270805fbcee00e2e440