The address sanitizer reports an error when the rbp register is modified.
Register rbp was used to store rs_a, which was used during the prefetch
of matrix A. Usage of rbp is avoided by using the rcx register as a
temporary storage register.
Hence rcx is updated with the matrix C address before storing the
computed data.
This fix addresses the issue reported by the GEQP3 API of libflame.
AMD-Internal: [CPUPL-2587]
Change-Id: Ica790259010d8e71528c3d0ab1cd49069c56fc1d
- For the cases where AVX2 is available, an optimized function is called,
based on Blue's algorithm. The fallback method based on sumsqv is used
otherwise.
- Scaling is used to avoid overflow and underflow.
- Works correctly for negative increments.
- Cleaned up some white space in the AVX2 implementation for DNRM2.
AMD-Internal: [CPUPL-2551]
Change-Id: I0875234ea735540307168fe7efc3f10fe6c40ffc
Description:
1. Calculate the optimal number of threads required based on input
dimension.
2. Request dynamic memory from the memory pool to store each
thread's output.
3. Create optimal number of threads and divide the given input data
among all threads.
4. Accumulate the results in rho variable.
5. Free the dynamically allocated memory back to memory pool.
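The steps above can be sketched as follows; this is a sequential illustration of the partition/accumulate pattern with hypothetical names, where the real code runs each chunk on its own thread and draws the buffer from the BLIS memory pool:

```c
#include <stdlib.h>

/* Illustrative sketch of the partition/accumulate scheme described
   above (n_threads >= 1 assumed). The per-"thread" loop is shown
   sequentially; the real code executes each chunk on its own thread. */
double ddotv_mt_sketch(int n, const double* x, const double* y,
                       int n_threads)
{
    /* Step 2: buffer to hold each thread's partial result. */
    double* partial = (double*)calloc((size_t)n_threads, sizeof(double));
    /* Step 3: divide the given input among the threads. */
    int chunk = (n + n_threads - 1) / n_threads;
    for (int t = 0; t < n_threads; t++)
    {
        int start = t * chunk;
        int end = (start + chunk < n) ? start + chunk : n;
        for (int i = start; i < end; i++)
            partial[t] += x[i] * y[i];
    }
    /* Step 4: accumulate the per-thread results in rho. */
    double rho = 0.0;
    for (int t = 0; t < n_threads; t++)
        rho += partial[t];
    /* Step 5: return the buffer (here: free it). */
    free(partial);
    return rho;
}
```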
AMD-Internal: [CPUPL-2560]
Change-Id: I30982b622d531b29d3570b2f5f7569bd1d951d68
- Implemented optimized intrinsic kernel for zdscalv for the cases where AVX2 is supported.
- Also added multithreaded support for the same.
- The optimal number of threads is calculated based on the input size.
AMD-Internal: [CPUPL-2602]
Change-Id: I4d05c3b1cc365a7770703286a89c6dce3875c067
Details:
1. Fixed the partial memory access overflows for the variables
AlphaVal and ones reported by ASAN.
2. Using a 128-bit packed broadcast with 64-bit data types
after type casting would cause garbage data to be filled
into the destination register.
3. Fixed this issue by using the set_ps instruction instead of
broadcast.
4. In cases where the n remainder is 1, extra elements were accessed,
which could cause out-of-bounds memory access. Removed the extra
element access.
AMD-Internal: [CPUPL-2578][CPUPL-2587]
Change-Id: Iaa918060c66287f2f46bcb9f69e9323f6707cf75
- BFloat16 flags added to zen4 make_defs in order to enable
compilation of low precision gemm by using zen4 config.
- Avoid the -ftree-partial-pre optimization flag with gcc due to
suboptimal code generation for intrinsics-based kernels in
low precision gemm.
- Enable only Zen3 specific low precision gemm kernels (s16)
compilation when aocl_gemm addon is compiled on Zen3 machines.
AMD-Internal: [CPUPL-1545]
Change-Id: Id3be3410bfbf141bb6fc4b4e3391115a4e0bb79f
1. Addressed uninitialized variables reported by Coverity for all
datatypes of the trsm small algorithm.
AMD-Internal: [CPUPL-2542]
Change-Id: Ifae57ef6435493942732526720e6a9d6bec70e71
- Added multithreaded support for dscalv.
- Optimal number of threads is calculated based on the input size.
AMD-Internal: [CPUPL-2583]
Change-Id: I0a253c28cb389a4f5214439dbca62304888ca5ae
Details:
- Updated Makefile and common.mk so that the targeted configuration's
kernel CFLAGS are applied to source files that are found in a
'kernels' subdirectory within an enabled addon. For now, this
behavior only applies when the 'kernels' directory is at the top
level of the addon directory structure. For example, if there is an
addon named 'foobar', the source code must be located in
addon/foobar/kernels/ in order for it to be compiled with the target
configuration's kernel CFLAGS. Any other source code within
addon/foobar/ will be compiled with general-purpose CFLAGS (the same
ones that were used on all addon code prior to this commit). Thanks
to AMD (esp. Mithun Mohan) for suggesting this change and catching an
intermediate bug in the PR.
- Comment/whitespace updates.
(cherry picked from commit fd885cf98f)
Change-Id: I9a678f78bde90b23a6293ce90377004876f51067
The local_mem_s is allocated on the stack but in the inner scope of
the “if” block; however, it can be accessed through cntl_mem_p
outside the if block. This error is flagged by the address sanitizer.
Fixed this issue by moving the variable declarations to the function
scope.
This fix addresses the issue reported for the following libflame APIs:
geqp3, geqrf, gerq2, gerqf, gesvd, ggev, ggevx, potrf, potrs,
stedc, steqr, syevd
AMD-Internal: [CPUPL-2587]
Change-Id: I63749c7d406c7339d2b45b0488108ccd3f90a248
1. Corrected the type of alpha and beta values from C_type
(the output type) to the accumulation type.
For the downscaled LPGEMM APIs, C_type will be the downscaled
type but the required type for alpha and beta values should
be the accumulation type.
2. BF16 bench integration with the LPGEMM bench for both the BF16
(bf16bf16f32of32 and bf16bf16f32obf16) APIs
AMD-Internal: [CPUPL-2561]
Change-Id: I3a99336c743f3880be1b96605ceeeae7c3bd4797
The local_mem_s is allocated on the stack but in the inner scope of
the “if” block; however, it can be accessed through cntl_mem_p
outside the if block. This error is flagged by the address sanitizer.
Fixed this issue by moving the variable declarations to the function
scope.
This fix addresses the issue reported for the following libflame APIs:
geqp3, geqrf, gerq2, gerqf, gesvd, ggev, ggevx, potrf, potrs,
stedc, steqr, syevd
AMD-Internal: [CPUPL-2587]
Change-Id: Ib1e9f89bc4b6911e4b7e9b910d1ecc2ba00b286a
1. Corrected B buffer access to use its offset instead of the
starting address, which is required in case of MT.
2. When num_threads > 1, the B buffer is divided into blocks in the
m or n dimension based on the side (right or left). Hence it must be
accessed by its offset to reach the start of the block.
3. Currently the B matrix is divided into blocks for each thread and
the complete matrix A is used by all threads.
In case of a design change in the future, A buffer access was also
modified to use its offset, to support partitioning of matrix A for MT.
AMD-Internal:[CPUPL-2520]
Change-Id: Ic09e9e945417b86e2bc2e2d4548f65db308cd2ea
- Updated zen4 configuration to enable AVX512 flags for the
reference kernels
- Reference and vector kernels will use the same compiler flags
AMD-Internal: [CPUPL-2533]
Change-Id: I5a2ba7e584dc3fb93625df12cca6b6c18f514ea8
- Removed some redundant read code from the downscale macros
- Corrected the store permute
- Simplified macros for edge cases and corrected an intermediate
operation
AMD-Internal:[CPUPL-2171]
Change-Id: Ifd2ff6b3d1c3874ac5cb8a545ff6daa7fb40ee68
-The bf16 gemm framework is modified to swap input column-major
matrices and compute gemm for the transposed matrices (now row-major)
using the existing row-major kernels. The output is written to the
C matrix assuming it is transposed.
-Framework changes to support leading dimensions that are greater than
matrix widths.
-Bench changes to test low precision gemm for column major inputs.
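The swap relies on the identity (A*B)^T = B^T * A^T together with the fact that a column-major buffer reinterpreted as row-major is the transpose of the same matrix. A minimal sketch with a naive reference kernel (function names hypothetical, not the LPGEMM symbols):

```c
/* Naive row-major reference kernel: C(m x n) += A(m x k) * B(k x n). */
static void gemm_rowmajor(int m, int n, int k,
                          const float* a, int lda,
                          const float* b, int ldb,
                          float* c, int ldc)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            for (int p = 0; p < k; p++)
                c[i * ldc + j] += a[i * lda + p] * b[p * ldb + j];
}

/* Column-major C(m x n) = A(m x k) * B(k x n) via the row-major
   kernel: the memory of a column-major X with leading dimension ld
   is exactly row-major X^T, so compute C^T = B^T * A^T by swapping
   the operands and the m/n dimensions. */
void gemm_colmajor_via_rowmajor(int m, int n, int k,
                                const float* a, int lda,
                                const float* b, int ldb,
                                float* c, int ldc)
{
    gemm_rowmajor(n, m, k, b, ldb, a, lda, c, ldc);
}
```

The leading dimensions pass through unchanged, which is why the framework only needs them to be at least as large as the corresponding matrix width.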
AMD-Internal: [CPUPL-2570]
Change-Id: I22c76f52619fd76d0c0e41531828b437a1935495
- Removed prototypes for float(sgemm3m) and double(dgemm3m) types,
as BLIS implements this API only for scomplex(cgemm3m) and
dcomplex(zgemm3m)
AMD-Internal: [SWLCSG-1477]
Change-Id: Ifad86a74b4c939ed240743894b85bb4fa5e6d754
- For the cases where AVX2 is available, an optimized function is called,
based on Blue's algorithm. The fallback method based on sumsqv is used
otherwise.
- Scaling is used to avoid overflow and underflow.
- Works correctly for negative increments.
AMD-Internal: [CPUPL-2551]
Change-Id: I5d8976b29b5af463a8981061b2be907ea647123c
Details:
1. Changes are made in the dtrsm small MT path, to avoid accuracy issues.
AMD-Internal: [SWLCSG-1470]
Change-Id: I65237225892f97b7222fe71f66b02841b5956560
-In BLIS, the CBLAS interface is implemented as a wrapper around
the BLAS interface. For example, the CBLAS API ‘cblas_dscal’
internally invokes the BLAS API ‘dscal_’.
-This coupling between the CBLAS and BLAS interfaces prevents the
end user from overriding them individually from the application or
other libraries.
-This change separates the CBLAS and BLAS implementation by adding
an additional level of abstraction. The implementation of the
API is moved to the new function which is invoked directly from
the CBLAS and BLAS wrappers.
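The abstraction can be sketched as follows; the structure mirrors the description above, but the names are illustrative, not the exact BLIS symbols:

```c
#include <stddef.h>

/* The actual work lives in a *_impl function, and both the BLAS and
   the CBLAS entry points are thin wrappers around it. Either wrapper
   can then be overridden independently without breaking the other. */
static void dscal_blis_impl(int n, double alpha, double* x, int incx)
{
    if (incx <= 0) return;          /* simplified edge handling */
    for (int i = 0; i < n; i++)
        x[(size_t)i * (size_t)incx] *= alpha;
}

/* BLAS-style wrapper (Fortran convention: arguments by pointer). */
void dscal_(const int* n, const double* alpha, double* x,
            const int* incx)
{
    dscal_blis_impl(*n, *alpha, x, *incx);
}

/* CBLAS-style wrapper calls the impl directly, not dscal_. */
void cblas_dscal(int n, double alpha, double* x, int incx)
{
    dscal_blis_impl(n, alpha, x, incx);
}
```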
AMD-Internal: [SWLCSG-1477]
Change-Id: I0e80071398af29c9313296d2a92e61e3897ac28e
-In BLIS, the CBLAS interface is implemented as a wrapper around
the BLAS interface. For example, the CBLAS API ‘cblas_dgemv’
internally invokes the BLAS API ‘dgemv_’.
-This coupling between the CBLAS and BLAS interfaces prevents the
end user from overriding them individually from the application or
other libraries.
-This change separates the CBLAS and BLAS implementation by adding
an additional level of abstraction. The implementation of the
API is moved to the new function which is invoked directly from
the CBLAS and BLAS wrappers.
AMD-Internal: [SWLCSG-1477]
Change-Id: Ie7cbbac86bbfa1075a5064b31b365e911f67786c
-In BLIS, the CBLAS interface is implemented as a wrapper around
the BLAS interface. For example, the CBLAS API ‘cblas_dgemm’
internally invokes the BLAS API ‘dgemm_’.
-This coupling between the CBLAS and BLAS interfaces prevents the
end user from overriding them individually from the application or
other libraries.
-This change separates the CBLAS and BLAS implementation by adding
an additional level of abstraction. The implementation of the
API is moved to the new function which is invoked directly from
the CBLAS and BLAS wrappers.
AMD-Internal: [SWLCSG-1477]
Change-Id: Id9e307154342d2c17b0ac6db580c36f1a9ee6409
- BLIS uses a callback function to report the progress of the
operation. The callback is implemented in the user application
and is invoked by BLIS.
- Updated the callback function prototype to make all arguments
const. This ensures that any attempt to write through the
callback’s arguments is prevented at compile time.
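A minimal illustration of a const-qualified callback prototype; the actual BLIS progress-callback signature may differ, and all names here are hypothetical:

```c
#include <stddef.h>

/* Every argument is const: the pointee of 'api' cannot be modified
   through the callback, and the value parameters are read-only
   locals. The real AOCL/BLIS prototype may differ. */
typedef int (*progress_cb_t)(const char* api, const size_t total_work,
                             const size_t work_done);

static size_t g_last_done = 0;

static int my_progress(const char* api, const size_t total,
                       const size_t done)
{
    (void)api;
    (void)total;
    g_last_done = done;   /* read-only use of the arguments is fine */
    return 0;
}

/* Library side: invokes the registered callback as work progresses. */
static void run_with_progress(progress_cb_t cb)
{
    for (size_t done = 1; done <= 3; done++)
        cb("dgemm", 3, done);
}
```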
AMD-Internal: [CPUPL-2504]
Change-Id: I8ceb671242365d2a9155b485301cd8c75043e667
- The temporary buffer allocated for the C matrix when downscaling is
enabled was not filled properly. This resulted in wrong gemm
accumulation when beta != 0, and thus wrong output after downscaling.
The C panel iterators used for filling the temporary buffer are
updated to fix this.
- Low precision gemm bench updated for testing/benchmarking downscaling.
AMD-Internal: [CPUPL-2514]
Change-Id: Ib1ba25ba9df2d2997edaaf0763ff0113fb35d6eb
Introduced multi-thread panel based balancing for BF16 to improve the
overall MT performance.
AMD-Internal: [CPUPL-2502]
Change-Id: Iddce9548fa96e5f57bd3d3eb3e8268855ca47f25
Remove test on is_arch in bli_cpuid_is_zen4() so it will report true for
all Zen4 models, based on family and AVX512 feature tests.
AMD-Internal: [CPUPL-2474]
Change-Id: I85d2e230b33391d5c9779df4585ae2a358788e72
- BF16 instructions' output is accumulated at the higher precision
of FP32, which needs to be converted to the lower precision of bf16
after the GEMM operations. This is required in AI workloads where both
input and output are in BF16 format.
- BF16 downscaling is implemented as post-ops inside the GEMM
microkernels.
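The FP32-to-BF16 conversion can be sketched in scalar form; this is an illustrative round-to-nearest-even truncation (NaN inputs are not special-cased here), not the vectorized kernel code:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the FP32 -> BF16 downscale: bf16 keeps the top 16 bits
   of an IEEE-754 float; rounding to nearest-even is done by adding
   a bias before truncating. This is the generic scalar form of what
   the vectorized post-op does inside the microkernels. */
uint16_t f32_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));
    /* round-to-nearest-even: add 0x7FFF plus the LSB of the kept part */
    bits += 0x7FFFu + ((bits >> 16) & 1u);
    return (uint16_t)(bits >> 16);
}

/* Widening back to FP32 is exact: just restore the low 16 zero bits. */
float bf16_to_f32(uint16_t h)
{
    uint32_t bits = (uint32_t)h << 16;
    float f;
    memcpy(&f, &bits, sizeof(f));
    return f;
}
```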
Change-Id: Id1606746e3db4f3ed88cba385a7709c8604002a8
- int16 C matrix intermediate values are converted to int32,
then the int32 values are converted to fp32. Scaling is done
on these fp32 values.
- The resultant value is downscaled to int8 and stored in a
separate buffer.
AMD-Internal: [2171]
Change-Id: I76ff04098def04d55d1bd88ac8c8d3f267964cab
- Downscaling is used when GEMM output is accumulated at a higher
precision and needs to be converted to a lower precision afterwards.
This is required in AI workloads where quantization/dequantization
routines are used.
- New GEMM APIs are introduced specifically to support this use case.
Currently downscaling support is added for s32, s16 and bfloat16 GEMM.
AMD-Internal: [CPUPL-2475]
Change-Id: I81c3ee1ba5414f62427a7a0abb6ecef0c5ff71bf
Details:
1. In zen4, the dgemm and sgemm native kernels are column-preferring
kernels, while the cgemm and zgemm kernels are row-preferring. zen3
and older architectures use row-preferring kernels for all datatypes.
The induced transpose was carried out based on a kernel preference
check in both crc and ccr; the kernel preference check is not
required to apply the induced transpose.
2. In the case of ccr, the B matrix is real. We must induce a
transposition and perform C += A*B (crc), where A (formerly B) is real.
3. In the case of crc, the A matrix is real. No induced transpose is
required.
AMD-Internal: [CPUPL-2440] [CPUPL-2449]
Change-Id: I44c53a20c8def7ddbb84797ba20260acec2086a2
Details:
1. Changes are made in the ztrsm small MT path, to avoid accuracy issues
reported in libflame tests.
AMD-Internal: [CPUPL-2476]
Change-Id: Ic279106343fb1744e89ff4c920023adbe1d0158a
Functionality - Post-ops are a set of operations performed
element-wise on the output matrix after the GEMM operation. The
support for the same is added by fusing post-ops with GEMM operations.
- Post-ops Bias, Relu and Parametric Relu are added to all the
compute kernels of bf16bf16f32of32
- Modified bf16 interface files to add check for bf16 ISA support
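The three post-ops can be sketched element-wise as follows; the function name and the per-column bias layout are assumptions, and the real implementation fuses this into the microkernel stores rather than making a separate pass:

```c
/* Illustrative element-wise post-ops on an m x n output tile after
   GEMM accumulation: Bias add, then either ReLU or Parametric ReLU. */
void apply_postops(int m, int n, float* c, int ldc,
                   const float* bias /* per-column, length n */,
                   int do_relu, float prelu_alpha)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
        {
            float v = c[i * ldc + j] + bias[j];      /* Bias */
            if (v < 0.0f)
                v = do_relu ? 0.0f                   /* ReLU */
                            : prelu_alpha * v;       /* Parametric ReLU */
            c[i * ldc + j] = v;
        }
}
```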
Change-Id: I2f7069a405037a59ea188a41bd8d10c4aae72fb3
-OpenMP based multi-threading support added for BFloat16 gemm.
Both the gemm and reorder APIs are parallelized.
-Multi-threading support for the u8s8s16 reorder API.
-Typecast issues fixed for bfloat16 gemm kernels.
AMD-Internal: [CPUPL-2459]
Change-Id: I6502d71ab32aa73bb159245976ea3d3a8e0ed109
Details:
1. In the sgemmsup_zen_rv_?x2 kernels the "vmovps" instruction
was used to load the B matrix in the k loop and k last loop,
which loads 128 bits into an xmm register rather than the
expected 64 bits.
2. Changed the vmovps instruction to vmovsd instructions,
which load only 64 bits into the xmm register.
3. Avoided C memory access by the vfma instruction when multiplying
with non-beta at corner cases, which required access to 128 bits
and could lead to out-of-bounds access. Replaced with vmovq first
to get the 64-bit data, then performed vfma on the xmm register,
in rv_6x8m and rv_6x4m.
AMD-Internal: [CPUPL-2472]
Change-Id: Iad397f8f5b5cc607b4278b603b1e0ea3f6b082f2
- Removed some additional compiler warnings reported by GCC 12.1
- Fixed a couple of typos in comments
- frame/3/bli_l3_sup.c: routines were returning before final call
to AOCL_DTL_TRACE_EXIT
- frame/2/gemv/bli_gemv_unf_var1_amd.c: bli_multi_sgemv_4x2 is
only defined in header file if BLIS_ENABLE_OPENMP is defined
AMD-Internal: [CPUPL-2460]
Change-Id: I2eacd5687f2548d8f40c24bd1b930859eefbbcde
- While calculating the diagonal and corner elements, the combined
operation of calculating the product of x and x hermitian and
simultaneously scaling it with alpha and adding the result to the matrix
was the cause of increased underflow and overflow errors in netlib
tests.
- So the above calculation is now done in three steps: scaling the
x vector with alpha, then calculating its product with x hermitian,
and finally adding the result to the matrix.
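The three-step computation can be sketched for a single matrix element; this is an illustrative scalar form with hypothetical names, not the BLIS kernel code:

```c
#include <complex.h>

/* Sketch of the three-step update for one element of
   C += alpha * x * x^H: scale x by alpha first, then form the
   product with conj(x), then add into the matrix element. Keeping
   the scaling as a separate first step keeps intermediates in a
   better range than the fused multiply-scale-add. */
void her_update_elem(double complex* cij, double alpha,
                     double complex xi, double complex xj)
{
    double complex yi = alpha * xi;     /* step 1: y = alpha * x   */
    double complex p  = yi * conj(xj);  /* step 2: p = y * conj(x) */
    *cij = *cij + p;                    /* step 3: C += p          */
}
```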
AMD-Internal: [CPUPL-2213]
Change-Id: I32df572b013bc3189340662dbf17eddcaec9f0f8
- In BLIS the CBLAS interface is implemented as a wrapper around
the BLAS interface. For example, the CBLAS API ‘cblas_dgemm’
internally invokes the BLAS API ‘dgemm_’.
- If the end user wants to use different libraries for CBLAS
and BLAS, the current implementation of BLIS doesn’t allow it.
- This change separates the CBLAS and BLAS implementations by adding
an additional level of abstraction. The implementation of the
API is moved to the new function which is invoked directly from
the CBLAS and BLAS wrappers.
AMD-Internal: [SWLCSG-1477]
Change-Id: I8d81072aaca739f175318b82f6510d386103c24b
- Removed all compiler warnings as reported by GCC 11 and AOCC 3.2
- Removed unused files
- Removed commented and disabled code (#if 0, #if 1) from some
files
AMD-Internal: [CPUPL-2460]
Change-Id: Ifc976f6fe585b09e2e387b6793961ad6ef05bb4a
- In BLIS the CBLAS interface is implemented as a wrapper around
the BLAS interface. For example, the CBLAS API ‘cblas_dgemm’
internally invokes the BLAS API ‘dgemm_’.
- If the end user wants to use different libraries for CBLAS
and BLAS, the current implementation of BLIS doesn’t allow it and
may result in recursion.
- This change separates the CBLAS and BLAS implementations by adding
an additional level of abstraction. The implementation of the
API is moved to the new function which is invoked directly from
the CBLAS and BLAS wrappers.
AMD-Internal: [SWLCSG-1477]
Change-Id: I0f4521e70a02f6132bdadbd4c07715c9d52fe62a
- Performance of u8s8s16os16 came down by 40% after the
introduction of post-ops
- Analysis revealed that the target compiler assumed a false
dependency and was generating sub-optimal code due to the
post-ops structure
- Inserted vzeroupper to hint to the compiler that no ISA
change will occur
AMD-Internal: [CPUPL-2447]
Change-Id: I0b383b9742ad237d0e053394602428872691ef0c