Details:
- Handled overflow and underflow vulnerabilities in
ztrsm small right implementations.
- Fixed failures observed in ScaLAPACK testing.
AMD-Internal: [CPUPL-2115]
Change-Id: I22c1ba583e0ba14d1a4684a85fa1ca6e152e8439
Description:
1. Decision logic to choose the optimal number of threads for
given input dgemm dimensions under the aocl dynamic feature
was retuned based on the latest code.
2. Updated code in a few files to avoid compilation warnings.
3. Added a min check for nt in the bli_sgemv_var1_smart_threading
function.
AMD-Internal: [CPUPL-2100]
Change-Id: I2bc70cc87c73505dd5d2bdafb06193f664760e02
- Cache-aware factorization.
Experiments show that ic,jc factorization based on m,n gives better
results compared to mu,nu on a generic data set in the SUP path. Also,
slight adjustments in the factorizations w.r.t. matrix data loads can
help improve performance further.
- Moving native path inputs to the SUP path.
Experiments show that in multi-threaded scenarios, if the per-thread
data falls under the SUP thresholds, taking the SUP path instead of the
native path results in improved performance. This is the case even if
the original matrix dimensions fall in the native path. This is not
applicable if a transpose of the A matrix is required.
- Enabling B matrix packing in SUP path.
Performance improvement is observed when the B matrix is packed in
cases where gemm takes the SUP path instead of the native path based
on per-thread matrix dimensions.
AMD-Internal: [CPUPL-659]
Change-Id: I3b8fc238a0ece1ababe5d64aebab63092f7c6914
- Implemented an OpenMP-based standalone SGEMV kernel for
row-major (var 1) in multithreaded scenarios.
- Smart threading is enabled when AOCL DYNAMIC is defined.
- The number of threads is decided based on the input dims
using smart threading.
AMD-Internal: [CPUPL-1984]
Change-Id: I9b191e965ba7468e95aabcce21b35a533017502e
- Previously, zgemm computation failures occurred because the
status variable did not have a pre-defined initial value,
which caused zgemm to return without the computation being
performed by any kernel. Reflected the same change in the
dgemm_ function as well.
- Enabled sup zgemm, as the issue with the status variable in
the bli_zgemm_small call is fixed.
- Removed the call to the sqp method as it is disabled.
Change-Id: I0f4edfd619bc4877ebfc5cb6532c26c3888f919d
The logs can be enabled with the following two methods:
-- Environment variable based control: The feature can be enabled
by setting the environment variable AOCL_VERBOSE=1.
-- API based control: Two APIs will be added to enable/disable
logging at runtime:
1. AOCL_DTL_Enable_Logs()
2. AOCL_DTL_Disable_Logs()
-- The API takes precedence over the environment settings.
AMD-Internal: [CPUPL-2101]
Change-Id: Ie71c1095496fae89226049c9b9f80b00400350d5
1. Parallelized dtrsm_small across the m-dimension or n-dimension based on the side (Left/Right).
2. Fine-tuned with AOCL_DYNAMIC to achieve better performance.
AMD-Internal: [CPUPL-2103]
Change-Id: I6be6a2b579de7df9a3141e0d68bdf3e8a869a005
Details:
- Intrinsic implementation of the zdotxv and cdotxv kernels.
- Unrolling in multiples of 8; remaining corner
cases are handled serially for the zdotxv kernel.
- Unrolling in multiples of 16; remaining corner
cases are handled serially for the cdotxv kernel.
- Added declarations in zen contexts.
AMD-Internal: [CPUPL-2050]
Change-Id: Id58b0dbfdb7a782eb50eecc7142f051b630d9211
-- AOCL libraries are used for lengthy computations which can go
on for hours or days; once the operation is started, the user
doesn't get any update on the current state of the computation.
This (AOCL progress) feature enables the user to receive a periodic
update from the libraries.
-- The user registers a callback with the library if they are
interested in receiving the periodic update.
-- The library invokes this callback periodically with information
about the current state of the operation.
-- The update frequency is statically set in the code; it can be
modified as needed if the library is built from source.
-- This feature is supported for the GEMM and TRSM operations.
-- Added examples for GEMM and TRSM.
-- Cleaned up and reformatted test_gemm.c and test_trsm.c to
remove warnings and make indentation consistent across the
files.
AMD-Internal: [CPUPL-2082]
Change-Id: I2aacdd8fb76f52e19e3850ee0295df49a8b7a90e
Details:
- Enabled the ctrsm small implementation.
- Handled overflow and underflow vulnerabilities in
ctrsm small implementations.
- Fixed failures observed in libflame testing.
- For small sizes, the ctrsm small implementation is
used for all variants.
Change-Id: I17b862dcb794a5af0ec68f585992131fef57b179
Details:
- Optimized implementation of the DOTXAXPYF fused kernel for single and double precision complex datatypes using AVX2 intrinsics.
- Updated definitions in the zen context.
AMD-Internal: [CPUPL-2059]
Change-Id: Ic657e4b66172ae459173626222af2756a4125565
Details:
- Optimization of ztrsm for non-unit diag variants.
- Handled overflow and underflow vulnerabilities in
ztrsm small implementations.
- Fixed failures observed in libflame testing.
- Fine-tuned ztrsm small implementations for specific
sizes 64 <= m,n <= 256, by keeping the number of
threads at the optimum value, under the AOCL_DYNAMIC flag.
- For small sizes, the ztrsm small implementation is
used for all variants.
AMD-Internal: [SWLCSG-1194]
Change-Id: I066491bb03e5cda390cb699182af4350ae60be2d
1. Removed the small gemm call from the native path to avoid single-threaded
calls as part of multithreaded scenarios.
2. Disabled the SUP and induced-method paths.
3. Added AOCL Dynamic to choose the optimum number of threads for higher
performance.
Change-Id: I3c41641bef4906bdbdb5f05e67c0f61e86025d92
Details:
- Enabled the ztrsm small implementation.
- For small sizes, right variants and left unit-diag
variants use ztrsm_small implementations.
- Optimization of left non-unit diagonal variants is
work in progress.
AMD-Internal: [SWLCSG-1194]
Change-Id: Ib3cce6e2e4ac0817ccd4dff4bb0fa4a23e231ca4
- Fixed memory access for edge cases such that
all loads stay within memory boundaries.
- Corrected ztrsm utility APIs for dcomplex
multiplication and division.
AMD-Internal: [CPUPL-2093]
Change-Id: Ib2c65e7921f6391b530cd20d6ea6b50f24bd705e
Details:
- Intrinsic implementation of the zgemm_small nn kernel.
- Intrinsic implementation of the zgemm_small_At kernel.
- Added support for conjugate and Hermitian transpose.
- The main loop operates in multiples of a 4x3 tile.
- Edge cases are handled separately.
AMD-Internal: [CPUPL-2084]
Change-Id: I512da265e4d4ceec904877544f1d15cddc147a66
Description:
1. For small dimensions, single-threaded dgemm_small performs
better than the dgemmsup and native paths.
2. Irrespective of the given number of threads, we redirect
to single-threaded dgemm_small.
AMD-Internal: [CPUPL-2053]
Change-Id: If591152d18282c2544249f70bd2f0a8cd816b94e
Details:
- During runtime, the application can set the desired number of threads using
the standard OpenMP API omp_set_num_threads(nt).
- The BLIS library internally uses the standard OpenMP API omp_get_max_threads()
to fetch the latest value set by the application.
- This value will be used to decide the number of threads in subsequent
BLAS calls.
- At the time of BLIS initialization, the BLIS_NUM_THREADS environment variable
is given precedence over the OpenMP standard API omp_set_num_threads(nt)
and the OMP_NUM_THREADS environment variable.
- The order of precedence followed during BLIS initialization is as follows:
1. Valid value of BLIS_NUM_THREADS
2. omp_set_num_threads(nt)
3. Valid value of OMP_NUM_THREADS
4. Number of cores
- After BLIS initialization, if the application issues omp_set_num_threads(nt)
during runtime, the number of threads set during BLIS initialization
is overridden by the latest value set by the application.
- The existing precedence of the BLIS_*_NT environment variables, and the
decision on the optimal number of threads over the number of threads derived
from the above process, remains as it is.
AMD-Internal: [CPUPL-2076]
Change-Id: I935ba0246b1c256d0fee7d386eac0f5940fabff8
Details:
- SUP threshold change for native vs SUP.
- Improved single-threaded performance for sizes n < 800.
- Introduced PACKB in SUP to improve single-threaded performance for 320 < n < 800.
- Tuned 16-thread SUP for n < 1600.
AMD-Internal: [CPUPL-1981]
Change-Id: Ie59afa4d31570eb0edccf760c088deaa2e10cdda
The framework cleanup was done for Linux as part of
f63f78d7 "Removed Arch specific code from BLIS framework".
This commit adds the changes needed for the Windows build.
AMD-Internal: [CPUPL-2052]
Change-Id: Ibd503a0adeea66850de156fb95657b124e1c4b9d
- dher2 did not have an AVX check for the platform.
It was calling the AVX kernel regardless of platform
support, which resulted in a core dump.
- Added an AVX-based platform check in both variants of dher2 to
fix the issue.
AMD-Internal: [CPUPL-2043]
Change-Id: I1fd1dcc9336980bfb7ffa9376f491f107c889c0b
Removed the "target_link_libraries("${PROJECT_NAME}" PRIVATE OpenMP::OpenMP_CXX)" statement for the static ST library build.
This statement is not needed for the static ST library build and was mistakenly added.
Change-Id: I577a28c75644043fd077d938bf7f51cdea8ee13d
Details:
- Intrinsic implementation of ZAXPY2V fused kernel for AVX2
- Updated definitions in zen contexts
AMD-Internal: [CPUPL-2023]
Change-Id: I8889ae08c826d26e66ae607c416c4282136937fa
Updated the Windows build system to link a user-given OpenMP
library using the -DOpenMP_libomp_LIBRARY=<Desired lib name> option
on the command line, or through the cmake-gui application, to build
the blis library and its test applications. If the user does not
specify an OpenMP library, the default OpenMP library is
C:/Program Files/LLVM/lib/libomp.lib.
Change-Id: I07542c79454496f88e65e26327ad76a7f49c7a8c
- Removed the BLIS_CONFIG_EPYC macro.
- The code dependent on this macro is handled in
one of three ways:
-- It is updated to work across platforms.
-- It is placed behind architecture/feature-specific runtime checks.
-- It is duplicated in AMD-specific files. The build system is updated to
pick the AMD-specific files when the library is built for any of the
zen architectures.
AMD-Internal: [CPUPL-1960]
Change-Id: I6f9f8018e41fa48eb43ae4245c9c2c361857f43b
- Implemented her2 framework calls for transposed and
non-transposed kernel variants.
- The dher2 kernel operates over 4 columns at a time. It computes
the 4x4 triangular part of the matrix first; the remainder is
computed in chunks of 4x4 tiles up to m rows.
- Remainder cases (m < 4) are handled serially.
AMD-Internal: [CPUPL-1968]
Change-Id: I12ae97b2ad673a7fd9b733c607f27b1089142313
Details:
- Intrinsic implementation of axpbyv for AVX2
- Bench written for axpbyv
- Added definitions in zen contexts
AMD-Internal: [CPUPL-1963]
Change-Id: I9bc21a6170f5c944eb6e9e9f0e994b9992f8b539
We were using the add_compile_options(-Xclang -fopenmp) statement to set
OpenMP compiler flags for MSVC via CMake. A performance regression was
observed because of the compiler version used by MSVC (clang 10), so the
statement is removed from the Windows build system, and the compiler
version (clang 13) and compiler options are instead configured manually
in the MSVC GUI to regain performance on the MATLAB bench.
Change-Id: I37d778abdceb7c1fae9b1caaeea8adb114677dd2
All AMD-specific optimizations in BLIS are enclosed in the BLIS_CONFIG_EPYC
preprocessor macro; this was not defined in CMake, which resulted in
overall lower performance.
Updated version number to 3.1.1
Change-Id: I9848b695a599df07da44e77e71a64414b28c75b9
Enabled AVX-512 Skylake kernels in the zen4 configuration.
AVX-512 kernels are added for the float and double types.
AMD-Internal: [CPUPL-2108]
Change-Id: Idfe3f64a037db019cbdf43318954db52ad241a51
Details:
- Added a new sandbox called 'gemmlike', which implements sequential and
multithreaded gemm in the style of gemmsup but also unconditionally
employs packing. The purpose of this sandbox is to
(1) avoid select abstractions, such as objects and control trees, in
order to allow readers to better understand how a real-world
implementation of high-performance gemm can be constructed;
(2) provide a starting point for expert users who wish to build
something that is gemm-like without "reinventing the wheel."
Thanks to Jeff Diamond, Tze Meng Low, Nicholai Tukanov, and Devangi
Parikh for requesting and inspiring this work.
- The functions defined in this sandbox currently use the "bls_" prefix
instead of "bli_" in order to avoid any symbol collisions in the main
library.
- The sandbox contains two variants, each of which implements gemm via a
block-panel algorithm. The only difference between the two is that
variant 1 calls the microkernel directly while variant 2 calls the
microkernel indirectly, via a function wrapper, which allows the edge
case handling to be abstracted away from the classic five loops.
- This sandbox implementation utilizes the conventional gemm microkernel
(not the skinny/unpacked gemmsup kernels).
- Fixed some typos in the comments of a few files in the main
framework.
Change-Id: Ifc3c50e9fd0072aada38eace50c57552c88cc6cf
Details:
- Implemented a new feature called addons, which are similar to
sandboxes except that there is no requirement to define gemm or any
other particular operation.
- Updated configure to accept --enable-addon=<name> or -a <name> syntax
for requesting an addon be included within a BLIS build. configure now
outputs the list of enabled addons into config.mk. It also outputs the
corresponding #include directives for the addons' headers to a new
companion to the bli_config.h header file named bli_addon.h. Because
addons may wish to make use of existing BLIS types within their own
definitions, the addons' headers must be included sometime after that
of bli_config.h (which currently is #included before bli_type_defs.h).
This is why the #include directives needed to go into a new top-level
header file rather than the existing bli_config.h file.
- Added a markdown document, docs/Addons.md, to explain addons, how to
build with them, and what assumptions their authors should keep in
mind as they create them.
- Added a gemmlike-like implementation of sandwich gemm called 'gemmd'
as an addon in addon/gemmd. The code uses a 'bao_' prefix for local
functions, including the user-level object and typed APIs.
- Updated .gitignore so that git ignores bli_addon.h files.
Change-Id: Ie7efdea366481ce25075cb2459bdbcfd52309717
- Altered the framework to use 2 more fused kernels for
better problem decomposition
- Increased unroll factor in AXPYF5 and AXPYF8 kernels
to improve register usage
AMD-Internal: [CPUPL-1970]
Change-Id: I79750235d9554466def5ff93898f832834990343
We added a new API to check whether the CPU architecture supports
the AVX instruction set. This API was executing the CPUID instruction
every time it was invoked. However, since this information does
not change at runtime, it is sufficient to determine it once
and use the cached result for subsequent calls. This optimization
is needed to improve performance for small-size matrix and vector
operations.
AMD-Internal: [CPUPL-2009]
Change-Id: If6697e1da6dd6b7f28fbfed45215ea3fdd569c5f
Details: Changes made for the 4.0 branch to enable the wrapper code by
default; also removed the ENABLE_API_WRAPPER macro.
Change-Id: I5c9ede7ae959d811bc009073a266e66cbf07ef1a
- Optimized the dotxf implementation for double
and single precision complex datatypes by
handling the dot product computation in 2x6
and 4x6 tiles, processing 6 columns at a time
and rows in multiples of 2 and 4.
- The dot product computation is arranged in such
a way that multiple rho vector registers hold the
temporary results until the end of the loop, followed
by a horizontal addition to get the final dot product
result.
- Corner cases are handled serially.
- Optimal use and reuse of vector registers for
faster computation.
AMD-Internal: [CPUPL-1975]
Change-Id: I7dd305e73adf54100d54661769c7d5aada9b0098
- The current gemm SUP path uses bli_thrinfo_sup_grow and bli_thread_range_sub
to generate per-thread data ranges at each loop of the gemm algorithm.
bli_thrinfo_sup_grow involves the use of multiple barriers for cross-thread
synchronization. These barriers are necessary in cases where
either the A or B matrix is packed, for centralized pack buffer
allocation/deallocation (bli_thread_am_ochief thread).
- However, for cases where both the A and B matrices are unpacked, these
barriers result in overhead for smaller dimensions. Here the creation
of unnecessary communicators is avoided, and subsequently the
requirement for barriers is eliminated when packing is disabled for
both input matrices in the SUP path.
Change-Id: Ic373dfd2d6b08b8f577dc98399a83bb08f794afa