Commit Graph

1122 Commits

Author SHA1 Message Date
Minh Quan Ho
6d4d6a7514 Alloc at least 1 elem in pool_t block_ptrs. (#560)
Details:
- Previously, the length of the block_ptrs array of the pool_t could be
  initialized to any unsigned integer, including 0. However, a length of
  0 could be problematic given that the behavior of malloc(0) is
  implementation-defined and therefore varies across implementations.
  As a safety measure, we check for block_ptrs array lengths of 0 and,
  in that case, increase them to 1.
- Co-authored-by: Minh Quan Ho <minh-quan.ho@kalray.eu>

Change-Id: I1e885d887aaba5e73df091ef52e6c327fd6418de
2022-08-01 08:45:00 +05:30
Minh Quan Ho
3c01fcb9fc Fix insufficient pool-growing logic in bli_pool.c. (#559)
Details:
- The current mechanism for growing a pool_t doubles the length of the
  block_ptrs array every time the array length needs to be increased
  due to new blocks being added. However, that logic did not take into
  account the new total number of blocks, or the fact that the caller
  may be requesting more blocks than would fit even after doubling the
  current length of block_ptrs. The code comments now contain two
  illustrative examples that show why, even after doubling, we must
  always have at least enough room to fit all of the old blocks plus
  the newly requested blocks.
- This commit also happens to fix a memory corruption issue that stems
  from growing any pool_t that is initialized with a block_ptrs length
  of 0. (Previously, the memory pool for packed buffers of C was
  initialized with a block_ptrs length of 0, but because it is unused
  this bug did not manifest by default.)
- Co-authored-by: Minh Quan Ho <minh-quan.ho@kalray.eu>

Change-Id: Ie4963c56e03cbc197d26e29f2def6494f0a6046d
2022-08-01 08:19:30 +05:30
Chandrashekara K R
fde812015f Updated blis library version from 4.0 to 3.2.1
AMD-Internal: [CPUPL-2322]
Change-Id: I3a6a61543dd2754e2590d7f5f22442c9fdeaee95
2022-07-29 15:55:10 +05:30
Mangala V
6c1acc74c8 ZGEMM optimizations
-- Conditional packing of the B matrix is enabled in the zgemmsup path,
   which performs better when the B matrix is large

-- Incorporated decision logic to choose between zgemm_small and
   zgemm sup based on the matrix dimensions m, n and k.

-- ZGEMV is called when matrix dimension m or n = 1.
   A very good performance improvement is observed.

Change-Id: I7c64020f4f78a6a51617b184cc88076213b5527d
2022-07-28 14:55:24 +05:30
Vignesh Balasubramanian
808d79a610 Implemented efficient ZGEMM algorithm when k=1
Problem statement :
Improve the performance of the zgemm kernel for input sizes with k=1 by fine-tuning its previous implementation.
In the previous implementation, using SIMD parallelism along the m and n dimensions instead of the k dimension proved to give better performance for the zgemm kernel. This code was subjected to further improvements along the following lines:

- Cases with alpha=0 and beta!=0 (i.e. just scaling of C) are handled separately at the beginning, using the bli_zscalm API.
- Register blocking was further improved, increasing the kernel size from 4x5 to 4x6.
- Prefetching was added to the code, with a suitable pointer offset found empirically. Overall, it provided a mild performance improvement.
- Conditional statements were removed from the kernel loop, with logic devised to allow their removal without affecting the output.

The performance of this single-threaded implementation also competes with that of the default multi-threaded implementation, as long as m and n are under 128. A possible improvement to this patch would be a heuristic establishing a relationship between the number of threads and the input-size constraints, thereby providing a suitable size constraint for each number of threads.

AMD-Internal: [CPUPL-2236]
Change-Id: I3d401c8fd78bec80ce62eef390fa85e6287df847
2022-07-28 02:09:45 -04:00
Arnav Sharma
4f96bb712e AOCL Dynamic Optimization for DGEMMT
- Fine-tuned the thread allocation logic for parallelizing DGEMMT for the cases where n <= 220. This results in performance improvement in multi-threaded DGEMMT for small values of n.

AMD-Internal: [CPUPL-2215]
Change-Id: I2654bc64d2dc43c2db911e0c9175755be3aa8ba5
2022-07-18 06:52:48 -04:00
Vignesh Balasubramanian
2ad25a7180 ZGEMM kernel performance improvement for k=1 sizes:
The current implementation for handling zgemm exploits SIMD parallelism
along the k dimension. This gives great performance when k is large,
but for input sizes with k=1 it is better to exploit SIMD parallelism
along the m and n dimensions, thereby giving better performance. This
commit does so through loop reordering, loading column vectors from A.

AMD-Internal: [CPUPL-2236]
Change-Id: Ibfa29f271395497b6e2d0127c319ecb4b883d19f
2022-06-30 07:19:52 -04:00
Arnav Sharma
25cf7517ab AOCL Dynamic Optimization for DGEMMT
- Optimized thread allocation for cases with n <= 220 for DGEMMT.

AMD-Internal: [CPUPL-2215]
Change-Id: Id01edf268a90fd96a41ef947db54f6afc490548f
2022-06-30 15:00:34 +05:30
Dipal M Zambare
2ba2fb2b63 Add AVX2 path for TRSM+GEMM combination.
- Enabled the AVX2 TRSM + GEMM kernel path: when GEMM is called
  from a TRSM context it will invoke AVX2 GEMM kernels instead
  of the default AVX-512 GEMM kernels.

- The default context has the block sizes for AVX-512 GEMM
  kernels; however, TRSM uses AVX2 GEMM kernels, and they
  need different block sizes.

- Added new API bli_zen4_override_trsm_blkszs(). It overrides the
  default block sizes in the context with the block sizes needed for
  AVX2 GEMM kernels.

- Added new API bli_zen4_restore_default_blkszs(). It restores
  the block sizes to their default values (as needed by the default
  AVX-512 GEMM kernels).

- Updated bli_trsm_front() to override the block sizes in the
  context needed by TRSM + AVX2 GEMM kernels and restore them
  to the default values at the end of this function. It is done
  in bli_trsm_front() so that we override the context before
  creating different threads.

AMD-Internal: [CPUPL-2225]
Change-Id: Ie92d0fc40f94a32dfb865fe3771dc14ed7884c55
2022-06-29 10:16:24 +00:00
Harihara Sudhan S
d4bb906094 Exception handling in GEMV smart-threading
- Added a condition to check whether n or m is 0 in the
  smart-threading logic

AMD-Internal: [CPUPL-2219]
Change-Id: Idd58cd13a11aa5bdb4117b4c9262f38ef3c1afc4
2022-06-29 01:52:18 -04:00
satish kumar nuggu
aaf840d86e Disabled zgemm SUP path
- New thresholds need to be identified for the zgemm SUP path to avoid performance regression.

AMD-Internal: [CPUPL-2148]

Change-Id: I0baa2b415dc5e296780566ba7450249445b93d43
2022-06-27 08:19:12 +00:00
mkadavil
6c112632a7 Low precision gemm integrated as aocl_gemm addon.
- Multi-Threaded int8 GEMM (Input - uint8_t, int8_t, Output - int32_t).
AVX512_VNNI-based micro-kernel for int8 gemm. Parallelization supported
along m and n dimensions.
- Multi-threaded B matrix reorder support for sgemm. Reordering packs
the entire B matrix upfront, allowing sgemm to take advantage of a
packed B matrix without incurring packing costs at runtime.
- Makefile updates to addon make rules to compile avx512 code for
selected files in addon folder.
- CPU features query enhancements to check for AVX512_VNNI flag.
- Bench for int8 gemm and sgemm with B matrix reorder. Supports
performance mode for benchmarking and accuracy mode for testing code
correctness.

AMD-Internal: [CPUPL-2102]

Change-Id: I8fb25f5c2fbd97d756f95b623332cb29e3b8d182
2022-06-09 10:28:38 -04:00
Dipal M Zambare
c87b9aab75 Added support for AVX512 for Windows and AMAXV
- Completed zen4 configuration support on Windows
- Enabled AVX512 kernels for AMAXV
- Added zen4 configuration in amdzen for Windows
- Moved all zen4 kernels inside the kernels/zen4 folder

AMD-Internal: [CPUPL-2108]
Change-Id: I9d2336998bbcdb8e2c4ca474977b5939bfa578ba
2022-06-08 11:09:48 +05:30
Dipal M Zambare
8cc15107ed Enabled AVX-512 kernels for Zen4 config
- Enabled AVX-512 skylake kernels in the zen4 configuration.
  AVX-512 kernels are added for GEMM float and double types.

- Enabled reference kernel for the TRSM native path

AMD-Internal: [CPUPL-2108]
Change-Id: I66f3468346085c17183cbcbf4f2c8cfe07579b6f
2022-06-03 06:34:35 +00:00
Arnav Sharma
66b2231b65 Fixed CMake files for HER
- Removed subdirectory addition

Change-Id: I419085db0b9034777409207a7d79b7ffa91eb8f1
2022-06-01 12:25:43 +05:30
Arnav Sharma
e5d5a43eab Optimized ZHER Implementation
- Implemented optimized her framework calls for double-precision complex numbers.
- The zher kernel operates over 4 columns at a time. Initially, it computes the diagonal elements of the matrix, then the 4x4 triangular part, and finally the remaining part as 4x4 tiles of the matrix up to m rows.

AMD-Internal: [CPUPL-2151]

Change-Id: I27430ee33ffb901b3ef4bdd97b034e3f748e9cca
2022-05-25 14:03:01 +05:30
Dipal M Zambare
6e2f536590 Removed Arch specific code from BLIS framework.
- Removed BLIS_CONFIG_EPYC macro
- The code dependent on this macro is handled in
  one of three ways:

  -- It is updated to work across platforms.
  -- It is placed behind architecture/feature-specific runtime checks.
  -- It is duplicated in AMD-specific files. The build system is updated
     to pick the AMD-specific files when the library is built for any of
     the Zen architectures.

AMD-Internal: [CPUPL-1960]
Change-Id: I6f9f8018e41fa48eb43ae4245c9c2c361857f43b
2022-05-17 20:35:40 +05:30
Harsh Dave
f48ced0811 Optimized dher2 implementation
- Implemented her2 framework calls for transposed and
  non-transposed kernel variants.

- The dher2 kernel operates over 4 columns at a time. It computes
  the 4x4 triangular part of the matrix first, and the remainder is
  computed in chunks of 4x4 tiles up to m rows.

- Remainder cases (m < 4) are handled serially.

AMD-Internal: [CPUPL-1968]

Change-Id: I12ae97b2ad673a7fd9b733c607f27b1089142313
2022-05-17 18:13:07 +05:30
mkadavil
31f8820bab Bug fixes for open mp based multi-threaded GEMM/GEMMT SUP path.
- auto_factor is to be disabled if BLIS_IC_NT/BLIS_JC_NT is set,
irrespective of whether num_threads (BLIS_NUM_THREADS) is modified at
runtime. Currently auto_factor is enabled if num_threads > 0 and not
reverted if the ic/jc/pc/jr/ir ways are set in bli_rntm_set_ways_from_rntm.
This results in the gemm/gemmt SUP path applying a 2x2 factorization of
num_threads, thereby modifying the preset factorization. This issue
is not observed in the native path, since factorization there happens
without checking the auto_factor value.
- Setting omp threads to n_threads using omp_set_num_threads after the
global_rntm n_threads update in bli_thread_set_num_threads. This ensures
that in bli_rntm_init_from_global, omp_get_max_threads returns the same
value as set previously.

AMD-Internal: [CPUPL-2137]
Change-Id: I6c5de0462c5837cfb64793c3e6d49ec3ac2b6426
2022-05-17 18:10:40 +05:30
mkadavil
8670992c3d Default sgemv kernel to be used in single-threaded scenarios.
- sgemv calls a multi-threading-friendly kernel whenever it is compiled
with OpenMP and multi-threading enabled. However, it was observed that
this kernel is not suited for scenarios where sgemv is invoked in a
single-threaded context (e.g. sgemv from ST sgemm fringe kernels and with
matrix blocking). Falling back to the default single-threaded sgemv
kernel resulted in better performance for this scenario.

AMD-Internal: [CPUPL-2136]
Change-Id: Ic023db4d20b2503ea45e56a839aa35de0337d5a6
2022-05-17 18:10:40 +05:30
Dipal M Zambare
7247e6a150 Fixed crash issue in TRSM on non-avx platform.
- Ensured that FMA- and AVX2-based kernels are called only on platforms
  supporting these instructions; otherwise standard 'C' kernels will
  be called.
- Code cleanup for optimization and consistency

AMD-Internal: [CPUPL-2126]
Change-Id: I203270892b2fad2ccc9301fb55e2bae75508e050
2022-05-17 18:10:39 +05:30
Nallani Bhaskar
7658067107 Added AOCL Dynamic feature for dtrmm
Description:

1. Tuned the number of threads to achieve better performance
   for dtrmm

AMD-Internal: [CPUPL-2100]

Change-Id: Ib2e3df224ba76d86185721bef1837cd7855dd593
2022-05-17 18:10:39 +05:30
Dipal M Zambare
4ccb438c18 Updated Zen3 architecture detection for Ryzen 5000
- Added support to detect Ryzen 5000 Desktop and APUs

AMD-Internal: [CPUPL-2117]
Change-Id: I312a7de1a84cf368b74ba20e58192803a9f7dace
2022-05-17 18:10:39 +05:30
Nallani Bhaskar
2acb3f6ed0 Tuned aocl dynamic for specific range in dgemm
Description:

1. The decision logic that chooses the optimal number of threads for
   given input dgemm dimensions under the AOCL dynamic feature
   was retuned based on the latest code.

2. Updated code in a few files to avoid compilation warnings.

3. Added a minimum check for nt in the bli_sgemv_var1_smart_threading
   function

AMD-Internal: [ CPUPL-2100 ]
Change-Id: I2bc70cc87c73505dd5d2bdafb06193f664760e02
2022-05-17 18:10:39 +05:30
mkadavil
a3836a560d Smart Threading for GEMM (sgemm) v1.
- Cache aware factorization.
Experiments show that ic,jc factorization based on m,n gives better
results than factorization based on mu,nu on a generic data set in the
SUP path. Slight adjustments to the factorization w.r.t. matrix data
loads can help improve performance further.

- Moving native path inputs to SUP path.
Experiments show that in multi-threaded scenarios, if the per-thread
data falls under the SUP thresholds, taking the SUP path instead of the
native path improves performance. This is the case even if the original
matrix dimensions fall in the native path. It is not applicable if a
transpose of the A matrix is required.

- Enabling B matrix packing in SUP path.
Performance improvement is observed when B matrix is packed in cases
where gemm takes SUP path instead of native path based on per thread
matrix dimensions.

AMD-Internal: [CPUPL-659]

Change-Id: I3b8fc238a0ece1ababe5d64aebab63092f7c6914
2022-05-17 18:10:39 +05:30
S, HariharaSudhan
a8bc55c373 Multithreaded SGEMV var 1 with smart threading
- Implemented an OpenMP-based stand-alone SGEMV kernel for
  row-major (var 1) multithreaded scenarios
- Smart threading is enabled when AOCL DYNAMIC is defined
- The number of threads is decided based on the input dimensions
  using smart threading

AMD-Internal: [CPUPL-1984]
Change-Id: I9b191e965ba7468e95aabcce21b35a533017502e
2022-05-17 18:10:39 +05:30
Dave
963a6aa099 Enabled zgemm_sup path and removed sqp path
- Previously, zgemm computation failures occurred because the status
  variable did not have a predefined initial value, which caused zgemm
  to return without being computed by any kernel. The same change is
  reflected in the dgemm_ function as well.

- Enabled SUP zgemm, as the status-variable issue with the
  bli_zgemm_small call is fixed.

- Removed the call to the sqp method, as it is disabled.

Change-Id: I0f4edfd619bc4877ebfc5cb6532c26c3888f919d
2022-05-17 18:10:39 +05:30
satish kumar nuggu
1a3428ddfc Parallelization of dtrsm_small routine
1. Parallelized dtrsm_small across the m-dimension or n-dimension based on side (Left/Right).
2. Fine-tuned with AOCL_DYNAMIC to achieve better performance.

AMD-Internal: [CPUPL-2103]

Change-Id: I6be6a2b579de7df9a3141e0d68bdf3e8a869a005
2022-05-17 18:10:39 +05:30
Chandrashekara K R
8e6da6b844 Added checks to avoid defining the bool type for C++ code on Windows, preventing a redefinition build-time error.
AMD-Internal: [CPUPL-2037]
Change-Id: I065da9206ab06f60876324f258ee12fb9fe83f88
2022-05-17 18:10:39 +05:30
Dipal M Zambare
e712ffe139 Added AOCL progress support for BLIS
-- AOCL libraries are used for lengthy computations that can run for
   hours or days. Once an operation starts, the user gets no update on
   the current state of the computation. This feature (AOCL progress)
   enables the user to receive periodic updates from the libraries.
-- The user registers a callback with the library if they are
   interested in receiving periodic updates.
-- The library invokes this callback periodically with information
   about the current state of the operation.
-- The update frequency is statically set in the code; it can be
   modified as needed if the library is built from source.
-- This feature is supported for the GEMM and TRSM operations.

-- Added examples for GEMM and TRSM.
-- Cleaned up and reformatted test_gemm.c and test_trsm.c to
   remove warnings and make the indentation consistent across the
   files.

AMD-Internal: [CPUPL-2082]
Change-Id: I2aacdd8fb76f52e19e3850ee0295df49a8b7a90e
2022-05-17 18:10:39 +05:30
Harsh Dave
52e4fd0f11 Performance Improvement for ctrsm small sizes
Details:
- Enabled the ctrsm small implementation
- Handled overflow and underflow vulnerabilities in the
  ctrsm small implementations.
- Fixed failures observed in libflame testing.
- For small sizes, the ctrsm small implementation is
  used for all variants.

Change-Id: I17b862dcb794a5af0ec68f585992131fef57b179
2022-05-17 18:10:39 +05:30
Sireesha Sanga
cc3069fb5e Performance Improvement for ztrsm small sizes
Details:
- Optimized ztrsm for non-unit-diag variants.
- Handled overflow and underflow vulnerabilities in the
  ztrsm small implementations.
- Fixed failures observed in libflame testing.
- Fine-tuned the ztrsm small implementations for specific
  sizes 64 <= m,n <= 256, by keeping the number of
  threads at the optimum value, under the AOCL_DYNAMIC flag.
- For small sizes, the ztrsm small implementation is
  used for all variants.

AMD-Internal: [SWLCSG-1194]
Change-Id: I066491bb03e5cda390cb699182af4350ae60be2d
2022-05-17 18:10:39 +05:30
satish kumar nuggu
fe7f0a9085 Changes to enable zgemm small from BLAS Layer
1. Removed the small-gemm call from the native path to avoid
single-threaded calls as part of multi-threaded scenarios.
2. The SUP and INDUCED method paths are disabled.
3. Added AOCL Dynamic for the optimum number of threads to achieve higher
performance.

Change-Id: I3c41641bef4906bdbdb5f05e67c0f61e86025d92
2022-05-17 18:10:38 +05:30
Sireesha Sanga
9621ef3067 Performance Improvement for ztrsm small sizes
Details:
- Enable ztrsm small implementation
- For small sizes, Right Variants and Left Unit Diag
  Variants are using ztrsm_small implementations.
- Optimization of Left Non-Unit Diagonal Variants is
  work in progress

AMD-Internal: [SWLCSG-1194]
Change-Id: Ib3cce6e2e4ac0817ccd4dff4bb0fa4a23e231ca4
2022-05-17 18:09:22 +05:30
Harsh Dave
0976ed9ce5 Implement zgemm_small kernel
Details:
- Intrinsic implementation of the zgemm_small nn kernel.
- Intrinsic implementation of the zgemm_small_At kernel.
- Added support for conjugate and Hermitian transpose.
- The main loop operates on multiples of 4x3 tiles.
- Edge cases are handled separately.

AMD-Internal: [CPUPL-2084]
Change-Id: I512da265e4d4ceec904877544f1d15cddc147a66
2022-05-17 18:09:22 +05:30
Nallani Bhaskar
eb0ff01871 Fine-tuning dynamic threading logic of DGEMM for small dimensions
Description:
1. For small dimensions, single-threaded dgemm_small performs
   better than the dgemmsup and native paths.
2. Irrespective of the given number of threads, we redirect
   to single-threaded dgemm_small.

AMD-Internal: [CPUPL-2053]

Change-Id: If591152d18282c2544249f70bd2f0a8cd816b94e
2022-05-17 18:09:22 +05:30
Sireesha Sanga
6a2c4acc66 Runtime Thread Control using OpenMP API
Details:
-  During runtime, the application can set the desired number of threads
   using the standard OpenMP API omp_set_num_threads(nt).
-  BLIS Library uses standard OpenMP API omp_get_max_threads() internally,
   to fetch the latest value set by the application.
-  This value will be used to decide the number of threads in the subsequent
   BLAS calls.
-  At the time of BLIS Initialization, BLIS_NUM_THREADS environment variable
   will be given precedence, over the OpenMP standard API omp_set_num_threads(nt)
   and OMP_NUM_THREADS environment variable.
-  Order of precedence followed during BLIS Initialization is as follows
	1. Valid value of BLIS_NUM_THREADS
	2. omp_set_num_threads(nt)
	3. valid value of OMP_NUM_THREADS
	4. Number of cores
-  After BLIS initialization, if the application issues omp_set_num_threads(nt)
   during runtime, the number of threads set during BLIS initialization
   is overridden by the latest value set by the application.
-  The existing precedence of the BLIS_*_NT environment variables, and the
   decision of the optimal number of threads over the number of threads
   derived from the above process, remain as they are.

AMD-Internal: [CPUPL-2076]

Change-Id: I935ba0246b1c256d0fee7d386eac0f5940fabff8
2022-05-17 18:09:22 +05:30
mkurumel
ab06f17689 DGEMMT : Tuning SUP threshold to improve ST and MT performance.
Details:
- SUP threshold change for native vs SUP
- Improved ST performance for sizes n<800
- Introduced PACKB in SUP to improve ST performance for 320<n<800
- 16T SUP tuning for n<1600.

AMD-Internal: [CPUPL-1981]

Change-Id: Ie59afa4d31570eb0edccf760c088deaa2e10cdda
2022-05-17 18:09:22 +05:30
Dipal M Zambare
06e386f054 Updated Windows build system to pick AMD specific sources.
The framework cleanup was done for linux as part of
f63f78d7 Removed Arch specific code from BLIS framework.

This commit adds changes needed for windows build.

AMD-Internal: [CPUPL-2052]

Change-Id: Ibd503a0adeea66850de156fb95657b124e1c4b9d
2022-05-17 18:09:20 +05:30
Harsh Dave
d50d607995 dher2 API in blis make check fails on non avx2 platform
- dher2 did not have an AVX platform check; it called the AVX
  kernel regardless of platform support, which resulted in a
  core dump.

- Added an AVX-based platform check in both variants of dher2 to
  fix the issue.

AMD-Internal: [CPUPL-2043]
Change-Id: I1fd1dcc9336980bfb7ffa9376f491f107c889c0b
2022-05-17 18:08:57 +05:30
Dipal M Zambare
f69f59c32c Removed Arch specific code from BLIS framework.
- Removed BLIS_CONFIG_EPYC macro
- The code dependent on this macro is handled in
  one of three ways:

  -- It is updated to work across platforms.
  -- It is placed behind architecture/feature-specific runtime checks.
  -- It is duplicated in AMD-specific files. The build system is updated
     to pick the AMD-specific files when the library is built for any of
     the Zen architectures.

AMD-Internal: [CPUPL-1960]
Change-Id: I6f9f8018e41fa48eb43ae4245c9c2c361857f43b
2022-05-17 18:08:56 +05:30
Harsh Dave
d116780616 Optimized dher2 implementation
- Implemented her2 framework calls for transposed and
  non-transposed kernel variants.

- The dher2 kernel operates over 4 columns at a time. It computes
  the 4x4 triangular part of the matrix first, and the remainder is
  computed in chunks of 4x4 tiles up to m rows.

- Remainder cases (m < 4) are handled serially.

AMD-Internal: [CPUPL-1968]

Change-Id: I12ae97b2ad673a7fd9b733c607f27b1089142313
2022-05-17 18:05:08 +05:30
Meghana Vankadari
c11fd5a8f6 Added functionality support for dzgemm
AMD-Internal: [SWLCSG-1012]
Change-Id: I2eac3131d2dcd534f84491289cbd3fe7fb7de3da
2022-05-17 18:01:55 +05:30
Dipal M. Zambare
b90420627a Revert "Enabled AVX-512 kernels for Zen4 config"
This reverts commit 62c96a4190.
It was committed without review.
2022-04-21 06:46:00 +00:00
Dipal M. Zambare
62c96a4190 Enabled AVX-512 kernels for Zen4 config
Enabled AVX-512 skylake kernels in the zen4 configuration.
AVX-512 kernels are added for float and double types.

AMD-Internal: [CPUPL-2108]
2022-04-21 06:28:29 +00:00
Field G. Van Zee
a4abb10831 Added a new 'gemmlike' sandbox.
Details:
- Added a new sandbox called 'gemmlike', which implements sequential and
  multithreaded gemm in the style of gemmsup but also unconditionally
  employs packing. The purpose of this sandbox is to
  (1) avoid select abstractions, such as objects and control trees, in
      order to allow readers to better understand how a real-world
      implementation of high-performance gemm can be constructed;
  (2) provide a starting point for expert users who wish to build
      something that is gemm-like without "reinventing the wheel."
  Thanks to Jeff Diamond, Tze Meng Low, Nicholai Tukanov, and Devangi
  Parikh for requesting and inspiring this work.
- The functions defined in this sandbox currently use the "bls_" prefix
  instead of "bli_" in order to avoid any symbol collisions in the main
  library.
- The sandbox contains two variants, each of which implements gemm via a
  block-panel algorithm. The only difference between the two is that
  variant 1 calls the microkernel directly while variant 2 calls the
  microkernel indirectly, via a function wrapper, which allows the edge
  case handling to be abstracted away from the classic five loops.
- This sandbox implementation utilizes the conventional gemm microkernel
  (not the skinny/unpacked gemmsup kernels).
- Updated some typos in the comments of a few files in the main
  framework.

Change-Id: Ifc3c50e9fd0072aada38eace50c57552c88cc6cf
2022-04-01 13:55:30 +05:30
Field G. Van Zee
7a0ba4194f Added support for addons.
Details:
- Implemented a new feature called addons, which are similar to
  sandboxes except that there is no requirement to define gemm or any
  other particular operation.
- Updated configure to accept --enable-addon=<name> or -a <name> syntax
  for requesting an addon be included within a BLIS build. configure now
  outputs the list of enabled addons into config.mk. It also outputs the
  corresponding #include directives for the addons' headers to a new
  companion to the bli_config.h header file named bli_addon.h. Because
  addons may wish to make use of existing BLIS types within their own
  definitions, the addons' headers must be included sometime after that
  of bli_config.h (which currently is #included before bli_type_defs.h).
  This is why the #include directives needed to go into a new top-level
  header file rather than the existing bli_config.h file.
- Added a markdown document, docs/Addons.md, to explain addons, how to
  build with them, and what assumptions their authors should keep in
  mind as they create them.
- Added a gemmlike-like implementation of sandwich gemm called 'gemmd'
  as an addon in addon/gemmd. The code uses a 'bao_' prefix for local
  functions, including the user-level object and typed APIs.
- Updated .gitignore so that git ignores bli_addon.h files.

Change-Id: Ie7efdea366481ce25075cb2459bdbcfd52309717
2022-03-31 12:03:27 +05:30
Meghana Vankadari
0792eb8608 Fixed a bug in deriving dimensions from objects in gemm_front files
Change-Id: I1f796c3a7ce6efacb6ef64651a7818b7ee38c6bb
2022-02-16 23:24:14 -05:00
Harihara Sudhan S
6696f91f41 Improved DGEMV performance for column-major cases
- Altered the framework to use 2 more fused kernels for
  better problem decomposition
- Increased the unroll factor in the AXPYF5 and AXPYF8 kernels
  to improve register usage

AMD-Internal: [CPUPL-1970]

Change-Id: I79750235d9554466def5ff93898f832834990343
2022-02-02 23:13:10 -05:00
Dipal M Zambare
6d1edca727 Optimized CPU feature determination.
We added a new API to check whether the CPU architecture supports
AVX instructions. This API called the CPUID instruction every
time it was invoked. However, since this information does not
change at runtime, it is sufficient to determine it once
and use the cached result for subsequent calls. This optimization
is needed to improve performance for small-size matrix and vector
operations.

AMD-Internal: [CPUPL-2009]
Change-Id: If6697e1da6dd6b7f28fbfed45215ea3fdd569c5f
2022-02-01 11:15:55 +05:30