Defined rntm_t to relocate cntx_t.thrloop (#235).

Details:
- Defined a new struct datatype, rntm_t (runtime), to house the thrloop
  field of the cntx_t (context). The thrloop array holds the number of
  ways of parallelism (thread "splits") to extract per level-3
  algorithmic loop until those values can be used to create a
  corresponding node in the thread control tree (thrinfo_t structure),
  which (for any given level-3 invocation) usually happens by the time
  the macrokernel is called for the first time.
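To make the idea concrete, here is a minimal sketch (illustrative names and layout only, not BLIS's actual definition) of a rntm_t-like struct holding the per-loop ways of parallelism:

```c
#include <assert.h>

/* Illustrative stand-in for rntm_t: the ways of parallelism (thread
   "splits") to extract from each of the five level-3 algorithmic loops. */
typedef struct rntm_s
{
    int num_threads;  /* total threads requested, or -1 if unset */
    int ways[ 5 ];    /* ways for the JC, PC, IC, JR, IR loops */
} rntm_t;

/* The total parallelism implied by the per-loop ways. */
static int rntm_total_ways( const rntm_t* rntm )
{
    int n = 1;
    for ( int i = 0; i < 5; ++i ) n *= rntm->ways[ i ];
    return n;
}
```

These per-loop values are what later seed the corresponding nodes of the thrinfo_t tree.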
- Relocating the thrloop from the cntx_t remedies a thread-safety issue
  when invoking level-3 operations from two or more application threads.
  The race condition existed because the cntx_t, a pointer to which is
  usually queried from the global kernel structure (gks), is supposed to
be read-only. However, the previous code would write to the cntx_t's
  thrloop field *after* it had been queried, thus violating its read-only
  status. In practice, this would not cause a problem when a sequential
  application made a multithreaded call to BLIS, nor when two or more
  application threads used the same parallelization scheme when calling
BLIS, because in either case all application threads would be using
  the same ways of parallelism for each loop. The true effects of the
  race condition were limited to situations where two or more application
threads used *different* parallelization schemes for any given level-3
  call.
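The racy pattern described above can be illustrated with a toy example (names are hypothetical stand-ins, not BLIS's actual definitions): the context queried from the gks is shared among all application threads, so writing a per-call value into it lets concurrent callers clobber one another:

```c
#include <assert.h>

/* Illustrative stand-ins for cntx_t and the gks query. */
typedef struct { int thrloop_ways; } cntx_t;

static cntx_t  gks_cntx;                        /* shared; meant to be read-only */
static cntx_t* query_cntx( void ) { return &gks_cntx; }

/* The old pattern: write the caller's request into the SHARED context.
   Under concurrency, two threads requesting different ways race here. */
static int old_gemm_front( int ways )
{
    cntx_t* cntx = query_cntx();
    cntx->thrloop_ways = ways;   /* write-after-query: violates read-only status */
    return cntx->thrloop_ways;   /* may observe another thread's value instead */
}
```

Sequentially the bug is invisible (each call reads back its own write), which is exactly why it only surfaced when application threads requested different schemes concurrently.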
- As a byproduct of remedying the above race condition, the application
  or calling library can now specify the parallelization scheme on a
  per-call basis.
  All that is required is that the thread encode its request for
  parallelism into the rntm_t struct prior to passing the address of the
  rntm_t to one of the expert interfaces of either the typed or object
  APIs. This allows, for example, one application thread to extract 4-way
  parallelism from a call to gemm while another application thread
  requests 2-way parallelism. Or, two threads could each request 4-way
  parallelism, but from different loops.
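A sketch of what the per-call model looks like from the application's side (the struct and functions below are illustrative stand-ins, not BLIS's actual expert API; in this commit the real entry point is, e.g., bli_gemmnat() with a trailing rntm_t* argument):

```c
#include <assert.h>

/* Illustrative stand-in for rntm_t and an expert-style entry point. */
typedef struct { int n_ways; } rntm_t;

/* An expert-style call reads parallelism from the caller's own rntm_t. */
static int gemm_like( const rntm_t* rntm )
{
    return rntm->n_ways;
}

/* What each application thread does: build a thread-local rntm_t and
   pass it down. Nothing shared is written, so different threads may
   request different schemes concurrently without racing. */
static int app_thread_call( int requested_ways )
{
    rntm_t rntm = { .n_ways = requested_ways };  /* thread-local; no race */
    return gemm_like( &rntm );
}
```

In a real application, each thread populates its own stack-allocated rntm_t before the call, which is precisely why the shared, read-only context no longer needs to be written.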
- A rntm_t* parameter has been added to the function signatures of most
  of the level-3 implementation stack (with the most notable exception
  being packm) as well as all level-1v, -1d, -1f, -1m, and -2 expert
  APIs. (A few internal functions gained the rntm_t* parameter even
  though they currently have no use for it, such as bli_l3_packm().)
  This required updating various internal call sites, since BLIS was
  already using those operations internally via the expert interfaces.
  For situations where a rntm_t object is
  not available, such as within packm/unpackm implementations, NULL is
  passed in to the relevant expert interfaces. This is acceptable for
  now since parallelism is not obtained for non-level-3 operations.
- Revamped how global parallelism is encoded. First, the conventional
  environment variables such as BLIS_NUM_THREADS and BLIS_*_NT are only
  read once, at library initialization. (Thanks to Nathaniel Smith for
  suggesting this to avoid repeated calls to getenv(), which can be slow.)
  Those values are recorded to a global rntm_t object. Public APIs, in
  bli_thread.c, are still available to get/set these values from the
  global rntm_t, though now the "set" functions have additional logic
  to ensure that the values are set in a synchronous manner via a mutex.
  If/when NULL is passed into an expert API (meaning the user opted to
  not provide a custom rntm_t), the values from the global rntm_t are
  copied to a local rntm_t, which is then passed down the function stack.
  Calling a basic API is equivalent to calling the expert APIs with NULL
  for the cntx and rntm parameters, which means the semantic behavior of
  these basic APIs (vis-a-vis multithreading) is unchanged from before.
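The model described above can be sketched as follows (all names and the single-field struct are illustrative assumptions; the real global rntm_t and its mutex live in BLIS's bli_thread machinery, and the mutex is elided here for brevity):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { int num_threads; } rntm_t;   /* illustrative stand-in */

static rntm_t global_rntm;
static int    rntm_initialized = 0;

/* Read BLIS_NUM_THREADS once, at "library initialization", rather than
   on every call (repeated getenv() calls can be slow). A real
   implementation would guard this, and the "set" functions, with a mutex. */
static void global_rntm_init_once( void )
{
    if ( rntm_initialized ) return;
    const char* s  = getenv( "BLIS_NUM_THREADS" );
    int         nt = s ? atoi( s ) : 1;
    global_rntm.num_threads = ( nt < 1 ? 1 : nt );
    rntm_initialized = 1;
}

/* Expert-style entry point: NULL means "use the global settings", which
   are copied into a local rntm_t before descending the function stack,
   so the global object itself is never written during a call. */
static int gemm_like_ex( const rntm_t* rntm )
{
    rntm_t rntm_l;
    if ( rntm == NULL )
    {
        global_rntm_init_once();
        rntm_l = global_rntm;   /* copy global -> local */
        rntm   = &rntm_l;
    }
    return rntm->num_threads;
}
```

The copy-on-NULL step is what keeps the basic (non-expert) APIs behaving exactly as before while still routing everything through a rntm_t.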
- Renamed bli_cntx_set_thrloop_from_env() to bli_rntm_set_ways_for_op()
  and reimplemented it; the function now treats the incoming rntm_t in a
  manner agnostic to its origin, whether it came from the application or
  is an internal copy of the global rntm_t.
- Removed various global runtime APIs for setting the number of ways of
  parallelism for individual loops (e.g. bli_thread_set_*_nt()) as well
  as the corresponding "get" functions. The new model simplifies these
  interfaces so that one must either set the total number of threads, OR
  set all of the ways of parallelism for each loop simultaneously (in a
  single function call).
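Under the simplified model, the two mutually exclusive setters might look like this sketch (illustrative signatures; BLIS's actual functions are named along the lines of bli_rntm_set_num_threads() and bli_rntm_set_ways(), but treat the details here as assumptions rather than the exact API):

```c
#include <assert.h>

/* Illustrative stand-in: either the total thread count is set, or all
   five per-loop ways are set at once; setting one invalidates the other. */
typedef struct { int num_threads; int jc, pc, ic, jr, ir; } rntm_t;

/* Set only the total number of threads; per-loop ways become "unset". */
static void rntm_set_num_threads( int nt, rntm_t* rntm )
{
    rntm->num_threads = nt;
    rntm->jc = rntm->pc = rntm->ic = rntm->jr = rntm->ir = -1;
}

/* Set the ways for every loop simultaneously, in a single call; the
   total thread count becomes "unset" (implied by the product of ways). */
static void rntm_set_ways( int jc, int pc, int ic, int jr, int ir, rntm_t* rntm )
{
    rntm->num_threads = -1;
    rntm->jc = jc; rntm->pc = pc; rntm->ic = ic;
    rntm->jr = jr; rntm->ir = ir;
}
```

Making the two modes mutually exclusive avoids the ambiguity of a total thread count that disagrees with the product of the per-loop ways.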
- Updated sandbox/ref99 according to above changes.
- Rewrote/augmented docs/Multithreading.md to document the three methods
  (and two specific ways within each method) of requesting parallelism
  in BLIS.
- Removed old, disabled code from bli_l3_thrinfo.c.
- Whitespace changes to code (e.g. bli_obj.c) and docs/BuildSystem.md.
Author: Field G. Van Zee
Date:   2018-07-17 18:37:32 -05:00
Commit: ecbebe7c2e (parent 323eaaab99)
177 changed files with 2210 additions and 1166 deletions

@@ -44,6 +44,7 @@ void blx_gemm_front
obj_t* beta,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl
)
{
@@ -96,6 +97,19 @@ void blx_gemm_front
bli_obj_induce_trans( &c_local );
}
// Parse and interpret the contents of the rntm_t object to properly
// set the ways of parallelism for each loop, and then make any
// additional modifications necessary for the current operation.
bli_rntm_set_ways_for_op
(
BLIS_GEMM,
BLIS_LEFT, // ignored for gemm
bli_obj_length( &c_local ),
bli_obj_width( &c_local ),
bli_obj_width( &a_local ),
rntm
);
{
// A sort of hack for communicating the desired pack schemas for A and
// B to bli_gemm_cntl_create() (via bli_l3_thread_decorator() and
@@ -117,17 +131,6 @@ void blx_gemm_front
}
}
// Record the threading for each level within the context.
bli_cntx_set_thrloop_from_env
(
BLIS_GEMM,
BLIS_LEFT, // ignored for gemm
bli_obj_length( &c_local ),
bli_obj_width( &c_local ),
bli_obj_width( &a_local ),
cntx
);
// Invoke the internal back-end via the thread handler.
blx_gemm_thread
(
@@ -137,6 +140,7 @@ void blx_gemm_front
&b_local,
&c_local,
cntx,
rntm,
cntl
);
}

@@ -40,6 +40,7 @@ void blx_gemm_front
obj_t* beta,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl
);

@@ -41,6 +41,7 @@ void blx_gemm_int
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
)
@@ -56,7 +57,7 @@ void blx_gemm_int
bli_obj_alias_to( c, &c_local );
// Create the next node in the thrinfo_t structure.
bli_thrinfo_grow( cntx, cntl, thread );
bli_thrinfo_grow( rntm, cntl, thread );
// Extract the function pointer from the current control tree node.
f = bli_cntl_var_func( cntl );
@@ -68,6 +69,7 @@ void blx_gemm_int
&b_local,
&c_local,
cntx,
rntm,
cntl,
thread
);

@@ -38,6 +38,7 @@ void blx_gemm_int
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
);

@@ -54,7 +54,9 @@ void blx_l3_cntl_create_if
// values for unpacked objects. Notice that we do this even if the
// caller passed in a custom control tree; that's because we still need
// to reset the pack schema of a and b, which were modified by the
// operation's _front() function.
// operation's _front() function. However, in order for this to work,
// the level-3 thread entry function (or omp parallel region) must
// alias thread-local copies of objects a and b.
pack_t schema_a = bli_obj_pack_schema( a );
pack_t schema_b = bli_obj_pack_schema( b );

@@ -50,15 +50,20 @@ void bli_gemmnat
obj_t* b,
obj_t* beta,
obj_t* c,
cntx_t* cntx
cntx_t* cntx,
rntm_t* rntm
)
{
bli_init_once();
// Obtain a valid native context from the gks, if necessary.
// Obtain a valid native context from the gks if necessary.
if ( cntx == NULL ) cntx = bli_gks_query_cntx();
// Initialize a local runtime object if necessary.
rntm_t rntm_l;
if ( rntm == NULL ) { rntm = &rntm_l; bli_thread_init_rntm( rntm ); }
// Invoke the operation's front end.
blx_gemm_front( alpha, a, b, beta, c, cntx, NULL );
blx_gemm_front( alpha, a, b, beta, c, cntx, rntm, NULL );
}

@@ -40,6 +40,7 @@ void blx_l3_packm
obj_t* x,
obj_t* x_pack,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
)

@@ -37,6 +37,7 @@ void blx_l3_packm
obj_t* x,
obj_t* x_pack,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
);

@@ -46,11 +46,12 @@ void blx_gemm_thread
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl
)
{
// Query the total number of threads from the context.
dim_t n_threads = bli_cntx_get_num_threads( cntx );
dim_t n_threads = bli_rntm_num_threads( rntm );
// Allocate a global communicator for the root thrinfo_t structures.
thrcomm_t* gl_comm = bli_thrcomm_create( n_threads );
@@ -59,27 +60,38 @@ void blx_gemm_thread
{
dim_t id = omp_get_thread_num();
obj_t a_t, b_t, c_t;
cntl_t* cntl_use;
thrinfo_t* thread;
// Alias thread-local copies of A, B, and C. These will be the objects
// we pass down the algorithmic function stack. Making thread-local
// aliases IS ABSOLUTELY IMPORTANT and MUST BE DONE because each thread
// will read the schemas from A and B and then reset the schemas to
// their expected unpacked state (in blx_l3_cntl_create_if()).
bli_obj_alias_to( a, &a_t );
bli_obj_alias_to( b, &b_t );
bli_obj_alias_to( c, &c_t );
// Create a default control tree for the operation, if needed.
blx_l3_cntl_create_if( family, a, b, c, cntl, &cntl_use );
blx_l3_cntl_create_if( family, &a_t, &b_t, &c_t, cntl, &cntl_use );
// Create the root node of the current thread's thrinfo_t structure.
bli_l3_thrinfo_create_root( id, gl_comm, cntx, cntl_use, &thread );
bli_l3_thrinfo_create_root( id, gl_comm, rntm, cntl_use, &thread );
func
(
a,
b,
c,
&a_t,
&b_t,
&c_t,
cntx,
rntm,
cntl_use,
thread
);
// Free the control tree, if one was created locally.
blx_l3_cntl_free_if( a, b, c, cntl, cntl_use, thread );
blx_l3_cntl_free_if( &a_t, &b_t, &c_t, cntl, cntl_use, thread );
// Free the current thread's thrinfo_t structure.
bli_l3_thrinfo_free( thread );
@@ -92,6 +104,10 @@ void blx_gemm_thread
#endif
#ifdef BLIS_ENABLE_PTHREADS
#error "Sandbox does not yet implement pthreads."
#endif
// This code is enabled only when multithreading is disabled.
#ifndef BLIS_ENABLE_MULTITHREADING
@@ -103,6 +119,7 @@ void blx_gemm_thread
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl
)
{
@@ -120,7 +137,7 @@ void blx_gemm_thread
blx_l3_cntl_create_if( family, a, b, c, cntl, &cntl_use );
// Create the root node of the thread's thrinfo_t structure.
bli_l3_thrinfo_create_root( id, gl_comm, cntx, cntl_use, &thread );
bli_l3_thrinfo_create_root( id, gl_comm, rntm, cntl_use, &thread );
func
(
@@ -128,6 +145,7 @@ void blx_gemm_thread
b,
c,
cntx,
rntm,
cntl_use,
thread
);

@@ -39,6 +39,7 @@ typedef void (*gemmint_t)
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
);
@@ -51,6 +52,7 @@ void blx_gemm_thread
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl
);

@@ -41,6 +41,7 @@ void blx_gemm_blk_var1
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
)
@@ -71,7 +72,7 @@ void blx_gemm_blk_var1
// Perform gemm subproblem.
blx_gemm_int
(
&a1, b, &c1, cntx,
&a1, b, &c1, cntx, rntm,
bli_cntl_sub_node( cntl ),
bli_thrinfo_sub_node( thread )
);

@@ -41,6 +41,7 @@ void blx_gemm_blk_var2
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
)
@@ -71,7 +72,7 @@ void blx_gemm_blk_var2
// Perform gemm subproblem.
blx_gemm_int
(
a, &b1, &c1, cntx,
a, &b1, &c1, cntx, rntm,
bli_cntl_sub_node( cntl ),
bli_thrinfo_sub_node( thread )
);

@@ -41,6 +41,7 @@ void blx_gemm_blk_var3
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
)
@@ -67,7 +68,7 @@ void blx_gemm_blk_var3
// Perform gemm subproblem.
blx_gemm_int
(
&a1, &b1, c, cntx,
&a1, &b1, c, cntx, rntm,
bli_cntl_sub_node( cntl ),
bli_thrinfo_sub_node( thread )
);

@@ -51,6 +51,7 @@ typedef void (*gemm_fp)
void* beta,
void* c, inc_t rs_c, inc_t cs_c,
cntx_t* cntx,
rntm_t* rntm,
thrinfo_t* thread
);
@@ -70,6 +71,7 @@ void blx_gemm_ker_var2
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
)
@@ -135,6 +137,7 @@ void blx_gemm_ker_var2
buf_beta,
buf_c, rs_c, cs_c,
cntx,
rntm,
thread );
}
@@ -157,6 +160,7 @@ void PASTECH2(blx_,ch,varname) \
void* beta, \
void* c, inc_t rs_c, inc_t cs_c, \
cntx_t* cntx, \
rntm_t* rntm, \
thrinfo_t* thread \
) \
{ \

@@ -41,6 +41,7 @@ void blx_gemm_packa
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
)
@@ -53,6 +54,7 @@ void blx_gemm_packa
a,
&a_pack,
cntx,
rntm,
cntl,
thread
);
@@ -64,6 +66,7 @@ void blx_gemm_packa
b,
c,
cntx,
rntm,
bli_cntl_sub_node( cntl ),
bli_thrinfo_sub_node( thread )
);
@@ -77,6 +80,7 @@ void blx_gemm_packb
obj_t* b,
obj_t* c,
cntx_t* cntx,
rntm_t* rntm,
cntl_t* cntl,
thrinfo_t* thread
)
@@ -89,6 +93,7 @@ void blx_gemm_packb
b,
&b_pack,
cntx,
rntm,
cntl,
thread
);
@@ -100,6 +105,7 @@ void blx_gemm_packb
&b_pack,
c,
cntx,
rntm,
bli_cntl_sub_node( cntl ),
bli_thrinfo_sub_node( thread )
);

@@ -46,6 +46,7 @@ void PASTECH(blx_,opname) \
obj_t* b, \
obj_t* c, \
cntx_t* cntx, \
rntm_t* rntm, \
cntl_t* cntl, \
thrinfo_t* thread \
);
@@ -80,6 +81,7 @@ void PASTECH2(blx_,ch,varname) \
void* beta, \
void* c, inc_t rs_c, inc_t cs_c, \
cntx_t* cntx, \
rntm_t* rntm, \
thrinfo_t* thread \
);