Details:
- Group quantization is a technique to improve accuracy in which
  the scale factors used to quantize inputs and weights vary at
  the group level instead of at the per-channel or per-tensor
  level.
- Added new bench files to test GEMM with symmetric static
quantization.
- Added new get_size and reorder functions to account for
  storing the sum of column values separately per group.
- Added new framework code and kernels to support group quantization.
- The scale factors can be of type float or bf16.
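
The group-level scaling described above can be sketched as follows. This is an
illustrative example only, not the actual library implementation; the function
name, group size, and int8 range are assumptions.

```python
# Sketch of group-wise symmetric quantization: one scale factor per group
# of consecutive values, rather than one per channel or per tensor.
# Illustrative only; names and layout are not the real kernel API.

def quantize_groupwise(values, group_size):
    """Symmetrically quantize `values` to int8, with a separate scale
    per group of `group_size` elements. Also returns the per-group sum
    of quantized values, stored separately (as the reorder path does)
    so it can be reused during the GEMM."""
    quantized, scales, group_sums = [], [], []
    for start in range(0, len(values), group_size):
        group = values[start:start + group_size]
        # Symmetric scale: map the max magnitude in the group to 127.
        scale = max(abs(v) for v in group) / 127.0 or 1.0
        q = [round(v / scale) for v in group]
        quantized.extend(q)
        scales.append(scale)
        # Column-value sum kept per group instead of per tensor.
        group_sums.append(sum(q))
    return quantized, scales, group_sums

values = [0.5, -1.27, 0.02, 0.9, -0.3, 1.1, 0.07, -0.6]
q, scales, sums = quantize_groupwise(values, group_size=4)
```

A smaller group size tracks local value ranges more tightly (better accuracy),
at the cost of storing more scale factors and per-group sums.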
AMD-Internal:[SWLCSG-3274]
Change-Id: I3e69ecd56faa2679a4f084031d35ffb76556230f