llama.cpp/examples/imatrix
Compute an importance matrix for a model and given text dataset. Can be used during quantization to enhance the quality of the quantized models. More information is available here: https://github.com/ggerganov/llama.cpp/pull/4861
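For intuition: an importance matrix records, for each matrix multiplication in the model, how strongly every input column is activated by the calibration text; the quantizer can then penalize errors on heavily activated columns more. Below is a minimal NumPy sketch of that idea (shapes and names are purely illustrative, not the actual llama-imatrix implementation):

import numpy as np

def accumulate_imatrix(chunks):
    # chunks: list of (n_tokens, n_in) activation batches seen by one
    # matrix multiplication while evaluating the calibration text
    sums = np.zeros(chunks[0].shape[1])
    n_tokens = 0
    for x in chunks:
        sums += (x * x).sum(axis=0)  # sum of squared activations per input column
        n_tokens += x.shape[0]
    return sums / n_tokens           # mean squared activation = importance weight

rng = np.random.default_rng(0)
chunks = [rng.standard_normal((512, 256)) for _ in range(4)]
importance = accumulate_imatrix(chunks)  # one weight per input column

Roughly speaking, such per-column weights let the quantizer bias its rounding decisions toward preserving the most strongly activated weights.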
Usage
./llama-imatrix \
-m model.gguf -f some-text.txt [-o imatrix.dat] [--process-output] [--verbosity 1] \
[--no-ppl] [--chunk 123] [--output-frequency 10] [--save-frequency 0] \
[--in-file imatrix-prev-0.dat --in-file imatrix-prev-1.dat ...]
Here -m with a model name and -f with a file containing training data (e.g. wiki.train.raw) are mandatory.
The parameters in square brackets are optional and have the following meaning:
-o (or --output-file) specifies the name of the file where the computed data will be stored. If missing, imatrix.dat is used.
--verbosity specifies the verbosity level. If set to 0, no output other than the perplexity of the processed chunks will be generated. If set to 1, a message is written to stderr each time the results are saved. If >= 2, a message is output each time data is collected for any tensor. The default verbosity level is 1.
--output-frequency specifies how often the results computed so far are saved to disk. Default is 10 (i.e., every 10 chunks).
--save-frequency specifies how often to save a copy of the imatrix in a separate file. Default is 0 (i.e., never).
--process-output specifies if data will be collected for the output.weight tensor. My experience is that it is better to not utilize the importance matrix when quantizing output.weight, so this is set to false by default.
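The synopsis above also lists --in-file, which loads previously saved imatrix data so that statistics from several runs can be merged into a single output file. A sketch of such an invocation (all file names are placeholders):

./llama-imatrix -m model.gguf -f more-text.txt \
    --in-file imatrix-prev-0.dat --in-file imatrix-prev-1.dat \
    -o imatrix-combined.dat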
For faster computation, make sure to use GPU offloading via the -ngl argument.
Example
GGML_CUDA=1 make -j
# generate importance matrix (imatrix.dat)
./llama-imatrix -m ggml-model-f16.gguf -f train-data.txt -ngl 99
# use the imatrix to perform a Q4_K_M quantization
./llama-quantize --imatrix imatrix.dat ggml-model-f16.gguf ./ggml-model-q4_k_m.gguf q4_k_m
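To gauge what the imatrix bought you, one option is to compare the perplexity of the quantized model against the f16 original using the llama-perplexity tool (assuming it is built alongside the other binaries, as in mainline llama.cpp; wiki.test.raw is a placeholder test set):

# compare perplexity of the f16 reference and the imatrix-quantized model
./llama-perplexity -m ggml-model-f16.gguf -f wiki.test.raw -ngl 99
./llama-perplexity -m ggml-model-q4_k_m.gguf -f wiki.test.raw -ngl 99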