Georgi Gerganov
6224f81799
ci : pip install gguf in editable mode (#2782)
ggml-ci
2023-08-25 13:03:25 +03:00
M. Yusuf Sarıgöz
08a1012230
gguf : export objects to user code (#2780)
* gguf export more objects to user code
* gguf export all objects to user code for now
* gguf : bump version
2023-08-25 12:43:41 +03:00
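A minimal sketch of what the newly exported objects enable from user code, assuming the gguf-py package layout of this period (the writer methods shown are that package's public API, but details may differ):

```python
import numpy as np
import gguf  # pip install gguf

# Build a tiny GGUF file directly from user code via the exported writer.
writer = gguf.GGUFWriter("example.gguf", "llama")
writer.add_tensor("tok_embeddings.weight",
                  np.zeros((4, 4), dtype=np.float32))
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```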
Henri Vasserman
984b7495ed
ROCm Port (#1087)
* use hipblas based on cublas
* Update Makefile for the Cuda kernels
* Expand arch list and make it overrideable
* Fix multi GPU on multiple AMD architectures with rocblas_initialize() (#5)
* add hipBLAS to README
* new build arg LLAMA_CUDA_MMQ_Y
* fix half2 decomposition
* Add intrinsics polyfills for AMD
* AMD assembly optimized __dp4a
* Allow overriding CC_TURING
* use "ROCm" instead of "CUDA"
* ignore all build dirs
* Add Dockerfiles
* fix llama-bench
* fix -nommq help for non-CUDA/HIP builds
---------
Co-authored-by: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Co-authored-by: ardfork <134447697+ardfork@users.noreply.github.com>
Co-authored-by: funnbot <22226942+funnbot@users.noreply.github.com>
Co-authored-by: Engininja2 <139037756+Engininja2@users.noreply.github.com>
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Co-authored-by: jammm <2500920+jammm@users.noreply.github.com>
Co-authored-by: jdecourval <7315817+jdecourval@users.noreply.github.com>
2023-08-25 12:09:42 +03:00
Georgi Gerganov
40c8c6dd6f
cuda : add RoPE kernel for mode == 2 (NeoX) (#2760)
* cuda : add RoPE kernel for mode == 2 (NeoX)
* falcon : do not offload the embeddings layer
2023-08-25 11:55:59 +03:00
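The difference between the two RoPE modes is only which elements are paired for rotation; a numpy sketch of the idea (an illustration of the math, not the CUDA kernel):

```python
import numpy as np

def rope_normal(x, pos, base=10000.0):
    # mode 0: rotate adjacent pairs (x[2i], x[2i+1])
    d, out = x.shape[-1], x.copy()
    for i in range(d // 2):
        theta = pos * base ** (-2.0 * i / d)
        c, s = np.cos(theta), np.sin(theta)
        x0, x1 = x[2 * i], x[2 * i + 1]
        out[2 * i]     = x0 * c - x1 * s
        out[2 * i + 1] = x0 * s + x1 * c
    return out

def rope_neox(x, pos, base=10000.0):
    # mode 2 (NeoX): pair x[i] with x[i + d/2] -- first half against second half
    d, out = x.shape[-1], x.copy()
    for i in range(d // 2):
        theta = pos * base ** (-2.0 * i / d)
        c, s = np.cos(theta), np.sin(theta)
        x0, x1 = x[i], x[i + d // 2]
        out[i]          = x0 * c - x1 * s
        out[i + d // 2] = x0 * s + x1 * c
    return out
```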
M. Yusuf Sarıgöz
a67ec14fe9
gguf : make gguf pip-installable
* gitignore : add dist and rm pyproject.toml
* gguf: prepare as Pip package
* gguf: prepare as Pip package
* gguf : fix line endings
* requirements : add gguf
* gguf : update readme with build notes
* gguf : update readme with build notes
* gguf : add notes for tests
2023-08-25 09:26:05 +03:00
Shouzheng Liu
800ef93db4
ggml-alloc : enlarge size of parse_seq (#2776)
Since we also store barriers in this array, we need to double its size.
2023-08-25 08:58:00 +03:00
Marcus Dunn
1e8200c3d0
Added enum to llama_token_get_type return type (#2774)
2023-08-24 23:49:30 +02:00
slaren
506fd81d05
convert.py : try to determine n_ctx automatically for CodeLlama (#2770)
2023-08-24 21:10:39 +02:00
slaren
9818be3377
gguf : add rope_freq_base parameter for CodeLlama (#2769)
2023-08-24 21:04:05 +03:00
Georgi Gerganov
9042737101
falcon : write file type
2023-08-24 19:58:30 +03:00
Shouzheng Liu
dc51c17e4c
metal : bug-fix when enabling ggml-alloc (#2757)
* metal: better memory alloc w/ concurrency dispatch
The ggml-alloc should only free tensors at memory barriers.
* ggml-alloc: avoid returning silently
In certain cases, the allocate_node() function may silently return
without performing any memory allocation.
2023-08-24 19:27:25 +03:00
Georgi Gerganov
8b08abe24f
convert : auto-determine model name based on dir + scripts update
2023-08-24 19:26:47 +03:00
Kerfuffle
2a2645fd76
Fix for main example getting stuck when -n -2 and --interactive (#2767)
* Fix for main example getting stuck when -n -2 and --interactive
* Add a comment so future generations may suffer less.
2023-08-24 10:11:13 -06:00
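For context, -n -1 means generate indefinitely and -n -2 means generate until the context is full; a hypothetical sketch of the stopping rule (the real check lives in examples/main, and names here are illustrative):

```python
def keep_generating(n_generated: int, n_predict: int,
                    n_past: int, n_ctx: int) -> bool:
    # n_predict == -2: stop once the context window is full
    if n_predict == -2:
        return n_past < n_ctx
    # n_predict == -1 (or any negative): no generation limit
    if n_predict < 0:
        return True
    # otherwise: fixed token budget
    return n_generated < n_predict
```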
slaren
3b743a5340
fix convert.py for codellama, add llama 34B to the list of recognized models (#2768)
2023-08-24 17:44:11 +02:00
DannyDaemonic
a74a205f64
Tag release with build number (#2732)
* Modified build.yml to use build number for release
* Add the short hash back into the tag
* Prefix the build number with b
2023-08-24 15:58:02 +02:00
Georgi Gerganov
25399c1197
metal : add Q8_0 support (#2763)
* metal : add dequantize_q8_0 kernel
* metal : add mul_mat_q8_0_f32 kernel
* metal : add Q8_0 mul_mm kernel
2023-08-24 16:19:57 +03:00
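Q8_0 stores blocks of 32 int8 quants with a single fp16 scale per block, so dequantization is one multiply per weight; a numpy sketch of what these kernels compute (the math, not the Metal code):

```python
import numpy as np

QK8_0 = 32  # elements per Q8_0 block

def dequantize_q8_0(scales_f16: np.ndarray, quants_i8: np.ndarray) -> np.ndarray:
    # scales_f16: (n_blocks,) fp16 scale d for each block
    # quants_i8:  (n_blocks, QK8_0) int8 quantized values q
    # dequantized weight = d * q
    return scales_f16.astype(np.float32)[:, None] * quants_i8.astype(np.float32)
```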
Georgi Gerganov
96e9fad81f
llama : escape all U+2581 in a string (#2750)
2023-08-24 12:26:01 +03:00
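U+2581 (▁) is the SentencePiece whitespace marker, and the fix is to escape every occurrence rather than only the first; an illustrative Python sketch of the preprocessing step (not the llama.cpp code itself):

```python
def escape_whitespace(text: str) -> str:
    # SentencePiece-style preprocessing: every space becomes U+2581
    # before tokenization; str.replace covers all occurrences.
    return text.replace(" ", "\u2581")

assert escape_whitespace("a b c") == "a\u2581b\u2581c"
```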
Evan Jones
f4102e260a
llama : fix grammar sometimes generating null char (#2756)
2023-08-24 07:07:13 +03:00
Georgi Gerganov
fc84c48240
readme : fix link
2023-08-23 23:44:19 +03:00
Georgi Gerganov
1fac3b2c0b
minor : fix trailing whitespace
2023-08-23 23:43:00 +03:00
Georgi Gerganov
eb5bf4480c
readme : update hot topics
2023-08-23 23:41:16 +03:00
Georgi Gerganov
5faba0e8a3
llm : add Falcon support (#2717)
* llama : refactor GGUF constants into static maps
* llama : check if model architecture is known
* llama : refactor llama_model_load_internal()
* gguf : add KV constant maps
* llm : read arch-specific KVs
* convert : add dummy scores + types
* falcon : load tensor data (CPU only)
* llama : fix loading progress bar
* llama : add arch member to llama_model
* falcon : CPU inference working
* falcon : support non-40B models
* falcon : minor
* llama : minor updates
ggml-ci
* convert-falcon-hf-to-gguf.py : fix special token mapping
* llama.cpp : llama default UNK token = id 0
* llama.cpp : fix bpe tokenizer
* llama.cpp : fix the fix of bpe tokenizer
* ggml : pass eps to ggml_norm
* metal : implement RoPE (mode = 2) + avoid ggml_repeat
* ggml : ggml_repeat always creates new tensor
* falcon : copy-paste self-attention from LLaMA
* metal : print extra compute pipeline info
* falcon : minor changes (still chasing the Metal problem)
* llama.cpp : fix linefeed token
* metal : fix GELU kernel numerical stability by using precise::tanh
* metal : temporary workaround for the concurrency optimization bug
* falcon : add CUDA offloading (#2739)
* llama : better model naming and size reporting
* llama : prep new tokenizer support
* llama : advanced BPE tokenizer based on ggllm.cpp implementation
* llama : remove obsolete comment
ggml-ci
* common : remove obsolete BPE API + disable test-tokenizer-1
* llama : revert BPE special-case in llama_byte_to_token()
* cuda : add TODOs for RoPE NeoX implementation
* llama : default special tokens based on vocab type
* perplexity : add log for start of tokenization
---------
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
Co-authored-by: slaren <slarengh@gmail.com>
2023-08-23 23:08:04 +03:00
Georgi Gerganov
4e35149e03
minor : fix trailing whitespace
2023-08-23 22:37:39 +03:00
Olivier Chafik
7bdaf1f167
examples : restore the functionality to import llama2.c models (#2685)
* Fix import of llama2.c models that don't share weights between embedding layers
* llama2c: reinstate ggmlv3 conversion output + update readme w/ gguf conv
* llama2.c: comment out legacy "load from ggml model" logic
* llama2.c: convert special-cased "<0xXX>" single byte tokens from tokenizer.bin
2023-08-23 22:33:05 +03:00
slaren
fb68e9c4e4
fix convert-lora-to-ggml.py (#2738)
2023-08-23 16:46:54 +02:00
klosax
ae180e1cec
main : insert bos if no tokens (#2727)
* main.cpp : insert bos if no tokens
* Update examples/main/main.cpp
* Update examples/main/main.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-08-23 16:46:03 +02:00
akawrykow
e2108e57c8
gitignore : fix for windows (#2729)
2023-08-23 17:31:34 +03:00
Cebtenzzre
557d5f9edf
chmod : make scripts executable (#2675)
2023-08-23 17:29:09 +03:00
JohnnyB
61c5da152b
devops : RPM Specs (#2723)
* Create llama-cpp.srpm
* Rename llama-cpp.srpm to llama-cpp.srpm.spec
Correcting extension.
* Tested spec success.
* Update llama-cpp.srpm.spec
* Create lamma-cpp-cublas.srpm.spec
* Create lamma-cpp-clblast.srpm.spec
* Update lamma-cpp-cublas.srpm.spec
Added BuildRequires
* Moved to devops dir
2023-08-23 17:28:22 +03:00
Kawrakow
6d9174f956
Fix values shown in the quantize tool help (#2735)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-08-23 12:57:12 +03:00
Kawrakow
0a7ab80b61
Strided perplexity (#2714)
* Implementing strided computation of perplexity
* Alternative way to output PPL results
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-08-23 12:56:42 +03:00
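Perplexity is exp of the mean negative log-likelihood over the scored tokens; a strided variant slides overlapping windows through the text and scores only the tokens that enter fresh at the end of each window. A sketch under those assumed semantics (not the exact tool):

```python
import math

def strided_perplexity(logprob_of, tokens, n_ctx: int, stride: int) -> float:
    # logprob_of(context, token) -> log p(token | context); stands in for a
    # model evaluation. Requires len(tokens) > n_ctx.
    nll, count = 0.0, 0
    for start in range(0, len(tokens) - n_ctx, stride):
        window = tokens[start:start + n_ctx]
        # score only the last `stride` tokens; the rest is re-used context
        for j in range(n_ctx - stride, n_ctx):
            nll -= logprob_of(window[:j], window[j])
            count += 1
    return math.exp(nll / count)
```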
IgnacioFDM
047c0403c4
Fix ggml to gguf conversion on Windows (#2733)
This fixes `RuntimeWarning: overflow encountered in long_scalars`
Credit: anon (not mine)
2023-08-23 03:31:09 -06:00
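The likely mechanism behind that warning: on Windows, numpy's default integer is 32-bit, so multiplying tensor dimensions can overflow. A minimal reproduction and the usual fix (illustrative values, not the converter's code):

```python
import numpy as np

dims = np.array([4096, 4096, 512], dtype=np.int32)  # numpy's default int on Windows

# n_bytes = dims[0] * dims[1] * dims[2]
# -> RuntimeWarning: overflow encountered in long_scalars (8589934592 > 2**31 - 1)

# Promote to Python ints (arbitrary precision) before multiplying:
n_bytes = int(dims[0]) * int(dims[1]) * int(dims[2])
print(n_bytes)  # 8589934592
```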
Xiao-Yong Jin
58656653b1
server : allow json array in prompt or content for direct token input (#2306)
* server: allow json array in prompt or content
We accept an array of strings and numbers representing tokens,
in addition to the current string-valued prompt or content.
This allows direct token input: special tokens can be processed
and inserted on the frontend while constructing the JSON data,
before sending it to the server, so the server does not need to
know about or parse special tokens in textual input.
With this, we can use the EOS and BOS tokens used by llama-2-chat models.
* server: use tokenizePrompt(json) and default "" if empty prompt
* server: fix prompt check
* server: tokenize endpoint no longer adds BOS
2023-08-23 15:12:12 +08:00
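With this change a client can mix raw token ids and strings in one prompt; a hypothetical request against a locally running example server (the address is an assumption; token id 1 is LLaMA's BOS):

```python
import json
import urllib.request

# Place BOS (id 1) explicitly instead of having the server parse it from text.
payload = {"prompt": [1, "[INST] Hello! [/INST]"], "n_predict": 32}

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",  # assumed local server address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["content"])
```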
Evan Jones
943bf8930c
docs : add grammar docs (#2701)
* docs : add grammar docs
* tweaks to grammar guide
* rework GBNF example to be a commented grammar
2023-08-22 21:01:57 -04:00
Kerfuffle
0ef7086455
Improve handling of special tokens in GGML to GGUF converter (#2725)
* Improve UNK, BOS, EOS token handling when converting without metadata.
* Allow importing as a module.
* Remove some obsolete code and minor cleanups.
* Set default UNK token mapping from -1 to 0 in llama.cpp
* Try to handle overflow due to buggy Windows Python with a better error message
2023-08-22 17:39:39 -06:00
goerch
d916cb3d85
llama : fix whitespace escaping in tokenizer (#2724)
2023-08-23 00:10:42 +03:00
Johannes Gäßler
466a79f7b4
CUDA: use mul_mat_q kernels by default (#2683)
2023-08-22 22:47:05 +02:00
Alex Petenchea
c358145028
convert.py : clarifying error message (#2718)
2023-08-22 21:58:16 +03:00
Jiahao Li
946bf0ad96
Fix CUDA softmax by subtracting max value before exp (#2665)
2023-08-22 20:27:06 +02:00
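Subtracting the row maximum before exponentiating leaves softmax mathematically unchanged but keeps exp() in range; a numpy sketch of the idea behind the fix (not the CUDA kernel):

```python
import numpy as np

def softmax_stable(x: np.ndarray) -> np.ndarray:
    # exp(x - max(x)) / sum(exp(x - max(x))) equals exp(x) / sum(exp(x)),
    # but the largest exponent is now 0, so large logits cannot overflow to inf.
    z = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return z / np.sum(z, axis=-1, keepdims=True)

print(softmax_stable(np.array([1000.0, 1001.0])))  # naive exp(1000.0) overflows
```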
Georgi Gerganov
45b45614c0
gguf : add ftype meta info to the model (#2710)
* llama : add ftype meta info to the model
ggml-ci
* convert.py : add ftype when converting (does not work)
* convert.py : fix Enum to IntEnum
ggml-ci
2023-08-22 20:05:59 +03:00
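The Enum-to-IntEnum change matters because an IntEnum member is itself an int and can be written straight into a binary KV field; a sketch of the distinction (the enum members here are illustrative, not the real ftype list):

```python
import struct
from enum import Enum, IntEnum

class GGMLFileType(IntEnum):  # illustrative members only
    ALL_F32 = 0
    MOSTLY_F16 = 1

# An IntEnum member passes anywhere an int is expected:
packed = struct.pack("<I", GGMLFileType.MOSTLY_F16)  # ok

class PlainFileType(Enum):
    MOSTLY_F16 = 1

# struct.pack("<I", PlainFileType.MOSTLY_F16)
# -> struct.error: required argument is not an integer
```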
Kawrakow
42e9f23b94
Quantization improvements for k_quants (#2707)
* Improve LLaMA-2 2-, 3- and 4-bit quantization
* Q3_K_S: use Q5_K for 1st 2 layers of attention.wv and feed_forward.w2
* Q4_K_S: use Q6_K for 1st 2 layers of attention.wv and feed_forward.w2
* Q2_K and Q3_K_M: use Q5_K instead of Q4_K for 1st 2 layers of
attention.wv and feed_forward.w2
This leads to a slight model size increase as follows:
Q2_K : 2.684G vs 2.670G
Q3_K_S: 2.775G vs 2.745G
Q3_K_M: 3.071G vs 3.057G
Q4_K_S: 3.592G vs 3.563G
LLaMA-2 PPL for context 512 changes as follows:
Q2_K : 6.6691 vs 6.8201
Q3_K_S: 6.2129 vs 6.2584
Q3_K_M: 6.0387 vs 6.1371
Q4_K_S: 5.9138 vs 6.0041
There are improvements for LLaMA-1 as well, but they are
much smaller than the above.
* Minor 4-bit quantization improvement
For the same model size as the previous commit, we get
PPL = 5.9069 vs 5.9138.
* Some more fine tuning
* Adding make_qkx2_quants
With it, we get PPL = 5.8828 for L2-7B Q4_K_S.
* Another minor improvement
* Q2_K improvement
Smaller model, lower perplexity.
7B: file size = 2.632G, PPL = 6.3772 vs original 2.670G PPL = 6.8201
12B: file size = 5.056G, PPL = 5.4577 vs original 5.130G PPL = 5.7178
It is mostly Q3_K except for tok_embeddings, attention.wq, attention.wk,
which are Q2_K
* Iterating
* Revert Q5_K back to make_qkx1_quants
* Better Q6_K
* make_qkx2_quants is better for Q5_K after all
* Fix after rebasing on master
* Fix for changed tensor names
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-08-22 19:14:09 +03:00
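The gist of the changes above is a per-tensor type selection rule; a sketch of that rule as described in the commit body (names and the layer threshold are taken from the text, the function itself is illustrative):

```python
def pick_quant_type(tensor_name: str, layer: int, base: str) -> str:
    # Per the commit: the first 2 layers of attention.wv and feed_forward.w2
    # are bumped to a higher-bit k-quant; everything else keeps the base type.
    bumped = {"Q3_K_S": "Q5_K", "Q4_K_S": "Q6_K", "Q2_K": "Q5_K", "Q3_K_M": "Q5_K"}
    if layer < 2 and tensor_name in ("attention.wv", "feed_forward.w2"):
        return bumped.get(base, base)
    return base
```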
slaren
a1a27de242
embedding : evaluate prompt in batches (#2713)
2023-08-22 16:03:12 +02:00
slaren
127e9263b9
ggml-cuda : use graph allocator (#2684)
* use a different function for no_alloc to avoid breaking backwards compat, fixes lora
* remove 512 n_batch limit
* fixed 2048 batch size
* cleanup
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2023-08-22 15:25:19 +02:00
Georgi Gerganov
cd3ea9a1aa
ggml : sync latest (SAM + SD operators, CUDA alibi) (#2709)
* ggml : sync latest (SAM + SD operators, CUDA alibi)
ggml-ci
* ggml : fix tabs
2023-08-22 14:22:08 +03:00
slaren
f7b77acb2d
llama-bench : minor fixes (#2695)
2023-08-22 10:56:03 +03:00
Kylin
6c828cecd4
ggml : support CUDA's half type for aarch64 (#1455) (#2670)
...
* ggml: support CUDA's half type for aarch64 (#1455)
support CUDA's half type for aarch64 in ggml_fp16_t definition
* ggml: use __CUDACC__ to recognise nvcc compiler
2023-08-22 10:14:23 +03:00
Shouzheng Liu
546f5e93bb
metal : add missing barriers for mul-mat (#2699)
2023-08-22 09:18:40 +03:00
Jhen-Jie Hong
d5ef2c2437
server : fallback to default if client param is null (#2688)
* server : fallback to default if client param is null
* server : do not overwrite 404 if status is 500 from exception_handler
2023-08-22 08:32:00 +08:00
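"Fallback to default if null" means an explicit JSON null in the request should behave like an omitted field; a sketch of the rule in Python (the server itself does this in C++, and the defaults here are illustrative):

```python
DEFAULTS = {"temperature": 0.8, "top_p": 0.95}  # illustrative defaults

def get_param(body: dict, key: str):
    # Treat a missing key and an explicit null the same way.
    value = body.get(key)
    return DEFAULTS[key] if value is None else value

assert get_param({"temperature": None}, "temperature") == 0.8
assert get_param({}, "top_p") == 0.95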
Kerfuffle
6b727cf519
Fix convert-llama-ggmlv3-to-gguf.py vocab conversion (#2698)
When converting without metadata, the hex values for byte entries weren't zero-padded to 2 digits.
2023-08-21 18:01:34 -06:00
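The underlying bug in one line: formatting a byte with hex() yields a single digit for values below 0x10, which breaks fixed-width byte-valued vocab entries; the fix is a width-2, zero-padded format:

```python
b = 0x0A
print(hex(b)[2:])  # 'a'  -- one digit, malformed byte entry
print(f"{b:02x}")  # '0a' -- zero-padded to 2 digits, as the fix requires
```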
Georgi Gerganov
b8ca3dd433
py : remove obsolete script
2023-08-21 23:40:22 +03:00