Georgi Gerganov
f3ee043e24
ggml : sync (custom ops) ( #2537 )
...
ggml-ci
2023-08-07 13:20:09 +03:00
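This sync pulls in the ggml custom-ops API (ggml_map_custom1 and friends). A minimal sketch of a user-defined op under that API; the callback signature and GGML_N_TASKS_MAX are taken from ggml's public header and should be treated as assumptions here:

```cpp
#include "ggml.h"

// user-defined kernel: scales every element; ith/nth partition the work
// across the compute threads
static void scale_op(struct ggml_tensor * dst, const struct ggml_tensor * src,
                     int ith, int nth, void * userdata) {
    const float factor = *(const float *) userdata;
    const float * x = (const float *) src->data;
    float       * y = (float *)       dst->data;
    const int64_t n = ggml_nelements(dst);
    for (int64_t i = ith; i < n; i += nth) { // simple strided split over threads
        y[i] = factor * x[i];
    }
}

// usage: out = ggml_map_custom1(ctx, inp, scale_op, GGML_N_TASKS_MAX, &factor);
```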
Johannes Gäßler
19cf204509
Fixed mmap prefetch for GPU offloading ( #2529 )
2023-08-07 10:09:40 +02:00
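The gist of the fix: when layers are offloaded to the GPU, only the part of the mapped file that the CPU will actually read should be prefetched. A hedged sketch of that idea (names are illustrative, not the PR's):

```cpp
#include <sys/mman.h>
#include <algorithm>
#include <cstddef>

// hint the kernel to prefetch only the bytes the CPU side will touch;
// prefetch_bytes shrinks when tensors are offloaded to the GPU
static void hint_prefetch(void * addr, size_t file_size, size_t prefetch_bytes) {
    const size_t len = std::min(file_size, prefetch_bytes);
    if (len > 0) {
        posix_madvise(addr, len, POSIX_MADV_WILLNEED);
    }
}
```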
Georgi Gerganov
0637bae56a
metal : fix out-of-bounds access + inc concurrency nodes ( #2416 )
...
* metal : fix out-of-bounds access + style changes
* metal : increase concurrency nodes to 2*GGML_MAX_NODES
2023-08-07 10:52:57 +03:00
GiviMAD
c39e3477fd
[Makefile] Move ARM CFLAGS before compilation ( #2536 )
2023-08-07 09:21:46 +03:00
Henri Vasserman
8cc4ff2f1a
[Zig] Rewrite build for Zig 0.11 ( #2514 )
...
* zig build fixes
* Disable LTO on Windows.
2023-08-07 08:35:53 +03:00
DannyDaemonic
28fe11b65e
console : fix issue related to Windows 11 PowerShell console mode persistence ( #2521 )
2023-08-06 09:49:34 +03:00
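Fixes in this area usually follow one pattern: capture the original console mode on startup and restore it on exit, so the change cannot persist into the host shell. A sketch under that assumption:

```cpp
#include <windows.h>

static HANDLE g_console       = INVALID_HANDLE_VALUE;
static DWORD  g_original_mode = 0;

static void console_init(void) {
    g_console = GetStdHandle(STD_OUTPUT_HANDLE);
    GetConsoleMode(g_console, &g_original_mode); // remember what PowerShell had set
    SetConsoleMode(g_console, g_original_mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
}

static void console_cleanup(void) {
    SetConsoleMode(g_console, g_original_mode);  // leave the console as we found it
}
```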
Keiichi Tabata
49fe9844e7
convert.py : add missing abstract methods for quantized data ( #2491 )
2023-08-06 09:34:05 +03:00
Johannes Gäßler
d5060f4590
CUDA: faster k-quant mul_mat_q kernels ( #2525 )
2023-08-05 18:20:44 +02:00
Jonas Wunderlich
e45dbc8951
fix firefox autoscroll ( #2519 )
2023-08-04 22:16:11 +02:00
Cebtenzzre
6ae47d1968
server: regenerate completion.js.hpp ( #2515 )
2023-08-04 21:00:57 +02:00
Cebtenzzre
bcf5f2e1d8
CUDA: use min compute capability of GPUs actually used ( #2506 )
2023-08-04 17:35:22 +02:00
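The point of the change: kernel selection should key off the weakest GPU that actually receives work, not the weakest GPU visible in the system. A schematic of that selection; the device_used test stands in for the real tensor-split bookkeeping:

```cpp
#include <cuda_runtime.h>
#include <algorithm>
#include <climits>

static int min_compute_capability(const bool * device_used, int n_devices) {
    int min_cc = INT_MAX;
    for (int id = 0; id < n_devices; ++id) {
        if (!device_used[id]) continue;   // skip GPUs that get no rows
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, id);
        min_cc = std::min(min_cc, 100 * prop.major + 10 * prop.minor);
    }
    return min_cc;
}
```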
Cebtenzzre
9949b4fc34
CUDA: check if event is NULL before cudaStreamWaitEvent ( #2505 )
...
Fixes #2503
2023-08-04 17:34:32 +02:00
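Waiting on a null event is an error in the CUDA runtime, so the fix amounts to a guard; a minimal sketch:

```cpp
#include <cuda_runtime.h>

// only make the stream wait if the producing device actually recorded
// an event; cudaStreamWaitEvent rejects a null event handle
static void wait_if_recorded(cudaStream_t stream, cudaEvent_t event) {
    if (event != nullptr) {
        cudaStreamWaitEvent(stream, event, 0);
    }
}
```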
DannyDaemonic
b1cabde3b3
Add --simple-io option for subprocesses and break out console.h and console.cpp ( #1558 )
2023-08-04 08:20:12 -07:00
Stephen Nichols
d1d9a25cce
Fixing race condition in server and partial stream handling in frontend. ( #2391 )
...
* Fixing race condition in server.cpp and partial stream handling in completion.js
* Reverting assert edits.
* Adding newline to eof
2023-08-04 13:37:24 +02:00
l3utterfly
7bc95dcf62
Stream save llama context data to file instead of allocating entire buffer upfront ( #2488 )
...
* added stream saving context data to file to avoid allocating unnecessary amounts of memory
* generalised copying state data to file or buffer
* added comments explaining how copy_state_data works
* fixed trailing whitespaces
* fixed save load state example
* updated save load state to use public function in llama.cpp
* - reverted the breaking change to the llama_copy_state_data API
- moved new logic for copying llama state data to an internal function
* fixed function declaration order
* restored save load state example
* fixed whitespace
* removed unused llama-util.h include
* Apply suggestions from code review
Co-authored-by: slaren <slarengh@gmail.com>
* Apply code review suggestions
Co-authored-by: slaren <slarengh@gmail.com>
---------
Co-authored-by: slaren <slarengh@gmail.com>
2023-08-04 13:29:52 +02:00
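The "generalised copying" bullet boils down to a small writer abstraction: one serialisation path that can emit state either into a caller-supplied buffer (preserving the llama_copy_state_data behaviour) or straight to a file, chunk by chunk. A hedged sketch of that shape; the type names are illustrative:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// abstract sink, so the serializer never needs the whole state in memory
struct data_sink {
    virtual void write(const void * src, size_t size) = 0;
    virtual ~data_sink() = default;
};

struct buffer_sink : data_sink {
    uint8_t * dst;
    explicit buffer_sink(uint8_t * dst) : dst(dst) {}
    void write(const void * src, size_t size) override {
        memcpy(dst, src, size); // old copy-to-buffer behaviour
        dst += size;
    }
};

struct file_sink : data_sink {
    FILE * f;
    explicit file_sink(FILE * f) : f(f) {}
    void write(const void * src, size_t size) override {
        fwrite(src, 1, size, f); // streams each chunk, no upfront allocation
    }
};
```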
Borislav Stanimirov
04e64ce94e
build : fix several cast and printf warnings ( #2499 )
2023-08-04 13:07:21 +03:00
Evan Jones
79f99c6b4d
examples : generate JSON according to schema ( #1887 )
...
* examples : add JSON schema grammars
* complete JSON grammar
* ensure primitive types can be used as root of schema
* support integer type and adjust usage text
2023-08-02 22:05:44 -04:00
Johannes Gäßler
5dc74e92ed
CUDA: faster non k-quant mul_mat_q kernels ( #2483 )
2023-08-02 18:04:04 +02:00
Johannes Gäßler
c06707dd78
CUDA: Fix models with output size != 32000 ( #2480 )
2023-08-02 16:48:10 +02:00
ldwang
fe9d3103ca
readme : add Aquila-7B model series to supported models ( #2487 )
...
* support bpe tokenizer in convert
Signed-off-by: ldwang <ftgreat@gmail.com>
* support bpe tokenizer in convert
Signed-off-by: ldwang <ftgreat@gmail.com>
* support bpe tokenizer in convert, fix
Signed-off-by: ldwang <ftgreat@gmail.com>
* Add Aquila-7B models in README.md
Signed-off-by: ldwang <ftgreat@gmail.com>
* Update Aquila-7B models in README.md
Signed-off-by: ldwang <ftgreat@gmail.com>
---------
Signed-off-by: ldwang <ftgreat@gmail.com>
Co-authored-by: ldwang <ftgreat@gmail.com>
2023-08-02 11:21:11 +03:00
Eve
31e5af188f
tests : Fix compilation warnings (Linux/GCC) ( #2451 )
...
* fix hellaswag print format, cast away warning in test-double-float
* c++11 cannot use designated initializers
* add static to test-grad0.c internal functions
* use memcpy in test-double-float.c
* port c tests to c++
* use initializer list for ggml_init_params
2023-08-02 11:06:19 +03:00
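The "use memcpy in test-double-float.c" item refers to the standard replacement for cast- or union-based type punning, which trips strict-aliasing warnings under GCC; roughly:

```cpp
#include <cstdint>
#include <cstring>

// well-defined float-bit inspection: memcpy instead of *(uint32_t *) &f
static uint32_t float_bits(float f) {
    uint32_t bits;
    static_assert(sizeof(bits) == sizeof(f), "size mismatch");
    memcpy(&bits, &f, sizeof(bits));
    return bits;
}
```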
Yiming Cui
bb2e752345
readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models ( #2475 )
...
* add support for chinese llama-2 / alpaca-2
* remove white spaces
2023-08-02 09:18:31 +03:00
Bono Lv
c8d9f77356
fix a typo in examples/server/README.md ( #2478 )
2023-08-01 14:54:28 +02:00
ebraminio
7386a7dd10
server : Support dark mode ( #2414 )
...
* server : Support dark mode
So it respects user system light / dark settings.
* Update index.html.hpp by running ./deps.sh
2023-08-01 10:56:23 +02:00
Matteo Boschini
f9967c22eb
metal : add gqa8 kernel to allow llama-2-70B on metal ( #2459 )
...
* Added gqa8 kernel to allow llama-2-70B on metal
* Update ggml-metal.m
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
* Extend kernel_mul_mat_f16_f32 to handle gqa broadcast
* Added ne03==ne13 assertion
---------
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-08-01 10:43:12 +03:00
Johannes Gäßler
b2dc2f8155
CUDA: fixed LLAMA_FAST compilation option ( #2473 )
2023-07-31 21:02:19 +02:00
Johannes Gäßler
b4ab3782da
CUDA: fixed cmake F16 option ( #2471 )
2023-07-31 19:52:22 +02:00
Johannes Gäßler
83772f01d7
CUDA: mmq CLI option, fixed mmq build issues ( #2453 )
2023-07-31 15:44:35 +02:00
Johannes Gäßler
1050a60adc
CUDA: Implemented row flattening for non-glm RoPE ( #2468 )
2023-07-31 14:32:30 +02:00
Johannes Gäßler
f329c99fa1
CUDA: fewer memory bank conflicts for mul_mat_q ( #2458 )
2023-07-31 13:18:51 +02:00
slaren
023de5bf7d
Fix Metal backend broken from the allocator changes ( #2455 )
...
* fix Metal backend broken from the allocator changes
2023-07-31 11:02:53 +02:00
slaren
2786fff886
ggml : add graph tensor allocator ( #2411 )
...
* ggml : add graph tensor allocator
* ggml : don't calculate data pointer of unallocated tensors when creating a view with an offset
* ggml : refactor ggml_view_Nd into ggml_view_tensor_offset
2023-07-30 15:58:01 +02:00
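The allocator enables a measure-then-allocate pattern: a dry run computes the worst-case buffer size for a graph, then one buffer of that size is reused for every evaluation. A rough sketch of the flow; the ggml_allocr_* names follow ggml-alloc as introduced here, but treat the exact signatures as assumptions:

```cpp
#include <cstdlib>
#include "ggml.h"
#include "ggml-alloc.h"

static void * measure_then_allocate(struct ggml_cgraph * graph) {
    // 1) measure pass: nothing is committed, only offsets/sizes are tracked
    struct ggml_allocr * alloc = ggml_allocr_new_measure(/*alignment =*/ 32);
    const size_t buf_size = ggml_allocr_alloc_graph(alloc, graph) + 32;
    ggml_allocr_free(alloc);

    // 2) real pass: one fixed buffer, tensors are placed inside it each eval
    void * buf = malloc(buf_size);
    alloc = ggml_allocr_new(buf, buf_size, /*alignment =*/ 32);
    ggml_allocr_alloc_graph(alloc, graph);
    ggml_allocr_free(alloc);
    return buf;
}
```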
Johannes Gäßler
28d49137d3
CUDA: Quantized matrix matrix multiplication ( #2160 )
...
* mmq implementation for non k-quants
* q6_K
* q2_K
* q3_K
* q4_K
* vdr
* q5_K
* faster q8_1 loading
* loop unrolling
* add __restrict__
* q2_K sc_high
* GGML_CUDA_MMQ_Y
* Updated Makefile
* Update Makefile
* DMMV_F16 -> F16
* Updated README, CMakeLists
* Fix CMakeLists.txt
* Fix CMakeLists.txt
* Fix multi GPU out-of-bounds
2023-07-29 23:04:44 +02:00
Johannes Gäßler
28cbbe6ad4
CUDA: faster multi GPU synchronization ( #2448 )
2023-07-29 23:04:10 +02:00
klosax
fdcc7db2a5
perplexity : add Hellaswag calculation ( #2389 )
...
* common.h : add hellaswag / remove perplexity-lines
* common.cpp : add hellaswag / remove perplexity-lines
* perplexity.cpp : add hellaswag scores / remove perplexity-lines
* perplexity.cpp : clean up
* common.h : change default param value
* common.cpp : Change default param
* perplexity.cpp : alter wording
* common.h : alter wording
* common.cpp : alter wording
2023-07-28 21:25:36 +03:00
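The HellaSwag score the entry adds is simple arithmetic: sum the per-token log-probabilities of each candidate ending given the context, and count a task as correct when the gold ending scores highest. A self-contained sketch of that step (the real code obtains the log-probabilities from the model):

```cpp
#include <cstddef>
#include <vector>

// logprobs[e] = per-token log-probabilities of ending e, given the context
static bool hellaswag_task_correct(const std::vector<std::vector<double>> & logprobs,
                                   size_t gold) {
    size_t best = 0;
    double best_score = -1e300;
    for (size_t e = 0; e < logprobs.size(); ++e) {
        double score = 0.0;
        for (double lp : logprobs[e]) score += lp; // joint log-likelihood of the ending
        if (score > best_score) { best_score = score; best = e; }
    }
    return best == gold; // accuracy = fraction of tasks where the gold ending wins
}
```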
Lee
289f077ca7
ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c ( #2405 )
2023-07-28 21:17:45 +03:00
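The workaround is the usual emulation: GCC before 8 lacks _mm256_set_m128i / _mm256_setr_m128i, but the same vector can be built with a cast plus a 128-bit insert. A sketch along those lines; the macro name is illustrative:

```cpp
#include <immintrin.h>

// build a 256-bit integer vector from two 128-bit halves
#if defined(__GNUC__) && __GNUC__ < 8
#define MM256_SET_M128I(hi, lo) \
    _mm256_insertf128_si256(_mm256_castsi128_si256((lo)), (hi), 1)
#else
#define MM256_SET_M128I(hi, lo) _mm256_set_m128i((hi), (lo))
#endif
```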
eric8607242
0a36743aff
llama : support more diverse tokenizers? ( #2420 )
...
* supporting more diverse tokenizers
* Update llama.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-28 21:10:05 +03:00
Georgi Gerganov
ca55f98a26
examples : fix whitespace
2023-07-28 21:05:08 +03:00
nhamanasu
01a14d2c58
examples : server chat mode with llama2 ( #2400 )
...
* add: server chat mode with llama2
* fix: remove the unnecessary last \n
2023-07-28 21:02:10 +03:00
Weird Constructor
fb09f01651
readme : fix the description of the Tail free sampling (TFS) method ( #2431 )
2023-07-28 11:44:43 +03:00
Rand Xie
0c7137b4c3
llama : use n_embd_gqa instead of n_embd to handle llama-2 70B ( #2433 )
2023-07-28 11:42:53 +03:00
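For reference, the arithmetic behind n_embd_gqa, plugged through LLaMA-2-70B's published hyperparameters (the concrete numbers are from the model card, not from the commit):

```cpp
const int n_embd    = 8192; // model width
const int n_head    = 64;   // query heads
const int n_head_kv = 8;    // key/value heads (grouped-query attention)

const int gqa        = n_head / n_head_kv; // 8 query heads share each KV head
const int n_embd_gqa = n_embd / gqa;       // 1024: width of the K/V projections,
                                           // smaller than n_embd for the 70B model
```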
niansa/tuxifan
be41ec86c5
Obtaining LLaMA 2 instructions ( #2308 )
...
* Obtaining LLaMA 2 instructions
* Removed sharing warning for LLaMA 2
* Linked TheBloke's GGML repos
* Add LLaMA 2 to list of supported models
* Added LLaMA 2 usage instructions
* Added links to LLaMA 2 70B models
2023-07-28 03:14:11 +02:00
mj-shifu
28b471d0e4
convert.py : Update to support 70B HF format model files ( #2427 )
...
* convert.py : fix llama 2 70b conversion from Hugging Face
2023-07-27 14:39:17 -06:00
Georgi Gerganov
bab9a5358d
metal : disable graph concurrency optimization due to bug ( #2413 )
2023-07-27 11:00:54 +03:00
slaren
3a04da8e17
ggml : fix assert in ggml_set_unary_op ( #2410 )
2023-07-26 23:57:23 +02:00
Cebtenzzre
59b12eb025
make : build with -Wmissing-prototypes ( #2394 )
2023-07-26 21:00:04 +03:00
slaren
d81bd68d7d
ggml : allocate graphs in a context ( #2392 )
...
* ggml : graph allocation in contexts
* allocate work buffer as a ggml_object in ggml_graph_compute_with_ctx
* llama.cpp : allocate graph in the context
* add GGML_PAD
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-26 15:56:53 +02:00
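The GGML_PAD helper added here rounds a size up to a multiple of an alignment; assuming the conventional power-of-two form:

```cpp
// round x up to the next multiple of n (n must be a power of two)
#define GGML_PAD(x, n) (((x) + (n) - 1) & ~((n) - 1))
// e.g. GGML_PAD(13, 8) == 16, keeping objects inside a context aligned
```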
Kawrakow
abd493a08a
Add LLAMA_DEFAULT_RMS_EPS so we can change the default ( #2384 )
...
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-25 18:35:53 +03:00
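The mechanism is a plain preprocessor default: the build may define LLAMA_DEFAULT_RMS_EPS, and the code falls back to a baked-in value otherwise. A sketch, with the 5e-6f fallback being an assumption about that era's default rather than a quote from the patch:

```cpp
// compile-time override point for the RMS-norm epsilon
#ifdef LLAMA_DEFAULT_RMS_EPS
static const float rms_norm_eps = LLAMA_DEFAULT_RMS_EPS;
#else
static const float rms_norm_eps = 5e-6f;
#endif
```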
slaren
e31220e203
ggml : fix ggml_flash_attn to use op_params ( #2387 )
...
* ggml : fix ggml_flash_attn to use op_params
2023-07-25 16:20:12 +02:00
ldwang
d954955e56
convert.py : support bpe tokenizer ( #2228 )
...
* support bpe tokenizer in convert
Signed-off-by: ldwang <ftgreat@gmail.com>
* support bpe tokenizer in convert
Signed-off-by: ldwang <ftgreat@gmail.com>
* support bpe tokenizer in convert, fix
Signed-off-by: ldwang <ftgreat@gmail.com>
---------
Signed-off-by: ldwang <ftgreat@gmail.com>
Co-authored-by: ldwang <ftgreat@gmail.com>
2023-07-25 16:22:09 +03:00