grahameth
70b6d930bd
add log_callback to llama_context_params for custom logging. ( #2234 )
...
* add log_callback to llama_context_params for custom logging.
* Fix macro expansion on gcc
* Add struct llama_state for global variables and move log_callback there
* Turn log level into enum and some minor changes.
* Remove model_for_logging parameter (not needed anymore)
* Convert remaining fprintf(stderr, ...) calls to use new macros.
* Fix enum and initialize g_state
* Fix log calls after merge
* Fix missing static
* Add back all the new lines in the logging strings
* Add comment for llama_log_callback and replace remaining printf calls
---------
Co-authored-by: grahameth <->
Co-authored-by: Helmut <helmut.buhler@inf.h-brs.de>
2023-08-09 22:46:40 +02:00
Johannes Gäßler
ade4f55531
CUDA: tuned mul_mat_q kernels ( #2546 )
2023-08-09 09:42:34 +02:00
Martin Krasser
220649a37f
Allow passing grammar to completion endpoint ( #2532 )
...
* Allow passing grammar to completion endpoint
2023-08-08 16:29:19 +03:00
Johannes Gäßler
fe11463cef
CUDA: tighter VRAM scratch size for 65b/70b ( #2551 )
2023-08-08 14:38:16 +02:00
chaihahaha
932dac4a50
llm.vim : multiline autocompletion, get rid of "^@" ( #2543 )
2023-08-08 15:07:02 +03:00
Georgi Gerganov
74f72ea675
vim : bring back simple llm.vim example
2023-08-08 15:06:18 +03:00
AustinMroz
d517b2f07c
vim : streaming and more ( #2495 )
...
* Update Vim plugin
* Remove getbufoneline usage, Add input bind example.
getbufoneline() appears to be a recently added function and has been
replaced with getbufline for compatibility.
An additional example that explains how to add a keybind that works in
insert mode was added.
2023-08-08 14:44:48 +03:00
klosax
fb17afa861
Add --rope-scale parameter ( #2544 )
...
* common.cpp : Add --rope-scale parameter
* README.md : Add info about using linear rope scaling
2023-08-07 19:07:19 +02:00
Georgi Gerganov
0248ddcded
ggml : mul mat tweaks ( #2372 )
...
* ggml : mul mat wip
ggml-ci
* ggml : alternative thread distribution for mul_mat
ggml-ci
* ggml : mul_mat block tiling attempt
* ggml : mul_mat threads yield
ggml-ci
2023-08-07 14:25:58 +03:00
Georgi Gerganov
0208ddf702
ggml : pad result of ggml_nbytes()
2023-08-07 14:24:42 +03:00
Georgi Gerganov
d922c47542
ggml : change params pointer (style change) ( #2539 )
...
ggml-ci
2023-08-07 13:55:18 +03:00
Georgi Gerganov
f3ee043e24
ggml : sync (custom ops) ( #2537 )
...
ggml-ci
2023-08-07 13:20:09 +03:00
Johannes Gäßler
19cf204509
Fixed mmap prefetch for GPU offloading ( #2529 )
2023-08-07 10:09:40 +02:00
Georgi Gerganov
0637bae56a
metal : fix out-of-bounds access + inc concurrency nodes ( #2416 )
...
* metal : fix out-of-bounds access + style changes
* metal : increase concurrency nodes to 2*GGML_MAX_NODES
2023-08-07 10:52:57 +03:00
GiviMAD
c39e3477fd
[Makefile] Move ARM CFLAGS before compilation ( #2536 )
2023-08-07 09:21:46 +03:00
Henri Vasserman
8cc4ff2f1a
[Zig] Rewrite build for Zig 0.11 ( #2514 )
...
* zig build fixes
* Disable LTO on Windows.
2023-08-07 08:35:53 +03:00
DannyDaemonic
28fe11b65e
console : fix issue related to Windows 11 PowerShell console mode persistence ( #2521 )
2023-08-06 09:49:34 +03:00
Keiichi Tabata
49fe9844e7
convert.py : add missing abstract methods for quantized data ( #2491 )
2023-08-06 09:34:05 +03:00
Johannes Gäßler
d5060f4590
CUDA: faster k-quant mul_mat_q kernels ( #2525 )
2023-08-05 18:20:44 +02:00
Jonas Wunderlich
e45dbc8951
fix firefox autoscroll ( #2519 )
2023-08-04 22:16:11 +02:00
Cebtenzzre
6ae47d1968
server: regenerate completion.js.hpp ( #2515 )
2023-08-04 21:00:57 +02:00
Cebtenzzre
bcf5f2e1d8
CUDA: use min compute capability of GPUs actually used ( #2506 )
2023-08-04 17:35:22 +02:00
Cebtenzzre
9949b4fc34
CUDA: check if event is NULL before cudaStreamWaitEvent ( #2505 )
...
Fixes #2503
2023-08-04 17:34:32 +02:00
DannyDaemonic
b1cabde3b3
Add --simple-io option for subprocesses and break out console.h and cpp ( #1558 )
2023-08-04 08:20:12 -07:00
Stephen Nichols
d1d9a25cce
Fixing race condition in server and partial stream handling in frontend. ( #2391 )
...
* Fixing race condition in server.cpp and partial stream handling in completion.js
* Reverting assert edits.
* Adding newline to eof
2023-08-04 13:37:24 +02:00
l3utterfly
7bc95dcf62
Stream save llama context data to file instead of allocating entire buffer upfront ( #2488 )
...
* added stream saving context data to file to avoid allocating unnecessary amounts of memory
* generalised copying state data to file or buffer
* added comments explaining how copy_state_data works
* fixed trailing whitespaces
* fixed save load state example
* updated save load state to use public function in llama.cpp
* - reverted breaking changes to the llama_copy_state_data API
- moved new logic for copying llama state data to an internal function
* fixed function declaration order
* restored save load state example
* fixed whitespace
* removed unused llama-util.h include
* Apply suggestions from code review
Co-authored-by: slaren <slarengh@gmail.com>
* Apply code review suggestions
Co-authored-by: slaren <slarengh@gmail.com>
---------
Co-authored-by: slaren <slarengh@gmail.com>
2023-08-04 13:29:52 +02:00
Borislav Stanimirov
04e64ce94e
build : fix several cast and printf warnings ( #2499 )
2023-08-04 13:07:21 +03:00
Evan Jones
79f99c6b4d
examples : generate JSON according to schema ( #1887 )
...
* examples : add JSON schema grammars
* complete JSON grammar
* ensure primitive types can be used as root of schema
* support integer type and adjust usage text
2023-08-02 22:05:44 -04:00
Johannes Gäßler
5dc74e92ed
CUDA: faster non k-quant mul_mat_q kernels ( #2483 )
2023-08-02 18:04:04 +02:00
Johannes Gäßler
c06707dd78
CUDA: Fix models with output size != 32000 ( #2480 )
2023-08-02 16:48:10 +02:00
ldwang
fe9d3103ca
readme : add Aquila-7B model series to supported models ( #2487 )
...
* support bpe tokenizer in convert
Signed-off-by: ldwang <ftgreat@gmail.com>
* support bpe tokenizer in convert
Signed-off-by: ldwang <ftgreat@gmail.com>
* support bpe tokenizer in convert, fix
Signed-off-by: ldwang <ftgreat@gmail.com>
* Add Aquila-7B models in README.md
Signed-off-by: ldwang <ftgreat@gmail.com>
* Update Aquila-7B models in README.md
Signed-off-by: ldwang <ftgreat@gmail.com>
---------
Signed-off-by: ldwang <ftgreat@gmail.com>
Co-authored-by: ldwang <ftgreat@gmail.com>
2023-08-02 11:21:11 +03:00
Eve
31e5af188f
tests : Fix compilation warnings (Linux/GCC) ( #2451 )
...
* fix hellaswag print format, cast away warning in test-double-float
* c++11 cannot use designated initializers
* add static to test-grad0.c internal functions
* use memcpy in test-double-float.c
* port c tests to c++
* use initializer list for ggml_init_params
2023-08-02 11:06:19 +03:00
Yiming Cui
bb2e752345
readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models ( #2475 )
...
* add support for chinese llama-2 / alpaca-2
* remove white spaces
2023-08-02 09:18:31 +03:00
Bono Lv
c8d9f77356
fix a typo in examples/server/README.md ( #2478 )
2023-08-01 14:54:28 +02:00
ebraminio
7386a7dd10
server : Support dark mode ( #2414 )
...
* server : Support dark mode
So it respects the user's system light/dark setting.
* Update index.html.hpp by running ./deps.sh
2023-08-01 10:56:23 +02:00
Matteo Boschini
f9967c22eb
metal : add gqa8 kernel to allow llama-2-70B on metal ( #2459 )
...
* Added gqa8 kernel to allow llama-2-70B on metal
* Update ggml-metal.m
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
* Extend kernel_mul_mat_f16_f32 to handle gqa broadcast
* Added ne03==ne13 assertion
---------
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-08-01 10:43:12 +03:00
Johannes Gäßler
b2dc2f8155
CUDA: fixed LLAMA_FAST compilation option ( #2473 )
2023-07-31 21:02:19 +02:00
Johannes Gäßler
b4ab3782da
CUDA: fixed cmake F16 option ( #2471 )
2023-07-31 19:52:22 +02:00
Johannes Gäßler
83772f01d7
CUDA: mmq CLI option, fixed mmq build issues ( #2453 )
2023-07-31 15:44:35 +02:00
Johannes Gäßler
1050a60adc
CUDA: Implemented row flattening for non-glm RoPE ( #2468 )
2023-07-31 14:32:30 +02:00
Johannes Gäßler
f329c99fa1
CUDA: fewer memory bank conflicts for mul_mat_q ( #2458 )
2023-07-31 13:18:51 +02:00
slaren
023de5bf7d
Fix Metal backend broken from the allocator changes ( #2455 )
...
* fix Metal backend broken from the allocator changes
2023-07-31 11:02:53 +02:00
slaren
2786fff886
ggml : add graph tensor allocator ( #2411 )
...
* ggml : add graph tensor allocator
* ggml : don't calculate data pointer of unallocated tensors when creating a view with an offset
* ggml : refactor ggml_view_Nd into ggml_view_tensor_offset
2023-07-30 15:58:01 +02:00
Johannes Gäßler
28d49137d3
CUDA: Quantized matrix matrix multiplication ( #2160 )
...
* mmq implementation for non k-quants
* q6_K
* q2_K
* q3_k
* q4_K
* vdr
* q5_K
* faster q8_1 loading
* loop unrolling
* add __restrict__
* q2_K sc_high
* GGML_CUDA_MMQ_Y
* Updated Makefile
* Update Makefile
* DMMV_F16 -> F16
* Updated README, CMakeLists
* Fix CMakeLists.txt
* Fix CMakeLists.txt
* Fix multi GPU out-of-bounds
2023-07-29 23:04:44 +02:00
Johannes Gäßler
28cbbe6ad4
CUDA: faster multi GPU synchronization ( #2448 )
2023-07-29 23:04:10 +02:00
klosax
fdcc7db2a5
perplexity : add Hellaswag calculation ( #2389 )
...
* common.h : add hellaswag / remove perplexity-lines
* common.cpp : add hellaswag / remove perplexity-lines
* perplexity.cpp : add hellaswag scores / remove perplexity-lines
* perplexity.cpp : clean up
* common.h : change default param value
* common.cpp : Change default param
* perplexity.cpp : alter wording
* common.h : alter wording
* common.cpp : alter wording
2023-07-28 21:25:36 +03:00
Lee
289f077ca7
ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c ( #2405 )
2023-07-28 21:17:45 +03:00
eric8607242
0a36743aff
llama : support more diverse tokenizers? ( #2420 )
...
* supporting more diverse tokenizers
* Update llama.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-28 21:10:05 +03:00
Georgi Gerganov
ca55f98a26
examples : fix whitespace
2023-07-28 21:05:08 +03:00
nhamanasu
01a14d2c58
examples : server chat mode with llama2 ( #2400 )
...
* add: server chat mode with llama2
* fix: remove the unnecessary last \n
2023-07-28 21:02:10 +03:00