4149 Commits

Author SHA1 Message Date
Georgi Gerganov
d2336726ee Disable prompt verbosity by default and add option to enable (#480) 2023-03-25 17:17:16 +02:00
slaren
432b98793c Add AVX2 implementation of dequantize_row_q4_0 (#467) 2023-03-25 17:06:49 +02:00
Georgi Gerganov
9f8548b2d5 Don't interfere with BLAS for large prompts by running only 1 thread 2023-03-25 17:03:10 +02:00
Georgi Gerganov
f6a2b1fc20 Add longer DAN prompt for testing big batch numbers 2023-03-25 16:49:09 +02:00
slaren
e66804f2d7 Add timings for the prompt evaluation (#478) 2023-03-25 16:34:23 +02:00
Georgi Gerganov
1c1459f073 Remove obsolete information from README 2023-03-25 16:30:32 +02:00
Georgi Gerganov
39ab880ccd Remove obsolete assert and fix compiler warning 2023-03-25 16:22:05 +02:00
Georgi Gerganov
0bbf9a17e7 Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS 2023-03-25 16:10:14 +02:00
anzz1
f60b207880 bounds checking for input prefix (#492) 2023-03-25 14:42:09 +02:00
anzz1
e0522e5dd3 feat: '--in-prefix STRING' option (#426)
Prefix user inputs with a string
2023-03-25 14:03:19 +02:00
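The option above just prepends a fixed string to each user line before it is tokenized. A minimal sketch of that idea, assuming the llama_tokenize() C API of the time; the helper name and arguments are illustrative, not the actual code in main.cpp:

```cpp
#include <string>
#include <vector>
#include "llama.h"

// Sketch: prepend the --in-prefix string to whatever the user typed,
// then tokenize the combined text before feeding it to the model.
std::vector<llama_token> read_user_input(llama_context * ctx,
                                         const std::string & input_prefix,
                                         const std::string & user_line) {
    const std::string buffer = input_prefix + user_line; // e.g. "User: " + "hello"

    std::vector<llama_token> tokens(buffer.size() + 1);
    const int n = llama_tokenize(ctx, buffer.c_str(), tokens.data(), (int) tokens.size(), false);
    tokens.resize(n > 0 ? n : 0);
    return tokens;
}
```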
Jed Fox
3261abc446 Add support for file load progress reporting callbacks (#434)
* File load progress reporting

* Move llama_progress_handler into llama_context_params

* Renames

* Use seekg to find file size instead

* More correct load progress

* Call progress callback more frequently

* Fix typo
2023-03-25 07:26:28 +02:00
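Per the PR notes above, the progress handler lives in llama_context_params. A hedged usage sketch; the callback signature and field names are assumptions based on the description, so check llama.h before relying on them:

```cpp
#include <cstdio>
#include "llama.h"

// Assumed shape: a callback that receives load progress in [0, 1].
static void print_progress(float progress, void * user_data) {
    (void) user_data;
    fprintf(stderr, "\rloading model: %3.0f%%", progress * 100.0f);
}

int main() {
    llama_context_params params = llama_context_default_params();
    params.progress_callback           = print_progress; // assumed field name
    params.progress_callback_user_data = nullptr;        // assumed field name

    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", params);
    if (!ctx) {
        return 1;
    }
    llama_free(ctx);
    return 0;
}
```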
Doomsdayrs
27d29a069f Add missing struct annotation (#483)
`llama_sample_top_p_top_k` was missing the struct annotation on line 126.

This causes a compiler issue when being parsed by the Kotlin C interop generator.

This commit fixes the above issue by adding the struct annotation.
2023-03-25 07:21:24 +02:00
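For context, foreign-function generators that parse the header as strict C need the explicit `struct` keyword on struct parameter types. An illustrative sketch of the kind of declaration involved; the exact signature in llama.h may differ:

```cpp
// Without the keyword, a strict C parser (such as the Kotlin C interop
// generator) cannot resolve the parameter type:
//   llama_token llama_sample_top_p_top_k(llama_context * ctx, ...);
// With the struct annotation the declaration parses cleanly:
llama_token llama_sample_top_p_top_k(
        struct llama_context * ctx,
        const llama_token    * last_n_tokens_data,
        int                    last_n_tokens_size,
        int                    top_k,
        float                  top_p,
        float                  temp,
        float                  repeat_penalty);
```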
Chris Kuehl
9ba873f48c Fix crash for 65B model with pre-allocated memory (#485) 2023-03-25 06:38:14 +02:00
Georgi Gerganov
0965918677 Disable BLAS altogether - the bug is not just for quantized mat mul 2023-03-24 23:47:06 +02:00
Georgi Gerganov
76e580d933 Disable BLAS branch in mul_mat - seems there is a bug 2023-03-24 23:39:17 +02:00
Georgi Gerganov
ba186f7f64 Immediately start processing the prompt before user input has been provided (#476) 2023-03-24 23:17:58 +02:00
Georgi Gerganov
92dc17b275 Reduce memory usage and allocate enough memory for largest context (#473)
* Reduce memory usage and allocate enough memory for large contexts

* Simpler scratch buffer usage

* Reenable BLAS for quantized mul_mat

* Fix number of layers in 30B and 65B

* Fix KV cache size for F32
2023-03-24 23:17:37 +02:00
Georgi Gerganov
a1a48cfccb Temporary bump the memory buffer size - hopefully fix issues from 483bab2e 2023-03-24 18:23:56 +02:00
Gary Mulder
ccf5a1b08d Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
2023-03-24 15:23:09 +00:00
rabidcopy
7743aa368c fix instruct mode (#445)
Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.
2023-03-24 17:22:39 +02:00
Georgi Gerganov
581994aef0 Properly free llama_context on failure 2023-03-24 17:21:01 +02:00
Cameron Kaiser
5571dc71c4 additional optimizations for POWER9 (#454) 2023-03-24 17:19:26 +02:00
comex
d86b7f08ad Support calling mlock() on loaded model data on Linux and macOS (#453)
* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option.

Using mlock() disables swapping and memory compression for the model
data.  Doing so can be useful on systems where the model takes up a
large fraction of system RAM.  In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.

Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-24 17:19:05 +02:00
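At the syscall level, the pinning described above is the ordinary POSIX mlock() pattern; a minimal sketch (not the literal code from the PR):

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

// Lock an already-allocated/mapped region so the kernel will neither swap
// it out nor (on macOS) compress it. Large regions may require raising
// RLIMIT_MEMLOCK. On Windows, VirtualLock() would play the same role.
static bool lock_model_memory(void * addr, size_t size) {
    if (mlock(addr, size) != 0) {
        perror("mlock failed (model stays usable, just not pinned)");
        return false;
    }
    return true;
}
```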
Luciano
605a3aaef3 Add embedding mode with arg flag. Currently working (#282)
* working but ugly

* add arg flag, not working on embedding mode

* typo

* Working! Thanks to @nullhook

* make params argument instead of hardcoded boolean. remove useless time check

* start doing the instructions but not finished. This probably doesn't compile

* Embeddings extraction support

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-24 17:05:13 +02:00
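A hedged usage sketch of the embedding mode described above, assuming the flag ends up as a boolean in llama_context_params and the vector is read back through an accessor such as llama_get_embeddings(); the field and function names here are assumptions, so verify against llama.h:

```cpp
#include <cstdio>
#include <vector>
#include "llama.h"

int main() {
    llama_context_params params = llama_context_default_params();
    params.embedding = true; // assumed field added by the embedding-mode flag

    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", params);
    if (!ctx) {
        return 1;
    }

    // Tokenize and evaluate a prompt, then read back its embedding vector.
    std::vector<llama_token> tokens(64);
    const int n = llama_tokenize(ctx, "Hello world", tokens.data(), (int) tokens.size(), true);
    if (n <= 0) {
        return 1;
    }
    tokens.resize(n);

    if (llama_eval(ctx, tokens.data(), (int) tokens.size(), 0, 4) != 0) {
        return 1;
    }

    const int     n_embd = llama_n_embd(ctx);
    const float * embd   = llama_get_embeddings(ctx); // assumed accessor
    for (int i = 0; i < n_embd; i++) {
        printf("%f ", embd[i]);
    }
    printf("\n");

    llama_free(ctx);
    return 0;
}
```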
Georgi Gerganov
1f369c619d Add link to Roadmap discussion 2023-03-24 09:13:35 +02:00
Georgi Gerganov
681cbacbe1 Revert "Fix memory allocation issues and seg faults"
This reverts commit 4870e455b3.

Will provide the correct fix later
2023-03-24 06:22:28 +02:00
Georgi Gerganov
3d8185edc9 Fix memory allocation issues and seg faults 2023-03-24 00:11:53 +02:00
Georgi Gerganov
370c9ecb96 Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)
Should make results reproducible for different numbers of threads and batch sizes
2023-03-23 23:22:01 +02:00
Jed Fox
d89d84ac0f Fix quantize script not finding models in parent directory (#428) 2023-03-23 22:42:52 +02:00
Georgi Gerganov
20c3c59bd4 Remove obsolete command from Docker script 2023-03-23 22:39:44 +02:00
Georgi Gerganov
0e661616e2 Obsolete 2023-03-23 22:32:21 +02:00
rabidcopy
8faa6c7718 Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)
* Improve interactive mode's coherence after EOS

Aims to improve coherence and the ability to resume the interactive session when control is given back to the user after an end-of-text token is reached.
Not sure what token 13 is or why it seems to help. See conversation for examples.

* Make newline token a constant

* dynamically determine newline token

* relocate previous newline token const

* cleanup whitespace

* print a new line on end of text in interactive

this may need to be looked into further when not using a reverse prompt

* only print manual newline with reverse prompt

fix formatting of reverse prompts so they don't end up at the end of the current line while not introducing unnecessary new lines otherwise

* alternate approach to replace end of text tokens

* Inject the reverse prompt again after eos in interactive mode

* tokenize reverse prompt when needed

makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330

* tokenize and inject only first reverse prompt

thanks to tjohnman

* tokenize first reverse prompt once

* add newline token

* add newline token

* tokenize/inject reverse prompt for refactor

this doesn't seem right though

* tokenize nothing for antiprompt if no reverse

* Update main.cpp

* Update main.cpp

* tokenize and inject reverse prompt as needed

this doesn't seem to work if the reverse prompt is tokenized outside earlier on

* not needed

* remove newline token

* remove newline token

* tokenize newline token

* add space to comment

* Update main.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Slaren <2141330+slaren@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-23 22:22:47 +02:00
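The control flow this PR converges on: when an end-of-text token appears in interactive mode, substitute the pre-tokenized newline and queue the first reverse prompt so the session keeps its context. A rough sketch with assumed variable names (id, newline_token, first_antiprompt, embd_inp), not the actual code in main.cpp:

```cpp
// Sketch: on EOS in interactive (non-instruct) mode, replace it with the
// tokenized "\n" so the context is not flushed, then re-inject the first
// reverse prompt so the exchange can continue.
if (id == llama_token_eos() && params.interactive && !params.instruct) {
    id = newline_token; // token obtained by tokenizing "\n" once at startup
    if (!first_antiprompt.empty()) {
        embd_inp.insert(embd_inp.end(),
                        first_antiprompt.begin(), first_antiprompt.end());
    }
}
```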
Timmy Knight
9c5d5c52ce Fix GPTQ converter (#423)
* Fix GPTQ converter

* Fix comment

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-23 22:18:13 +02:00
nusu-github
fd1312648c Generate library with CMake (#430)
* Generate library with CMake

Add BUILD_SHARED_LIBS to allow the llama library to be generated.

* Turn ON PIC when BUILD_SHARED_LIBS is ON
2023-03-23 21:16:48 +01:00
anzz1
662adbfdb6 Command line args bounds checking (#424)
* command line args bounds checking

* unknown and invalid param exit codes 0 -> 1
2023-03-23 19:54:28 +02:00
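The bounds checking referred to here is the usual pattern of verifying that a flag which consumes a value is actually followed by one, and exiting non-zero otherwise. A generic sketch, not the literal code from the PR:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main(int argc, char ** argv) {
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--in-prefix") == 0) {
            if (++i >= argc) {  // bounds check: the value must exist
                fprintf(stderr, "error: missing argument for --in-prefix\n");
                exit(1);        // invalid params exit with 1, not 0
            }
            printf("in-prefix = %s\n", argv[i]);
        } else {
            fprintf(stderr, "error: unknown argument: %s\n", argv[i]);
            exit(1);
        }
    }
    return 0;
}
```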
Ben Siraphob
bf0302d463 Fix Nix build 2023-03-23 17:51:26 +01:00
Stephan Walter
3ebb023fb2 Revert "Delete SHA256SUMS for now" (#429)
* Revert "Delete SHA256SUMS for now (#416)"

This reverts commit 8eea5ae0e5.

* Remove ggml files until they can be verified
* Remove alpaca json
* Add also model/tokenizer.model to SHA256SUMS + update README

---------

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-03-23 15:15:48 +01:00
Kerfuffle
455fffe547 Fix Makefile echo escape codes (by removing them). (#418) 2023-03-23 12:41:32 +01:00
Gary Mulder
e689dccbad Move model section from issue template to README.md (#421)
* Update custom.md

* Removed Model section as it is better placed in README.md

* Updates to README.md model section

* Inserted text that was removed from the issue template about obtaining models from FB and links to papers describing the various models

* Removed IPFS download links for the Alpaca 7B models as these look to be in the old data format and probably shouldn't be directly linked to, anyway

* Updated the perplexity section to point at Perplexity scores #406 discussion
2023-03-23 11:30:40 +00:00
anzz1
0c2b820e64 Delete SHA256SUMS for now (#416)
Delete this for now to avoid confusion since it contains some wrong checksums from the old tokenizer format
Re-add after #374 is resolved
2023-03-23 11:26:19 +01:00
Georgi Gerganov
a1b7fa8c60 Adjust repetition penalty .. 2023-03-23 10:46:58 +02:00
Georgi Gerganov
1d31d737d8 Add link to recent podcast about whisper.cpp and llama.cpp 2023-03-23 09:48:51 +02:00
anzz1
6eddca75b1 CI: CMake: Separate build and test steps (#376)
* CI: Separate Build and Test steps (CMake)

* CI: Make sure build passes before running tests (CMake)

* CI: Standardise step id names
2023-03-23 04:20:34 +02:00
tjohnman
1b4b61fb60 Fix instruct mode broken by PR #354 (#409)
Co-authored-by: Johnman <tjohnman@github>
2023-03-23 01:30:23 +01:00
Gary Mulder
ffdeece7c2 Update issue template so people will use it (#404) 2023-03-22 19:06:18 +00:00
Stephan Walter
43a021a260 Deduplicate q4 quantization functions (#383)
* Deduplicate q4 quantization functions

* Use const; add basic test

* Re-enable quantization test

* Disable AVX2 flags in CI

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-22 19:29:06 +02:00
Valentyn Bezshapkin
f520f9be86 fix: add POSIX functionality for Linux compilation (#51)
* fix: add POSIX functionality for Linux compilation

* fix: older standard for compatibility
2023-03-22 19:20:25 +02:00
tjohnman
815b60c690 Don't force immediate interactive without -i (#354)
* Don't force immediate interactive without -i

Sometimes we might want to use a reverse prompt but we want to let the
model generate tokens right after the initial prompt. So we don't force
user input mode if the -i flag wasn't specified and instead let it run
until we encounter the reverse prompt.

This gives us some more flexibility, since it doesn't force the user to
enter a newline if they want to let the model generate text right after
the initial prompt and only be asked for input if the reverse prompt is
encountered.

The `--interactive-first` flag is reintroduced to force the old
behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it
can be specified more than once).

* Update help output.

---------

Co-authored-by: Johnman <tjohnman@github>
2023-03-22 19:16:35 +02:00
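The flag semantics described above reduce to a small amount of logic; a sketch using assumed parameter field names rather than the exact ones in the code:

```cpp
// -r adds a reverse prompt and implies interactive mode, but generation
// starts right after the initial prompt; --interactive-first restores the
// old behavior of waiting for user input immediately.
if (!params.antiprompt.empty()) {
    params.interactive = true;                  // -r implies -i
}
bool is_interacting = params.interactive_first; // only wait up front if asked
```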
Erik Scholz
bc091a84e5 cmake: make llama an actual library (#392) 2023-03-22 18:37:10 +02:00
Erik Scholz
48c8ad5bcf fix perplexity after c-api refactor (#390)
* preallocate a buffer of fitting size for tokenization (utils.cpp)

* don't create a new std::string (especially here, where it's usually large)
2023-03-22 18:09:38 +02:00
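The preallocation in the first bullet typically means sizing the token buffer to the worst case up front and shrinking it to the count the tokenizer reports, instead of growing a container incrementally. A sketch assuming the llama_tokenize() signature of the time:

```cpp
#include <string>
#include <vector>
#include "llama.h"

// A text of N bytes can produce at most roughly N tokens (+1 for BOS), so
// reserve that many slots once, tokenize, then shrink to the real count.
std::vector<llama_token> tokenize_text(llama_context * ctx, const std::string & text, bool add_bos) {
    std::vector<llama_token> tokens(text.size() + (add_bos ? 1 : 0));
    const int n = llama_tokenize(ctx, text.c_str(), tokens.data(), (int) tokens.size(), add_bos);
    tokens.resize(n > 0 ? n : 0);
    return tokens;
}
```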