slaren
fa27d2f82a
ggml-cuda : perform cublas fp16 matrix multiplication as fp16 ( #3370 )
* ggml-cuda : perform cublas fp16 matrix multiplication as fp16
* try to fix rocm build
* restrict fp16 mat mul to volta and up
2023-09-28 13:08:28 +03:00
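The entry above moves cuBLAS fp16 matrix multiplication to fp16 compute instead of round-tripping through fp32, gated to Volta (compute capability 7.0) and newer. A minimal host-side sketch of the idea, with made-up helper names rather than the actual ggml-cuda code:

    // Sketch only, not the ggml-cuda implementation.
    #include <cublas_v2.h>
    #include <cuda_fp16.h>
    #include <cuda_runtime.h>

    // fp16 GEMM is restricted to Volta (CC 7.0) and up, as in the commit.
    static bool device_supports_fp16_mm(int device) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, device);
        return prop.major >= 7;
    }

    // C = A * B with fp16 inputs, outputs and accumulation, skipping the
    // fp16 -> fp32 conversions a CUDA_R_32F path would need.
    static void gemm_f16(cublasHandle_t handle, int m, int n, int k,
                         const half * A, const half * B, half * C) {
        const half alpha = __float2half(1.0f);
        const half beta  = __float2half(0.0f);
        cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                     &alpha, A, CUDA_R_16F, m,
                             B, CUDA_R_16F, k,
                     &beta,  C, CUDA_R_16F, m,
                     CUBLAS_COMPUTE_16F, CUBLAS_GEMM_DEFAULT_TENSOR_OP);
    }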
Zhang Peiyuan
1dcfc8d0cf
convert : fix bug in convert.py permute function ( #3364 )
2023-09-27 20:45:20 +02:00
Richard Roberson
6b9fa77bfe
make-ggml.py : compatibility with more models and GGUF ( #3290 )
* Resync my fork with new llama.cpp commits
* examples : rename to use dash instead of underscore
* New model conversions
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-27 19:25:12 +03:00
Cebtenzzre
6c09dbc451
gguf : fix a few general keys ( #3341 )
2023-09-27 12:18:07 -04:00
Rickard Hallerbäck
0c868ebe57
metal : reusing llama.cpp logging ( #3152 )
* metal : reusing llama.cpp logging
* cmake : build fix
* metal : logging callback
* metal : logging va_args memory fix
* metal : minor cleanup
* metal : setting function-like logging macro to capital letters
* llama.cpp : trailing whitespace fix
* ggml : log level enum used by llama
* Makefile : cleanup ggml-metal recipe
* ggml : ggml_log_callback typedef
* ggml : minor
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-27 18:48:33 +03:00
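The logging work above routes Metal backend messages through a callback typedef instead of raw printing. A rough sketch of the callback shape, with the enum values and handler body local to this example:

    #include <cstdio>

    // Callback shape: severity level, message text, opaque user pointer.
    enum ggml_log_level { GGML_LOG_LEVEL_ERROR, GGML_LOG_LEVEL_WARN, GGML_LOG_LEVEL_INFO };
    typedef void (*ggml_log_callback)(enum ggml_log_level level, const char * text, void * user_data);

    // Example handler: forward only warnings and errors to stderr.
    static void my_log_handler(enum ggml_log_level level, const char * text, void * user_data) {
        (void) user_data;
        if (level <= GGML_LOG_LEVEL_WARN) {
            fprintf(stderr, "%s", text);
        }
    }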
Jag Chadha
31e3b674ad
build : add ACCELERATE_NEW_LAPACK to fix warning on macOS Sonoma ( #3342 )
2023-09-27 18:34:32 +03:00
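Background on the fix above: the macOS Sonoma SDK deprecates Accelerate's old LAPACK interfaces, and the warning goes away by opting into the new ones before the framework header is included, roughly:

    // Opt in to Accelerate's newer LAPACK/BLAS interfaces before including
    // the header; ILP64 (64-bit indices) is a separate, optional opt-in.
    #define ACCELERATE_NEW_LAPACK
    #define ACCELERATE_LAPACK_ILP64
    #include <Accelerate/Accelerate.h>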
BarfingLemurs
9d92d67428
readme : add some recent perplexity and bpw measurements to READMEs, link for k-quants ( #3340 )
* Update README.md
* Update README.md
* Update README.md with k-quants bpw measurements
2023-09-27 18:30:36 +03:00
DAN™
d9749e7cc3
cmake : fix build-info.h on MSVC ( #3309 )
2023-09-25 18:45:33 -04:00
2f38b454
be8fb3dc9b
docs: Fix typo in CLBlast_DIR var. ( #3330 )
2023-09-25 20:24:52 +02:00
Erik Scholz
17a015ab3b
nix : add cuda, use a symlinked toolkit for cmake ( #3202 )
2023-09-25 13:48:30 +02:00
slaren
a71167dedf
llama-bench : add README ( #3317 )
* llama-bench : add README
* minor edit
2023-09-23 21:48:24 +02:00
Cebtenzzre
a2fe58d235
examples : fix RoPE defaults to match PR #3240 ( #3315 )
2023-09-23 12:28:50 +03:00
Kevin Ji
13ec5a1de9
scripts : use /usr/bin/env in shebang ( #3313 )
2023-09-22 23:52:23 -04:00
Lee Drake
1e8ebda8ce
Update README.md ( #3289 )
* Update README.md
* Update README.md
Co-authored-by: slaren <slarengh@gmail.com>
---------
Co-authored-by: slaren <slarengh@gmail.com>
2023-09-21 21:00:24 +02:00
shibe2
1f4f0754e3
ggml-opencl.cpp: Make private functions static ( #3300 )
2023-09-21 14:10:26 -04:00
Edward Taylor
2e75231cb3
zig : fix for updated c lib ( #3259 )
2023-09-21 12:08:20 +03:00
yuiseki
8b1dfa6af7
embedding : update README.md ( #3224 )
2023-09-21 11:57:40 +03:00
Johannes Gäßler
2c58f6bb1b
CUDA: use only 1 thread if fully offloaded ( #2915 )
2023-09-21 11:43:53 +03:00
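The rationale: when every layer is offloaded, CPU threads have nothing to compute and only add synchronization overhead, so the count can be clamped to one. Schematically, with placeholder names rather than the actual llama.cpp fields:

    // If all layers live on the GPU, one CPU thread is enough to drive it.
    static int pick_n_threads(int n_threads, int n_gpu_layers, int n_layers) {
        return n_gpu_layers >= n_layers ? 1 : n_threads;
    }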
Georgi Gerganov
7eca40bf4b
readme : update hot topics
2023-09-20 20:48:22 +03:00
Cebtenzzre
d6576ed07f
llama : allow gguf RoPE keys to be overridden with defaults ( #3240 )
2023-09-20 12:12:47 -04:00
Cebtenzzre
70993d7bbd
benchmark-matmult : do not use integer abs() on a float ( #3277 )
2023-09-20 12:06:08 -04:00
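The bug class behind that fix: in C, abs() from <stdlib.h> takes an int, so a float argument is silently truncated on the way in; fabsf() is the correct call. A tiny illustration, not the benchmark-matmult code:

    #include <stdlib.h>
    #include <math.h>

    void demo(void) {
        float d = -0.75f;
        int   wrong = abs(d);    /* float converted to int first: result is 0 */
        float right = fabsf(d);  /* 0.75f */
        (void) wrong; (void) right;
    }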
kang
eeee397010
flake : Restore default package's buildInputs ( #3262 )
2023-09-20 15:48:22 +02:00
Alon
b68da9373d
CI: FreeBSD fix ( #3258 )
* freebsd ci: use qemu
2023-09-20 14:06:36 +02:00
Georgi Gerganov
7c5c6df732
examples : fix benchmark-matmult ( #1554 )
The precision for Q4_0 has degraded since #1508
2023-09-20 10:02:39 +03:00
Cebtenzzre
78ff726016
make : restore build-info.h dependency for several targets ( #3205 )
2023-09-18 10:03:53 -04:00
Erik Scholz
d997ba652f
ci : switch cudatoolkit install on windows to networked ( #3236 )
2023-09-18 02:21:47 +02:00
Johannes Gäßler
8821f4ae78
CUDA: fix peer access logic ( #3231 )
2023-09-17 23:35:20 +02:00
Johannes Gäßler
94a0ea6e76
CUDA: enable peer access between devices ( #2470 )
2023-09-17 16:37:53 +02:00
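Peer access lets one GPU read another's memory directly over PCIe/NVLink instead of staging through host memory. It is directional and only available between some device pairs, hence the checks; a minimal sketch with the CUDA runtime API, error handling elided:

    #include <cuda_runtime.h>

    // Enable P2P access from every device to every other device that
    // supports it. Access is one-way per call, hence the double loop.
    void enable_peer_access(int device_count) {
        for (int i = 0; i < device_count; i++) {
            cudaSetDevice(i);
            for (int j = 0; j < device_count; j++) {
                if (i == j) continue;
                int can_access = 0;
                cudaDeviceCanAccessPeer(&can_access, i, j);
                if (can_access) {
                    cudaDeviceEnablePeerAccess(j, 0);  // flags must be 0
                }
            }
        }
    }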
slaren
112bdc67c5
llama.cpp : show model size and BPW on load ( #3223 )
2023-09-17 14:33:28 +02:00
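Bits per weight (BPW) is the serialized tensor size scaled to bits over the parameter count, which makes it comparable across quantization formats. The arithmetic, as a sketch rather than the llama.cpp function:

    #include <stdint.h>

    /* bpw = bytes * 8 / parameters; e.g. a 7B model in 3.5 GiB is ~4.3 bpw. */
    static double bits_per_weight(uint64_t model_bytes, uint64_t n_params) {
        return 8.0 * (double) model_bytes / (double) n_params;
    }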
Johannes Gäßler
64dbee2c5e
CUDA: fix scratch malloced on non-main device ( #3220 )
2023-09-17 14:16:22 +02:00
IsaacDynamo
36be4e8ee8
Enable BUILD_SHARED_LIBS=ON on all Windows builds ( #3215 )
2023-09-16 19:35:25 +02:00
Vlad
aa4418d795
Enable build with CUDA 11.0 (make) ( #3132 )
* CUDA 11.0 fixes
* Cleaner CUDA/host flags separation
Also renamed GGML_ASSUME to GGML_CUDA_ASSUME
2023-09-16 16:55:43 +02:00
goerch
39897d794c
Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 ( #3170 )
* Fix for #2721
* Reenable tokenizer test for LLaMa
* Add `console.cpp` dependency
* Fix dependency to `common`
* Fixing wrong fix.
* Make console usage platform specific
Work on compiler warnings.
* Adapting makefile
* Remove trailing whitespace
* Adapting the other parts of the makefile
* Fix typo.
* Fixing the last deviations from sentencepiece indicated by test-tokenizer-1
* Simplify logic
* Add missing change...
* Fix ugly compiler warning
* llama_tokenize should accept strings containing NUL now
* Adding huichen's test case
2023-09-16 13:41:33 +02:00
Cebtenzzre
481ac7803f
examples : add compiler version and target to build info ( #2998 )
2023-09-15 16:59:49 -04:00
Cebtenzzre
4e89732b50
check C++ code with -Wmissing-declarations ( #3184 )
2023-09-15 15:38:27 -04:00
Cebtenzzre
217da58978
fix build numbers by setting fetch-depth=0 ( #3197 )
2023-09-15 15:18:15 -04:00
Meng Zhang
7d434a09c5
llama : add support for StarCoder model architectures ( #3187 )
* add placeholder of starcoder in gguf / llama.cpp
* support convert starcoder weights to gguf
* convert MQA to MHA
* fix ffn_down name
* add LLM_ARCH_STARCODER to llama.cpp
* set head_count_kv = 1
* load starcoder weight
* add max_position_embeddings
* set n_positions to max_position_embeddings
* properly load all starcoder params
* fix head count kv
* fix comments
* fix vram calculation for starcoder
* store mqa directly
* add input embeddings handling
* add TBD
* working on CPU, Metal buggy
* cleanup useless code
* metal : fix out-of-bounds access in soft_max kernels
* llama : make starcoder graph build more consistent with others
* refactor: cleanup comments a bit
* add other starcoder models: 3B, 7B, 15B
* support MQA directly
* fix: remove max_position_embeddings, use n_train_ctx
* Update llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* fix: switch to space from tab
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-15 22:02:13 +03:00
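StarCoder uses multi-query attention: one K/V head shared by all query heads. The bullets above show the conversion first expanding that to full MHA, then later storing it directly with head_count_kv = 1; the expansion step is just a broadcast of the single K/V head. Schematically, with illustrative shapes and names:

    // MQA -> MHA: replicate the shared K/V head across all n_head heads.
    // kv_in is [seq_len][head_dim]; kv_out is [n_head][seq_len][head_dim].
    void expand_kv_head(const float * kv_in, float * kv_out,
                        int n_head, int seq_len, int head_dim) {
        for (int h = 0; h < n_head; h++) {
            for (int t = 0; t < seq_len; t++) {
                for (int d = 0; d < head_dim; d++) {
                    kv_out[(h * seq_len + t) * head_dim + d] =
                        kv_in[t * head_dim + d];
                }
            }
        }
    }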
Cebtenzzre
cf213d4e5b
common : do not use GNU zero-length __VA_ARGS__ extension ( #3195 )
2023-09-15 21:02:01 +03:00
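Context for the fix above: with a macro like LOG(fmt, ...), a call such as LOG("msg") leaves __VA_ARGS__ empty, and eating the trailing comma via ##__VA_ARGS__ is a GNU extension. The portable pattern folds the format string into the varargs so they are never empty; a sketch:

    #include <stdio.h>

    /* Non-portable: relies on the GNU ## comma-swallowing extension.
       #define LOG(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)       */

    /* Portable: the format string rides along inside __VA_ARGS__. */
    #define LOG(...) fprintf(stderr, __VA_ARGS__)

    int main(void) {
        LOG("no args\n");
        LOG("value = %d\n", 42);
        return 0;
    }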
Georgi Gerganov
303a4f7baa
metal : fix bug in soft_max kernels (out-of-bounds access) ( #3194 )
2023-09-15 20:17:24 +03:00
Cebtenzzre
a78476e865
convert : make ftype optional in simple scripts ( #3185 )
2023-09-15 12:29:02 -04:00
Georgi Gerganov
a76ba4384c
sync : ggml (Metal F32 support + reduce ggml-alloc size) ( #3192 )
* sync : ggml (Metal F32 support + reduce ggml-alloc size)
ggml-ci
* llama-bench : fix ggml_cpu_has_metal() duplicate function
ggml-ci
2023-09-15 19:06:03 +03:00
Engininja2
d840da1bcd
cmake : fix building shared libs for clang (rocm) on windows ( #3176 )
2023-09-15 15:24:30 +03:00
Evgeny Kurnevsky
45e8e203c0
flake : use pkg-config instead of pkgconfig ( #3188 )
pkgconfig is an alias; it was removed from nixpkgs:
295a5e1e2b/pkgs/top-level/aliases.nix (L1408)
2023-09-15 11:10:22 +03:00
Georgi Gerganov
40a66e4cbd
metal : relax conditions on fast matrix multiplication kernel ( #3168 )
* metal : relax conditions on fast matrix multiplication kernel
* metal : revert the concurrency change because it was wrong
* llama : remove experimental stuff
2023-09-15 11:09:24 +03:00
Andrei
61681fa802
cmake : fix llama.h location when built outside of root directory ( #3179 )
2023-09-15 11:07:40 +03:00
Ali Tariq
d97a2124e4
ci : Cloud-V for RISC-V builds ( #3160 )
* Added Cloud-V File
* Replaced Makefile with original one
---------
Co-authored-by: moiz.hussain <moiz.hussain@10xengineers.ai>
2023-09-15 11:06:56 +03:00
Roland
0d44b199f5
llama : remove mtest ( #3177 )
* Remove mtest
* remove from common/common.h and examples/main/main.cpp
2023-09-15 10:28:45 +03:00
Cebtenzzre
2e7c1af3c6
llama : make quantize example up to 2.7x faster ( #3115 )
2023-09-14 21:09:53 -04:00
jneem
144cf127a8
flake : allow $out/include to already exist ( #3175 )
2023-09-14 21:54:47 +03:00
Andrei
7940ea477b
cmake : compile ggml-rocm with -fpic when building shared library ( #3158 )
2023-09-14 20:38:16 +03:00