Commit Graph

  • 1f30b7a9f1 ggml-quants : fix avx2 iq1_s vec_dot when compiled with gcc (#5742) Engininja2 2024-02-27 06:50:18 -06:00
  • 127296f9ad llama : fix defrag bugs + add parameter (#5735) Georgi Gerganov 2024-02-27 14:35:51 +02:00
  • 12c8455101 Makefile: use variables for cublas (#5689) le.chang 2024-02-27 10:03:06 +08:00
  • a7b8cd4cec fix server hangs on empty prompt (#5733) Xuan Son Nguyen 2024-02-26 23:15:48 +01:00
  • 1e629318a7 Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) Kawrakow 2024-02-26 18:28:38 +02:00
  • 470cef7162 CUDA: fix DEBUG_CUDA_MALLOC (#5729) Johannes Gäßler 2024-02-26 15:36:38 +01:00
  • 7d9d04da2c readme : update ui list (#5731) Artem 2024-02-26 17:15:28 +03:00
  • 10290145ff [SYCL] Add support for soft_max ALiBi (#5639) AidanBeltonS 2024-02-26 14:02:11 +00:00
  • cd7ee2368a unicode : reuse iterator (#5726) Georgi Gerganov 2024-02-26 14:02:12 +02:00
  • aaf963317e server: CI fix trailing space (#5728) Pierrick Hymbert 2024-02-26 11:41:34 +01:00
  • 30ce0c897a server: CI tests reduce build matrix (#5725) Pierrick Hymbert 2024-02-26 09:56:10 +01:00
  • 775c4dc5ab llama : fix Gemma rope type (#5691) Georgi Gerganov 2024-02-26 08:30:17 +02:00
  • c33b52a809 flake.lock: Update github-actions[bot] 2024-02-25 00:17:11 +00:00
  • 0c39081992 server: tests - slow inference causes timeout on the CI (#5715) Pierrick Hymbert 2024-02-25 22:48:33 +01:00
  • ee3e6e4c97 server: docs - refresh and tease a little bit more the http server (#5718) Pierrick Hymbert 2024-02-25 21:46:29 +01:00
  • 22fc891119 llama : refactor k-shift implementation + KV defragmentation (#5691) Georgi Gerganov 2024-02-25 22:12:24 +02:00
  • 2184c6a353 server : fix crash when system prompt is bigger than batch size (#5714) compilade 2024-02-25 13:43:50 -05:00
  • ece0b572f3 ggml-quants : provide ggml_vqtbl1q_u8 for 64bit compatibility (#5711) Radosław Gryta 2024-02-25 19:43:00 +01:00
  • b0df3fe907 make : fix nvcc version is empty (#5713) kwin1412 2024-02-26 00:46:49 +08:00
  • 4b1c0d1ef9 readme : add Msty to UI list (#5618) Ashok Gelal 2024-02-25 10:57:34 -05:00
  • 6c8868e093 server: logs - unified format and --log-format option (#5700) Pierrick Hymbert 2024-02-25 13:50:32 +01:00
  • 7f6af033d4 server: concurrency fix + monitoring - add /metrics prometheus compatible endpoint (#5708) Pierrick Hymbert 2024-02-25 13:49:43 +01:00
  • c1b74e9276 cmake : fix compilation for Android armeabi-v7a (#5702) Radosław Gryta 2024-02-25 11:53:11 +01:00
  • a7fad1a351 code : normalize enum names (#5697) Georgi Gerganov 2024-02-25 12:09:09 +02:00
  • f8089e8a4b py : fix StableLM conversion after config.json changes (#5703) Anas Ahouzi 2024-02-25 10:54:04 +01:00
  • 10557bf67f server: continue to update other slots on embedding concurrent request (#5699) Pierrick Hymbert 2024-02-24 19:16:04 +01:00
  • 0f87b60a76 IQ3_S: a much better alternative to Q3_K (#5676) Kawrakow 2024-02-24 16:23:52 +02:00
  • cee5737777 server: init functional tests (#5566) Pierrick Hymbert 2024-02-24 12:28:55 +01:00
  • f3ab6ae6d2 server : add KV cache quantization options (#5684) AlpinDale 2024-02-23 19:31:54 +00:00
  • 7b192e568f convert : fix missing ftype for gemma (#5690) Jared Van Bortel 2024-02-23 13:39:14 -05:00
  • fd935de0c0 mpt : do not duplicate token_embd.weight on disk (#5670) Jared Van Bortel 2024-02-22 17:05:23 -05:00
  • f813d17ad9 gemma : use more bits for the token_embd.weight tensor (#5650) Georgi Gerganov 2024-02-22 23:23:46 +02:00
  • 55ef93ee22 py : add Gemma conversion from HF models (#5647) Georgi Gerganov 2024-02-22 23:22:48 +02:00
  • 43a000c9a6 ggml : always define ggml_fp16_t as uint16_t (#5666) Georgi Gerganov 2024-02-22 23:21:39 +02:00
  • dfb0d63843 sync : ggml Georgi Gerganov 2024-02-22 23:21:05 +02:00
  • 59f926f329 ggml : 32-bit arm compat (whisper/1891) Georgi Gerganov 2024-02-22 18:31:40 +02:00
  • ff7d986691 nix: init singularity and docker images (#5056) Someone 2024-02-22 19:44:10 +00:00
  • 96ac0b1e12 py : minor fixes (#5668) Georgi Gerganov 2024-02-22 20:13:25 +02:00
  • f5bb706eba Add Gemma chat template (#5665) Xuan Son Nguyen 2024-02-22 19:10:21 +01:00
  • 59a80ba43f workflows: nix: hardcode cachix ids, build unconditionally (#5663) Someone 2024-02-22 16:32:09 +00:00
  • 1b22fde79c minor : fix trailing whitespace (#5638) Georgi Gerganov 2024-02-22 13:54:03 +02:00
  • 2366d08782 readme : update hot topics Georgi Gerganov 2024-02-22 10:35:54 +02:00
  • 967b99606a server : fallback to chatml, add AlphaMonarch chat template (#5628) Xuan Son Nguyen 2024-02-22 09:33:24 +01:00
  • 5626d4898f server : clarify some params in the docs (#5640) Alexey Parfenov 2024-02-22 08:27:32 +00:00
  • e204f6452d mpt : add optional bias tensors (#5638) Dat Quoc Nguyen 2024-02-22 18:15:13 +10:00
  • 1144411469 llama : fix loading models with shared tok_embd and output (#5651) slaren 2024-02-22 00:42:09 +01:00
  • c46eadce5a Add docs for llama_chat_apply_template (#5645) Xuan Son Nguyen 2024-02-22 00:31:00 +01:00
  • f06a6c9879 llama : fix session save/load with quantized KV (#5649) slaren 2024-02-21 22:52:39 +01:00
  • 7bef081a42 gemma : allow offloading the output tensor (#5646) slaren 2024-02-21 22:18:23 +01:00
  • 8cdd01f57b examples : do not assume BOS when shifting context (#5622) Jared Van Bortel 2024-02-21 10:33:54 -05:00
  • f85c73f1d0 sync : ggml Georgi Gerganov 2024-02-21 16:52:39 +02:00