Commit Graph

  • e45dbc8951 fix firefox autoscroll (#2519) Jonas Wunderlich 2023-08-04 20:16:11 +00:00
  • 6ae47d1968 server: regenerate completion.js.hpp (#2515) Cebtenzzre 2023-08-04 15:00:57 -04:00
  • bcf5f2e1d8 CUDA: use min compute capability of GPUs actually used (#2506) Cebtenzzre 2023-08-04 11:35:22 -04:00
  • 9949b4fc34 CUDA: check if event is NULL before cudaStreamWaitEvent (#2505) Cebtenzzre 2023-08-04 11:34:32 -04:00
  • b1cabde3b3 Add --simple-io option for subprocesses and break out console.h and cpp (#1558) DannyDaemonic 2023-08-04 08:20:12 -07:00
  • d1d9a25cce Fixing race condition in server and partial stream handling in frontend. (#2391) Stephen Nichols 2023-08-04 06:37:24 -05:00
  • 7bc95dcf62 Stream save llama context data to file instead of allocating entire buffer upfront (#2488) l3utterfly 2023-08-04 19:29:52 +08:00
  • 04e64ce94e build : fix several cast and printf warnings (#2499) Borislav Stanimirov 2023-08-04 13:07:21 +03:00
  • 79f99c6b4d examples : generate JSON according to schema (#1887) Evan Jones 2023-08-02 22:05:44 -04:00
  • 5dc74e92ed CUDA: faster non k-quant mul_mat_q kernels (#2483) Johannes Gäßler 2023-08-02 18:04:04 +02:00
  • c06707dd78 CUDA: Fix models with output size != 32000 (#2480) Johannes Gäßler 2023-08-02 16:48:10 +02:00
  • fe9d3103ca readme : add Aquila-7B model series to supported models (#2487) ldwang 2023-08-02 16:21:11 +08:00
  • 31e5af188f tests : Fix compilation warnings (Linux/GCC) (#2451) Eve 2023-08-02 04:06:19 -04:00
  • bb2e752345 readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475) Yiming Cui 2023-08-02 14:18:31 +08:00
  • c8d9f77356 fix a typo in examples/server/README.md (#2478) Bono Lv 2023-08-01 20:54:28 +08:00
  • 7386a7dd10 server : Support dark mode (#2414) ebraminio 2023-08-01 01:56:23 -07:00
  • f9967c22eb metal : add gqa8 kernel to allow llama-2-70B on metal (#2459) Matteo Boschini 2023-08-01 09:43:12 +02:00
  • b2dc2f8155 CUDA: fixed LLAMA_FAST compilation option (#2473) Johannes Gäßler 2023-07-31 21:02:19 +02:00
  • b4ab3782da CUDA: fixed cmake F16 option (#2471) Johannes Gäßler 2023-07-31 19:52:22 +02:00
  • 83772f01d7 CUDA: mmq CLI option, fixed mmq build issues (#2453) Johannes Gäßler 2023-07-31 15:44:35 +02:00
  • 1050a60adc CUDA: Implemented row flattening for non-glm RoPE (#2468) Johannes Gäßler 2023-07-31 14:32:30 +02:00
  • f329c99fa1 CUDA: fewer memory bank conflicts for mul_mat_q (#2458) Johannes Gäßler 2023-07-31 13:18:51 +02:00
  • 023de5bf7d Fix Metal backend broken from the allocator changes (#2455) slaren 2023-07-31 11:02:53 +02:00
  • 2786fff886 ggml : add graph tensor allocator (#2411) slaren 2023-07-30 15:58:01 +02:00
  • 28d49137d3 CUDA: Quantized matrix matrix multiplication (#2160) Johannes Gäßler 2023-07-29 23:04:44 +02:00
  • 28cbbe6ad4 CUDA: faster multi GPU synchronization (#2448) Johannes Gäßler 2023-07-29 23:04:10 +02:00
  • fdcc7db2a5 perplexity : add Hellaswag calculation (#2389) klosax 2023-07-28 20:25:36 +02:00
  • 289f077ca7 ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c (#2405) Lee 2023-07-29 02:17:45 +08:00
  • 0a36743aff llama : support more diverse tokenizers? (#2420) eric8607242 2023-07-29 02:10:05 +08:00
  • ca55f98a26 examples : fix whitespace Georgi Gerganov 2023-07-28 21:05:08 +03:00
  • 01a14d2c58 examples : server chat mode with llama2 (#2400) nhamanasu 2023-07-29 03:02:10 +09:00
  • fb09f01651 readme : fix the description of the Tail free sampling (TFS) method (#2431) Weird Constructor 2023-07-28 10:44:43 +02:00
  • 0c7137b4c3 llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433) Rand Xie 2023-07-28 01:42:53 -07:00
  • be41ec86c5 Obtaining LLaMA 2 instructions (#2308) niansa/tuxifan 2023-07-28 03:14:11 +02:00
  • 28b471d0e4 convert.py : Update to support 70B HF format model files (#2427) mj-shifu 2023-07-27 22:39:17 +02:00
  • bab9a5358d metal : disable graph concurrency optimization due to bug (#2413) Georgi Gerganov 2023-07-27 11:00:54 +03:00
  • 3a04da8e17 ggml : fix assert in ggml_set_unary_op (#2410) slaren 2023-07-26 23:57:23 +02:00
  • 59b12eb025 make : build with -Wmissing-prototypes (#2394) Cebtenzzre 2023-07-26 14:00:04 -04:00
  • d81bd68d7d ggml : allocate graphs in a context (#2392) slaren 2023-07-26 15:56:53 +02:00
  • abd493a08a Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384) Kawrakow 2023-07-25 18:35:53 +03:00
  • e31220e203 ggml : fix ggml_flash_attn to use op_params (#2387) slaren 2023-07-25 16:20:12 +02:00
  • d954955e56 convert.py : support bpe tokenizer (#2228) ldwang 2023-07-25 21:22:09 +08:00
  • 2a5c15cb3f ggml : relax contiguous constraints in activation function (#2371) Jiahao Li 2023-07-25 20:58:32 +08:00
  • 9897e125f4 ggml : improve graph build time via hash table lookup (#2329) slaren 2023-07-25 14:32:20 +02:00
  • 978a82ae3a build : fix line breaking error in build-info.sh (#2349) Hesen Peng 2023-07-25 05:24:09 -07:00
  • fdfc76e541 main : add --in-prefix-bos to prefix BOS to user inputs; keep EOS (#2304) Xiao-Yong Jin 2023-07-25 07:19:11 -05:00
  • b3331ff75f ci : add non-AVX scalar build/test (#2356) Eve 2023-07-25 08:16:13 -04:00
  • 04548b5f34 k_quants : add AVX support to dot functions with QK_K as 64 (#2339) katsu560 2023-07-25 21:13:41 +09:00
  • d4d59d1dab metal : concurrently dispatch commands (#2358) Shouzheng Liu 2023-07-25 08:00:19 -04:00
  • 386f15bcfd Another speed gain for Q4_0 and Q4_1 on Metal (#2375) Kawrakow 2023-07-25 13:48:29 +03:00