Commit Graph

  • d233b507cd cuda : add half2 __shfl_xor() for ROCm 5.5 (#7263) Engininja2 2024-05-18 02:05:17 -06:00
  • 0f98acfac6 llama : add support for larger Granite Code Models (20B, 34B) (#7324) Steffen Röcker 2024-05-18 10:04:55 +02:00
  • ca57e0f35e perplexity : ndot progress and show stats with < 100 tasks (#7348) strawberrymelonpanda 2024-05-18 00:57:08 -07:00
  • c1b295eea5 Update and fix Vulkan soft_max and argsort implementations (#7237) 0cc4m 2024-05-18 08:10:58 +02:00
  • de73196344 github-actions-labeler: initial commit (#7330) Brian 2024-05-18 16:04:23 +10:00
  • b49a13dd2f convert : fix set_vocab_sentencepiece (#6866) Georgi Gerganov 2024-05-18 08:46:20 +03:00
  • 05834841dc ggml : fix quants nans when all the group weights are very close to zero (#7313) slaren 2024-05-18 02:39:54 +02:00
  • ef277de2ad cmake : fix typo in AMDGPU_TARGETS (#7356) Engininja2 2024-05-17 18:39:25 -06:00
  • b43272afa2 Unicode codepoint flags for custom regexs (#7245) jaime-m-p 2024-05-18 01:09:13 +02:00
  • 0fc1e820a9 CUDA: faster large batch FA without tensor cores (#7314) Johannes Gäßler 2024-05-17 18:54:52 +02:00
  • 82ca83db3c ROCm: use native CMake HIP support (#5966) Gavin Zhao 2024-05-17 11:03:03 -04:00
  • f4bd8b3d26 rpc : set SO_REUSEADDR for the server socket (#7320) Radoslav Gerganov 2024-05-17 17:25:44 +03:00
  • 51e9d02599 Added a single test function script and fix debug-test.sh to be more robust (#7279) Brian 2024-05-17 22:40:14 +10:00
  • d273c1402b py : convert-hf-to-gguf-update improvements (#7340) Aarni Koskela 2024-05-17 15:11:45 +03:00
  • 27b040691c llama : use n_embd_head_v when reshaping kqv (#7327) fairydreaming 2024-05-17 13:24:38 +02:00
  • 29c60d8cdd tokenization: add warning for double BOS (#7332) Johannes Gäßler 2024-05-17 09:59:57 +02:00
  • 359cbe3f46 ggml-quants, llama : removed excess checks (#7274) Herman Semenov 2024-05-17 07:08:49 +00:00
  • e18bc6aaf3 convert : fix Qwen/Qwen-7b conversion (#7308) amd-lalithnc 2024-05-17 12:31:58 +05:30
  • ee94172d33 server : add support for the RPC backend (#7305) Radoslav Gerganov 2024-05-17 10:00:17 +03:00
  • 934266c0e0 ggml : rewrite silu and softmax for cpu (#7154) Justine Tunney 2024-05-17 02:58:52 -04:00
  • 9c4fdcbec8 [Server] Added --verbose option to README [no ci] (#7335) Leon Knauer 2024-05-17 02:11:03 +02:00
  • 24ecb58168 Revert "server bench: fix bench not waiting for model load (#7284)" (#7334) Pierrick Hymbert 2024-05-16 20:43:45 +02:00
  • 9afdffe70e rpc : get available mem for the CPU backend Radoslav Gerganov 2024-05-15 16:04:40 +03:00
  • 3b3963c55c rpc : add command line arg for specifying backend memory Radoslav Gerganov 2024-05-15 15:29:07 +03:00
  • dda64fc17c convert : get general.name from model dir, not its parent (#5615) Jared Van Bortel 2024-05-16 02:15:23 -04:00
  • 0350f58152 grammar, json, llama: replace push on emplace if it possible (#7273) Herman Semenov 2024-05-16 06:14:24 +00:00
  • ad52d5c259 doc: add references to hugging face GGUF-my-repo quantisation web tool. (#7288) Vaibhav Srivastav 2024-05-16 07:38:43 +02:00
  • 172b78210a ci: fix bin/Release path for windows-arm64 builds (#7317) Max Krasnyansky 2024-05-15 22:36:43 -07:00
  • 13ad16af12 Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#7191) Max Krasnyansky 2024-05-15 19:47:36 -07:00
  • 8f7080bf48 readme : remove stray double quote (#7310) Daniel Bevenius 2024-05-15 23:41:03 +02:00
  • e1b40ac3b9 ggml : use dynamic thread scheduling for matrix multiplication (#6915) kunnis 2024-05-15 12:59:12 -05:00
  • dc020985b8 Avoid unnecessarily disabling CUDA graphs (#7302) agray3 2024-05-15 14:44:49 +01:00
  • 344f9126cc ggml : tag ggml_tensor::backend as deprecated (#7290) slaren 2024-05-15 15:08:48 +02:00
  • 9a17ab914b Add missing " (#7303) AidanBeltonS 2024-05-15 13:26:30 +01:00
  • ea3b0590ee embedding : free the batch after execution (#7297) dm4 2024-05-15 20:01:12 +08:00
  • 29499bb593 sync : ggml Georgi Gerganov 2024-05-15 13:23:41 +03:00
  • 48aa8fd1f2 ggml : add ggml_upscale_ext (ggml/814) John Balis 2024-05-15 03:52:33 -05:00
  • 583fd6b000 server bench: fix bench not waiting for model load (#7284) Johannes Gäßler 2024-05-15 08:44:16 +02:00
  • 9f773486ab script : sync ggml-rpc Georgi Gerganov 2024-05-14 19:14:38 +03:00
  • e8a7fd4fb0 metal : support FA without mask + add asserts (#7278) Georgi Gerganov 2024-05-14 19:09:30 +03:00
  • a5e3fde857 sync : ggml Georgi Gerganov 2024-05-14 15:33:16 +03:00
  • f308ea7059 metal : tune soft_max number of threads (whisper/0) Georgi Gerganov 2024-05-13 11:01:07 +03:00
  • c3c88f296a ggml : try fix ppc64 (whisper/0) Georgi Gerganov 2024-05-12 20:36:31 +03:00
  • 182adefcf3 ggml : expose SSE3 and SSSE3 for MSVC when AVX is available (whisper/2128) Przemysław Pawełczyk 2024-05-08 17:33:43 +02:00
  • 0d26d8ccd8 ggml : optimize for ppc64le using VSX intrinsics (ggml/784) Hong Bo PENG 2024-05-12 17:17:18 +08:00
  • 4f0263633b server: free sampling contexts on exit (#7264) Steve Grubb 2024-05-14 10:11:24 -04:00
  • 1265c670fd Revert "move ndk code to a new library (#6951)" (#7282) Brian 2024-05-14 23:10:39 +10:00
  • 5e31828d3e ggml : add RPC backend (#6829) Radoslav Gerganov 2024-05-14 14:27:19 +03:00
  • 541600201e llama : disable pipeline parallelism with nkvo (#7265) slaren 2024-05-14 09:33:42 +02:00
  • efc8f767c8 move ndk code to a new library (#6951) Elton Kola 2024-05-14 03:30:30 -04:00