Zay
9905daaaa3
llama.swiftui : update models layout (#4826)
* Updated Models Layout
- Added a models drawer
- Added downloading directly from Hugging Face
- Load custom models from local folder
- Delete models by swiping left
* trimmed trailing whitespace
* Updated Models Layout
2024-01-12 14:48:00 +02:00
Georgi Gerganov
3e86f86432
llama.swiftui : update readme
2024-01-08 15:57:36 +02:00
Alex Azarov
30df691a96
llama.swiftui : use llama.cpp as SPM package (#4804)
2024-01-07 10:20:50 +02:00
Alex Azarov
8c36aaf5a8
llama.swiftui : add visionOS target (#4805)
2024-01-07 09:46:55 +02:00
Daniel Illescas Romero
34d18eff4c
llama.swiftui : use correct pointer for llama_token_eos (#4797)
2024-01-06 17:12:59 +02:00
Georgi Gerganov
7e27e37f26
metal : switch back to default.metallib (ggml/681)
ggml-ci
2024-01-05 18:02:06 +02:00
singularity
2d08e99f47
llama.swiftui : support loading custom model from file picker (#4767)
* swiftui: support load model from file picker
* swiftui: remove trailing whitespace
2024-01-04 10:22:38 +02:00
singularity
c399a87c6b
llama.swiftui : fix build of ggml.metallib (#4754)
* metal: fix metal backend init failure in swiftui
* metal: build ggml.metallib instead of copy src
* llama.swift : remove debug flags from metallib build
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-01-04 09:58:16 +02:00
Peter Sugihara
0f60ba09ce
llama.swiftui : fix infinite loop, output timings, buff UI (#4674)
* fix infinite loop
* slight UI simplification, clearer UX
* clearer UI text, add timings to completion log
2023-12-29 15:58:56 +02:00
Georgi Gerganov
7a72042b8f
llama.swiftui : add tinyllama 1.1B F16
2023-12-18 20:17:43 +02:00
Georgi Gerganov
8e9f54e3e2
llama.swiftui : add more models
2023-12-18 20:05:12 +02:00
Georgi Gerganov
6851c8fb39
llama.swiftui : add bench functionality (#4483)
* llama.swiftui : add bench button
* llama.swiftui : initial bench functionality
* force to use n_gpu_layers on simulator
* add download buttons & expose llamaState.loadModel
* update project.pbxproj
* comment #Preview & fix editorconfig check
* gitignore : xcode stuff
* llama.swiftui : UX improvements
* llama.swiftui : avoid data copy via "downloadTask"
* llama.swiftui : remove model from project
* llama : remove "mostly" from model infos
* llama.swiftui : improve bench
---------
Co-authored-by: jhen <developer@jhen.me>
2023-12-17 19:38:41 +02:00
Miwa / Ensan
ca44b588eb
swift : fix concatenation method to avoid invalid UTF-8 stringification (#4325)
2023-12-04 18:03:49 +02:00
Miwa / Ensan
79d6bdf363
swift : fix prompt tokenization logic (#4321)
2023-12-04 15:43:45 +02:00
Miwa / Ensan
620a06de72
swift : fix token_to_piece implementation (#4278)
* Fix token_to_piece implementation in Swift
* Fix errors
2023-12-01 20:19:45 +02:00
Bailey Chittle
a6a660c556
examples : iOS example with SwiftUI (#4159)
* copy to llama.cpp as subdir
* attempt enabling metal, fails
* ggml metal compiles!
* Update README.md
* initial conversion to new format, utf8 errors?
* bug fixes, but now has an invalid memory access :(
* added O3, now has insufficient memory access
* begin sync with master
* update to match latest code, new errors
* fixed it!
* fix for loop conditionals, increase result size
* fix current workflow errors
* attempt a llama.swiftui workflow
* Update .github/workflows/build.yml
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-11-27 16:56:52 +02:00