Commit graph

1608 commits

Author SHA1 Message Date
Yazan Agha-Schrader  45f0415ba9  Update README.md  2023-12-01 20:38:19 +01:00
Yazan Agha-Schrader  c8d847d57e  Merge branch 'master' into server-ui-improvements  2023-11-28 12:57:03 +01:00
Yazan Agha-Schrader  3a15b28ce6  fix typo  2023-11-28 12:39:29 +01:00
Yazan Agha-Schrader  c96c458fe5  Merge branch 'ggerganov:master' into master  2023-11-28 09:43:39 +01:00
Yazan Agha-Schrader  5ac0f300a9  improve error handling  2023-11-28 09:43:01 +01:00
Georgi Gerganov  8406b0924b  ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload checks in llama.cpp (#4240)  2023-11-28 10:32:03 +02:00
    * ggml : use blas even if src0 is not F32
    * llama : use n_threads_batch only when n_tokens >= 32 (ggml-ci)
    * llama : revert n_threads_batch logic (ggml-ci)
Yazan Agha-Schrader  116fc90e9a  Update promptFormats.js  2023-11-28 07:17:45 +01:00
Yazan Agha-Schrader  2e4c05e00a  Update promptFormats.js  2023-11-28 07:15:03 +01:00
Yazan Agha-Schrader  9dcb514b1d  update start server scripts  2023-11-28 06:57:29 +01:00
Yazan Agha-Schrader  57f8edd016  fix start-server.sh  2023-11-28 05:32:34 +01:00
Yazan Agha-Schrader  e056b06fbd  error handling for missing dialog  2023-11-27 22:12:15 +01:00
Yazan Agha-Schrader  4fa32ad0e3  update  2023-11-27 21:45:12 +01:00
Yazan Agha-Schrader  1b6d4226b8  add start scripts to root path  2023-11-27 21:35:31 +01:00
bandoti  b38a16dfcf  cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970)  2023-11-27 21:25:42 +02:00
    * Split CPP generation from build-info query
    * Remove blank lines
    * Add BUILD_SHARED_LIBS option
Yazan Agha-Schrader  ae096d0a92  Merge branch 'ggerganov:master' into master  2023-11-27 20:10:11 +01:00
Kasumi  0dab8cd7cc  readme : add Amica to UI list (#4230)  2023-11-27 19:39:42 +02:00
Yazan Agha-Schrader  6c318b54c8  Update README.md  2023-11-27 18:28:32 +01:00
Yazan Agha-Schrader  ecb39732e6  add min-p image  2023-11-27 18:25:51 +01:00
Yazan Agha-Schrader  082b33550f  Update README.md  2023-11-27 18:19:26 +01:00
Yazan Agha-Schrader  c48f3f2042  Merge pull request #3 from mounta11n/server-ui-improvements (add min-p)  2023-11-27 17:58:23 +01:00
Yazan Agha-Schrader  464f073307  add min-p  2023-11-27 17:56:30 +01:00
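The "add min-p" commits above expose the min-p sampling parameter in the server UI. Min-p sampling keeps only tokens whose probability is at least a given fraction of the most likely token's probability. A minimal illustrative Python sketch of the idea (the function name and example values are ours, not from the llama.cpp source):

```python
def min_p_filter(probs, min_p=0.05):
    """Drop tokens whose probability is below min_p times the
    top token's probability, then renormalize the survivors."""
    cutoff = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= cutoff}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "zebra": 0.01}
filtered = min_p_filter(probs, min_p=0.1)
# "zebra" (0.01 < 0.1 * 0.5) is dropped; "the" and "a" are renormalized
```

Unlike a fixed top-p cutoff, the threshold scales with the model's confidence: a peaked distribution prunes aggressively, a flat one keeps more candidates.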
Yazan Agha-Schrader  d55b482361  Merge pull request #2 from mounta11n/server-ui-improvements (Server UI improvements)  2023-11-27 17:26:43 +01:00
Yazan Agha-Schrader  809b2697fe  Merge branch 'ggerganov:master' into master  2023-11-27 17:24:35 +01:00
Yazan Agha-Schrader  c161ad20db  add mmproj function  2023-11-27 17:17:38 +01:00
Yazan Agha-Schrader  d5683279b1  fix wrong translation  2023-11-27 16:19:08 +01:00
Bailey Chittle  bb03290c17  examples : iOS example with swift ui (#4159)  2023-11-27 16:56:52 +02:00
    * copy to llama.cpp as subdir
    * attempt enabling metal, fails
    * ggml metal compiles!
    * Update README.md
    * initial conversion to new format, utf8 errors?
    * bug fixes, but now has an invalid memory access :(
    * added O3, now has insufficient memory access
    * begin sync with master
    * update to match latest code, new errors
    * fixed it!
    * fix for loop conditionals, increase result size
    * fix current workflow errors
    * attempt a llama.swiftui workflow
    * Update .github/workflows/build.yml
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Yazan Agha-Schrader  09e3b50f62  fix wrong formattings  2023-11-27 15:54:21 +01:00
Yazan Agha-Schrader  cf8cb0d303  fix multi-modal-selection  2023-11-27 15:05:23 +01:00
Yazan Agha-Schrader  49d7c07210  Update README.md (add description)  2023-11-27 14:23:51 +01:00
Yazan Agha-Schrader  1bb2df7367  Update README.md (add pictures of the ui)  2023-11-27 14:22:31 +01:00
Yazan Agha-Schrader  25ed0c4f6b  add ui and tui pics  2023-11-27 14:18:58 +01:00
Yazan Agha-Schrader  1bc9ca6a9c  add ui and tui pics  2023-11-27 14:17:04 +01:00
Yazan Agha-Schrader  a28935febe  Update README.md  2023-11-27 14:14:46 +01:00
Yazan Agha-Schrader  ca22eb6cc7  Merge pull request #1 from mounta11n/server-ui-improvements (Server UI improvements)  2023-11-27 14:11:48 +01:00
Yazan Agha-Schrader  e7cfe1f5d9  add favicon  2023-11-27 13:58:54 +01:00
Yazan Agha-Schrader  9abb31011b  Update index.html (add atlas)  2023-11-27 13:47:08 +01:00
Yazan Agha-Schrader  4d15130fda  add start script  2023-11-27 13:06:27 +01:00
Yazan Agha-Schrader  2566e53945  ic  2023-11-27 11:33:06 +01:00
Jared Van Bortel  f3b269813f  ggml : fix -Warray-bounds warning with gcc (#4231)  2023-11-26 22:58:43 -05:00
Georgi Gerganov  3e73d31d9c  lookahead : support -n -1 infinite generation  2023-11-26 21:52:23 +02:00
Georgi Gerganov  9656026b53  readme : update hot topics  2023-11-26 20:42:51 +02:00
lookahead : add example for lookahead decoding (#4207)
* lookahead : init

* lookahead : generate and store n-grams

* lookahead : use loop instead recursion to generate n-grams

* lookahead : initial working implementation

* lookahead : filter repeating n-grams

* lookahead : use deterministic init

* lookahead : add to Makefile

* lookahead : fix a bug in the seq_id of the lookahead tokens

* lookahead : add comments

---------

Co-authored-by: slaren <slarengh@gmail.com>
2023-11-26 20:33:07 +02:00
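The PR above mentions generating, storing, and deduplicating n-grams, which lookahead decoding uses to propose draft continuations from text the model has already produced. A minimal Python sketch of that n-gram pool idea, not the actual llama.cpp implementation (names and the keying scheme are illustrative):

```python
def collect_ngrams(tokens, n=3):
    """Collect distinct n-grams from a token sequence, keyed by
    their first token; a set deduplicates repeating n-grams."""
    ngrams = {}
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        # store the continuation (n-1 tokens) under the leading token
        ngrams.setdefault(gram[0], set()).add(gram[1:])
    return ngrams

tokens = [1, 2, 3, 1, 2, 3, 4]
pool = collect_ngrams(tokens, n=3)
# after token 1 the pool proposes only {(2, 3)} — the repeat is filtered
```

When the next token matches a pool key, the stored continuations can be verified in a single batched forward pass instead of one token at a time.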
Xiao-Yong Jin  22da05536f  metal : fix yarn (#4220)  2023-11-26 10:30:02 +02:00
    get the correct n_orig_ctx in metal
Galunid  1ddb52ec38  scripts : Use mmap in torch load (#4202)  2023-11-25 22:45:02 +01:00
    * Use mmap in torch load, prefer .bin files when loading
    * Revert .bin > .safetensors preference
Marcus Dunn  f837c3a992  llama : grammar reserve space in decode_utf8 (#4210)  2023-11-25 18:58:23 +02:00
    * reserve space for codepoints
    * improvement for the appended 0
crasm  3014b5415d  Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (#4189)  2023-11-25 10:47:07 -05:00
Georgi Gerganov  04814e718e  readme : update hot topics  2023-11-25 12:02:13 +02:00
server : OAI API compatibility (#4198)
* Add openai-compatible POST /v1/chat/completions API endpoint to server example

* fix code style

* Update server README.md

* Improve server README.md

* Fix server.cpp code style according to review

* server : some style changes

* server : indentation

* server : enable special tokens during tokenization by default

* server : minor code style

* server : change random string generator

* straightforward /v1/models endpoint

---------

Co-authored-by: kir-gadjello <111190790+kir-gadjello@users.noreply.github.com>
Co-authored-by: Tobi Lütke <tobi@Tobis-MacBook-Pro.local>
2023-11-25 11:29:06 +02:00
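The PR above adds an OpenAI-compatible POST /v1/chat/completions endpoint to the server example. A hedged Python sketch of what a client request might look like; the host, port, and model name are placeholders, and the field names follow the OpenAI chat completions wire format rather than anything specific to this PR:

```python
import json
import urllib.request

# Build a chat-completions request body in the OpenAI wire format.
payload = {
    "model": "local-model",  # placeholder; the server uses its loaded model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # assumed default port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to send the request against a running server:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the endpoint mirrors the OpenAI format, existing OpenAI client libraries can usually be pointed at the local server by overriding their base URL.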
slaren  e9c13ff781  llama : set metal log callback correctly (#4204)  2023-11-24 18:10:01 +01:00
slaren  8a052c131e  ggml-cuda : support stablelm rope (#4156)  2023-11-24 18:04:31 +01:00
    * ggml-cuda : support stablelm rope
    * remove unused freq_base kernel parameter
    * add n_dims parameter to llm_build_k_shift, default to n_rot via overload
    * llama : fix llm_build_k_shift args
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>