Commit graph

2773 commits

Author · SHA1 · Message · Date
Georgi Gerganov
8791e94e3c
lint : fix 2024-04-26 21:12:05 +03:00
Georgi Gerganov
1b9b79dd14
convert : fix pre-tokenizer type writing 2024-04-26 20:55:14 +03:00
Georgi Gerganov
43e12ce8e5
llama : use new pre-tokenizer type 2024-04-26 20:08:57 +03:00
Georgi Gerganov
9b4d63ae53
convert : add "tokenizer.ggml.pre" GGUF KV (wip) 2024-04-26 19:21:55 +03:00
Georgi Gerganov
e3f6dc7409
Merge branch 'master' into gg/bpe-preprocess 2024-04-26 18:08:40 +03:00
slaren
e2764cd7ca
gguf : fix mismatch between alloc and free functions (#6929) 2024-04-26 18:07:42 +03:00
Justine Tunney
4b1c3c98b4
llamafile : use 64-bit integers in sgemm (#6928) 2024-04-26 17:05:33 +03:00
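The 64-bit switch above guards against a classic overflow: a flat matrix offset computed in 32-bit `int` wraps once it exceeds `INT_MAX`, which happens well within the sizes sgemm handles. A minimal sketch of the idea (hypothetical function names, not the actual llamafile sgemm code):

```cpp
#include <cstdint>

// With 32-bit ints, ldc * j overflows once the product exceeds INT_MAX
// (e.g. a 50000 x 50000 matrix) -- undefined behavior in C++.
// Shown only for contrast; never call it with large arguments.
int32_t offset_i32(int32_t ldc, int32_t i, int32_t j) {
    return ldc * j + i;
}

// Widening to int64_t before the multiply keeps the offset exact
// for any realistic tensor size.
int64_t offset_i64(int64_t ldc, int64_t i, int64_t j) {
    return ldc * j + i;
}
```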
Georgi Gerganov
e9891769ff
unicode : first try custom implementations 2024-04-26 15:09:07 +03:00
Georgi Gerganov
e8c206be61
unicode : shot in the dark to fix tests on Windows 2024-04-26 14:57:12 +03:00
Georgi Gerganov
4907e41aa7
llama : towards llama3 tokenization support (wip) 2024-04-26 14:55:37 +03:00
Georgi Gerganov
ed42711b90
gguf-py : reader prints warnings on duplicate keys 2024-04-26 14:32:22 +03:00
Georgi Gerganov
e1b2bf783e
tests : add sample usage 2024-04-26 13:43:54 +03:00
Georgi Gerganov
aeafb43ed7
tests : remove and rename tokenizer test scripts 2024-04-26 13:39:03 +03:00
Georgi Gerganov
d999cf65c5
unicode : remove redundant headers 2024-04-26 13:29:48 +03:00
Pierrick Hymbert
bbe3c6e761
ci: server: fix python installation (#6925) 2024-04-26 12:27:25 +02:00
Georgi Gerganov
7a44e44342
tests : add tokenizer tests for numbers 2024-04-26 13:21:28 +03:00
Pierrick Hymbert
7f5ff558ee
server: stop generation at n_ctx_train if n_predict is not set (#6638)
* server: cap n_predict if not set to n_ctx_train

* server: fix infinite loop

* server: infinite loop, move in process_token
server: infinite loop: set stop limit to true

* minor: spaces

* minor: spaces

* server: include prompt tokens in the EOS limit
2024-04-26 12:15:30 +02:00
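The stopping policy described in the bullets above — cap generation at `n_ctx_train` when `n_predict` is unset, counting prompt tokens toward the limit — can be sketched as a single predicate. This is a hedged illustration of the policy, not the server's actual code; the function name and the `-1 = unset` convention are assumptions:

```cpp
#include <cstdint>

// Sketch of the stop condition: with an explicit n_predict, stop after
// that many generated tokens; otherwise stop once prompt + generated
// tokens reach the model's training context size.
bool should_stop(int32_t n_predict, int32_t n_ctx_train,
                 int32_t n_prompt, int32_t n_generated) {
    if (n_predict > 0) {
        return n_generated >= n_predict;           // explicit user limit
    }
    // n_predict unset: include prompt tokens in the limit,
    // preventing the infinite-generation loop the PR describes
    return n_prompt + n_generated >= n_ctx_train;
}
```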
Georgi Gerganov
c56e19db4b
lint : fix whitespaces 2024-04-26 12:58:07 +03:00
Georgi Gerganov
06d3e693db
unicode : fix? unicode_wstring_to_utf8 2024-04-26 12:55:11 +03:00
Pierrick Hymbert
9e4e077ec5
ci: server: fix python installation (#6922) 2024-04-26 11:11:51 +02:00
Kazim Abrar Mahi
36d983262e
Fixed issue with gpt2 regex custom preprocessor 2024-04-26 11:43:29 +03:00
Kazim Abrar Mahi
753580360b
Fixed issues 2024-04-26 11:43:29 +03:00
Kazim Abrar Mahi
feeaf4f39c
Added needed functionality, testing remains 2024-04-26 11:43:29 +03:00
Kazim Abrar Mahi
7e308ed212
Adding unicode regex function 2024-04-26 11:43:29 +03:00
Kazim Abrar Mahi
a5710a4101
Adding unicode regex mappings 2024-04-26 11:43:29 +03:00
Kazim Abrar Mahi
4c3e882a85
Refactored code 2024-04-26 11:43:29 +03:00
Jaggzh
c8e7d9521d
Updated/merged the deepseek coder pr 2024-04-26 11:43:29 +03:00
Kazim Abrar Mahi
4056dc5b1e
added and refactored unicode_regex_split and related functions 2024-04-26 11:43:28 +03:00
Kazim Abrar Mahi
1c924e4b35
Resolved issues 2024-04-26 11:43:28 +03:00
Kazim Abrar Mahi
54f93eb50b
Moved header files 2024-04-26 11:43:28 +03:00
Kazim Abrar Mahi
d2cfc2225f
Moved regex patterns to unicode.cpp and updated unicode.h 2024-04-26 11:43:28 +03:00
Jaggzh
6fbab2dbc8
merged the changes from deepseeker models to main branch 2024-04-26 11:43:08 +03:00
Georgi Gerganov
83b72cb086
Merge pull request from GHSA-p5mv-gjc5-mwqv
* always use calloc

clamp n_kv on failure to read a kv

* ggml : alternative ctx->header.n_kv update

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-04-26 10:41:53 +03:00
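The security fix above ("always use calloc", "clamp n_kv on failure to read a kv") is an instance of a general hardening pattern for parsers of untrusted files: zero-initialize the array so never-filled slots are safe to touch or free, and trust only the count of entries actually parsed, not the count the header declares. A hedged sketch of the pattern — simplified integer "kv pairs", not the real gguf structures:

```cpp
#include <cstdlib>

struct kv_pair { int key; int value; };

// data/n_avail simulate the file contents; n_declared is the untrusted
// count from the header. Returns the clamped number of entries read.
size_t read_kv_pairs(const int * data, size_t n_avail,
                     size_t n_declared, kv_pair ** out) {
    // calloc: every slot starts zeroed, so a truncated read never
    // leaves uninitialized entries behind
    kv_pair * kv = (kv_pair *) calloc(n_declared, sizeof(kv_pair));
    if (!kv) { *out = nullptr; return 0; }
    size_t n = 0;
    for (; n < n_declared && 2*n + 1 < n_avail; ++n) {
        kv[n].key   = data[2*n];
        kv[n].value = data[2*n + 1];
    }
    *out = kv;
    return n; // clamped: callers must use this, not n_declared
}
```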
Pierrick Hymbert
d4a9afc100
ci: server: fix python installation (#6918) 2024-04-26 09:27:49 +02:00
Pierrick Hymbert
7d641c26ac
ci: fix concurrency for pull_request_target (#6917) 2024-04-26 09:26:59 +02:00
Pierrick Hymbert
5790c8dac1
bench: server add stop word for PHI-2 (#6916) 2024-04-26 09:26:16 +02:00
vik
46e12c4692
llava : add support for moondream vision language model (#6899)
* add support for moondream vision language model

This required making the following changes to the CLIP model:

1. Support for patch embedding bias.
2. Make class embedding and pre-layernorm optional.
3. Add support for post-layernorm.

* Update examples/llava/clip.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-25 22:38:31 +03:00
Georgi Gerganov
dba497e0c1
cmake : restore LLAMA_LLAMAFILE_DEFAULT 2024-04-25 21:37:27 +03:00
Georgi Gerganov
fa0b4ad252
cmake : remove obsolete ANDROID check 2024-04-25 18:59:51 +03:00
slaren
d6e1d44f16
llama : synchronize before get/set session data (#6911) 2024-04-25 17:59:03 +02:00
Georgi Gerganov
853d06ffe2
ci : tmp disable slow tests 2024-04-25 17:06:27 +03:00
BarfingLemurs
3fe0596c18
readme : update model list (#6908)
* Update README.md

* missing space

* llama3 !
2024-04-25 16:52:28 +03:00
slaren
0ead1f1072
llama : check that all the tensor data is in the model file (#6885)
* llama : check that all the tensor data is in the model file

* also check for unsigned overflow
2024-04-25 15:23:47 +02:00
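The two checks named in the commit above — every tensor's data range lies inside the file, and the range arithmetic itself cannot wrap — follow a standard validation pattern: compare against `file_size - offset` instead of computing `offset + size`, so the unsigned addition can never overflow. A sketch of the pattern (names are illustrative, not the llama.cpp internals):

```cpp
#include <cstdint>

// A tensor's [offset, offset + size) byte range must fit in the file.
// Rearranging the comparison avoids ever forming offset + size, which
// could silently wrap for hostile values.
bool tensor_in_file(uint64_t offset, uint64_t size, uint64_t file_size) {
    if (offset > file_size) return false;
    if (size > file_size - offset) return false; // overflow-safe bound check
    return true;
}
```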
Georgi Gerganov
51543729ff
ggml : fix redefinition of vaddvq_f32 for 32-bit ARM (#6906) 2024-04-25 15:48:25 +03:00
Daniel Bevenius
4ab99d8d47
clip : rename lerp function to avoid conflict (#6894)
This commit renames the lerp (linear interpolation) function in clip.cpp
to avoid a conflict with the lerp function in the <cmath> standard C++
library when using c++20.

The motivation for this change is to enable projects that use c++20 to
be able to compile clip.cpp without having to resort to patching it. The
lerp function was added to cmath in version C++20 (202002L) and is why
this is not causing any issue at the moment as C++11/C++17 is currently
used by llama.cpp.

I realize that llama.cpp uses either C++11 (or C++17 in the case for
SYCL) but wanted to ask if this would be an acceptable change just the
same.

Refs: https://en.cppreference.com/w/cpp/numeric/lerp

Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-04-25 15:38:14 +03:00
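The conflict the commit above resolves is easy to reproduce: C++20's `<cmath>` declares `std::lerp`, so a project-local free function of the same name can become ambiguous in unqualified calls. Renaming the local helper, as done for clip.cpp, sidesteps the collision. A sketch with an assumed new name (the actual renamed identifier may differ):

```cpp
#include <cmath>

// In C++20, <cmath> provides std::lerp; a local function also named
// `lerp` can collide with it under unqualified lookup or
// `using namespace std`. A project-prefixed name avoids the clash.
static float clip_lerp(float a, float b, float t) {
    return a + t * (b - a);
}
```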
Georgi Gerganov
54770413c4
ggml : fix MIN / MAX macros (#6904)
ggml-ci
2024-04-25 15:12:28 +03:00
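The MIN / MAX macro fix above touches a well-known C/C++ pitfall: without parentheses around the arguments and the whole expansion, operator precedence silently changes the meaning inside larger expressions. Illustrative only — the actual ggml change may differ in detail:

```cpp
// Unparenthesized macro: `1 + BAD_MIN(0, 2)` expands to
// `1 + 0 < 2 ? 0 : 2`, which groups as `(1 + 0) < 2 ? 0 : 2`.
#define BAD_MIN(a, b)  a < b ? a : b

// Fully parenthesized form: expansion behaves like a function call.
// (Note it still evaluates each argument twice, so side effects in
// arguments remain unsafe.)
#define GOOD_MIN(a, b) (((a) < (b)) ? (a) : (b))
```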
Georgi Gerganov
aa750c1ede
tests : minor bash stuff (#6902)
* tests : minor bash stuff

ggml-ci

* llama : fix build

ggml-ci

* tests : fix CUR_DIR -> ROOT_DIR

ggml-ci

* tests : fix fname

ggml-ci
2024-04-25 14:27:20 +03:00
jiez
1966eb2615
quantize : add '--keep-split' to quantize model into shards (#6688)
* Implement '--keep-split' to quantize model into several shards

* Add test script

* Update examples/quantize/quantize.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Split model correctly even if tensor id is out-of-order

* Update llama_model_quantize_params

* Fix preci failures

---------

Co-authored-by: z5269887 <z5269887@unsw.edu.au>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-25 13:29:35 +03:00
Johannes Gäßler
784e11dea1
README: add graphic for matrix multiplication (#6881) 2024-04-24 21:29:13 +02:00
Douglas Hanley
b4e4b8a935
llama : add llama_get_pooling_type function (#6862)
* add llama_get_pooling_type function

* fix argument name, move with ctx funcs
2024-04-24 16:10:07 +03:00