Commit graph

2783 commits

Author SHA1 Message Date
Sigbjørn Skjæret
4bcd1d8a74
test-- 2024-05-05 00:05:35 +02:00
Sigbjørn Skjæret
94ebaae022
test++ 2024-05-05 00:04:40 +02:00
Sigbjørn Skjæret
840986bd4a
event_name is pull_request_target 2024-05-05 00:01:59 +02:00
Sigbjørn Skjæret
06e9307ad6
test-- 2024-05-04 23:41:25 +02:00
Sigbjørn Skjæret
ab57af872d
test++ 2024-05-04 23:39:00 +02:00
Sigbjørn Skjæret
c3d9d7040a
correct github.event usage 2024-05-04 23:35:32 +02:00
Sigbjørn Skjæret
5e8ca26f8b
remove test condition 2024-05-04 23:34:16 +02:00
Sigbjørn Skjæret
dfb33e8f7a
correct github.event usage 2024-05-04 23:30:47 +02:00
Sigbjørn Skjæret
acd1d89124
this is driving me crazy 2024-05-04 23:00:01 +02:00
Sigbjørn Skjæret
3a394659f3
test-- 2024-05-04 22:48:39 +02:00
Sigbjørn Skjæret
fb7304b275
do debug where we can get logs 2024-05-04 22:45:11 +02:00
Sigbjørn Skjæret
8122c9484c
test++ 2024-05-04 22:30:41 +02:00
Sigbjørn Skjæret
29c30faffa
remove debug 2024-05-04 22:29:30 +02:00
Sigbjørn Skjæret
f15f9d9ee4
test-- 2024-05-04 22:24:59 +02:00
Sigbjørn Skjæret
86394bf8e9
enable actions debug 2024-05-04 22:24:23 +02:00
Sigbjørn Skjæret
bde2fc7dc2
test++ 2024-05-04 21:44:44 +02:00
Sigbjørn Skjæret
2d4be61694
style++ 2024-05-04 20:05:50 +02:00
Sigbjørn Skjæret
c4ae6c1cc9
ternary won't work 2024-05-04 20:02:17 +02:00
Sigbjørn Skjæret
f006b5ca5e
more readable as multi-line 2024-05-04 19:28:45 +02:00
Sigbjørn Skjæret
b259634be6
check owner on push also 2024-05-04 14:26:17 +02:00
Sigbjørn Skjæret
d686299124
only check owner on schedule event 2024-05-04 12:08:38 +02:00
Sigbjørn Skjæret
abf0ff0d2a
Disable benchmark on forked repo 2024-05-02 04:52:33 +02:00
Georgi Gerganov
f4ab2a4147
llama : fix BPE pre-tokenization (#6920)
* merged the changes from deepseeker models to main branch

* Moved regex patterns to unicode.cpp and updated unicode.h

* Moved header files

* Resolved issues

* added and refactored unicode_regex_split and related functions

* Updated/merged the deepseek coder pr

* Refactored code

* Adding unicode regex mappings

* Adding unicode regex function

* Added needed functionality, testing remains

* Fixed issues

* Fixed issue with gpt2 regex custom preprocessor

* unicode : fix? unicode_wstring_to_utf8

* lint : fix whitespaces

* tests : add tokenizer tests for numbers

* unicode : remove redundant headers

* tests : remove and rename tokenizer test scripts

* tests : add sample usage

* gguf-py : reader prints warnings on duplicate keys

* llama : towards llama3 tokenization support (wip)

* unicode : shot in the dark to fix tests on Windows

* unicode : first try custom implementations

* convert : add "tokenizer.ggml.pre" GGUF KV (wip)

* llama : use new pre-tokenizer type

* convert : fix pre-tokenizer type writing

* lint : fix

* make : add test-tokenizer-0-llama-v3

* wip

* models : add llama v3 vocab file

* llama : adapt punctuation regex + add llama 3 regex

* minor

* unicode : set bomb

* unicode : set bomb

* unicode : always use std::wregex

* unicode : support \p{N}, \p{L} and \p{P} natively

* unicode : try fix windows

* unicode : category support via std::regex

* unicode : clean-up

* unicode : simplify

* convert : add convert-hf-to-gguf-update.py

ggml-ci

* lint : update

* convert : add falcon

ggml-ci

* unicode : normalize signatures

* lint : fix

* lint : fix

* convert : remove unused functions

* convert : add comments

* convert : exercise contractions

ggml-ci

* lint : fix

* cmake : refactor test targets

* tests : refactor vocab tests

ggml-ci

* tests : add more vocabs and tests

ggml-ci

* unicode : cleanup

* scripts : ignore new update script in check-requirements.sh

* models : add phi-3, mpt, gpt-2, starcoder

* tests : disable obsolete

ggml-ci

* tests : use faster bpe test

ggml-ci

* llama : more prominent warning for old BPE models

* tests : disable test-tokenizer-1-bpe due to slowness

ggml-ci

---------

Co-authored-by: Jaggzh <jaggz.h@gmail.com>
Co-authored-by: Kazim Abrar Mahi <kazimabrarmahi135@gmail.com>
2024-04-29 16:58:41 +03:00
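
The PR above introduces a custom unicode_regex_split with native handling of the \p{L}, \p{N} and \p{P} classes. As a rough illustration only (not the project's actual implementation), splitting text into runs of the same coarse category can be sketched like this:

```cpp
#include <cstddef>
#include <cwctype>
#include <string>
#include <vector>

// Toy splitter: group adjacent codepoints of the same coarse category,
// roughly mirroring \p{L} / \p{N} / \p{P} based splitting. The project's
// unicode_regex_split uses full Unicode tables and model-specific patterns.
static int coarse_category(wchar_t c) {
    if (std::iswalpha(c)) return 0;  // ~ \p{L}
    if (std::iswdigit(c)) return 1;  // ~ \p{N}
    if (std::iswpunct(c)) return 2;  // ~ \p{P}
    return 3;                        // whitespace, symbols, everything else
}

static std::vector<std::wstring> split_by_category(const std::wstring & text) {
    std::vector<std::wstring> out;
    for (std::size_t i = 0; i < text.size(); ) {
        std::size_t j = i + 1;
        while (j < text.size() && coarse_category(text[j]) == coarse_category(text[i])) {
            ++j;
        }
        out.emplace_back(text.substr(i, j - i));
        i = j;
    }
    return out;
}
```

The actual pre-tokenizer pattern is selected per model via the new "tokenizer.ggml.pre" GGUF key rather than hard-coded.
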
David Renshaw
3f167476b1
sampling : use std::random_device{}() for default random seed (#6962) 2024-04-29 16:35:45 +03:00
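
A minimal sketch of the change above, assuming 0xFFFFFFFF is the "seed not set" sentinel (the project's actual constant may differ):

```cpp
#include <cstdint>
#include <random>

// Fall back to fresh entropy instead of a fixed default seed.
uint32_t resolve_seed(uint32_t requested) {
    const uint32_t SEED_NOT_SET = 0xFFFFFFFF;  // assumed sentinel value
    if (requested == SEED_NOT_SET) {
        return std::random_device{}();         // non-deterministic per run
    }
    return requested;                          // explicit seeds stay reproducible
}
```
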
Christian Zhou-Zheng
3055a41805
convert : fix conversion of some BERT embedding models (#6937) 2024-04-29 16:34:41 +03:00
Przemysław Pawełczyk
577277ffd2
make : change GNU make default CXX from g++ to c++ (#6966) 2024-04-29 16:08:20 +03:00
Przemysław Pawełczyk
ca7f29f568
ci : add building in MSYS2 environments (Windows) (#6967) 2024-04-29 15:59:47 +03:00
Johannes Gäßler
c4f708a93f
llama : fix typo LAMMAFILE -> LLAMAFILE (#6974) 2024-04-29 15:36:22 +03:00
DAN™
e00b4a8f81
Fix more int overflow during quant (PPL/CUDA). (#6563)
* Fix more int overflow during quant.

* Fix some more int overflow in softmax.

* Revert back to int64_t.
2024-04-29 00:38:44 +02:00
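
The fix above widens element-count arithmetic to 64 bits; a toy illustration (the dimensions are made up) of why the promotion must happen before the multiplication:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Their product (~3.2e9) does not fit in a 32-bit int, so `ne0 * ne1`
    // computed in int would overflow. Casting one operand to int64_t first,
    // as the fix does, keeps the element count exact.
    const int ne0 = 64000;
    const int ne1 = 50000;
    const int64_t n_elements = (int64_t) ne0 * ne1;  // 3'200'000'000, safe
    std::printf("n_elements = %lld\n", (long long) n_elements);
    return 0;
}
```
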
Xuan Son Nguyen
7bb36ccf91
gguf : enforce that tensor names are unique (#6905)
* not allow adding duplicated tensor name

* no duplicated tensor while reading gguf

* typo

* throw exception inside llama_model_loader

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-04-28 17:36:18 +02:00
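
A minimal sketch of the duplicate-name guard described above, using an std::unordered_set and throwing as the loader change does (the function name is illustrative, not the gguf API):

```cpp
#include <stdexcept>
#include <string>
#include <unordered_set>
#include <vector>

// Reject duplicate tensor names while reading a model file.
void check_unique_tensor_names(const std::vector<std::string> & names) {
    std::unordered_set<std::string> seen;
    for (const std::string & name : names) {
        if (!seen.insert(name).second) {
            throw std::runtime_error("duplicate tensor name: " + name);
        }
    }
}
```

Throwing from inside the loader lets callers surface a clear error instead of silently overwriting one tensor with another.
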
Neo Zhang
ce023f6f2f
add device version in device list (#6959)
Co-authored-by: arthw <>
2024-04-28 22:40:31 +08:00
github-actions[bot]
6e472f58e4
flake.lock: Update
Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/5c24cf2f0a12ad855f444c30b2421d044120c66f?narHash=sha256-XtTSSIB2DA6tOv%2Bl0FhvfDMiyCmhoRbNB%2B0SeInZkbk%3D' (2024-04-19)
  → 'github:NixOS/nixpkgs/7bb2ccd8cdc44c91edba16c48d2c8f331fb3d856?narHash=sha256-Drmja/f5MRHZCskS6mvzFqxEaZMeciScCTFxWVLqWEY%3D' (2024-04-25)
2024-04-28 11:12:50 +00:00
mgroeber9110
4dba7e8114
Replace "alternative" boolean operator in conditional compilation directive (#6949) 2024-04-27 21:02:06 +02:00
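
The commit above replaces an alternative boolean token in a preprocessor conditional; a hedged illustration of the portable spelling (the macro names are made up):

```cpp
// `and` / `or` are valid ISO C++ alternative tokens, but some preprocessors
// (e.g. MSVC's traditional one) do not accept them inside #if, so && / || is
// the portable spelling.
#if defined(__AVX2__) && defined(__F16C__)      // portable
    #define FAST_PATH_AVAILABLE 1
#else
    #define FAST_PATH_AVAILABLE 0
#endif
// #if defined(__AVX2__) and defined(__F16C__)  // may fail to preprocess on MSVC
```
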
Pierrick Hymbert
b7368332e2
ci: server: tests python env on github container ubuntu latest / fix n_predict (#6935)
* ci: server: fix python env

* ci: server: fix server tests after #6638

* ci: server: fix windows is not building PR branch
2024-04-27 17:50:48 +02:00
agray3
928e0b7013
Reset schedule earlier to allow overlap with ggml graph computation on device (#6933)
* Reset schedule earlier to allow overlap with graph computation on device
2024-04-26 20:08:30 +02:00
Pierrick Hymbert
0c4d489e29
quantize: add imatrix and dataset metadata in GGUF (#6658)
* imatrix: save the dataset file used in the output file

* llama: support kv overrides type string string

* common: factorize KV Overrides parsing between common and server

* quantize: add imatrix n entries and dataset KV metadata
quantize: factorize KV Overrides parsing between common
#6656

* llama: remove kv override str_value initialization as it does not compile on some toolchain

* quantize: add imatrix m_last_call as `quantize.imatrix.chunks_count`

* quantize: add imatrix filename in KV

* llama: add llama_model_kv_override_free

* common: add llama_model_kv_override_free
common: free kv override if used after model loading

* llama: finally move the string KV override value to the stack

* llama : minor

* no need to add a NUL to the std::vector, std::string can be initialized from a pair of iterators.

Co-authored-by: slaren <slarengh@gmail.com>

* kv override: ensure string termination

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
2024-04-26 20:06:33 +02:00
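
One recurring theme in the PR above is keeping the string KV override value on the stack and guaranteeing NUL termination. A hypothetical sketch of that pattern (the field sizes and names are assumptions, not llama.h's actual definition):

```cpp
#include <cstring>
#include <string>

// The string value lives in the struct itself, so there is nothing to free
// after model loading; 128 is an assumed buffer size.
struct kv_override_sketch {
    char key[128];
    char val_str[128];
};

void set_str_override(kv_override_sketch & kvo, const std::string & key, const std::string & value) {
    std::strncpy(kvo.key, key.c_str(), sizeof(kvo.key) - 1);
    kvo.key[sizeof(kvo.key) - 1] = '\0';
    std::strncpy(kvo.val_str, value.c_str(), sizeof(kvo.val_str) - 1);
    kvo.val_str[sizeof(kvo.val_str) - 1] = '\0';  // terminated even on truncation
}
```
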
slaren
017e6999b5
add basic tensor data validation function (#6884)
* add basic tensor data validation function

* add --check-tensors command line argument

tensor validation is disabled by default and can be enabled by adding
`--check-tensors` to the command line arguments.

quantize always validates tensors.
2024-04-26 18:39:58 +02:00
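
A minimal sketch of what "basic tensor data validation" can mean for fp32 data (the real check also covers quantized types; the function name here is illustrative):

```cpp
#include <cmath>
#include <cstddef>

// Scan a float buffer for NaN/Inf values, the kind of corruption the
// `--check-tensors` option is meant to catch at load time.
bool tensor_data_is_valid(const float * data, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        if (std::isnan(data[i]) || std::isinf(data[i])) {
            return false;
        }
    }
    return true;
}
```
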
slaren
e2764cd7ca
gguf : fix mismatch between alloc and free functions (#6929) 2024-04-26 18:07:42 +03:00
Justine Tunney
4b1c3c98b4
llamafile : use 64-bit integers in sgemm (#6928) 2024-04-26 17:05:33 +03:00
Pierrick Hymbert
bbe3c6e761
ci: server: fix python installation (#6925) 2024-04-26 12:27:25 +02:00
Pierrick Hymbert
7f5ff558ee
server: stop generation at n_ctx_train if n_predict is not set (#6638)
* server: cap n_predict if not set to n_ctx_train

* server: fix infinite loop

* server: infinite loop, move in process_token
server: infinite loop: set stop limit to true

* minor: spaces

* minor: spaces

* server: include prompt tokens in the EOS limit
2024-04-26 12:15:30 +02:00
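
A hedged sketch of the stopping rule described above, with illustrative names rather than the server's actual fields:

```cpp
#include <cstdint>

// If the client set n_predict, honor it; otherwise stop once prompt plus
// generated tokens reach the model's training context size (the "EOS limit"
// that now includes the prompt tokens).
bool should_stop(int32_t n_predict, int32_t n_prompt, int32_t n_generated, int32_t n_ctx_train) {
    if (n_predict > 0) {
        return n_generated >= n_predict;           // explicit user limit
    }
    return n_prompt + n_generated >= n_ctx_train;  // implicit cap, no infinite loop
}
```
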
Pierrick Hymbert
9e4e077ec5
ci: server: fix python installation (#6922) 2024-04-26 11:11:51 +02:00
Georgi Gerganov
83b72cb086
Merge pull request from GHSA-p5mv-gjc5-mwqv
* always use calloc

clamp n_kv on failure to read a kv

* ggml : alternative ctx->header.n_kv update

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-04-26 10:41:53 +03:00
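
The hardening pattern in the advisory fix above, zero-initialize with calloc and clamp the recorded KV count when a read fails, sketched with hypothetical types:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>

struct kv_entry { char * key; uint32_t type; };

int main() {
    uint64_t n_kv = 1000;  // count taken from an untrusted file header
    kv_entry * kv = (kv_entry *) calloc(n_kv, sizeof(kv_entry));  // zeroed, never garbage
    if (!kv) return 1;
    for (uint64_t i = 0; i < n_kv; ++i) {
        bool ok = false;   // pretend reading entry i failed here
        if (!ok) {
            n_kv = i;      // clamp: only the first i entries are valid
            break;
        }
    }
    std::printf("valid kv entries: %llu\n", (unsigned long long) n_kv);
    free(kv);
    return 0;
}
```
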
Pierrick Hymbert
d4a9afc100
ci: server: fix python installation (#6918) 2024-04-26 09:27:49 +02:00
Pierrick Hymbert
7d641c26ac
ci: fix concurrency for pull_request_target (#6917) 2024-04-26 09:26:59 +02:00
Pierrick Hymbert
5790c8dac1
bench: server add stop word for PHI-2 (#6916) 2024-04-26 09:26:16 +02:00
vik
46e12c4692
llava : add support for moondream vision language model (#6899)
* add support for moondream vision language model

This required making the following changes to the CLIP model:

1. Support for patch embedding bias.
2. Make class embedding and pre-layernorm optional.
3. Add support for post-layernorm.

* Update examples/llava/clip.cpp

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-04-25 22:38:31 +03:00
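
A hedged sketch of the "optional block" idea in the numbered list above, where a stage runs only if its tensor is present (the names are hypothetical, not clip.cpp's):

```cpp
#include <cstddef>

struct vision_tower_sketch {
    const float * patch_bias      = nullptr;  // 1. optional patch embedding bias
    const float * class_embedding = nullptr;  // 2. class embedding now optional
    const float * pre_ln_weight   = nullptr;  // 2. pre-layernorm now optional
    const float * post_ln_weight  = nullptr;  // 3. new optional post-layernorm
};

static void add_bias(float * x, const float * bias, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) x[i] += bias[i];
}

void forward_embeddings(float * x, std::size_t n, const vision_tower_sketch & vt) {
    if (vt.patch_bias) add_bias(x, vt.patch_bias, n);  // applied only if present
    // class embedding and pre-/post-layernorm are guarded by the same kind of
    // null check instead of being assumed to exist.
}
```
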
Georgi Gerganov
dba497e0c1
cmake : restore LLAMA_LLAMAFILE_DEFAULT 2024-04-25 21:37:27 +03:00
Georgi Gerganov
fa0b4ad252
cmake : remove obsolete ANDROID check 2024-04-25 18:59:51 +03:00
slaren
d6e1d44f16
llama : synchronize before get/set session data (#6911) 2024-04-25 17:59:03 +02:00