Commit graph

608 commits

Author SHA1 Message Date
Georgi Gerganov
a4da072d39 llama : fix vram size computation 2023-05-20 15:15:55 +03:00
JohannesGaessler
fadcd583fc Attempt clang-tidy fix 2023-05-20 14:07:46 +02:00
JohannesGaessler
b81f662e9d Loop in llama.cpp, fixed progress callback 2023-05-20 13:42:19 +02:00
JohannesGaessler
fee87f6558 gg rebase fixup 2023-05-20 13:27:21 +02:00
Georgi Gerganov
909acb3e3f Merge branch 'master' into gpu-norms 2023-05-20 13:26:16 +03:00
Georgi Gerganov
a3586c526f cmake : workarounds for cufile when CMake version < 3.25 2023-05-20 13:22:47 +03:00
Georgi Gerganov
3ec7941bad ggml : ggml_mul better broadcast support 2023-05-20 13:14:34 +03:00
Georgi Gerganov
f67bc3c363 llama : code style fixes + progress print fix 2023-05-20 13:14:34 +03:00
JohannesGaessler
ffe9652bc1 GPU weights not in RAM, direct loading with cuFile 2023-05-20 13:14:33 +03:00
Georgi Gerganov
977e74d70e Revert "feature : add blis and other BLAS implementation support (#1502)"
This reverts commit 07e9ace0f9.
2023-05-20 13:11:16 +03:00
Zenix
667c57f11a feature : add blis and other BLAS implementation support (#1502)
* feature: add blis support

* feature: allow all BLA_VENDOR to be assigned in cmake arguments. align with whisper.cpp pr 927

* fix: version detection for BLA_SIZEOF_INTEGER, recover min version of cmake

* Fix typo in INTEGER

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-20 13:11:16 +03:00
Georgi Gerganov
54ec8a963b llama : add llama_init_backend() API (close #1527) 2023-05-20 13:11:16 +03:00
DannyDaemonic
f401d5ffa2 Fix for mingw (#1462) 2023-05-20 13:11:16 +03:00
Maxime
df512bbb49 llama : fix name shadowing and C4146 (#1526)
* Fix name shadowing and C4146

* Fix if macros not using defined when required

* Update llama-util.h

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Update llama-util.h

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Code style

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-20 13:11:16 +03:00
Georgi Gerganov
f14673ad56 llama : fix compile warnings in llama_set_state_data() 2023-05-20 13:11:16 +03:00
Georgi Gerganov
9a7af6c2a5 ggml : fix scalar implementation of Q4_1 dot 2023-05-20 13:11:16 +03:00
Georgi Gerganov
211aa6aff0 ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)
* ggml : use F16 instead of F32 in Q4_0, Q4_1 and Q8_0

* llama : bump LLAMA_FILE_VERSION to 3

* cuda : update Q4 and Q8 dequantize kernels

* ggml : fix AVX dot products

* readme : update performance table + hot topics
2023-05-20 13:11:16 +03:00
Georgi Gerganov
9fd8187215 tests : add missing header 2023-05-20 13:11:16 +03:00
Evan Jones
0226d491af examples : add persistent chat (#1495)
* examples : add persistent chat

* examples : fix whitespace

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-20 13:11:16 +03:00
Jason McCartney
c51c64a8fe main : make reverse prompt option act as a stop token in non-interactive mode (#1032)
* Make reverse prompt option act as a stop token in non-interactive scenarios

* Making requested review changes

* Update gpt_params_parse and fix a merge error

* Revert "Update gpt_params_parse and fix a merge error"

This reverts commit 2bb2ff1748.

* Update gpt_params_parse and fix a merge error take 2
2023-05-20 13:11:16 +03:00
David Kennedy
75c017fc5a readme : adds WizardLM to the list of supported models (#1485) 2023-05-20 13:11:16 +03:00
Georgi Gerganov
6b5776b0a7 minor : fix compile warnings 2023-05-20 13:11:16 +03:00
Erik Scholz
e22541a49e make kv_f16 the default for api users (#1517) 2023-05-20 13:11:16 +03:00
DannyDaemonic
a94b334591 Fixes #1511 lambda issue for w64devkit (mingw) (#1513)
* Fix for w64devkit and mingw
2023-05-20 13:11:16 +03:00
Stephan Walter
d916c5b863 Remove unused n_parts parameter (#1509) 2023-05-20 13:11:16 +03:00
rankaiyx
d5207bf307 benchmark-matmul: Print the average of the test results (#1490) 2023-05-20 13:11:16 +03:00
Tom Jobbins
1af2844e46 convert.py: Support models which are stored in a single pytorch_model.bin (#1469)
* Support models in a single pytorch_model.bin

* Remove spurious line with typo
2023-05-20 13:11:16 +03:00
Ilya Kurdyukov
230018d11c ~7% faster Q5_1 AVX2 code (#1477) 2023-05-20 13:11:16 +03:00
András Salamon
09d82511d4 define default model path once, sync path with readme (#1366) 2023-05-20 13:11:16 +03:00
Georgi Gerganov
ea600071cb Revert "feature : add blis and other BLAS implementation support (#1502)"
This reverts commit 07e9ace0f9.
2023-05-20 12:03:48 +03:00
Zenix
07e9ace0f9 feature : add blis and other BLAS implementation support (#1502)
* feature: add blis support

* feature: allow all BLA_VENDOR to be assigned in cmake arguments. align with whisper.cpp pr 927

* fix: version detection for BLA_SIZEOF_INTEGER, recover min version of cmake

* Fix typo in INTEGER

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-20 12:02:48 +03:00
Georgi Gerganov
ec2e10c444 llama : add llama_init_backend() API (close #1527) 2023-05-20 11:06:37 +03:00
DannyDaemonic
d2c59b8ba4 Fix for mingw (#1462) 2023-05-20 00:40:02 -07:00
Maxime
503db28849 llama : fix name shadowing and C4146 (#1526)
* Fix name shadowing and C4146

* Fix if macros not using defined when required

* Update llama-util.h

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Update llama-util.h

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Code style

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-20 10:22:37 +03:00
Georgi Gerganov
8a203f9fa1 llama : fix compile warnings in llama_set_state_data() 2023-05-20 10:14:43 +03:00
Georgi Gerganov
4fd3e29297 ggml : fix scalar implementation of Q4_1 dot 2023-05-20 10:13:19 +03:00
Georgi Gerganov
2d5db48371 ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)
* ggml : use F16 instead of F32 in Q4_0, Q4_1 and Q8_0

* llama : bump LLAMA_FILE_VERSION to 3

* cuda : update Q4 and Q8 dequantize kernels

* ggml : fix AVX dot products

* readme : update performance table + hot topics
2023-05-19 22:17:18 +03:00
Georgi Gerganov
6986c7835a tests : add missing header 2023-05-19 21:17:28 +03:00
Evan Jones
943e6081cc examples : add persistent chat (#1495)
* examples : add persistent chat

* examples : fix whitespace

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-19 20:39:51 +03:00
Jason McCartney
7694b52b9a main : make reverse prompt option act as a stop token in non-interactive mode (#1032)
* Make reverse prompt option act as a stop token in non-interactive scenarios

* Making requested review changes

* Update gpt_params_parse and fix a merge error

* Revert "Update gpt_params_parse and fix a merge error"

This reverts commit 2bb2ff1748.

* Update gpt_params_parse and fix a merge error take 2
2023-05-19 20:24:59 +03:00
David Kennedy
79e3efb0e9 readme : adds WizardLM to the list of supported models (#1485) 2023-05-19 20:16:30 +03:00
Georgi Gerganov
4b7e245adf minor : fix compile warnings 2023-05-19 20:14:51 +03:00
JohannesGaessler
24d5ddf67c fixup! GPU weights not in RAM, direct loading with cuFile 2023-05-19 10:13:20 +02:00
JohannesGaessler
1bfe5a9886 fixup! GPU weights not in RAM, direct loading with cuFile 2023-05-18 23:57:13 +02:00
JohannesGaessler
fa1a29f36f GPU weights not in RAM, direct loading with cuFile 2023-05-18 20:18:08 +02:00
JohannesGaessler
2365a2a970 CUDA kernel for ggml_mul, norms in VRAM 2023-05-18 20:18:08 +02:00
JohannesGaessler
de65783ba2 Broadcasting for ggml_mul 2023-05-18 20:18:07 +02:00
Erik Scholz
5ea4339273 make kv_f16 the default for api users (#1517) 2023-05-18 19:31:01 +02:00
DannyDaemonic
ee9654138a Fixes #1511 lambda issue for w64devkit (mingw) (#1513)
* Fix for w64devkit and mingw
2023-05-18 19:30:40 +02:00
Stephan Walter
dc271c52ed Remove unused n_parts parameter (#1509) 2023-05-17 22:12:01 +00:00