Commit graph

3155 commits

Author SHA1 Message Date
teleprint-me
5840b6f0b0
refactor: Simplify the get_vocab_base_pre method 2024-05-18 23:59:52 -04:00
teleprint-me
316b404d94
patch: Fix CLI option for generating vocab tests 2024-05-18 23:59:22 -04:00
teleprint-me
da5deebda1
fix: Apply fix to the verbose help description and the vocab test generation option 2024-05-18 23:34:33 -04:00
teleprint-me
ce777c8910
Merge branch 'master' into auto-model-support 2024-05-18 22:46:00 -04:00
teleprint-me
d02a0f42f9
feat: Add vocab generation script 2024-05-18 22:15:12 -04:00
teleprint-me
bd32266c87
feat: Add function for generating vocab script and fix CLI opts 2024-05-18 22:14:58 -04:00
teleprint-me
0479e9695f
patch: Add exception handling for non-existent vocab related files 2024-05-18 22:14:19 -04:00
teleprint-me
4b3735ca50
chore: Remove cluttered vocab files 2024-05-18 22:13:21 -04:00
teleprint-me
1a82573126
feat: Add example script for automating the generation of tokenizer model checksums and tests 2024-05-18 20:49:22 -04:00
teleprint-me
006bb60d27
chore: Fix model path references 2024-05-18 19:20:19 -04:00
fraxy-v
f5bf761747
Capture CUDA logging output (#7298)
* logging: output capture in cuda module

* fix compile error

* fix: vsnprintf terminates output with a 0 byte, so the previous string usage was not correct

* post review

* Update llama.cpp

Co-authored-by: slaren <slarengh@gmail.com>

* Update llama.cpp

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-05-19 00:44:42 +02:00
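
For context on the commit above: llama.cpp exposes a user-installable log callback, and this change routes the CUDA module's output through it as well. A minimal sketch of capturing that output from an application (handler name hypothetical):

```cpp
#include <cstdio>
#include "llama.h"

// Hypothetical handler: receives all library log output, which after the
// commit above includes messages from the CUDA module.
static void my_log_handler(enum ggml_log_level level, const char * text, void * user_data) {
    (void) level; (void) user_data;
    fputs(text, stderr); // or append to a file / ring buffer
}

int main() {
    llama_log_set(my_log_handler, /*user_data=*/nullptr);
    // ... load a model and run inference; CUDA log lines now reach the handler
    return 0;
}
```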
teleprint-me
b6f70b8a0e
chore: Fix line spacing 2024-05-18 16:59:20 -04:00
teleprint-me
832b449cbd
feat: Add pre-tokenizer CLI tooling 2024-05-18 14:33:56 -04:00
teleprint-me
04fb7886c5
chore: Apply isort to package gguf init 2024-05-18 14:33:22 -04:00
teleprint-me
2ef73ee6e4
refactor: Apply SoC for HF requests, vocab, and weights 2024-05-18 13:45:21 -04:00
teleprint-me
5eda2c9485
feat: Add pre-tokenizer logging 2024-05-18 13:21:22 -04:00
Georgi Gerganov
059031b8c4
ci : re-enable sanitizer runs (#7358)
* Revert "ci : temporary disable sanitizer builds (#6128)"

This reverts commit 4f6d1337ca.

* ci : trigger
2024-05-18 18:55:54 +03:00
Georgi Gerganov
511182eabb
android : use "ci-android" branch for CI (#7341)
* android : use "ci-android" branch for CI

* ggml : disable SIMD exp and silu for 32-bit ARM

ggml-ci

* android : do not fetch, use add_subdirectory instead

* cmake : provide binary dir
2024-05-18 20:40:39 +10:00
Johannes Gäßler
133d99c599
CUDA: deduplicate FlashAttention code (#7352) 2024-05-18 12:36:25 +02:00
Johannes Gäßler
cb42c29427
server: correct --threads documentation [no ci] (#7362) 2024-05-18 11:10:47 +02:00
Engininja2
d233b507cd
cuda : add half2 __shfl_xor() for ROCm 5.5 (#7263) 2024-05-18 10:05:17 +02:00
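
ROCm 5.5 ships no `half2` overload of `__shfl_xor()`, so the CUDA code path needs one supplied for HIP builds. A sketch of the usual workaround, moving the `half2` bits through the existing 32-bit integer shuffle (illustrative, not necessarily the exact patch):

```cpp
#include <hip/hip_fp16.h>

// Illustrative: shuffle a half2 across warp lanes by reinterpreting its
// 32 bits as an int, for HIP versions lacking the half2 overload.
static __device__ __forceinline__ half2 __shfl_xor(half2 var, int lane_mask, int width) {
    union { half2 h2; int b32; } u;
    u.h2  = var;
    u.b32 = __shfl_xor(u.b32, lane_mask, width);
    return u.h2;
}
```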
Steffen Röcker
0f98acfac6
llama : add support for larger Granite Code Models (20B, 34B) (#7324)
Tie the weights for ARCH_STARCODER to support the larger Granite code models.
Partially addresses ggerganov/issues/7116

A few things still remain to be fixed.
Currently requires `--override-kv tokenizer.ggml.add_bos_token=bool:false`
2024-05-18 11:04:55 +03:00
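
Until that remaining issue is fixed, running one of these Granite models needs the override on the command line, roughly like this (model path illustrative):

```sh
./main -m granite-34b-code.Q4_K_M.gguf \
    --override-kv tokenizer.ggml.add_bos_token=bool:false \
    -p "def fibonacci(n):"
```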
strawberrymelonpanda
ca57e0f35e
perplexity : ndot progress and show stats with < 100 tasks (#7348)
Fix a floating-point error in ndot printing, and allow final stats to be shown with fewer than 100 tasks for multiple-choice tasks.
2024-05-18 10:57:08 +03:00
0cc4m
c1b295eea5
Update and fix Vulkan soft_max and argsort implementations (#7237)
* Update and fix Vulkan softmax implementation

* Update and fix Vulkan argsort implementation
2024-05-18 08:10:58 +02:00
Brian
de73196344
github-actions-labeler: initial commit (#7330)
* github-actions-labeler: initial commit [no ci]

* github actions: remove priority auto labeling [no ci]
2024-05-18 16:04:23 +10:00
Georgi Gerganov
b49a13dd2f
convert : fix set_vocab_sentencepiece (#6866)
* convert : fix set_vocab_sentencepiece

* Update convert-hf-to-gguf.py
2024-05-18 08:46:20 +03:00
teleprint-me
b2ca23c746
feat: Add method for generating the checksums and writing the results to a json file 2024-05-18 01:46:13 -04:00
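
The checksum method referenced above presumably pairs `hashlib` with `json`, the two imports called out in the refactor commit listed next. A hedged Python sketch of that shape, with hypothetical names and paths:

```python
import hashlib
import json
from pathlib import Path

def generate_checksums(model_dir: Path, output_file: Path) -> None:
    # Hash every tokenizer model file and dump the results to a JSON file.
    checksums = {}
    for path in sorted(model_dir.glob("**/tokenizer*")):
        checksums[path.name] = hashlib.sha256(path.read_bytes()).hexdigest()
    output_file.write_text(json.dumps(checksums, indent=2))
```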
teleprint-me
302258721b
refactor: Apply model schema to tokenizer downloads
- Add imports for json and hashlib
- Add missing models: phi, stablelm, mistral, and mixtral
- Fix constructor logic
- Fix how models are accessed
- Apply model schema to download_model method
2024-05-18 01:26:39 -04:00
teleprint-me
f7515abf49
feat: Add tokenizer types, model types, and model repos 2024-05-18 00:37:19 -04:00
teleprint-me
3ba01c7a0e
chore: Fix spacing 2024-05-18 00:10:42 -04:00
teleprint-me
1a286c8e21
refactor: Clean up variable names and separate concerns when downloading tokenizers 2024-05-17 23:27:30 -04:00
teleprint-me
5c8144e645
feat: Add download_model method and fix references to mitigate confusion 2024-05-17 23:00:12 -04:00
teleprint-me
4790f76740
feat: Add prototype for requesting vocab related files 2024-05-17 21:08:39 -04:00
teleprint-me
98cf788990
patch: Apply minor fixes for handling headers and writing content 2024-05-17 21:07:51 -04:00
slaren
05834841dc
ggml : fix quants nans when all the group weights are very close to zero (#7313) 2024-05-18 02:39:54 +02:00
Engininja2
ef277de2ad
cmake : fix typo in AMDGPU_TARGETS (#7356) 2024-05-18 02:39:25 +02:00
teleprint-me
742abebb39
refactor: Add log for status and fix url path variable name 2024-05-17 20:37:59 -04:00
teleprint-me
ba13d64bb3
feat: Add utils for logging and writing when interacting with HuggingFaceHub 2024-05-17 20:26:21 -04:00
teleprint-me
dbdf6c2b1d
feat: Add prototype for managing huggingface hub content 2024-05-17 20:00:48 -04:00
jaime-m-p
b43272afa2
Unicode codepoint flags for custom regexs (#7245)
* Replace CODEPOINT_TYPE_* with codepoint_flags
* Update and bugfix brute force random test
* Deterministic brute force random test
* Unicode normalization NFD
* Get rid of BOM
2024-05-18 01:09:13 +02:00
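
The `codepoint_flags` in the first bullet replaces the mutually exclusive `CODEPOINT_TYPE_*` values with a bitfield, so one codepoint can carry several properties at once and custom regexes can test combinations cheaply. An illustrative shape (field names are a guess at the idea, not the exact struct):

```cpp
#include <cstdint>

// Illustrative: one packed flag set per Unicode codepoint; a letter can
// simultaneously be lowercase, accented, etc.
struct codepoint_flags {
    uint16_t is_undefined   : 1;
    uint16_t is_number      : 1;
    uint16_t is_letter      : 1;
    uint16_t is_separator   : 1;
    uint16_t is_accent_mark : 1;
    uint16_t is_punctuation : 1;
    uint16_t is_symbol      : 1;
    uint16_t is_control     : 1;
    uint16_t is_whitespace  : 1;
    uint16_t is_lowercase   : 1;
    uint16_t is_uppercase   : 1;
};
```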
Johannes Gäßler
0fc1e820a9
CUDA: faster large batch FA without tensor cores (#7314) 2024-05-17 18:54:52 +02:00
Gavin Zhao
82ca83db3c
ROCm: use native CMake HIP support (#5966)
Supersedes #4024 and #4813.

CMake's native HIP support has become the
recommended way to add HIP code into a project (see
[here](https://rocm.docs.amd.com/en/docs-6.0.0/conceptual/cmake-packages.html#using-hip-in-cmake)).
This PR makes the following changes:

1. The environment variable `HIPCXX` or CMake option
`CMAKE_HIP_COMPILER` should be used to specify the HIP
compiler. Notably this shouldn't be `hipcc`, but ROCm's clang,
which usually resides in `$ROCM_PATH/llvm/bin/clang`. Previously
this was controlled by `CMAKE_C_COMPILER` and `CMAKE_CXX_COMPILER`.
Note that since native CMake HIP support is not yet available on
Windows, on Windows we fall back to the old behavior.

2. CMake option `CMAKE_HIP_ARCHITECTURES` is used to control the
GPU architectures to build for. Previously this was controlled by
`GPU_TARGETS`.

3. Updated the Nix recipe to account for these new changes.

4. The GPU targets to build against in the Nix recipe are now
consistent with the supported GPU targets in nixpkgs.

5. Added CI checks for HIP on both Linux and Windows. On Linux, we test
both the new and old behavior.

The most important part about this PR is the separation of the
HIP compiler and the C/C++ compiler. This allows users to choose
a different C/C++ compiler if desired, compared to the current
situation where when building for ROCm support, everything must be
compiled with ROCm's clang.

~~Makefile is unchanged. Please let me know if we want to be
consistent on variables' naming because Makefile still uses
`GPU_TARGETS` to control architectures to build for, but I feel
like setting `CMAKE_HIP_ARCHITECTURES` is a bit awkward when you're
calling `make`.~~ Makefile used `GPU_TARGETS` but the README says
to use `AMDGPU_TARGETS`. For consistency with CMake, all usage of
`GPU_TARGETS` in Makefile has been updated to `AMDGPU_TARGETS`.

Thanks to the suggestion of @jin-eld, to maintain backwards
compatibility (and not break too many downstream users' builds), if
`CMAKE_CXX_COMPILER` ends with `hipcc`, then we still compile using
the original behavior and emit a warning that recommends switching
to the new HIP support. Similarly, if `AMDGPU_TARGETS` is set but
`CMAKE_HIP_ARCHITECTURES` is not, then we forward `AMDGPU_TARGETS`
to `CMAKE_HIP_ARCHITECTURES` to ease the transition to the new
HIP support.

Signed-off-by: Gavin Zhao <git@gzgz.dev>
2024-05-17 17:03:03 +02:00
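
Putting points 1 and 2 together, a configure step under the new scheme looks roughly like this (GPU architecture and paths illustrative, and assuming the repo's `LLAMA_HIPBLAS` toggle):

```sh
# New behavior: HIP compiler and GPU architectures come from the dedicated
# HIP variables; the C/C++ compiler can be chosen independently.
HIPCXX="$ROCM_PATH/llvm/bin/clang" \
cmake -B build \
    -DLLAMA_HIPBLAS=ON \
    -DCMAKE_HIP_ARCHITECTURES=gfx1030 \
    -DCMAKE_BUILD_TYPE=Release
cmake --build build
```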
Radoslav Gerganov
f4bd8b3d26
rpc : set SO_REUSEADDR for the server socket (#7320)
ref: #7293
2024-05-17 17:25:44 +03:00
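
`SO_REUSEADDR` lets the RPC server rebind its listening port right after a restart instead of failing while the previous socket lingers in TIME_WAIT. The standard pattern, for reference:

```c
#include <sys/socket.h>

// Set SO_REUSEADDR before bind() so a restarted server can reclaim the
// port while old connections are still in TIME_WAIT.
static int make_reusable(int sockfd) {
    int flag = 1;
    return setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &flag, sizeof(flag));
}
```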
Brian
51e9d02599
Added a single test function script and fixed debug-test.sh to be more robust (#7279)
* run-single-test.sh: added a single test function script and fixed debug-test.sh to be more robust

* debug-test.sh: combined execute and gdb test mode via -g flag

* debug-test.sh: refactor

* debug-test: refactor for clarity

* debug-test.sh: comment style changes

* debug-test.sh: fix gdb
2024-05-17 22:40:14 +10:00
Aarni Koskela
d273c1402b
py : convert-hf-to-gguf-update improvements (#7340)
* convert-hf-to-gguf-update: automate updating

* convert-hf-to-gguf-update: improve download

* share requests session for performance
* create directories only when needed; don't skip downloads when an empty directory is encountered
* be more graceful about errors
2024-05-17 15:11:45 +03:00
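
The download improvements in the bullets above (shared session, lazy directory creation, graceful errors) map onto a familiar Python pattern; a hedged sketch with hypothetical names:

```python
import os
import requests

session = requests.Session()  # one session: connections are reused across downloads

def download_file(url: str, dest: str) -> bool:
    # Fail gracefully instead of aborting the whole update run.
    try:
        resp = session.get(url, timeout=30)
        resp.raise_for_status()
    except requests.RequestException as e:
        print(f"skipping {url}: {e}")
        return False
    # Create the target directory only once there is something to write.
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    with open(dest, "wb") as f:
        f.write(resp.content)
    return True
```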
fairydreaming
27b040691c
llama : use n_embd_head_v when reshaping kqv (#7327)
* llama : use n_embd_head_v instead of n_embd_head_k when reshaping kqv

* llama : use n_embd_v_gqa and n_embd_head_v instead of n_embd_k_gqa and n_embd_head_k when making a view of cached value vectors.

---------

Co-authored-by: Stanisław Szymczyk <sszymczy@gmail.com>
2024-05-17 14:24:38 +03:00
Johannes Gäßler
29c60d8cdd
tokenization: add warning for double BOS (#7332) 2024-05-17 09:59:57 +02:00
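
The double-BOS warning presumably fires when the tokenized prompt starts with two BOS tokens, i.e. the text already began with BOS while the model was also configured to prepend one. A sketch of such a check (not the exact code):

```cpp
#include <cstdio>
#include <vector>
#include "llama.h"

// Sketch: after tokenizing with add_bos enabled, warn if BOS appears twice
// at the front, which usually means the prompt embedded its own BOS.
static void warn_double_bos(const std::vector<llama_token> & tokens, llama_token bos) {
    if (tokens.size() >= 2 && tokens[0] == bos && tokens[1] == bos) {
        fprintf(stderr, "warning: prompt starts with two BOS tokens\n");
    }
}
```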
Herman Semenov
359cbe3f46
ggml-quants, llama : removed excess checks (#7274) 2024-05-17 10:08:49 +03:00
amd-lalithnc
e18bc6aaf3
convert : fix Qwen/Qwen-7b conversion (#7308) 2024-05-17 10:01:58 +03:00
Radoslav Gerganov
ee94172d33
server : add support for the RPC backend (#7305)
ref: #7292
2024-05-17 10:00:17 +03:00