Commit graph

1465 commits

Author SHA1 Message Date
kalomaze
974640ac25 Update README for consistency 2023-10-31 13:18:48 -05:00
kalomaze
512cac630c added a bit more context to the README 2023-10-31 11:32:01 -05:00
kalomaze
9248325f82 Update README & set 0.05 default 2023-10-31 11:25:23 -05:00
kalomaze
87adfad25f Merge branch 'min-p-sampling' of https://github.com/kalomaze/koboldcpp into min-p-sampling 2023-10-29 15:50:02 -05:00
kalomaze
18c0aa7c31 Merge remote-tracking branch 'original/cuda-quantum-batch' into min-p-sampling 2023-10-29 15:46:50 -05:00
cebtenzzre
3ddfd67d13 permit simultaneous use of top_p and min_p 2023-10-29 01:31:14 -04:00
cebtenzzre
69e638e56a cleanup 2023-10-29 01:10:30 -04:00
kalomaze
fcbbfc1666 Even formatting + exclusively 0.0f to disable now 2023-10-28 23:52:22 -05:00
kalomaze
cb233584cc minor whitespace fix 2023-10-28 23:40:23 -05:00
kalomaze
6f7cdec38a Simplified counter by checking candidates size
+ fixed 0.0 default for min_p
2023-10-28 23:37:18 -05:00
kalomaze
49b68e8226 Standardize 0.0 disabling min_p upon feedback 2023-10-28 23:12:14 -05:00
kalomaze
62fc77153b Remove accidentally kept prints + min_keep support 2023-10-28 23:04:29 -05:00
kalomaze
833637b703 erring on the side of caution; disable by default 2023-10-28 22:05:05 -05:00
kalomaze
69ef4ca885 Debugging print statements removed 2023-10-28 21:14:55 -05:00
kalomaze
838d58dc32 Min P disabled if set to 1.0 or 0, otherwise Top P 2023-10-28 21:08:26 -05:00
kalomaze
a235a0d226 Transform Min P into a proper CLI option 2023-10-28 20:49:17 -05:00
kalomaze
a9e2b74f1a Super hacky starting implementation of Min P 2023-10-28 17:23:06 -05:00
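The commits above introduce and then tune the Min P sampler: keep only tokens whose probability is at least `min_p` times the probability of the most likely token, with `0.0f` disabling the filter and `0.05` as the eventual default. A minimal sketch of that idea follows; the struct and function names are hypothetical, not the actual koboldcpp/llama.cpp API.

```c++
#include <cstddef>
#include <vector>

// Hypothetical candidate type: token id plus its softmax probability.
struct Candidate {
    int   id;
    float p;
};

// Min P sketch: keep tokens whose probability is at least
// min_p * (probability of the most likely token).
// min_p == 0.0f disables the filter, matching the default above.
// Assumes cands is sorted by probability, highest first.
void min_p_filter(std::vector<Candidate> & cands, float min_p, size_t min_keep = 1) {
    if (min_p <= 0.0f || cands.size() <= min_keep) {
        return;
    }
    const float threshold = min_p * cands[0].p;
    size_t keep = cands.size();
    for (size_t i = min_keep; i < cands.size(); ++i) {
        if (cands[i].p < threshold) {
            keep = i;
            break;
        }
    }
    cands.resize(keep);
}
```

With the 0.05 default set in the README commit above, a token survives only if it is at least 5% as likely as the top candidate.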
Erik Scholz
ff3bad83e2
flake : update flake.lock for newer transformers version + provide extra dev shell (#3797)
* flake : update flake.lock for newer transformers version + provide extra dev shell with torch and transformers (for most convert-xxx.py scripts)
2023-10-28 16:41:07 +02:00
Aarni Koskela
82a6646e02
metal : try cwd for ggml-metal.metal if bundle lookup fails (#3793)
* Try cwd for ggml-metal if bundle lookup fails

When building with `-DBUILD_SHARED_LIBS=ON -DLLAMA_METAL=ON -DLLAMA_BUILD_SERVER=ON`,
`server` would fail to load `ggml-metal.metal` because `[bundle pathForResource:...]`
returns `nil`.  In that case, fall back to `ggml-metal.metal` in the cwd instead of
passing `null` as a path.

Follows up on #1782

* Update ggml-metal.m

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-10-28 15:43:01 +03:00
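The actual fallback lives in Objective-C in `ggml-metal.m`; as a rough C++ illustration of the same idea (not the real code), with a stubbed-out stand-in for the bundle lookup:

```c++
#include <filesystem>
#include <string>

// Hypothetical stand-in for the Objective-C bundle lookup; it returns an
// empty string where [bundle pathForResource:...] would return nil.
static std::string bundle_path_for_resource(const std::string & /*name*/) {
    return "";
}

// Sketch of the fallback: prefer the bundle-provided path, otherwise look for
// the file in the current working directory instead of handing the Metal
// library loader a null path.
static std::string find_metal_source() {
    std::string path = bundle_path_for_resource("ggml-metal.metal");
    if (path.empty() && std::filesystem::exists("ggml-metal.metal")) {
        path = "ggml-metal.metal";
    }
    return path;
}
```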
Georgi Gerganov
ba231e8a6d
issues : change label from bug to bug-unconfirmed (#3748) 2023-10-28 15:35:26 +03:00
Georgi Gerganov
8a2f2fea29
convert : ignore tokens if their IDs are within [0, vocab_size) (#3831) 2023-10-28 06:25:15 -06:00
Kerfuffle
bd6d9e2059
llama : allow quantizing k-quants to fall back when tensor size incompatible (#3747)
* Allow quantizing k-quants to fall back when tensor size incompatible

* quantizing: Add warning when tensors were incompatible with k-quants

Clean up k-quants state passing a bit
2023-10-28 14:54:24 +03:00
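For context, k-quants operate on super-blocks of 256 elements, so a tensor whose row size is not a multiple of 256 cannot use them directly. A rough sketch of the fallback-plus-warning idea follows; the concrete fallback type chosen here is illustrative only and the real patch's mapping may differ.

```c++
#include <cstdint>
#include <cstdio>

constexpr int64_t QK_K = 256; // k-quant super-block size

enum class QuantType { Q4_K, Q5_K, Q6_K, Q8_0 };

// Sketch: if the row size is incompatible with k-quant super-blocks,
// fall back to a non-k quantization type and warn, instead of failing.
QuantType pick_quant_type(QuantType requested, int64_t row_size, const char * tensor_name) {
    const bool is_k_quant = requested == QuantType::Q4_K ||
                            requested == QuantType::Q5_K ||
                            requested == QuantType::Q6_K;
    if (is_k_quant && row_size % QK_K != 0) {
        std::fprintf(stderr,
            "warning: tensor %s has row size %lld, not a multiple of %lld; "
            "falling back to a non-k-quant type\n",
            tensor_name, (long long) row_size, (long long) QK_K);
        return QuantType::Q8_0; // illustrative fallback choice
    }
    return requested;
}
```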
Georgi Gerganov
ee1a0ec9cb
llama : add option for greedy sampling with probs (#3813)
* llama : add option for greedy sampling with probs

* llama : add comment about llama_sample_token_greedy() missing probs

* sampling : temp == 0.0 -> no probs, temp < 0.0 -> probs
2023-10-28 14:23:11 +03:00
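The convention in the last bullet reads as: `temp == 0.0` means plain greedy selection with no probabilities computed, while `temp < 0.0` means greedy selection with the softmax still applied so probabilities are available to the caller. A hedged sketch of that dispatch, using a local softmax rather than the library's own sampling API:

```c++
#include <algorithm>
#include <cmath>
#include <vector>

struct TokenData {
    int   id;
    float logit;
    float p; // filled in by apply_softmax
};

// Local softmax over the candidates (not the library's function).
// Assumes cands is non-empty.
static void apply_softmax(std::vector<TokenData> & cands) {
    float max_logit = cands[0].logit;
    for (const auto & c : cands) max_logit = std::max(max_logit, c.logit);
    float sum = 0.0f;
    for (auto & c : cands) { c.p = std::exp(c.logit - max_logit); sum += c.p; }
    for (auto & c : cands) c.p /= sum;
}

// Sketch of the convention described in the commit:
//   temp == 0.0 -> greedy, probabilities left unset
//   temp <  0.0 -> greedy, but softmax applied first so probs are available
// Regular sampling for temp > 0.0 is handled elsewhere and not shown.
static int pick_token_greedy(std::vector<TokenData> & cands, float temp) {
    if (temp < 0.0f) {
        apply_softmax(cands);
    }
    auto best = std::max_element(cands.begin(), cands.end(),
        [](const TokenData & a, const TokenData & b) { return a.logit < b.logit; });
    return best->id;
}
```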
Henk Poley
177461104b
common : print that one line of the syntax help *also* to standard output (#3823) 2023-10-28 13:16:33 +03:00
Georgi Gerganov
fdee152e4e
starcoder : add GPU offloading (#3827)
* starcoder : do not GPU split 1D bias tensors

* starcoder : offload layers to GPU

ggml-ci
2023-10-28 12:06:08 +03:00
Kerfuffle
41aee4df82
speculative : ensure draft and target model vocab matches (#3812)
* speculative: Ensure draft and target model vocab matches

* Tolerate small differences when checking dft vs tgt vocab
2023-10-28 00:40:07 +03:00
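A rough sketch of the kind of check this commit describes: the draft and target vocabularies must match closely enough that draft token ids mean the same thing to the target model. The tolerance constant and helper shape here are chosen for illustration and are not the values used by the patch.

```c++
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Illustrative tolerance; the actual value in the patch is not quoted here.
constexpr int kMaxVocabSizeDiff = 100;

void check_vocab_compat(const std::vector<std::string> & tgt_vocab,
                        const std::vector<std::string> & dft_vocab) {
    const int n_tgt = (int) tgt_vocab.size();
    const int n_dft = (int) dft_vocab.size();

    // vocab sizes may differ slightly, but not by much
    if (std::abs(n_tgt - n_dft) > kMaxVocabSizeDiff) {
        std::fprintf(stderr, "error: draft vocab size %d differs too much from target %d\n",
                     n_dft, n_tgt);
        std::exit(1);
    }
    // tokens present in both vocabs must have identical text
    const int n_common = n_tgt < n_dft ? n_tgt : n_dft;
    for (int i = 0; i < n_common; ++i) {
        if (tgt_vocab[i] != dft_vocab[i]) {
            std::fprintf(stderr, "error: token %d differs between draft and target vocab\n", i);
            std::exit(1);
        }
    }
}
```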
cebtenzzre
6d459cbfbe
llama : correctly report GGUFv3 format (#3818) 2023-10-27 17:33:53 -04:00
Thibault Terrasson
c8d6a1f34a
simple : fix batch handling (#3803) 2023-10-27 08:37:41 -06:00
Georgi Gerganov
2f9ec7e271
cuda : improve text-generation and batched decoding performance (#3776)
* cuda : prints wip

* cuda : new cublas gemm branch for multi-batch quantized src0

* cuda : add F32 sgemm branch

* cuda : fine-tune >= VOLTA params + use MMQ only for small batches

* cuda : remove duplicated cuBLAS GEMM code

* cuda : add CUDA_USE_TENSOR_CORES and GGML_CUDA_FORCE_MMQ macros

* build : add compile option to force use of MMQ kernels
2023-10-27 17:01:23 +03:00
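The two macros named in this commit gate the choice between the custom quantized matmul kernels (MMQ) and cuBLAS GEMM: tensor cores make cuBLAS attractive for larger batches, while MMQ is kept for small ones. A hedged sketch of that decision follows; the threshold and exact conditions are illustrative, not the real heuristic in `ggml-cuda.cu`.

```c++
// Illustrative batch-size cutoff, not the value used by the real code.
constexpr int kSmallBatchThreshold = 32;

static bool use_mmq_kernels(int batch_size, bool quantized_src0) {
#ifdef GGML_CUDA_FORCE_MMQ
    // build-time override: always use the custom quantized matmul kernels
    return true;
#else
    if (!quantized_src0) {
        return false; // F16/F32 inputs go straight to cuBLAS GEMM
    }
#ifdef CUDA_USE_TENSOR_CORES
    // with tensor cores available, prefer cuBLAS once the batch is large
    return batch_size < kSmallBatchThreshold;
#else
    return true;
#endif
#endif
}
```

The separate "build : add compile option to force use of MMQ kernels" commits below expose the same switch at build time.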
Georgi Gerganov
49af767fad
build : add compile option to force use of MMQ kernels 2023-10-27 13:21:04 +03:00
Georgi Gerganov
34b2a5e1ee
server : do not release slot on image input (#3798) 2023-10-26 22:54:17 +03:00
Georgi Gerganov
a4e15a36e4
cuda : add CUDA_USE_TENSOR_CORES and GGML_CUDA_FORCE_MMQ macros 2023-10-26 16:00:48 +03:00
Georgi Gerganov
4c6744b526
cuda : remove duplicated cuBLAS GEMM code 2023-10-25 18:25:13 +03:00
Georgi Gerganov
a3c28439d3
cuda : fine-tune >= VOLTA params + use MMQ only for small batches 2023-10-25 15:07:34 +03:00
Georgi Gerganov
16b60dd75c
cuda : add F32 sgemm branch 2023-10-25 14:00:21 +03:00
Georgi Gerganov
52af782608
cuda : new cublas gemm branch for multi-batch quantized src0 2023-10-25 13:14:24 +03:00
Georgi Gerganov
59d1232ea7
cuda : prints wip 2023-10-25 10:26:58 +03:00
Georgi Gerganov
6961c4bd0b
batched-bench : print params at start 2023-10-25 10:26:27 +03:00
Georgi Gerganov
cc44877486
log : disable pid in log filenames 2023-10-25 10:09:16 +03:00
cebtenzzre
ad93962657
server : add parameter -tb N, --threads-batch N (#3584) (#3768)
Co-authored-by: Michael Coppola <m18coppola@gmail.com>
Co-authored-by: Michael Coppola <info@michaeljcoppola.com>
2023-10-24 23:10:43 +03:00
Georgi Gerganov
1717521cdb
server : do not block system prompt update (#3767)
* server : do not block system prompt update

* server : update state machine logic to process system prompts

* server : minor
2023-10-24 23:08:20 +03:00
Georgi Gerganov
b2f7e04bd3
sync : ggml (conv ops + cuda MSVC fixes) (#3765)
ggml-ci
2023-10-24 21:51:20 +03:00
John Smith
abd21fc99f
cmake : add missed dependencies (#3763) 2023-10-24 20:48:45 +03:00
Georgi Gerganov
2b4ea35e56
cuda : add batched cuBLAS GEMM for faster attention (#3749)
* cmake : add helper for faster CUDA builds

* batched : add NGL arg

* ggml : skip nops in compute_forward

* cuda : minor indentation

* cuda : batched cuBLAS GEMMs for src0 F16 and src1 F32 (attention ops)

* Apply suggestions from code review

These changes plus:

```c++
#define cublasGemmBatchedEx hipblasGemmBatchedEx
```

are needed to compile with ROCm. I haven't done performance testing, but it seems to work.

I couldn't figure out how to propose a change for lines outside what the pull changed. Also, this is my first time creating a multi-part review, so please forgive me if I mess something up.

* cuda : add ROCm / hipBLAS cublasGemmBatchedEx define

* cuda : add cublasGemmStridedBatchedEx for non-broadcasted cases

* cuda : reduce mallocs in cublasGemmBatchedEx branch

* cuda : add TODO for calling cublas from kernel + using mem pool

---------

Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
2023-10-24 16:48:37 +03:00
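The ROCm note in the review works because hipBLAS mirrors the cuBLAS batched-GEMM entry points almost one-to-one, so a single define is enough to redirect the call. As a rough illustration of a strided batched GEMM of the kind this PR adds for the non-broadcast case (buffer setup omitted, parameters simplified, not the actual llama.cpp dispatch):

```c++
#include <cublas_v2.h>
#include <cuda_fp16.h>

// Sketch: one strided batched GEMM over n_batch contiguous matrices,
// F16 inputs/outputs with F32 accumulation. The cuBLAS handle and the
// device buffers A, B, C are assumed to be set up by the caller.
static cublasStatus_t batched_gemm_f16(cublasHandle_t handle,
                                       const half * A, const half * B, half * C,
                                       int m, int n, int k, int n_batch) {
    const float alpha = 1.0f;
    const float beta  = 0.0f;
    return cublasGemmStridedBatchedEx(handle,
        CUBLAS_OP_N, CUBLAS_OP_N,
        m, n, k,
        &alpha,
        A, CUDA_R_16F, m, (long long) m * k,
        B, CUDA_R_16F, k, (long long) k * n,
        &beta,
        C, CUDA_R_16F, m, (long long) m * n,
        n_batch,
        CUBLAS_COMPUTE_32F,
        CUBLAS_GEMM_DEFAULT);
}
```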
Galunid
daab3d7f45
Add more tokenizer tests (#3742)
* Add more tokenizer tests

* Add starcoder

* Update test vocab files

* Restrict bpe tokenizer tests to unicode planes

* Update comment

* Comment cosmetics

* Remove bloom vocab/test
2023-10-24 09:17:17 +02:00
Georgi Gerganov
469c9addef
metal : handle ggml_scale for n%4 != 0 (close #3754)
ggml-ci
2023-10-24 09:47:22 +03:00
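The fix concerns the Metal kernel, which processes elements four at a time; the general pattern for a length that is not a multiple of 4 is a vectorized body over the first `n - n % 4` elements plus a scalar tail. A plain C++ sketch of that pattern (not the Metal shader itself):

```c++
#include <cstdint>

// "Vector body + scalar tail" pattern for n % 4 != 0.
// The real fix lives in the Metal kernel; this only shows the idea.
void scale_f32(float * x, float s, int64_t n) {
    const int64_t n4 = n - n % 4;
    for (int64_t i = 0; i < n4; i += 4) {
        // stands in for a 4-wide vector operation
        x[i + 0] *= s;
        x[i + 1] *= s;
        x[i + 2] *= s;
        x[i + 3] *= s;
    }
    for (int64_t i = n4; i < n; ++i) {
        x[i] *= s; // scalar tail for the remaining 1-3 elements
    }
}
```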
Georgi Gerganov
e3932593d4
Revert "make : add optional CUDA_NATIVE_ARCH (#2482)"
This reverts commit 96981f37b1.

See:

https://github.com/ggerganov/llama.cpp/pull/2482#issuecomment-1775975866
2023-10-23 23:46:05 +03:00
M. Yusuf Sarıgöz
9d02956443
issues : separate bug and enhancement template + no default title (#3748) 2023-10-23 22:57:16 +03:00
Galunid
69a6735087
Update special token handling in conversion scripts for gpt2 derived tokenizers (#3746)
We still have the heads-up in `README.md` regarding `bpe` tokenizers, and this patch is needed for

- a couple of tokenizer tests
- some more `special` and `non-special` added tokens handling (as far as I understand it)

* Update special token handling

* Add mpt
2023-10-23 21:46:00 +02:00
Marcus Dunn
5be6c803fa
llama : remove token functions with context args in favor of model (#3720)
* added `llama_model_token_*` variants to all the `llama_token_*` functions.

* added `LLAMA_API`

* formatting

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* removed old `llama_token` functions

* changed 3 more functions to take in model

- `llama_token_get_text`
- `llama_token_get_score`
- `llama_token_get_type`

* added back docs

* fixed main.cpp

* changed token functions to use new model variants

* changed token functions to use new model variants

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-10-23 22:40:03 +03:00
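In practice, this change means call sites that used to pass a `llama_context` to the token accessors now pass the `llama_model` instead, which can be obtained from the context. A hedged before/after sketch; the exact signatures in `llama.h` at this commit may differ slightly.

```c++
#include "llama.h"

// Before this change (sketch): token accessors took the context.
//   llama_token bos = llama_token_bos(ctx);
//
// After (sketch): they take the model, obtained from the context if needed.
llama_token get_bos(struct llama_context * ctx) {
    const struct llama_model * model = llama_get_model(ctx);
    return llama_token_bos(model);
}
```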