Commit graph

2189 commits

Author SHA1 Message Date
Concedo
b84e210f0d merge new rope param nonsense 2023-09-30 11:33:30 +08:00
Concedo
033e3bf844 prepare to merge parallel 2023-09-29 10:30:45 +08:00
Cebtenzzre
2db94d98ed gguf : basic type checking in gguf_get_* (#3346) 2023-09-28 14:30:31 -04:00
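For context on #3346: it makes the typed gguf_get_* accessors validate the stored key type rather than reinterpreting bytes. A minimal caller-side sketch, assuming the gguf C API of this era; the key name and fallback value are illustrative, not from the patch:

```cpp
#include <cstdint>

#include "ggml.h" // the gguf_* C API lived in ggml.h at this point

// Read "general.alignment" as u32, falling back to a default when the key
// is missing or stored with an unexpected type. After #3346, calling
// gguf_get_val_u32() on a mistyped key fails loudly instead of mis-reading.
static uint32_t read_alignment(struct gguf_context * ctx) {
    const int id = gguf_find_key(ctx, "general.alignment");
    if (id < 0 || gguf_get_kv_type(ctx, id) != GGUF_TYPE_UINT32) {
        return 32; // illustrative fallback, not a value from the patch
    }
    return gguf_get_val_u32(ctx, id);
}
```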
Cebtenzzre
ecf90b1a51 gguf : make token scores and types optional (#3347) 2023-09-28 14:30:15 -04:00
Georgi Gerganov
2619109ad5 ci : disable FreeBSD builds due to lack of VMs (#3381) 2023-09-28 19:36:36 +03:00
Georgi Gerganov
ec893798b7 llama : custom attention mask + parallel decoding + no context swaps (#3228)
* tests : verify that RoPE is "additive"

* llama : replace ggml_diag_mask_inf with ggml_add (custom -inf mask)

* ggml : ggml_rope now takes a vector with positions instead of n_past

* metal : add rope_f16 kernel + optimize cpy kernels

* llama : unified KV cache + batch inference API

* llama : add new llama_decode() API that works with llama_batch

* llama : add cell_max heuristic for more efficient kv_cache

* llama : extend llama_kv_cache API

* llama : more robust cell_max heuristic + wip shift

* metal : disable concurrency optimization

* llama : add llama_kv_cache_shift_seq + no more context swaps

* llama : apply K-cache roping for Falcon and Baichuan

* speculative : fix KV cache management

* parallel : example for serving multiple users in parallel

* parallel : disable hot-plug to avoid cache fragmentation

* fixes : speculative KV cache + llama worst-case graph

* llama : extend batch API to select which logits to output

* llama : fix worst case graph build

* ggml-cuda : update rope implementation for parallel decoding (#3254)

* ggml-cuda : update rope implementation for parallel decoding

* better solution for p0 computation

* fix rope

* simpler rope implementation

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* make : add parallel to build + fix static functions in llama.cpp

* simple : fix token counting

* parallel : various improvements

* llama : fix cell_max logic + rename functions

* parallel : try smaller batches when the KV cache is fragmented

* parallel : fix sequence termination criteria

* llama : silence KV cache errors

* parallel : remove new line from prompt

* parallel : process system prompt once + configurable parameters + llama API

* parallel : remove questions with short answers

* parallel : count cache misses

* parallel : print misses on each request

* parallel : minor

* llama : fix n_kv to never become 0

* parallel : rename hot-plug to continuous-batching

* llama : improve llama_batch API + simplify parallel example

* simple : add parallel decoding support

* simple : improve comments + free batch

* ggml-cuda : add rope f16, restore performance with parallel decoding (#3272)

* ggml-cuda : add rope f16, restore performance

* offload KQ_mask with all models

* fix rope shift

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* llama : disable MPI for now

ggml-ci

* train : make KQ_pos memory buffer permanent via dummy scale op

* ggml : revert change to ggml_cpy, add ggml_cont_Nd instead (#3275)

ggml-ci

* parallel : fix bug (extra BOS) + smaller token_prev array

* parallel : fix cases where the input prompts can overflow the batch

* parallel : add disabled experimental batch chunking in powers of two

* llama : llama.h formatting + comments

* simple : add README.md

* llama : fix kv cache heuristic when context is less than 32

* parallel : fix crash when `-n -1`

* llama : simplify returns if/else branches

* metal : use mm kernels for batch size > 2

* examples : utilize new llama_get_logits_ith()

* examples : add example for batched decoding

* examples : do not eval prompt 2 times (close #3348)

* server : clear the KV cache beyond n_past before llama_decode

* server : avoid context swaps by shifting the KV cache

---------

Co-authored-by: slaren <slarengh@gmail.com>
2023-09-28 19:04:36 +03:00
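Because this merge reshapes the core decoding path, a rough usage sketch of the new llama_batch / llama_decode flow follows. It is a minimal sketch, not code from the PR: ctx and the tokens/positions are assumed to already exist, and the batch fields match llama.h as of this merge (later revisions reworked them, e.g. per-token sequence-id lists):

```cpp
#include "llama.h"

// Decode one token for each of two independent sequences in a single
// llama_decode() call; both sequences share the unified KV cache.
// ctx, tok_a/tok_b and pos_a/pos_b are assumed to be set up by the caller.
llama_batch batch = llama_batch_init(/*n_tokens =*/ 2, /*embd =*/ 0);

batch.n_tokens  = 2;
batch.token[0]  = tok_a;  batch.pos[0] = pos_a;  batch.seq_id[0] = 0;
batch.token[1]  = tok_b;  batch.pos[1] = pos_b;  batch.seq_id[1] = 1;
batch.logits[0] = true;   // request logits for both positions
batch.logits[1] = true;

if (llama_decode(ctx, batch) != 0) {
    // non-zero: decode failed, e.g. no free KV cache slot of sufficient size
}

const float * logits_a = llama_get_logits_ith(ctx, 0);
const float * logits_b = llama_get_logits_ith(ctx, 1);

llama_batch_free(batch);
```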
Concedo
ca8b315202 increase context for gguf to 32k, horde worker stats, fixed glitch in horde launcher ui, oai freq penalty, updated lite 2023-09-28 23:50:08 +08:00
Kevin Ji
45855b3f1c docs : mark code as Bash (#3375) 2023-09-28 09:11:32 -04:00
Pierre Alexandre SCHEMBRI
4aea3b846e readme : add Mistral AI release 0.1 (#3362) 2023-09-28 15:13:37 +03:00
slaren
da0400344b ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (#3370)
* ggml-cuda : perform cublas fp16 matrix multiplication as fp16

* try to fix rocm build

* restrict fp16 mat mul to volta and up
2023-09-28 13:08:28 +03:00
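For context on #3370: per the title, the GEMM now runs entirely in fp16 on Volta and newer rather than in fp32. A sketch of what an all-fp16 cuBLAS call looks like, illustrative only and not the actual ggml-cuda code; the handle, device pointers, and column-major dimensions are assumed to be set up by the caller:

```cpp
#include <cublas_v2.h>
#include <cuda_fp16.h>

// C = A * B entirely in fp16: half inputs, half output, half compute type.
void gemm_f16(cublasHandle_t handle,
              const half * A, const half * B, half * C,
              int m, int n, int k) {
    const half alpha = __float2half(1.0f);
    const half beta  = __float2half(0.0f);
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                 m, n, k,
                 &alpha,
                 A, CUDA_R_16F, m,   // lda
                 B, CUDA_R_16F, k,   // ldb
                 &beta,
                 C, CUDA_R_16F, m,   // ldc
                 CUBLAS_COMPUTE_16F, // fp16 compute; tensor cores on Volta+
                 CUBLAS_GEMM_DEFAULT_TENSOR_OP);
}
```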
Concedo
6a821b268a improved SSE streaming 2023-09-28 17:33:34 +08:00
Zhang Peiyuan
e519621010 convert : remove bug in convert.py permute function (#3364) 2023-09-27 20:45:20 +02:00
Richard Roberson
ac43576124 make-ggml.py : compatibility with more models and GGUF (#3290)
* Resync my fork with new llama.cpp commits

* examples : rename to use dash instead of underscore

* New model conversions

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-27 19:25:12 +03:00
Cebtenzzre
20c7e1e804 gguf : fix a few general keys (#3341) 2023-09-27 12:18:07 -04:00
Rickard Hallerbäck
dc6897404e metal : reusing llama.cpp logging (#3152)
* metal : reusing llama.cpp logging

* cmake : build fix

* metal : logging callback

* metal : logging va_args memory fix

* metal : minor cleanup

* metal : setting function like logging macro to capital letters

* llama.cpp : trailing whitespace fix

* ggml : log level enum used by llama

* Makefile : cleanup ggml-metal recipe

* ggml : ggml_log_callback typedef

* ggml : minor

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-27 18:48:33 +03:00
Jag Chadha
527e57cfd8 build : add ACCELERATE_NEW_LAPACK to fix warning on macOS Sonoma (#3342) 2023-09-27 18:34:32 +03:00
BarfingLemurs
ffe88a36a9 readme : add some recent perplexity and bpw measurements to READMEs, link for k-quants (#3340)
* Update README.md

* Update README.md

* Update README.md with k-quants bpw measurements
2023-09-27 18:30:36 +03:00
Concedo
38d4c6cedd updated lite 2023-09-27 16:06:17 +08:00
Concedo
cf31658cbf added a flag to keep console in foreground 2023-09-27 01:53:30 +08:00
Concedo
74edc401c1 Merge branch 'master' into concedo_experimental
# Conflicts:
#	CMakeLists.txt
#	README.md
#	flake.nix
#	scripts/build-info.cmake
#	scripts/verify-checksum-models.py
2023-09-27 01:30:15 +08:00
Concedo
eb86cd4027 bump token limits 2023-09-27 01:26:00 +08:00
Concedo
8bf6f7f8b0 added simulated OAI endpoint 2023-09-27 00:49:24 +08:00
Concedo
7f112e2cd4 support genkeys in polled streaming 2023-09-26 23:46:07 +08:00
DAN™
99115f3fa6 cmake : fix build-info.h on MSVC (#3309) 2023-09-25 18:45:33 -04:00
2f38b454
1726f9626f docs : fix typo in CLBlast_DIR var. (#3330) 2023-09-25 20:24:52 +02:00
Concedo
6c2134a860 improved makefile, allowing building without k quants 2023-09-25 22:10:47 +08:00
Erik Scholz
a98b1633d5 nix : add cuda, use a symlinked toolkit for cmake (#3202) 2023-09-25 13:48:30 +02:00
Concedo
17ee719c56 improved remotelink cmd, fixed lib unload, updated class.py 2023-09-25 17:50:00 +08:00
Concedo
fdadbd0fbb updated lite (+1 squashed commit)
Squashed commit:

[b4408c79] updated lite
2023-09-24 23:07:37 +08:00
Concedo
8ecf505d5d improved embedded horde worker (+2 squashed commits)
Squashed commits:

[99234379] improved embedded horde worker

[ebcd1968] update lite
2023-09-24 15:16:49 +08:00
slaren
c091cdfb24 llama-bench : add README (#3317)
* llama-bench : add README

* minor edit
2023-09-23 21:48:24 +02:00
Concedo
32cf02487e colab use mmq, update lite and ver 2023-09-23 23:32:00 +08:00
Cebtenzzre
51a7cf5c6e examples : fix RoPE defaults to match PR #3240 (#3315) 2023-09-23 12:28:50 +03:00
Concedo
60098a176b update colab model 2023-09-23 16:30:40 +08:00
Concedo
bfc696fcc4 update lite, update ver 2023-09-23 12:35:23 +08:00
Kevin Ji
bedb92b603 scripts : use /usr/bin/env in shebang (#3313) 2023-09-22 23:52:23 -04:00
Concedo
bd2500db36 Merge branch 'master' into concedo_experimental
# Conflicts:
#	.github/workflows/build.yml
#	README.md
#	build.zig
#	flake.nix
2023-09-23 10:51:34 +08:00
Concedo
a64d182b8b sched yield fix again 2023-09-23 10:44:41 +08:00
Concedo
1f9e36c733 minor lite fixes 2023-09-23 09:37:49 +08:00
Concedo
de4e27904d clear reader copy on new gen 2023-09-23 00:13:19 +08:00
Lee Drake
bc9d3e3971 Update README.md (#3289)
* Update README.md

* Update README.md

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
2023-09-21 21:00:24 +02:00
shibe2
36b904e200 ggml-opencl.cpp: Make private functions static (#3300) 2023-09-21 14:10:26 -04:00
Concedo
14295922f9 updated ver, updated lite (+1 squashed commit)
Squashed commit:

[891291bc] updated lite to v67
2023-09-21 17:44:01 +08:00
Edward Taylor
324f3403d5 zig : fix for updated c lib (#3259) 2023-09-21 12:08:20 +03:00
yuiseki
f56c418ab0 embedding : update README.md (#3224) 2023-09-21 11:57:40 +03:00
Johannes Gäßler
8185710a80 CUDA: use only 1 thread if fully offloaded (#2915) 2023-09-21 11:43:53 +03:00
Georgi Gerganov
7eb41179ed readme : update hot topics 2023-09-20 20:48:22 +03:00
Cebtenzzre
a5661d7e71 llama : allow gguf RoPE keys to be overridden with defaults (#3240) 2023-09-20 12:12:47 -04:00
Cebtenzzre
65c2c1c5ab benchmark-matmult : do not use integer abs() on a float (#3277) 2023-09-20 12:06:08 -04:00
Concedo
2dda63a4eb add tensor split field 2023-09-20 22:46:47 +08:00