Concedo
36b0c5b398
fix for incorrect missing backends displayed
2023-08-17 22:45:49 +08:00
Concedo
075d079a72
Merge branch 'master' into concedo_experimental
...
# Conflicts:
# CMakeLists.txt
# Makefile
# ggml-cuda.cu
# llama-util.h
# tests/CMakeLists.txt
2023-08-16 10:43:06 +08:00
Georgi Gerganov
b5ffb2849d
scripts : add helper script to get wikitext
2023-08-15 10:05:25 +03:00
Concedo
469d70be45
add support for precompiled binaries, used as a fallback
2023-08-15 13:49:05 +08:00
Jhen-Jie Hong
3ebb00935f
server : add missing /json-schema-to-grammar.mjs ( #2616 )
...
fixes #2611
2023-08-15 06:14:14 +08:00
Jhen-Jie Hong
d783f7982e
metal : return null instead of exit(1) ( #2573 )
2023-08-14 16:37:39 +03:00
Cheng Shao
d75561df20
server : add --numa support ( #2524 )
2023-08-14 16:36:42 +03:00
Kamil Tomšík
348acf188c
llama : add missing enum keyword in function signatures ( #2610 )
2023-08-14 16:35:16 +03:00
Johannes Gäßler
1cd06fa25e
CUDA: launch_bounds, small q4_K, q5_K mmq refactor ( #2596 )
2023-08-14 10:41:22 +02:00
Jhen-Jie Hong
2feb8934eb
server : fix default grammar by using empty string in the UI ( #2604 )
2023-08-14 16:20:17 +08:00
Jhen-Jie Hong
5517d6e692
server : implement json-schema-to-grammar.mjs & add grammar param in the UI ( #2588 )
...
* server : implement json-schema-to-grammar.mjs by following python impl
* server : add grammar support in chat.mjs
* server : implement grammar param in the UI
* server : generate .hpp
* server : remove trailing whitespaces
* server : generate .hpp
* server : fix sort of prop pairs
* server : optimize regex & iteration
2023-08-14 15:16:54 +08:00
vxiiduu
f31b539714
Enhance Windows 7 and below compatibility. ( #2592 )
...
* Enhance Windows 7 compatibility.
* Clean away unnecessary preprocessor conditional
2023-08-13 20:59:16 -07:00
drbh
ee77efea2a
test : add simple grammar parsing tests ( #2594 )
...
* adds simple grammar parsing tests
* adds cassert header
2023-08-13 17:00:48 +03:00
Johannes Gäßler
f64d44a9b9
CUDA: Fixed OpenLLaMA 3b mmq, reduced compile time ( #2590 )
2023-08-13 00:24:45 +02:00
Concedo
9483288e03
Merge branch 'master' into concedo_experimental
...
# Conflicts:
# Makefile
2023-08-12 16:04:11 +08:00
byte-6174
b19edd54d5
Adding support for llama2.c models ( #2559 )
2023-08-12 01:17:25 +02:00
Equim
53dc399472
server: fixed wrong variable name in timing json ( #2579 )
...
* server: fixed wrong variable name in timing json
* remove redundant entry
2023-08-12 00:35:14 +02:00
Concedo
dae9dffa6a
rename koboldcpp.dll to koboldcpp_default.dll
2023-08-11 14:54:27 +08:00
DannyDaemonic
9ca4abed89
Handle ENABLE_VIRTUAL_TERMINAL_PROCESSING more gracefully on earlier versions of Windows.
2023-08-10 13:11:36 -07:00
Christian Demsar
e59fcb2bc1
Add --n-predict -2 for stopping generation on full context ( #2565 )
2023-08-10 16:28:27 +02:00
Concedo
886f4eed79
updated lite, up ver, remove bell
2023-08-10 22:01:33 +08:00
Martin Krasser
1638757767
Fix grammar-based sampling issue in server ( #2566 )
2023-08-10 13:16:38 +03:00
Concedo
c5f5209d37
globalize args
2023-08-10 16:30:02 +08:00
Sam Spilsbury
916a9acdd0
ggml-alloc: Don't try to re-use buffers of external tensors ( #2562 )
...
* ggml-alloc: Don't try to re-use buffers of external tensors
They might be weights that came from another context, so we
have no control over them (and they might be re-used elsewhere
so writing to them would be a bad idea).
* ggml-alloc: >= when checking for out-of-bounds
Co-authored-by: slaren <slarengh@gmail.com>
---------
Co-authored-by: slaren <slarengh@gmail.com>
2023-08-09 22:47:42 +02:00
grahameth
ea04a4ca19
add log_callback to llama_context_params for custom logging. ( #2234 )
...
* add log_callback to llama_context_params for custom logging.
* Fix macro expansion on gcc
* Add struct llama_state for global variables and move log_callback there
* Turn log level into enum and some minor changes.
* Remove model_for_logging parameter (not needed anymore)
* Convert remaining fprintf(stderr, ...) calls to use new macros.
* Fix enum and initialize g_state
* Fix log calls after merge
* Fix missing static
* Add back all the new lines in the logging strings
* Add comment for llama_log_callback and replace remaining printf calls
---------
Co-authored-by: grahameth <->
Co-authored-by: Helmut <helmut.buhler@inf.h-brs.de>
2023-08-09 22:46:40 +02:00
Concedo
a07e6dd3ad
revert cuda changes as they are buggy
2023-08-09 22:36:41 +08:00
Concedo
f8376c7e61
up ver, fixed compile (+1 squashed commit)
...
Squashed commits:
[ca51aa9e] up ver
2023-08-09 21:31:24 +08:00
Concedo
ba09f1c807
Merge branch 'master' into concedo_experimental
...
# Conflicts:
# README.md
# ggml-cuda.cu
2023-08-09 21:18:34 +08:00
Concedo
3a7853d259
handle stablecode-completion-alpha-3b
2023-08-09 21:07:57 +08:00
Johannes Gäßler
25d43e0eb5
CUDA: tuned mul_mat_q kernels ( #2546 )
2023-08-09 09:42:34 +02:00
Concedo
90058d96b0
sleep longer before exit
2023-08-09 15:28:07 +08:00
Concedo
19cf2a8663
add idle field and up ver
2023-08-09 12:42:59 +08:00
Concedo
4b8a354895
cudatoolkit version
2023-08-09 12:25:21 +08:00
Concedo
159ad9269d
up ver, set the cuda pool malloc lookahead back to 5% instead of 2% (+1 squashed commits)
...
Squashed commits:
[e0f65278] up ver, set the cuda pool malloc lookahead back to 5% instead of 2%
2023-08-09 12:06:42 +08:00
Concedo
926d90fbab
Merge branch 'master' into concedo_experimental
...
# Conflicts:
# Makefile
2023-08-09 01:09:04 +08:00
Concedo
793cfd136c
fixed 70B detection again, try fix horde issues, fixed lite unicode issue, fixed cmake for cuda
2023-08-09 01:05:00 +08:00
Martin Krasser
f5bfea0580
Allow passing grammar to completion endpoint ( #2532 )
...
* Allow passing grammar to completion endpoint
2023-08-08 16:29:19 +03:00
Johannes Gäßler
acfc5478ff
CUDA: tighter VRAM scratch size for 65b/70b ( #2551 )
2023-08-08 14:38:16 +02:00
chaihahaha
7ed8d1fe7f
llm.vim : multiline autocompletion, get rid of "^@" ( #2543 )
2023-08-08 15:07:02 +03:00
Georgi Gerganov
e7f94d6fdc
vim : bring back simple llm.vim example
2023-08-08 15:06:18 +03:00
AustinMroz
2d7baaf50f
vim : streaming and more ( #2495 )
...
* Update Vim plugin
* Remove getbufoneline usage, Add input bind example.
getbufoneline() appears to be a recently added function and has been
replaced with getbufline for compatibility.
An additional example that explains how to add a keybind that works in
insert mode was added.
2023-08-08 14:44:48 +03:00
klosax
f3c3b4b167
Add --rope-scale parameter ( #2544 )
...
* common.cpp : Add --rope-scale parameter
* README.md : Add info about using linear rope scaling
2023-08-07 19:07:19 +02:00
Concedo
3554080502
fixed blasbatchmul multiplier
2023-08-08 00:41:02 +08:00
Concedo
28ad80b6e4
Merge branch 'master' into concedo_experimental
2023-08-08 00:34:10 +08:00
Concedo
3c7d938d95
update lite, resize scratch buffers for blasbatch 2048
2023-08-08 00:32:51 +08:00
Georgi Gerganov
93356bdb7a
ggml : mul mat tweaks ( #2372 )
...
* ggml : mul mat wip
ggml-ci
* ggml : alternative thread distribution for mul_mat
ggml-ci
* ggml : mul_mat block tiling attempt
* ggml : mul_mat threads yield
ggml-ci
2023-08-07 14:25:58 +03:00
Georgi Gerganov
60baff7c85
ggml : pad result of ggml_nbytes()
2023-08-07 14:24:42 +03:00
Georgi Gerganov
9082b5dfbf
ggml : change params pointer (style change) ( #2539 )
...
ggml-ci
2023-08-07 13:55:18 +03:00
Georgi Gerganov
99d29c0094
ggml : sync (custom ops) ( #2537 )
...
ggml-ci
2023-08-07 13:20:09 +03:00
Concedo
9133e456d2
Merge branch 'master' into concedo_experimental
...
# Conflicts:
# Makefile
# build.zig
2023-08-07 17:33:42 +08:00