Commit graph

2156 commits

Author SHA1 Message Date
Concedo  32cf02487e  colab use mmq, update lite and ver  2023-09-23 23:32:00 +08:00
Concedo  60098a176b  update colab model  2023-09-23 16:30:40 +08:00
Concedo  bfc696fcc4  update lite, update ver  2023-09-23 12:35:23 +08:00
Concedo  bd2500db36  Merge branch 'master' into concedo_experimental  2023-09-23 10:51:34 +08:00
    # Conflicts:
    #	.github/workflows/build.yml
    #	README.md
    #	build.zig
    #	flake.nix
Concedo  a64d182b8b  sched yield fix again  2023-09-23 10:44:41 +08:00
Concedo  1f9e36c733  minor lite fixes  2023-09-23 09:37:49 +08:00
Concedo  de4e27904d  clear reader copy on new gen  2023-09-23 00:13:19 +08:00
Lee Drake  bc9d3e3971  Update README.md (#3289)  2023-09-21 21:00:24 +02:00
    * Update README.md

    * Update README.md

    Co-authored-by: slaren <slarengh@gmail.com>

    ---------

    Co-authored-by: slaren <slarengh@gmail.com>
shibe2  36b904e200  ggml-opencl.cpp: Make private functions static (#3300)  2023-09-21 14:10:26 -04:00
Concedo  14295922f9  updated ver, updated lite (+1 squashed commits)  2023-09-21 17:44:01 +08:00
    Squashed commits:

    [891291bc] updated lite to v67
Edward Taylor  324f3403d5  zig : fix for updated c lib (#3259)  2023-09-21 12:08:20 +03:00
yuiseki  f56c418ab0  embedding : update README.md (#3224)  2023-09-21 11:57:40 +03:00
Johannes Gäßler  8185710a80  CUDA: use only 1 thread if fully offloaded (#2915)  2023-09-21 11:43:53 +03:00
Georgi Gerganov  7eb41179ed  readme : update hot topics  2023-09-20 20:48:22 +03:00
Cebtenzzre  a5661d7e71  llama : allow gguf RoPE keys to be overridden with defaults (#3240)  2023-09-20 12:12:47 -04:00
Cebtenzzre  65c2c1c5ab  benchmark-matmult : do not use integer abs() on a float (#3277)  2023-09-20 12:06:08 -04:00
Concedo  2dda63a4eb  add tensor split field  2023-09-20 22:46:47 +08:00
kang  80834daecf  flake : Restore default package's buildInputs (#3262)  2023-09-20 15:48:22 +02:00
Concedo  712b8423f6  class.py changes  2023-09-20 21:27:49 +08:00
Concedo  b63cf223c9  add queue info  2023-09-20 21:07:21 +08:00
Concedo  0eb52cf6c2  Merge branch 'master' into concedo_experimental  2023-09-20 21:01:34 +08:00
    # Conflicts:
    #	Makefile
Concedo  006e87cb56  requirements txt  2023-09-20 21:00:23 +08:00
Alon  a40f2b656f  CI: FreeBSD fix (#3258)  2023-09-20 14:06:36 +02:00
    * - freebsd ci: use qemu
Concedo  4a0c515da7  rename notepad to classic  2023-09-20 17:51:02 +08:00
Concedo  436cd474cd  regex fix  2023-09-20 16:02:19 +08:00
Georgi Gerganov  d119c04c15  examples : fix benchmark-matmult (#1554)  2023-09-20 10:02:39 +03:00
    The precision for Q4_0 has degraded since #1508
Concedo  2fc91d8727  updated lite  2023-09-20 14:28:55 +08:00
Concedo  c03409c1f6  grammar sampling added for lite  2023-09-19 00:13:30 +08:00
Concedo  0142760fc3  Merge branch 'master' into concedo_experimental  2023-09-18 23:20:02 +08:00
    # Conflicts:
    #	.github/workflows/build.yml
    #	Makefile
    #	README.md
Concedo  8c453d1e4e  added grammar sampling  2023-09-18 23:02:00 +08:00
Cebtenzzre  8781013ef6  make : restore build-info.h dependency for several targets (#3205)  2023-09-18 10:03:53 -04:00
Concedo  951614bfc6  library unloading is working  2023-09-18 15:03:52 +08:00
Erik Scholz  7ddf185537  ci : switch cudatoolkit install on windows to networked (#3236)  2023-09-18 02:21:47 +02:00
Johannes Gäßler  ee66942d7e  CUDA: fix peer access logic (#3231)  2023-09-17 23:35:20 +02:00
Johannes Gäßler  111163e246  CUDA: enable peer access between devices (#2470)  2023-09-17 16:37:53 +02:00
Concedo  34930bfdc2  updated lite  2023-09-17 20:43:04 +08:00
slaren  8b428c9bc8  llama.cpp : show model size and BPW on load (#3223)  2023-09-17 14:33:28 +02:00
Johannes Gäßler  578d8c8f5c  CUDA: fix scratch malloced on non-main device (#3220)  2023-09-17 14:16:22 +02:00
Concedo  e0fcc9a725  fixed all issues with class.py  2023-09-17 15:23:35 +08:00
IsaacDynamo  b541b4f0b1  Enable BUILD_SHARED_LIBS=ON on all Windows builds (#3215)  2023-09-16 19:35:25 +02:00
Concedo  e107bce105  add another missing field  2023-09-16 23:17:22 +08:00
Concedo  f01a75b563  added missing field  2023-09-16 23:15:44 +08:00
Concedo  733127b160  class.py off by 1  2023-09-16 23:01:12 +08:00
Vlad  5dbc2b3213  Enable build with CUDA 11.0 (make) (#3132)  2023-09-16 16:55:43 +02:00
    * CUDA 11.0 fixes

    * Cleaner CUDA/host flags separation

    Also renamed GGML_ASSUME into GGML_CUDA_ASSUME
goerch  b08e75baea  Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170)  2023-09-16 13:41:33 +02:00
    * Fix for #2721

    * Reenable tokenizer test for LLaMa

    * Add `console.cpp` dependency

    * Fix dependency to `common`

    * Fixing wrong fix.

    * Make console usage platform specific

    Work on compiler warnings.

    * Adapting makefile

    * Remove trailing whitespace

    * Adapting the other parts of the makefile

    * Fix typo.

    * Fixing the last deviations from sentencepiece indicated by test-tokenizer-1

    * Simplify logic

    * Add missing change...

    * Fix ugly compiler warning

    * llama_tokenize should accept strings containing NUL now

    * Adding huichen's test case
Ycros  f6ba36dff6  Reduce warnings. (#439)  2023-09-16 18:52:09 +08:00
Concedo  8d90072a2a  updated class.py  2023-09-16 18:22:28 +08:00
Concedo  c96fb3984d  Merge branch 'master' into concedo_experimental  2023-09-16 12:14:19 +08:00
    # Conflicts:
    #	.github/workflows/build.yml
    #	Makefile
    #	examples/quantize/quantize.cpp
    #	ggml.c
    #	pocs/vdot/vdot.cpp
    #	scripts/build-info.cmake
    #	scripts/build-info.h.in
    #	scripts/build-info.sh
    #	tests/test-opt.cpp
    #	tests/test-quantize-fns.cpp
    #	tests/test-quantize-perf.cpp
    #	tests/test-sampling.cpp
Concedo  53885de6db  added multiuser mode  2023-09-16 11:23:39 +08:00
YellowRoseCx  4218641d97  Separate CuBLAS/hipBLAS (#438)  2023-09-16 10:13:44 +08:00
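
An "Author SHA1 Message Date" listing like the one above maps directly onto git's log formatting. A minimal shell sketch, assuming git is installed; the throwaway repository, author name, and message below are illustrative placeholders, not part of this project's history:

```shell
set -e
# Build a throwaway repo so the command is runnable anywhere.
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.name='Concedo' -c user.email='c@example.com' \
    commit -q --allow-empty -m 'updated lite'
# %an = author name, %h = abbreviated SHA, %s = subject, %ci = committer date;
# --abbrev=10 matches the 10-character SHAs shown in the listing.
git log --abbrev=10 --pretty=format:'%an %h %s %ci'
```

Run against a real clone (drop the temp-repo setup), this prints one row per commit in the same author / SHA / message / date order as the table above.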