Lee Drake
bc9d3e3971
Update README.md (#3289)
* Update README.md
* Update README.md
Co-authored-by: slaren <slarengh@gmail.com>
2023-09-21 21:00:24 +02:00
shibe2
36b904e200
ggml-opencl.cpp: Make private functions static (#3300)
2023-09-21 14:10:26 -04:00
Concedo
14295922f9
updated ver, updated lite (+1 squashed commit)
Squashed commits:
[891291bc] updated lite to v67
2023-09-21 17:44:01 +08:00
Edward Taylor
324f3403d5
zig : fix for updated c lib (#3259)
2023-09-21 12:08:20 +03:00
yuiseki
f56c418ab0
embedding : update README.md (#3224)
2023-09-21 11:57:40 +03:00
Johannes Gäßler
8185710a80
CUDA: use only 1 thread if fully offloaded (#2915)
2023-09-21 11:43:53 +03:00
Georgi Gerganov
7eb41179ed
readme : update hot topics
2023-09-20 20:48:22 +03:00
Cebtenzzre
a5661d7e71
llama : allow gguf RoPE keys to be overridden with defaults (#3240)
2023-09-20 12:12:47 -04:00
Cebtenzzre
65c2c1c5ab
benchmark-matmult : do not use integer abs() on a float (#3277)
2023-09-20 12:06:08 -04:00
Concedo
2dda63a4eb
add tensor split field
2023-09-20 22:46:47 +08:00
kang
80834daecf
flake : Restore default package's buildInputs (#3262)
2023-09-20 15:48:22 +02:00
Concedo
712b8423f6
class.py changes
2023-09-20 21:27:49 +08:00
Concedo
b63cf223c9
add queue info
2023-09-20 21:07:21 +08:00
Concedo
0eb52cf6c2
Merge branch 'master' into concedo_experimental
# Conflicts:
# Makefile
2023-09-20 21:01:34 +08:00
Concedo
006e87cb56
requirements txt
2023-09-20 21:00:23 +08:00
Alon
a40f2b656f
CI: FreeBSD fix (#3258)
* - freebsd ci: use qemu
2023-09-20 14:06:36 +02:00
Concedo
4a0c515da7
rename notepad to classic
2023-09-20 17:51:02 +08:00
Concedo
436cd474cd
regex fix
2023-09-20 16:02:19 +08:00
Georgi Gerganov
d119c04c15
examples : fix benchmark-matmult (#1554)
The precision for Q4_0 has degraded since #1508
2023-09-20 10:02:39 +03:00
Concedo
2fc91d8727
updated lite
2023-09-20 14:28:55 +08:00
Concedo
c03409c1f6
grammar sampling added for lite
2023-09-19 00:13:30 +08:00
Concedo
0142760fc3
Merge branch 'master' into concedo_experimental
# Conflicts:
# .github/workflows/build.yml
# Makefile
# README.md
2023-09-18 23:20:02 +08:00
Concedo
8c453d1e4e
added grammar sampling
2023-09-18 23:02:00 +08:00
Cebtenzzre
8781013ef6
make : restore build-info.h dependency for several targets (#3205)
2023-09-18 10:03:53 -04:00
Concedo
951614bfc6
library unloading is working
2023-09-18 15:03:52 +08:00
Erik Scholz
7ddf185537
ci : switch cudatoolkit install on windows to networked (#3236)
2023-09-18 02:21:47 +02:00
Johannes Gäßler
ee66942d7e
CUDA: fix peer access logic (#3231)
2023-09-17 23:35:20 +02:00
Johannes Gäßler
111163e246
CUDA: enable peer access between devices (#2470)
2023-09-17 16:37:53 +02:00
Concedo
34930bfdc2
updated lite
2023-09-17 20:43:04 +08:00
slaren
8b428c9bc8
llama.cpp : show model size and BPW on load (#3223)
2023-09-17 14:33:28 +02:00
Johannes Gäßler
578d8c8f5c
CUDA: fix scratch malloced on non-main device (#3220)
2023-09-17 14:16:22 +02:00
Concedo
e0fcc9a725
fixed all issues with class.py
2023-09-17 15:23:35 +08:00
IsaacDynamo
b541b4f0b1
Enable BUILD_SHARED_LIBS=ON on all Windows builds (#3215)
2023-09-16 19:35:25 +02:00
Concedo
e107bce105
add another missing field
2023-09-16 23:17:22 +08:00
Concedo
f01a75b563
added missing field
2023-09-16 23:15:44 +08:00
Concedo
733127b160
class.py off by 1
2023-09-16 23:01:12 +08:00
Vlad
5dbc2b3213
Enable build with CUDA 11.0 (make) (#3132)
* CUDA 11.0 fixes
* Cleaner CUDA/host flags separation
Also renamed GGML_ASSUME into GGML_CUDA_ASSUME
2023-09-16 16:55:43 +02:00
goerch
b08e75baea
Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170)
* Fix for #2721
* Reenable tokenizer test for LLaMa
* Add `console.cpp` dependency
* Fix dependency to `common`
* Fixing wrong fix.
* Make console usage platform specific
Work on compiler warnings.
* Adapting makefile
* Remove trailing whitespace
* Adapting the other parts of the makefile
* Fix typo.
* Fixing the last deviations from sentencepiece indicated by test-tokenizer-1
* Simplify logic
* Add missing change...
* Fix ugly compiler warning
* llama_tokenize should accept strings containing NUL now
* Adding huichen's test case
2023-09-16 13:41:33 +02:00
Ycros
f6ba36dff6
Reduce warnings. (#439)
2023-09-16 18:52:09 +08:00
Concedo
8d90072a2a
updated class.py
2023-09-16 18:22:28 +08:00
Concedo
c96fb3984d
Merge branch 'master' into concedo_experimental
# Conflicts:
# .github/workflows/build.yml
# Makefile
# examples/quantize/quantize.cpp
# ggml.c
# pocs/vdot/vdot.cpp
# scripts/build-info.cmake
# scripts/build-info.h.in
# scripts/build-info.sh
# tests/test-opt.cpp
# tests/test-quantize-fns.cpp
# tests/test-quantize-perf.cpp
# tests/test-sampling.cpp
2023-09-16 12:14:19 +08:00
Concedo
53885de6db
added multiuser mode
2023-09-16 11:23:39 +08:00
YellowRoseCx
4218641d97
Separate CuBLAS/hipBLAS (#438)
2023-09-16 10:13:44 +08:00
Cebtenzzre
e6616cf0db
examples : add compiler version and target to build info (#2998)
2023-09-15 16:59:49 -04:00
Cebtenzzre
3aefaab9e5
check C++ code with -Wmissing-declarations (#3184)
2023-09-15 15:38:27 -04:00
Cebtenzzre
69eb67e282
fix build numbers by setting fetch-depth=0 (#3197)
2023-09-15 15:18:15 -04:00
Meng Zhang
4fe09dfe66
llama : add support for StarCoder model architectures (#3187)
* add placeholder of starcoder in gguf / llama.cpp
* support convert starcoder weights to gguf
* convert MQA to MHA
* fix ffn_down name
* add LLM_ARCH_STARCODER to llama.cpp
* set head_count_kv = 1
* load starcoder weight
* add max_position_embeddings
* set n_positions to max_position_embeddings
* properly load all starcoder params
* fix head count kv
* fix comments
* fix vram calculation for starcoder
* store mqa directly
* add input embeddings handling
* add TBD
* working in cpu, metal buggy
* cleanup useless code
* metal : fix out-of-bounds access in soft_max kernels
* llama : make starcoder graph build more consistent with others
* refactor: cleanup comments a bit
* add other starcoder models: 3B, 7B, 15B
* support-mqa-directly
* fix: remove max_position_embeddings, use n_train_ctx
* Update llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* fix: switch to space from tab
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-15 22:02:13 +03:00
Cebtenzzre
80291a1d02
common : do not use GNU zero-length __VA_ARGS__ extension (#3195)
2023-09-15 21:02:01 +03:00
Georgi Gerganov
c6f1491da0
metal : fix bug in soft_max kernels (out-of-bounds access) (#3194)
2023-09-15 20:17:24 +03:00
Cebtenzzre
e3d87a6c36
convert : make ftype optional in simple scripts (#3185)
2023-09-15 12:29:02 -04:00