Concedo
c96fb3984d
Merge branch 'master' into concedo_experimental
# Conflicts:
# .github/workflows/build.yml
# Makefile
# examples/quantize/quantize.cpp
# ggml.c
# pocs/vdot/vdot.cpp
# scripts/build-info.cmake
# scripts/build-info.h.in
# scripts/build-info.sh
# tests/test-opt.cpp
# tests/test-quantize-fns.cpp
# tests/test-quantize-perf.cpp
# tests/test-sampling.cpp
2023-09-16 12:14:19 +08:00
Concedo
53885de6db
added multiuser mode
2023-09-16 11:23:39 +08:00
YellowRoseCx
4218641d97
Separate CuBLAS/hipBLAS ( #438 )
2023-09-16 10:13:44 +08:00
Cebtenzzre
e6616cf0db
examples : add compiler version and target to build info ( #2998 )
2023-09-15 16:59:49 -04:00
Cebtenzzre
3aefaab9e5
check C++ code with -Wmissing-declarations ( #3184 )
2023-09-15 15:38:27 -04:00
Cebtenzzre
69eb67e282
fix build numbers by setting fetch-depth=0 ( #3197 )
2023-09-15 15:18:15 -04:00
Meng Zhang
4fe09dfe66
llama : add support for StarCoder model architectures ( #3187 )
* add placeholder of starcoder in gguf / llama.cpp
* support convert starcoder weights to gguf
* convert MQA to MHA
* fix ffn_down name
* add LLM_ARCH_STARCODER to llama.cpp
* set head_count_kv = 1
* load starcoder weight
* add max_position_embeddings
* set n_positions to max_position_embeddings
* properly load all starcoder params
* fix head count kv
* fix comments
* fix vram calculation for starcoder
* store mqa directly
* add input embeddings handling
* add TBD
* working in cpu, metal buggy
* cleanup useless code
* metal : fix out-of-bounds access in soft_max kernels
* llama : make starcoder graph build more consistent with others
* refactor: cleanup comments a bit
* add other starcoder models: 3B, 7B, 15B
* support-mqa-directly
* fix: remove max_position_embeddings, use n_train_ctx
* Update llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* fix: switch to space from tab
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-15 22:02:13 +03:00
Cebtenzzre
80291a1d02
common : do not use GNU zero-length __VA_ARGS__ extension ( #3195 )
2023-09-15 21:02:01 +03:00
Georgi Gerganov
c6f1491da0
metal : fix bug in soft_max kernels (out-of-bounds access) ( #3194 )
2023-09-15 20:17:24 +03:00
Cebtenzzre
e3d87a6c36
convert : make ftype optional in simple scripts ( #3185 )
2023-09-15 12:29:02 -04:00
Georgi Gerganov
8c00b7a6ff
sync : ggml (Metal F32 support + reduce ggml-alloc size) ( #3192 )
* sync : ggml (Metal F32 support + reduce ggml-alloc size)
ggml-ci
* llama-bench : fix ggml_cpu_has_metal() duplicate function
ggml-ci
2023-09-15 19:06:03 +03:00
Concedo
63fcbbb3f1
Change label to avoid confusion - rocm hipblas users should obtain binaries from yellowrosecx fork. The rocm support in this repo requires self-compilation
2023-09-16 00:04:11 +08:00
Concedo
8b8eb18567
Merge branch 'master' into concedo_experimental
# Conflicts:
# .github/workflows/build.yml
# .github/workflows/docker.yml
# CMakeLists.txt
# Makefile
# README.md
# flake.nix
# tests/CMakeLists.txt
2023-09-15 23:51:18 +08:00
Engininja2
7e50d34be6
cmake : fix building shared libs for clang (rocm) on windows ( #3176 )
2023-09-15 15:24:30 +03:00
Concedo
6b7af55db5
updated lite
2023-09-15 16:44:13 +08:00
Evgeny Kurnevsky
235f7c193b
flake : use pkg-config instead of pkgconfig ( #3188 )
pkgconfig is an alias, it got removed from nixpkgs:
295a5e1e2b/pkgs/top-level/aliases.nix (L1408)
2023-09-15 11:10:22 +03:00
Georgi Gerganov
a51b687657
metal : relax conditions on fast matrix multiplication kernel ( #3168 )
* metal : relax conditions on fast matrix multiplication kernel
* metal : revert the concurrency change because it was wrong
* llama : remove experimental stuff
2023-09-15 11:09:24 +03:00
Andrei
76164fe2e6
cmake : fix llama.h location when built outside of root directory ( #3179 )
2023-09-15 11:07:40 +03:00
Ali Tariq
c2ab6fe661
ci : Cloud-V for RISC-V builds ( #3160 )
* Added Cloud-V File
* Replaced Makefile with original one
---------
Co-authored-by: moiz.hussain <moiz.hussain@10xengineers.ai>
2023-09-15 11:06:56 +03:00
Roland
2d770505a8
llama : remove mtest ( #3177 )
* Remove mtest
* remove from common/common.h and examples/main/main.cpp
2023-09-15 10:28:45 +03:00
Cebtenzzre
98311c4277
llama : make quantize example up to 2.7x faster ( #3115 )
2023-09-14 21:09:53 -04:00
jneem
feea179e9f
flake : allow $out/include to already exist ( #3175 )
2023-09-14 21:54:47 +03:00
Andrei
769266a543
cmake : compile ggml-rocm with -fpic when building shared library ( #3158 )
2023-09-14 20:38:16 +03:00
Asbjørn Olling
cf8238e7f4
flake : include llama.h in nix output ( #3159 )
2023-09-14 20:25:00 +03:00
Cebtenzzre
4b8560e72a
make : fix clang++ detection, move some definitions to CPPFLAGS ( #3155 )
* make : fix clang++ detection
* make : fix compiler definitions outside of CPPFLAGS
2023-09-14 20:22:47 +03:00
Alon
83a53b753a
CI: add FreeBSD & simplify CUDA windows ( #3053 )
* add freebsd to ci
* bump actions/checkout to v3
* bump cuda 12.1.0 -> 12.2.0
* bump Jimver/cuda-toolkit version
* unify and simplify "Copy and pack Cuda runtime"
* install only necessary cuda sub packages
2023-09-14 19:21:25 +02:00
akawrykow
5c872dbca2
falcon : use stated vocab size ( #2914 )
2023-09-14 20:19:42 +03:00
bandoti
990a5e226a
cmake : add relocatable Llama package ( #2960 )
* Keep static libs and headers with install
* Add logic to generate Config package
* Use proper build info
* Add llama as import library
* Prefix target with package name
* Add example project using CMake package
* Update README
* Update README
* Remove trailing whitespace
2023-09-14 20:04:40 +03:00
dylan
980ab41afb
docker : add gpu image CI builds ( #3103 )
Enables the GPU enabled container images to be built and pushed
alongside the CPU containers.
Co-authored-by: canardleteer <eris.has.a.dad+github@gmail.com>
2023-09-14 19:47:00 +03:00
Kerfuffle
e394084166
gguf-py : support identity operation in TensorNameMap ( #3095 )
Make try_suffixes keyword param optional.
2023-09-14 19:32:26 +03:00
jameswu2014
4c8643dd6e
feature : support Baichuan series models ( #3009 )
2023-09-14 12:32:10 -04:00
Leng Yue
35f73049af
speculative : add heuristic algorithm ( #3006 )
* Add heuristic algo for speculative
* Constrain minimum n_draft to 2
* speculative : improve heuristic impl
* speculative : be more rewarding upon guessing max drafted tokens
* speculative : fix typos
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-14 19:14:44 +03:00
Concedo
4d3a64fbb2
add endpoint to fetch true max context
2023-09-14 23:27:12 +08:00
goerch
71ca2fad7d
whisper : tokenizer fix + re-enable tokenizer test for LLaMa ( #3096 )
* Fix for #2721
* Reenable tokenizer test for LLaMa
* Add `console.cpp` dependency
* Fix dependency to `common`
* Fixing wrong fix.
* Make console usage platform specific
Work on compiler warnings.
* Adapting makefile
* Remove trailing whitespace
* Adapting the other parts of the makefile
* Fix typo.
2023-09-13 16:19:44 +03:00
Tristan Ross
1b6c650d16
cmake : add a compiler flag check for FP16 format ( #3086 )
2023-09-13 16:08:52 +03:00
Concedo
3d50c6fe0b
only add dll directory on windows
2023-09-13 18:45:54 +08:00
Concedo
1f20479af3
updated cmake
2023-09-13 17:48:54 +08:00
Johannes Gäßler
0a5eebb45d
CUDA: mul_mat_q RDNA2 tunings ( #2910 )
* CUDA: mul_mat_q RDNA2 tunings
* Update ggml-cuda.cu
Co-authored-by: Henri Vasserman <henv@hot.ee>
---------
Co-authored-by: Henri Vasserman <henv@hot.ee>
2023-09-13 11:20:24 +02:00
FK
84e723653c
speculative: add --n-gpu-layers-draft option ( #3063 )
2023-09-13 08:50:46 +02:00
Concedo
8f8a530b83
add additional paths to look for DLLs inside
2023-09-13 14:30:13 +08:00
Eric Sommerlade
b52b29ab9d
arm64 support for windows ( #3007 )
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-09-12 21:54:20 -04:00
Johannes Gäßler
4f7cd6ba9c
CUDA: fix LoRAs ( #3130 )
2023-09-13 00:15:33 +02:00
Concedo
d9f5793d22
Merge remote-tracking branch 'rd/concedo' into concedo_experimental
2023-09-12 20:17:02 +08:00
Concedo
ece4fda9c6
Merge branch 'master' into concedo_experimental
# Conflicts:
# .clang-tidy
# .github/workflows/build.yml
# CMakeLists.txt
# Makefile
# README.md
# flake.nix
# tests/test-quantize-perf.cpp
2023-09-12 20:14:51 +08:00
Concedo
78b2602844
cuda sources (+1 squashed commit)
Squashed commits:
[d3aedc03] add source universally
2023-09-12 19:52:46 +08:00
Concedo
74384cfbb5
added onready argument to execute a command after load is done
2023-09-12 17:10:52 +08:00
Johannes Gäßler
89e89599fd
CUDA: fix mul_mat_q not used for output tensor ( #3127 )
2023-09-11 22:58:41 +02:00
Johannes Gäßler
d54a4027a6
CUDA: lower GPU latency + fix Windows performance ( #3110 )
2023-09-11 19:55:51 +02:00
Jhen-Jie Hong
1b0d09259e
cmake : support build for iOS/tvOS ( #3116 )
* cmake : support build for iOS/tvOS
* ci : add iOS/tvOS build into macOS-latest-cmake
* ci : split ios/tvos jobs
2023-09-11 19:49:06 +08:00
Johannes Gäßler
8a4ca9af56
CUDA: add device number to error messages ( #3112 )
2023-09-11 13:00:24 +02:00