Commit graph

4104 commits

Author SHA1 Message Date
slaren
4e49714d20 Merge remote-tracking branch 'origin/master' into sl/dl-backend 2024-11-14 16:21:00 +01:00
Johannes Gäßler
4a8ccb37ad
CUDA: no -sm row for very small matrices (#10185) 2024-11-14 13:00:15 +01:00
Georgi Gerganov
2a82891a85
speculative : fix out-of-bounds access (#10289) 2024-11-14 11:44:15 +02:00
Jeff Bolz
af148c9386
vulkan: Optimize binary ops (#10270)
Reuse the index calculations across all of src0/src1/dst. Add a shader
variant for when src0/src1 have the same dimensions and the additional
modulus for src1 isn't needed. Div/mod are slow, so add "fast" div/mod
that take a fast path when the calculation isn't needed or can be done
more cheaply.
2024-11-14 06:22:55 +01:00
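For context, a minimal CPU-side sketch of the kind of "fast" div/mod this commit describes — skip the expensive hardware divide when the divisor is 1 (a common case when src0 and src1 share a dimension) or a power of two. Names and fast-path choices here are illustrative, not the shader's actual code:

```cpp
#include <cstdint>

// Precompute divisor properties once, then pick the cheapest path per element.
struct FastDiv {
    uint32_t d;
    bool     is_one;
    bool     is_pow2;
    uint32_t shift; // log2(d) when is_pow2

    explicit FastDiv(uint32_t divisor)
        : d(divisor), is_one(divisor == 1),
          is_pow2((divisor & (divisor - 1)) == 0), shift(0) {
        for (uint32_t v = divisor; v > 1; v >>= 1) shift++;
    }

    uint32_t div(uint32_t x) const {
        if (is_one)  return x;        // fastest path: no work needed
        if (is_pow2) return x >> shift;
        return x / d;                 // slow path: real divide
    }
    uint32_t mod(uint32_t x) const {
        if (is_one)  return 0;
        if (is_pow2) return x & (d - 1);
        return x % d;
    }
};
```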
slaren
dc2313564d Merge remote-tracking branch 'origin/master' into sl/dl-backend 2024-11-14 00:57:44 +01:00
slaren
fc66c4bf6d only use AMX on x86 2024-11-14 00:33:16 +01:00
Jeff Bolz
66798e42fb
vulkan: Use macros to make the mat mul pipeline creation more concise (#10259)
Also add vk_matmul_pipeline2 to hold f16/f32 accumulator versions of a
pipeline. This isn't really used yet.
2024-11-13 21:59:47 +01:00
slaren
e503ad101d fix sanitizers build 2024-11-13 21:31:57 +01:00
slaren
796f05be62 fix editorconfig 2024-11-13 20:34:11 +01:00
slaren
bc4f6cb64d update cuda & musa dockerfiles 2024-11-13 20:33:53 +01:00
slaren
e0b321b89b add missing libraries to ggml cmake install 2024-11-13 20:33:38 +01:00
Diego Devesa
e541f7ffbe
Update cmake/llama-config.cmake.in
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-11-13 19:41:33 +01:00
slaren
125c23575b Merge remote-tracking branch 'origin/master' into sl/dl-backend 2024-11-13 19:09:30 +01:00
R0CKSTAR
4a80bb95e7
ggml: build musa backend library (cmake) (#10280)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2024-11-13 19:07:24 +01:00
Michael Podvitskiy
fb4a0ec083
llama : propagate the results of graph_compute (#9525)
* llama: propagating the results of `graph_compute` to the user interface

* llama: reverting kv_cache in case of failed compute

* llama: `llama_kv_cache_state` was removed, only the result of `llama_graph_compute` is returned

* llama: restore a kv_cache in case of failed computation

* llama: correct reverting of the entire batch.
also updates `llama_kv_cache_find_slot`, will correctly count the number of `used` cells for recurrent models

* llama: updated comments

* llama : add comments about KV cache state after error

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-11-13 20:00:35 +02:00
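The recovery pattern in this commit — snapshot the KV-cache bookkeeping before a graph compute, roll the whole batch back on failure, and propagate the status to the caller — can be sketched as follows. The struct fields and function names are illustrative stand-ins, not llama.cpp's actual types:

```cpp
#include <cstddef>

// Illustrative KV-cache bookkeeping (not llama.cpp's real struct).
struct KVCacheState {
    size_t head = 0;
    size_t used = 0;
};

enum class ComputeStatus { ok, failed };

// Simulated compute step: advances the cache, then reports success/failure.
ComputeStatus graph_compute(KVCacheState & kv, size_t n_tokens, bool fail) {
    kv.head += n_tokens;
    kv.used += n_tokens;
    return fail ? ComputeStatus::failed : ComputeStatus::ok;
}

// Caller-side pattern: save, compute, revert the entire batch on error,
// and return the result to the user instead of swallowing it.
ComputeStatus decode_batch(KVCacheState & kv, size_t n_tokens, bool fail) {
    const KVCacheState saved = kv;          // snapshot before the batch
    const ComputeStatus st = graph_compute(kv, n_tokens, fail);
    if (st != ComputeStatus::ok) {
        kv = saved;                         // restore on failed computation
    }
    return st;
}
```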
Georgi Gerganov
5ea926dad7
sync : ggml 2024-11-13 18:11:54 +02:00
Georgi Gerganov
42eb364db5
metal : fix build and swift package (#10279)
ggml-ci
2024-11-13 15:57:24 +02:00
Small Grass Forest
1ee9eea094
docs : update bindings list (#10261)
Signed-off-by: tianzixuan <tianzixuan335@hellobike.com>
2024-11-13 13:17:10 +02:00
Alexey Parfenov
ff7fb670d0
server : add missing docs (#10269) 2024-11-13 13:16:30 +02:00
Jhen-Jie Hong
0e712a5acb
server : fix incorrect res in validate_model_chat_template (#10272)
* server : fix validate_model_chat_template

* server : fix chat res
2024-11-13 13:15:23 +02:00
Brian
a0ec17b32e
metadata: Detailed Dataset Authorship Metadata (#8875)
The converter script can now read these two fields as detailed base model and dataset sources.
This was done so that it will be easier for Hugging Face to integrate detailed metadata as needed.

 -  base_model_sources (List[dict], optional)
 -  dataset_sources (List[dict], optional)

Dataset now represented as:

 - general.dataset.count
 - general.dataset.{id}.name
 - general.dataset.{id}.author
 - general.dataset.{id}.version
 - general.dataset.{id}.organization
 - general.dataset.{id}.description
 - general.dataset.{id}.url
 - general.dataset.{id}.doi
 - general.dataset.{id}.uuid
 - general.dataset.{id}.repo_url

This also adds to base model these metadata:

 - general.base_model.{id}.description
2024-11-13 21:10:38 +11:00
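As an illustration of the `{id}` indexing above (all values here are invented, not from any real model), a GGUF file with a single dataset source would carry KV pairs along these lines:

```
general.dataset.count      = 1
general.dataset.0.name     = "Example Instruct Mix"
general.dataset.0.author   = "Example Lab"
general.dataset.0.version  = "1.0"
general.dataset.0.repo_url = "https://example.com/example-lab/instruct-mix"
```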
Alberto Cabrera Pérez
2e82ffa4af
sycl : Fixes to broken builds and test-backend-ops (#10257)
* Fixes broken build for the SYCL CUDA backend caused by a non-explicit gemm call in outprod (merged in with RWKV6 in
"Optimize RWKV6 Operator Naming and Implement Multi-core CPU/SYCL Acceleration" #10133)

* Marks permuted MUL_MAT as unsupported to be able to run test-backend-ops

* Fixes asserts in norm to fix debug builds.
2024-11-13 09:40:57 +00:00
Jeff Bolz
80dd7ff22f
vulkan: Optimize contiguous copies (#10254)
* tests: Fix memory bandwidth calculation for perf tests

Add a flops calculation for flash attention.

Add one GGML_OP_CPY perf test.

* vulkan: Optimize contiguous copies

Add a variant of the copy shader for when the tensors are contiguous. Avoid
the complex addressing calculations, and do four elements per invocation
to hide some other overhead.

Apply similar changes to the scale shader, since scale is always contiguous.

Add a "progress bar" for shader compiles.
2024-11-13 07:58:57 +01:00
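A CPU-side sketch of the contiguous-copy idea in this commit: when src and dst are contiguous, skip the per-element multi-dimensional index math and have each "invocation" (thread) copy a fixed block of four elements with plain linear addressing. The function name and the per-invocation count are illustrative, not the shader's actual code:

```cpp
#include <cstddef>

constexpr size_t ELEMS_PER_INVOCATION = 4;

void copy_contiguous(const float * src, float * dst, size_t n, size_t invocation_id) {
    const size_t base = invocation_id * ELEMS_PER_INVOCATION;
    for (size_t i = 0; i < ELEMS_PER_INVOCATION; ++i) {
        const size_t idx = base + i;
        if (idx < n) {               // guard the tail of the tensor
            dst[idx] = src[idx];     // linear addressing, no div/mod per element
        }
    }
}
```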
slaren
dd49f08852 fixes 2024-11-13 01:01:40 +01:00
slaren
d5dd7ed7ee metal install fix 2024-11-12 23:34:50 +01:00
slaren
307ef9a588 update Makefile 2024-11-12 23:07:50 +01:00
slaren
dddf3771c2 use reference quantization fns in AMX until moved to CPU backend
ggml-ci
2024-11-12 22:06:00 +01:00
slaren
5cfaecd34c remove remaining GGML_EXTRA_* and GGML_CDEF_* uses
ggml-ci
2024-11-12 21:39:00 +01:00
slaren
c8da7d0f70 add hip 2024-11-12 20:52:46 +01:00
slaren
710822f32e add amx, cann, sycl 2024-11-12 19:40:25 +01:00
slaren
646e91a642 add vulkan and kompute 2024-11-12 18:44:18 +01:00
slaren
efdd713023 more build fixes 2024-11-12 13:56:28 +01:00
slaren
ab26fb9005 build fixes 2024-11-12 02:32:22 +01:00
slaren
bf79cb3972 add rpc backend 2024-11-11 23:55:27 +01:00
slaren
8768c7c45a fix tests and examples 2024-11-11 23:44:27 +01:00
slaren
4428593487 Merge remote-tracking branch 'origin/master' into sl/dl-backend 2024-11-11 23:20:55 +01:00
slaren
a30f0b2c23 ggml : build backends as libraries 2024-11-11 21:18:27 +01:00
Jeff Bolz
54ef9cfc72
vulkan: Throttle the number of shader compiles during the build step. (#10222)
Fixes #9582

Spawning too many concurrent copies of glslc leads to "Failed to create pipes"
errors on Linux. This change applies the same throttling we use for
multithreaded pipeline creation.
2024-11-11 18:13:51 +01:00
Georgi Gerganov
b0cefea58a
metal : more precise Q*K in FA vec kernel (#10247) 2024-11-11 08:39:13 +02:00
Georgi Gerganov
b141e5f6ef
server : enable KV cache defrag by default (#10233)
ggml-ci
2024-11-11 08:38:43 +02:00
Georgi Gerganov
4b3a9212b6
flake.lock: Update (#10243)
Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/807e9154dcb16384b1b765ebe9cd2bba2ac287fd?narHash=sha256-l253w0XMT8nWHGXuXqyiIC/bMvh1VRszGXgdpQlfhvU%3D' (2024-10-29)
  → 'github:NixOS/nixpkgs/4aa36568d413aca0ea84a1684d2d46f55dbabad7?narHash=sha256-Zwl8YgTVJTEum%2BL%2B0zVAWvXAGbWAuXHax3KzuejaDyo%3D' (2024-11-05)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-10 11:45:25 -08:00
MaggotHATE
505f33274d
server : (web UI) Add back sampler settings (#10239)
* Add back samplers to server

* Added tooltips with basic information

* Fixed stretching of input fields.

* use component for settings input, move help msg to tooltips

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-11-10 15:42:25 -04:00
Jeff Bolz
160687b3ed
vulkan: Fix newly added tests for permuted mul_mat and 1D im2col (#10226) 2024-11-10 12:37:56 +01:00
Georgi Gerganov
6423c65aa8
metal : reorder write loop in mul mat kernel + style (#10231)
* metal : reorder write loop

* metal : int -> short, style

ggml-ci
2024-11-09 11:53:13 +02:00
Georgi Gerganov
39a334a9aa
metal : fix build and some more comments (#10229) 2024-11-09 11:53:02 +02:00
Georgi Gerganov
bb38cdd8ba
metal : fix F32 accumulation in FA vec kernel (#10232) 2024-11-09 11:52:45 +02:00
Georgi Gerganov
f018acba22
llama : fix Qwen model type strings 2024-11-09 11:26:34 +02:00
Georgi Gerganov
46323fa9ef
metal : hide debug messages from normal log 2024-11-09 11:21:49 +02:00
SXX
5b359bb1e3
ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL operator when ‘ne’ is small (#10213) 2024-11-09 08:35:46 +01:00
amritahs-ibm
e89213492d
ggml : optimize llamafile cpu matrix multiplication for ppc64le (#10156)
This change upstreams llamafile's cpu matrix
multiplication kernels for ppc64le using MMA
builtins for FP32 datatype.

This change results in a consistent 90%
improvement in input processing time, and 20%
to 80% improvement in output processing time,
across various batch sizes.

The patch is tested with Meta-Llama-3-8B,
Mistral-7B, Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <amritahs@linux.vnet.ibm.com>
2024-11-09 09:17:50 +02:00
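A portable sketch of the idea behind MMA-style kernels like these: accumulate a small output tile (here 4x4) in registers while streaming through K, instead of computing one output element at a time. The real patch uses POWER10 MMA builtins; this plain-C++ version only illustrates the tiling, and assumes M and N are multiples of 4:

```cpp
// C[M x N] = A[M x K] * B[K x N], all row-major FP32.
void gemm_tile4x4(const float * A, const float * B, float * C,
                  int M, int N, int K) {
    for (int i = 0; i < M; i += 4) {
        for (int j = 0; j < N; j += 4) {
            float acc[4][4] = {};               // output tile kept in registers
            for (int k = 0; k < K; ++k) {       // stream through the K dimension
                for (int ii = 0; ii < 4; ++ii)
                    for (int jj = 0; jj < 4; ++jj)
                        acc[ii][jj] += A[(i + ii) * K + k] * B[k * N + (j + jj)];
            }
            for (int ii = 0; ii < 4; ++ii)      // write the finished tile back
                for (int jj = 0; jj < 4; ++jj)
                    C[(i + ii) * N + (j + jj)] = acc[ii][jj];
        }
    }
}
```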