Minsoo Cheong
056bdb3029
add PR link to README
2024-03-04 15:07:40 +09:00
Minsoo Cheong
67ad517e11
remove malloc code by utilizing vectors
2024-03-04 14:55:35 +09:00
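The pattern applied here, as a minimal sketch (function and variable names are illustrative, not the actual speculative.cpp code): a std::vector owns its storage, so every early-return path frees it automatically and the explicit malloc/free pair disappears.

```cpp
#include <vector>

// Before (sketch): manual allocation that must be freed on every exit path.
//   float * probs = (float *) malloc(n_vocab * sizeof(float));
//   ...
//   free(probs);
void process_probs(int n_vocab) {
    std::vector<float> probs(n_vocab, 0.0f); // zero-initialized, owns its memory
    // use probs.data() wherever a raw float * is expected
} // storage released here automatically; no free() needed
```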
Minsoo Cheong
45465b21d1
check grammar in llama_sample_probability_distribution_impl
2024-03-03 03:09:11 +09:00
Minsoo Cheong
c76135401f
remove warnings from comparison between int and size_t
2024-03-02 16:45:07 +09:00
Minsoo Cheong
7463569cad
fix uniform int distribution initialization
2024-03-01 02:24:55 +09:00
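For context, std::uniform_int_distribution takes inclusive bounds, which makes its initialization an easy place for off-by-one errors. A generic sketch (not the commit's actual code):

```cpp
#include <random>

int main() {
    std::mt19937 rng(1234);
    const int n_seqs = 4;
    // Both bounds are inclusive: this draws an index in [0, n_seqs - 1].
    // Passing n_seqs as the upper bound would occasionally yield an
    // out-of-range index.
    std::uniform_int_distribution<int> dist(0, n_seqs - 1);
    const int idx = dist(rng);
    (void) idx;
    return 0;
}
```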
Minsoo Cheong
c2cd292307
fix bug in active_seqs sync
2024-02-29 16:01:34 +09:00
Minsoo Cheong
2ad3f7c28c
randomly select next sequence to verify + fix bug in memory freeing
2024-02-29 15:47:41 +09:00
Minsoo Cheong
6b35c8b3cf
fix random generation of r
2024-02-29 13:27:29 +09:00
Minsoo Cheong
e4896e71b5
fixes based on review (@JohannesGaessler)
2024-02-29 00:41:31 +09:00
Minsoo Cheong
94f6256fd0
replace use of rand() with mt19937 sampling
2024-02-29 00:26:23 +09:00
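A sketch of the replacement pattern (illustrative, not the commit's exact code): std::mt19937 is a seedable, reproducible generator, and std::discrete_distribution draws an index with probability proportional to the supplied weights, replacing hand-rolled rand()/RAND_MAX arithmetic.

```cpp
#include <random>
#include <vector>

// Draw a token index with probability proportional to probs[i].
// discrete_distribution normalizes the weights internally.
static int sample_index(const std::vector<float> & probs, std::mt19937 & rng) {
    std::discrete_distribution<int> dist(probs.begin(), probs.end());
    return dist(rng);
}
```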
Minsoo Cheong
6afc1f60e1
add srand() in speculative.cpp
2024-02-28 02:26:01 +09:00
Minsoo Cheong
875319b323
remove unused variables
2024-02-27 15:30:52 +09:00
Minsoo Cheong
34b942a429
fix style
2024-02-27 15:29:14 +09:00
Minsoo Cheong
fb18827b4e
remove p_accept parameter
2024-02-27 15:09:12 +09:00
Minsoo Cheong
4694edde14
fix #5657: force greedy sampling with probs when temp is 0
2024-02-22 14:46:19 +09:00
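A hedged sketch of the idea behind this fix (the real code operates on llama_token_data arrays): at temp == 0 the choice must be deterministic, but downstream code may still need probabilities, e.g. for draft acceptance, so compute the softmax anyway and take the argmax instead of sampling.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

static int greedy_with_probs(const std::vector<float> & logits, std::vector<float> & probs) {
    const float max_logit = *std::max_element(logits.begin(), logits.end());
    probs.resize(logits.size());
    float sum = 0.0f;
    for (size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp(logits[i] - max_logit); // stable softmax numerator
        sum += probs[i];
    }
    for (float & p : probs) {
        p /= sum;
    }
    // deterministic pick: the most probable token
    return (int) (std::max_element(probs.begin(), probs.end()) - probs.begin());
}
```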
Minsoo Cheong
a9335a5c2a
sample from residual distribution on draft accept failure
2024-02-22 13:50:30 +09:00
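The correction that keeps stochastic speculative decoding exact: when a draft token is rejected, the replacement is drawn from the residual distribution proportional to max(0, p(x) - q(x)), where p is the target-model distribution and q the draft-model distribution. A sketch under assumed shapes (not the commit's exact code):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// A rejection implies q exceeded p on the drawn token, so p must exceed q
// elsewhere and the residual weights cannot all be zero.
static int sample_residual(const std::vector<float> & p,
                           const std::vector<float> & q,
                           std::mt19937 & rng) {
    std::vector<float> residual(p.size());
    for (size_t i = 0; i < p.size(); ++i) {
        residual[i] = std::max(0.0f, p[i] - q[i]);
    }
    std::discrete_distribution<int> dist(residual.begin(), residual.end());
    return dist(rng); // output is distributed exactly as p, given the acceptance rule
}
```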
Minsoo Cheong
c1bad4a549
(WIP) Implement stochastic speculative decoding
2024-02-21 16:49:27 +09:00
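For reference, the standard stochastic speculative sampling rule this series implements: a draft token x drawn from the draft distribution q is accepted with probability min(1, p(x)/q(x)); on rejection the token is redrawn from the residual distribution:

```latex
P(\text{accept } x) = \min\!\left(1, \frac{p(x)}{q(x)}\right),
\qquad
p'(x) = \frac{\max\bigl(0,\, p(x) - q(x)\bigr)}{\sum_{x'} \max\bigl(0,\, p(x') - q(x')\bigr)}
```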
CJ Pais
6560bed3f0
server : support llava 1.6 (#5553)
* server: init working 1.6
* move clip_image to header
* remove commented code
* remove c++ style from header
* remove todo
* expose llava_image_embed_make_with_clip_img
* fix zig build
2024-02-20 21:07:22 +02:00
slaren
06bf2cf8c4
make : fix debug build with CUDA (#5616)
2024-02-20 20:06:17 +01:00
Daniel Bevenius
4ed8e4fbef
llava : add explicit instructions for llava-1.6 (#5611)
This commit contains a suggestion for the README.md in the llava
example. The suggestion adds explicit instructions for how to convert
a llava-1.6 model and run it using llava-cli.
The motivation for this is that having explicit instructions similar to
the 1.5 instructions will make it easier for users to try this out.
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-02-20 19:30:27 +02:00
Xuan Son Nguyen
9c405c9f9a
Server: use llama_chat_apply_template (#5593)
* server: use llama_chat_apply_template
* server: remove trailing space
* server: fix format_chat
* server: fix help message
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* server: fix formatted_chat
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-20 15:58:27 +01:00
Dane Madsen
5207b3fbc5
readme : update UI list (#5605)
* Add maid to ui list
* Specify licence
2024-02-20 12:00:23 +02:00
Haoxiang Fei
8dbbd75754
metal : add build system support for embedded metal library (#5604)
* add build support for embedded metal library
* Update Makefile
---------
Co-authored-by: Haoxiang Fei <feihaoxiang@idea.edu.cn>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-20 11:58:36 +02:00
Pierrick Hymbert
c0a8c6db37
server : health endpoint configurable failure on no slot (#5594)
2024-02-20 09:48:19 +02:00
AidanBeltonS
b9111bd209
Update ggml_sycl_op_mul_mat_vec_q (#5502)
* Update ggml_sycl_op_mul_mat_vec_q
* Apply suggestions from code review
Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
* revert suggestion on macro
* fix bug
* Add quant type GGML_TYPE_IQ1_S to unsupported
* fix format
---------
Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-02-20 12:31:25 +05:30
Mathijs de Bruin
633782b8d9
nix: now that we can do so, allow MacOS to build Vulkan binaries
Author: Philip Taron <philip.taron@gmail.com>
Date: Tue Feb 13 20:28:02 2024 +0000
2024-02-19 14:49:49 -08:00
0cc4m
22f83f0c38
Enable Vulkan MacOS CI
2024-02-19 14:49:49 -08:00
0cc4m
bb9dcd560a
Refactor validation and enumeration platform checks into functions to clean up ggml_vk_instance_init()
2024-02-19 14:49:49 -08:00
0cc4m
f50db6ae0b
Add check for VK_KHR_portability_enumeration for MoltenVK support
2024-02-19 14:49:49 -08:00
Mathijs de Bruin
d8c054517d
Add preprocessor checks for Apple devices.
Based on work by @rbourgeat in https://github.com/ggerganov/llama.cpp/pull/5322/files
2024-02-19 14:49:49 -08:00
Mathijs de Bruin
42f664a382
Resolve ErrorIncompatibleDriver with Vulkan on MacOS.
Refs:
- https://chat.openai.com/share/7020ce72-65fc-45ec-b7be-9d9d798a5f3f
- https://github.com/SaschaWillems/Vulkan/issues/954
- https://github.com/haasn/libplacebo/issues/128
- https://github.com/KhronosGroup/Vulkan-Samples/issues/476
2024-02-19 14:49:49 -08:00
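The gist of these MoltenVK fixes, as a sketch against the Vulkan API (error handling omitted; real code first checks that the extension is supported, as the portability-enumeration commit above does): MoltenVK exposes the GPU only as a portability implementation, so without this opt-in vkCreateInstance fails with VK_ERROR_INCOMPATIBLE_DRIVER.

```cpp
#include <vulkan/vulkan.h>

VkInstance create_instance_with_portability() {
    const char * exts[] = { VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME };

    VkInstanceCreateInfo info = {};
    info.sType                   = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.flags                   = VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR;
    info.enabledExtensionCount   = 1;
    info.ppEnabledExtensionNames = exts;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&info, nullptr, &instance); // check the VkResult in real code
    return instance;
}
```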
Mathijs de Bruin
5dde540897
Allow for Vulkan build with Accelerate.
Closes #5304
2024-02-19 14:49:49 -08:00
slaren
40c3a6c1e1
cuda : ignore peer access already enabled errors (#5597)
* cuda : ignore peer access already enabled errors
* fix hip
2024-02-19 23:40:26 +01:00
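The pattern behind this fix, sketched against the CUDA runtime API: cudaDeviceEnablePeerAccess returns cudaErrorPeerAccessAlreadyEnabled when the pair was enabled earlier, which is harmless, so it is filtered out and the sticky error state cleared.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Enable peer access from the current device to id_other, treating
// "already enabled" as success rather than a fatal error.
void enable_peer_access(int id_other) {
    cudaError_t err = cudaDeviceEnablePeerAccess(id_other, 0);
    if (err == cudaErrorPeerAccessAlreadyEnabled) {
        cudaGetLastError(); // clear the sticky error state and carry on
    } else if (err != cudaSuccess) {
        fprintf(stderr, "cudaDeviceEnablePeerAccess failed: %s\n",
                cudaGetErrorString(err));
    }
}
```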
Jared Van Bortel
f24ed14ee0
make : pass CPPFLAGS directly to nvcc, not via -Xcompiler (#5598)
2024-02-19 15:54:12 -05:00
nopperl
9d679f0fcc
examples : support minItems/maxItems in JSON grammar converter (#5039)
* support minLength and maxLength in JSON schema grammar converter
* Update examples/json-schema-to-grammar.py
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-19 16:14:07 +02:00
Georgi Gerganov
1387cf60f7
llava : remove extra cont (#5587)
2024-02-19 15:23:17 +02:00
slaren
6fd413791a
llava : replace ggml_cpy with ggml_cont
2024-02-19 15:09:43 +02:00
Georgi Gerganov
337c9cbd52
sync : ggml
ggml-ci
2024-02-19 15:09:43 +02:00
Georgi Gerganov
a3145bdc30
ggml-alloc : apply ggml/731
2024-02-19 15:09:43 +02:00
Didzis Gosko
890559ab28
metal : option to embed MSL source into compiled binary (whisper/1842)
* ggml : embed Metal library source (ggml-metal.metal) into binary
enable by setting WHISPER_EMBED_METAL_LIBRARY
* rename the build option
* rename the preprocessor directive
* generate Metal library embedding assembly on the fly during build process
2024-02-19 15:09:43 +02:00
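The general technique, as a hedged sketch (symbol names here are hypothetical): the build generates a small assembly stub that .incbin's ggml-metal.metal between two labels, and the loader reads the source from those symbols instead of from a file on disk.

```cpp
#include <cstddef>
#include <string>

// Emitted by a generated assembly stub along the lines of:
//   _embedded_metallib_start: .incbin "ggml-metal.metal"
//   _embedded_metallib_end:
extern const char embedded_metallib_start[];
extern const char embedded_metallib_end[];

std::string embedded_metal_source() {
    return std::string(embedded_metallib_start,
                       (size_t) (embedded_metallib_end - embedded_metallib_start));
}
```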
Georgi Gerganov
d0e3ce51f4
ci : enable -Werror for CUDA builds (#5579)
* cmake : pass -Werror through -Xcompiler
ggml-ci
* make, cmake : enable CUDA errors on warnings
ggml-ci
2024-02-19 14:45:41 +02:00
Georgi Gerganov
68a6b98b3c
make : fix CUDA build (#5580)
2024-02-19 13:41:51 +02:00
valiray
70d45af0ef
readme : fix typo in README-sycl.md (#5353)
2024-02-19 12:37:10 +02:00
Abhilash Majumder
13e2c771aa
cmake : remove obsolete sycl compile flags (#5581)
* rm unwanted sycl compile options
* fix bug
* fix bug
* format fix
2024-02-19 11:15:18 +02:00
Georgi Gerganov
f53119cec4
minor : fix trailing whitespace (#5538)
2024-02-19 10:34:10 +02:00
Daniel Bevenius
7084755396
llava : avoid changing the original BakLLaVA model (#5577)
This is a follow-up of commit fc0c8d286a
("llava : update surgery script to not remove tensors") but this time
the change is to the BakLLaVA specific part of the surgery script.
I've been able to test this using SkunkworksAI/BakLLaVA-1 and it works
as expected using the instructions in README.md.
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-02-19 10:31:59 +02:00
NawafAlansari
4480542b22
baby-llama : allocate graphs in ggml_context (#5573)
* Fixed the baby-llama issue (see issue #4830)
* minor : fix whitespaces
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-19 10:25:38 +02:00
Xuan Son Nguyen
11b12de39b
llama : add llama_chat_apply_template() (#5538)
* llama: add llama_chat_apply_template
* test-chat-template: remove redundant vector
* chat_template: do not use std::string for buffer
* add clarification for llama_chat_apply_template
* llama_chat_apply_template: add zephyr template
* llama_chat_apply_template: correct docs
* llama_chat_apply_template: use term "chat" everywhere
* llama_chat_apply_template: change variable name to "tmpl"
2024-02-19 10:23:37 +02:00
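A usage sketch based on the declarations this PR adds to llama.h (treat the exact signature as an assumption): passing tmpl == nullptr selects the template stored in the model's metadata, and the function returns the number of bytes required, so an undersized buffer is grown and the call retried, which is why a resizable char buffer is used rather than std::string.

```cpp
#include <string>
#include <vector>
#include "llama.h"

static std::string format_chat(const llama_model * model,
                               const std::vector<llama_chat_message> & msgs) {
    std::vector<char> buf(1024);
    int32_t n = llama_chat_apply_template(model, nullptr, msgs.data(), msgs.size(),
                                          /*add_ass=*/true, buf.data(), (int32_t) buf.size());
    if (n > (int32_t) buf.size()) {
        buf.resize(n); // the first call reported the required size
        n = llama_chat_apply_template(model, nullptr, msgs.data(), msgs.size(),
                                      /*add_ass=*/true, buf.data(), (int32_t) buf.size());
    }
    return n > 0 ? std::string(buf.data(), n) : std::string();
}
```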
slaren
3a9cb4ca64
cuda, metal : fix nans in soft_max (#5574)
* cuda : fix nans in soft_max
* metal : fix nans in soft_max
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-19 10:04:45 +02:00
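For background, the standard guard against NaNs and overflow in soft_max is to subtract the row maximum before exponentiating; softmax is invariant to shifting its inputs, so the result is unchanged. A generic CPU sketch (the commit fixes the CUDA and Metal kernels):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

void soft_max(std::vector<float> & x) {
    const float max_val = *std::max_element(x.begin(), x.end());
    float sum = 0.0f;
    for (float & v : x) {
        v = std::exp(v - max_val); // never overflows: exponent <= 0
        sum += v;
    }
    for (float & v : x) {
        v /= sum;
    }
}
```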
Mirko185
769a716e30
readme : update (#5572)
Added 1.5-bit to README.md
2024-02-19 09:39:31 +02:00