Commit graph

2578 commits

Author SHA1 Message Date
Xuan Son Nguyen
6c32d8c7ad
llama : refactor internal quantization functions (#5830) 2024-03-02 16:19:09 +02:00
compilade
802da0091b
llama : fix segfault from unknown model arch name (#5820)
* llama : fix segfault from unknown model arch name

* llama : make all LLM maps const

This also requires using `std::map::at` instead of its `operator[]`,
which does not exist for const maps.
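
A minimal illustration of that constraint (the map and keys here are illustrative stand-ins, not the actual llama.cpp tables):

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Illustrative stand-in for a const lookup table like LLM_ARCH_NAMES.
static const std::map<int, std::string> ARCH_NAMES = {
    {0, "llama"},
    {1, "falcon"},
};

std::string arch_name(int arch) {
    // ARCH_NAMES[arch] would not compile: operator[] is non-const, since it
    // inserts a default-constructed value when the key is missing.
    // at() works on const maps and throws std::out_of_range instead.
    try {
        return ARCH_NAMES.at(arch);
    } catch (const std::out_of_range &) {
        return "(unknown)"; // the fallback adopted in this PR
    }
}
```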

* llama : name LLM_ARCH_UNKNOWN to "(unknown)"

This avoids errors from `std::map::at` when
getting the general name of the model architecture.
Using "(unknown)" instead of an empty string as per suggestion
https://github.com/ggerganov/llama.cpp/pull/5820#issuecomment-1973735284

* llama : remove redundant inner const for LLM_TENSOR_NAMES

The extra const won't do anything here as const maps
return const references to values.

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* llama : remove redundant nullptr check in llm_arch_from_string

Since LLM_ARCH_NAMES is a const map, no spurious elements
with a NULL name are inserted anymore, so this check is dead code.

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2024-03-02 15:42:56 +02:00
Neo Zhang Jianyu
715641391d
Support multiple GPUs (split mode) on SYCL backend (#5806)
* support multiple cards: split-mode - layer|row (see the sketch below)

* rm warning

* rebase onto master, support two new OPs, disable the -sm=row feature, fix unit test

* update news

* fix merge error

* update according to review comments
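
For reference, split mode is also selectable programmatically; a minimal sketch assuming the `llama_model_params.split_mode` field and the `LLAMA_SPLIT_LAYER`/`LLAMA_SPLIT_ROW` enum spellings of this era's `llama.h` (verify against your header):

```cpp
#include "llama.h"

int main(int argc, char ** argv) {
    (void) argc;
    llama_model_params mparams = llama_model_default_params();

    // Offload all layers and split the model across the visible devices;
    // -sm layer corresponds to LLAMA_SPLIT_LAYER, -sm row to LLAMA_SPLIT_ROW
    // (enum names are an assumption; check llama.h for your revision).
    mparams.n_gpu_layers = 99;
    mparams.split_mode   = LLAMA_SPLIT_LAYER;

    llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (model == NULL) {
        return 1;
    }
    llama_free_model(model);
    return 0;
}
```
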
2024-03-02 19:49:30 +08:00
crasm
9bf297a02b
workflows : remove nocleanup arg for check-requirements.sh (#5826)
Reduces peak tmpfs usage and should prevent the check from failing due to
running out of space.

Fixes the 'No space left on device' issue mentioned in #5703.
2024-03-02 00:11:06 -05:00
Tushar
cb5e8f7fc4
build(nix): Introduce flake.formatter for nix fmt (#5687)
* build(nix): Introduce flake.formatter for `nix fmt`
* chore: Switch to pkgs.nixfmt-rfc-style
2024-03-01 15:18:26 -08:00
nold
da3b9ba2b7
convert-hf-to-gguf : require einops for InternLM2ForCausalLM (#5792) 2024-03-01 16:51:12 -05:00
Sourab Mangrulkar
c29af7e225
llama : add StarCoder2 support (#5795)
* Add support for starcoder2

* handle rope type

* skip serializing rope freq and rotary embeddings

* resolve comments

* Update llama.cpp

* remove redundant changes

* handle `rope-theta`

* llama : change starcoder2 rope type

* address comment

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-03-01 21:30:46 +02:00
ochafik
5827ff401b json: add typings 2024-03-01 19:10:29 +00:00
ochafik
c5bc1540d8 json: spaces in output and unrestricted output spaces 2024-03-01 19:10:04 +00:00
ochafik
ed24688af8 json: temp fix for escapes 2024-03-01 19:08:59 +00:00
ochafik
148555cebc json: fix merge 2024-03-01 19:08:44 +00:00
ochafik
bc0e0d9f55 json: support any ({} or {type: object}) 2024-03-01 19:06:19 +00:00
ochafik
f6f851ba87 json: support allOf + nested anyOf 2024-03-01 19:05:38 +00:00
ochafik
82ade9f558 json: support union types (mostly for nullable types I think) 2024-03-01 19:04:42 +00:00
ochafik
ea4244e1f7 json: fix $ref resolution 2024-03-01 19:04:04 +00:00
ochafik
12f0d7e84b json: resolve $ref (and support https schema urls) 2024-03-01 19:03:23 +00:00
ochafik
1428a85ff2 json: add support for pattern 2024-03-01 19:00:47 +00:00
Georgi Gerganov
38d16b1426
server : remove api_like_OAI.py proxy script (#5808) 2024-03-01 20:00:58 +02:00
ddpasa
c2224f003b
ggml-vulkan: fix VULKAN_CHECK_RESULTS flag, which was previously broken (#5813) 2024-03-01 18:00:00 +01:00
ochafik
1daaf30bde json: support required / optional properties 2024-03-01 14:15:27 +00:00
ochafik
3c339ce34a json: support additionalProperties ({[k: string]: [string,number][]}) 2024-03-01 14:14:45 +00:00
ochafik
2d9580a37b json: support tuple types ([number, string]) 2024-03-01 14:12:46 +00:00
ochafik
09248e0897 json: fix arrays (disallow [,1]) 2024-03-01 14:11:13 +00:00
kunal-vaishnavi
e743386728
gemma : fix bfloat16 -> float16 conversion issue (#5810) 2024-03-01 16:08:08 +02:00
Miwa / Ensan
f49a535686
common : fix flag --logits-all to --all-logits (#5805) 2024-03-01 15:48:56 +02:00
Pierrick Hymbert
3ab8b3a92e
llama : cleanup unused mmq flags (#5772)
* cleanup unused --no-mul-mat-q, -nommq, -mmq, --mul-mat-q, mul_mat_q

* remove mul_mat_q from compare-llama-bench and usage

* update llama-bench

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-03-01 13:39:06 +02:00
Douglas Hanley
9600d59e01
unicode : switch to multimap-based nfd_map (#5799)
* switch to a multimap-based nfd_map due to compile-time issues (see the sketch below)

* simplify multimap keys

* don't construct a new locale every time
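
A sketch of the multimap shape (the entries here are illustrative samples, not the real table): each codepoint that decomposes into N codepoints appears N times under the same key, which keeps the table a flat initializer list instead of one huge map of vectors.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Illustrative sample entries; the real nfd_map covers far more codepoints.
static const std::multimap<uint32_t, uint32_t> nfd_map = {
    {0x00C0, 0x0041}, {0x00C0, 0x0300}, // À -> A + combining grave (sample)
    {0x00E9, 0x0065}, {0x00E9, 0x0301}, // é -> e + combining acute (sample)
};

static std::vector<uint32_t> nfd_decompose(uint32_t cp) {
    std::vector<uint32_t> out;
    auto range = nfd_map.equal_range(cp);
    for (auto it = range.first; it != range.second; ++it) {
        out.push_back(it->second);
    }
    if (out.empty()) {
        out.push_back(cp); // no decomposition entry: codepoint maps to itself
    }
    return out;
}
```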
2024-03-01 11:15:36 +02:00
Pierrick Hymbert
5cb02b4a01
server: allow overriding the server thread pool size with --threads-http (#5794) 2024-03-01 10:08:08 +01:00
Eve
6ea0f010ff
ci : add Ubuntu 22 Vulkan CI run (#5789) 2024-03-01 10:54:53 +02:00
Georgi Gerganov
f105471ef6
server : fix newlines in help (#5785) 2024-03-01 09:59:43 +02:00
AidanBeltonS
38d1521608
[SYCL] Use batched mul_mat pathway (#5591)
* Use batched mul_mat pathway

* rm extra line

* Explicitly state scaled data type

---------

Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-03-01 13:06:47 +05:30
Xuan Son Nguyen
052051d8ae
Server: normalize naming (#5779)
* server: normalize naming

* fix spacing
2024-02-29 21:42:11 +01:00
Marcus Dunn
d5ab29757e
llama : constified llama_set_state_data's src (#5774) 2024-02-29 10:17:23 +02:00
Georgi Gerganov
87c91c0766
ci : reduce 3b ppl chunks to 1 to avoid timeout (#5771)
ggml-ci
2024-02-28 21:44:21 +02:00
Eve
317709b2a8
make portability_enumeration_ext Apple-only (#5757) 2024-02-28 20:33:37 +01:00
Georgi Gerganov
08c5ee87e4
llama : remove deprecated API (#5770)
ggml-ci
2024-02-28 18:43:38 +02:00
Georgi Gerganov
78aacf3634
awq-py : remove (#5768) 2024-02-28 17:36:53 +02:00
Georgi Gerganov
8c0e8f4e73
sync : ggml 2024-02-28 11:17:32 +02:00
slaren
2774b0c974
add google magika inference example (ggml/748)
* add magika inference example

* ggml : fix unaligned accesses in custom ops

* ggml : fix FP32 GELU for values that exceed the FP16 range (see the sketch below)

* use ggml_pool_1d

* add README

* Update README.md

* pad inputs if the files are too small

* cleanup

ggml-ci
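
A hedged sketch of the GELU fix from the second bullet, as I read it: ggml's FP32 GELU goes through an FP16-indexed lookup table, which cannot represent inputs outside the FP16 range, so out-of-range values need a direct computation (table details elided; the constants match the usual tanh approximation):

```cpp
#include <math.h>

// tanh-approximation GELU, computed directly in FP32
static float gelu_f32(float x) {
    const float GELU_COEF_A    = 0.044715f;
    const float SQRT_2_OVER_PI = 0.79788456080286535587989211986876f;
    return 0.5f*x*(1.0f + tanhf(SQRT_2_OVER_PI*x*(1.0f + GELU_COEF_A*x*x)));
}

// Sketch of the fix: an FP16-indexed lookup table cannot represent inputs
// outside the FP16 range (|x| > ~65504), so fall back to the direct FP32
// computation for those values instead of reading a garbage table slot.
static float gelu_lookup_or_direct(float x) {
    if (fabsf(x) > 65504.0f) { // FP16 max; outside the table's range
        return gelu_f32(x);
    }
    // ... table path elided: convert x to FP16, index the precomputed table ...
    return gelu_f32(x); // stand-in for the table read in this sketch
}
```
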
2024-02-28 11:17:06 +02:00
UEXTM.com
5f70671856
Introduce backend GUIDs (ggml/743)
* Introduce backend GUIDs

Initial proposed implementation of backend GUIDs
(Discussed in https://github.com/ggerganov/ggml/pull/741)

Hardcoded CPU backend GUID (for now)
Change ggml_backend_is_cpu logic to use GUID (see the sketch below)

* Remove redundant functions

Remove redundant functions `ggml_backend_i::get_name` and `ggml_backend_guid`, which are not desired for future expansion

* Add spaces to match style

Co-authored-by: slaren <slarengh@gmail.com>

* Fix brace style to match

Co-authored-by: slaren <slarengh@gmail.com>

* Add void to () in function signature

Co-authored-by: slaren <slarengh@gmail.com>

* Add back ggml_backend_guid and make CPU_GUID a local static in ggml_backend_cpu_guid

* add guids to all backends

ggml-ci

---------

Co-authored-by: slaren <slarengh@gmail.com>
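
The resulting identity check, as a sketch (type shapes follow the commit's description; the byte values are placeholders, not ggml's real CPU GUID):

```cpp
#include <cstdint>
#include <cstring>

// 128-bit backend identifier (mirrors ggml's ggml_guid / ggml_guid_t).
typedef uint8_t ggml_guid[16];
typedef ggml_guid * ggml_guid_t;

static bool ggml_guid_matches(ggml_guid_t a, ggml_guid_t b) {
    return memcmp(a, b, sizeof(ggml_guid)) == 0;
}

// The CPU GUID lives as a local static inside an accessor, as the commit
// describes (these byte values are placeholders).
static ggml_guid_t ggml_backend_cpu_guid(void) {
    static ggml_guid guid = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
                              0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
    return &guid;
}

// ggml_backend_is_cpu then compares GUIDs instead of string names, roughly:
//   return backend != NULL &&
//          ggml_guid_matches(backend->guid, ggml_backend_cpu_guid());
```
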
2024-02-28 11:17:05 +02:00
Xuan Son Nguyen
a693bea1e6
server : hit Ctrl+C twice to exit (#5734)
* server: press Ctrl+C twice to exit (pattern sketched below)

* std::atomic_flag

* sigint: message

* sigint: stderr

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
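
A sketch of the double-SIGINT pattern these bullets describe (simplified; the graceful-shutdown hook is elided):

```cpp
#include <atomic>
#include <csignal>
#include <cstdio>
#include <cstdlib>

// First SIGINT requests a clean shutdown; the second one forces exit.
static std::atomic_flag is_terminating = ATOMIC_FLAG_INIT;

static void sigint_handler(int signo) {
    if (is_terminating.test_and_set()) {
        // second Ctrl+C: give up on graceful shutdown
        fprintf(stderr, "Received second interrupt, terminating immediately.\n");
        exit(1);
    }
    fprintf(stderr, "Received signal %d, shutting down gracefully. "
                    "Press Ctrl+C again to force exit.\n", signo);
    // ... signal the server loop to stop ...
}

int main() {
    signal(SIGINT, sigint_handler);
    // ... run the server loop ...
    return 0;
}
```
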
2024-02-28 10:55:37 +02:00
compilade
adcb12a9ba
llama : fix non-quantization of expert gating tensors (#5754)
This reverts a single line from #5475
2024-02-28 10:52:56 +02:00
Douglas Hanley
177628bfd8
llama : improve BERT tokenization (#5740)
* implement nfd for stripping accents in wpm tokenizer

* sort nfd map; reuse iterator

* use builtin tolower

* add locale include

* Simplify to_lower cases

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

---------

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2024-02-28 10:51:11 +02:00
Daniel Bevenius
6c4416868d
readme : add link to LLaVA 1.6 models (#5758)
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
2024-02-28 10:39:39 +02:00
Jorge A
efc72253f7
server : add "/chat/completions" alias for "/v1/..." (#5722)
* Add "/chat/completions" as alias for "/v1/chat/completions"

* merge to upstream master

* minor : fix trailing whitespace

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-28 10:39:15 +02:00
Kawrakow
7c4263d426
ggml : make i-quants work with super-blocks of 64 (CPU,Metal) (#5760)
* WIP: make i-quants work for QK_K = 64

* iq2_xs: attempt to fix AVX dot product for QK_K = 64

Tests pass, but I get gibberish.

* QK_K = 64 tests pass on ARM_NEON and Metal

Sadly, that does not mean it actually works.

* Make CUDA compile with QK_K = 64

Tests don't pass, plus we get misaligned access

* Q2_K: fixed bug in imatrix quantization for QK_K = 64

* iq1_s: turn off SIMD implementation for QK_K = 64 (it does not work)

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-02-28 10:37:02 +02:00
Kawrakow
cb49e0f8c9
Attempt to fix Android build (#5752)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-02-27 19:16:49 +02:00
Kawrakow
0becb22ac0
IQ4_XS: a 4.25 bpw quantization (#5747)
* Try IQ4_NL with blocks of 64 - does not look good

* iq4_xs: go to super-blocks of 256 and 6-bit scales for blocks of 32 (bit accounting sketched below)

* iq4_xs: CUDA works - 133.2 t/s

* iq4_xs: AVX2 dot product

* iq4_xs: ARM_NEON dot product

* iq4_nl: Metal implementation

As usual, Metal / Apple Silicon don't like my quants.

* iq3_xs: minor fix

* iq4_xs: shrink by using IQ3_S for attn_k and attn_q

* iq4_xs: revert using IQ3_S for attn_k and attn_v

PPL vs size is good, but CPU performance suffers: on M2 Max
TG-128 drops to 21.7 t/s from 28.8, and on a Ryzen-7950X
to 14.5 t/s from 15.8 t/s. On CUDA we have 135 t/s when
using IQ3_S vs 133 t/s with pure IQ4_XS.

* Fix CI

* iq4_xs: Added forgotten check for 256 divisibility

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
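
Where the 4.25 bpw figure comes from, as a back-of-envelope over the layout the bullets describe (the struct and field names are my assumption, not necessarily the exact ggml definitions):

```cpp
#include <cstdint>

// Sketch of an IQ4_XS super-block of 256 weights (QK_K = 256);
// field names are an assumption based on the commit description.
struct block_iq4_xs_sketch {
    uint16_t d;            // super-block scale (FP16)        -> 16 bits
    uint16_t scales_h;     // high 2 bits of 8 x 6-bit scales -> 16 bits
    uint8_t  scales_l[4];  // low 4 bits of 8 x 6-bit scales  -> 32 bits
    uint8_t  qs[128];      // 256 x 4-bit quants              -> 1024 bits
};
// total: 16 + 16 + 32 + 1024 = 1088 bits for 256 weights
//        1088 / 256 = 4.25 bits per weight
static_assert(sizeof(block_iq4_xs_sketch) * 8 == 1088, "4.25 bpw accounting");
```
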
2024-02-27 16:34:24 +02:00
Engininja2
c24a2a6e60
cuda : replace remaining shfl_xor with calls to warp_reduce functions (#5744) 2024-02-27 14:22:45 +01:00
Engininja2
1f30b7a9f1
ggml-quants : fix avx2 iq1_s vec_dot when compiled with gcc (#5742) 2024-02-27 14:50:18 +02:00