Commit graph

3104 commits

Author SHA1 Message Date
Yazan Agha-Schrader
9fa0aa53f5 fix chatml & add llama3 format 2024-05-29 11:26:34 +02:00
Yazan Agha-Schrader
5fa255edfb add user message suffix 2024-05-29 10:28:07 +02:00
Yazan Agha-Schrader
eac8d739a5 update forgotten css theme 2024-05-29 08:54:04 +02:00
Akarshan Biswas
0e8d8bfd6c
Add Arc A750 and Arch Linux to readme-sycl.md as verified GPU model and Linux distro (#7605) 2024-05-29 16:53:47 +10:00
Yazan Agha-Schrader
aa493e022d add css class 2024-05-29 08:45:20 +02:00
Yazan Agha-Schrader
9bb074e1f6 add phi3 to dropdown 2024-05-29 06:28:27 +02:00
Yazan Agha-Schrader
be675948d4 add phi-3 prompt template 2024-05-29 05:28:52 +02:00
zhouwg
504f0c340f
ggml : fix typo in ggml.c (#7603) 2024-05-29 04:09:31 +02:00
Meng, Hengyu
b864b50ce5
[SYCL] Align GEMM dispatch (#7566)
* align GEMM dispatch
2024-05-29 07:00:24 +08:00
jaime-m-p
02c1ecad07
Tokenizer WPM fixes (#7500)
* Update random test: add_bos_token.
* Update random test: add WPM models for testing.
* Build vocab.special_tokens_cache using vocab token types.
* Fix and improve WPM preprocessing.
  - Fix unicode edge case combinations.
  - Split by whitespace in the same pass.
* Discard all tokens when no matching found.
2024-05-28 21:46:34 +02:00
Georgi Gerganov
6bd12ce409
sycl : fix assert (#7563) 2024-05-28 22:22:50 +03:00
Giuseppe Scrivano
5442939fcc
llama : support small Granite models (#7481)
* Add optional MLP bias for Granite models

Add optional MLP bias for ARCH_LLAMA to support Granite models.
Partially addresses ggerganov/llama.cpp/issues/7116
Still needs some more changes to properly support Granite.

* llama: honor add_space_prefix from the model configuration

propagate the add_space_prefix configuration from the HF model
configuration to the gguf file and honor it with the gpt2 tokenizer.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

* llama: add support for small granite models

It works only for the small models, 3b and 8b.

The convert-hf-to-gguf.py script uses the vocabulary size of the
granite models to detect granite and set the correct configuration.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

---------

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Co-authored-by: Steffen Roecker <sroecker@redhat.com>
2024-05-28 21:49:49 +03:00
k.h.lai
56411a950f
vulkan: properly initialize vulkan devices for LLAMA_SPLIT_MODE_NONE (#7552) 2024-05-28 19:25:08 +02:00
Radoslav Gerganov
2b737caae1
rpc : resource management rework (#7562)
* rpc : resource management rework

* address review comments
2024-05-28 18:13:36 +03:00
fairydreaming
ee3dff6b8e
Add support for DeepseekV2ForCausalLM (#7519)
* common : increase max number of experts to 160

* common : add tensors ATTN_Q_A, ATTN_Q_A_NORM, ATTN_Q_B, ATTN_KV_A_MQA, ATTN_KV_A_NORM, ATTN_KV_B needed by DeepSeek-V2 MLA (multi-head latent attention) architecture

* common : add model header parameters: leading_dense_block_count, expert_feed_forward_length, expert_shared_count, expert_weights_scale, attention.q_lora_rank, attention.kv_lora_rank, rope.scaling.yarn_log_multiplier

* convert-hf : add model conversion support for DeepseekV2ForCausalLM

* llama : add model types for DeepSeek-V2 and DeepSeek-V2-Lite models

* llama : add two new llm_build_moe_ffn() arguments: scale_w (whether to scale weights of selected MoE experts) and w_scale (numerical value of the scaling factor)

* llama : add inference support for LLM_ARCH_DEEPSEEK2

---------

Co-authored-by: Stanisław Szymczyk <sszymczy@gmail.com>
2024-05-28 17:07:05 +02:00
Georgi Gerganov
edc29433fa
tests : fix test-tokenizer-0.sh 2024-05-28 15:04:09 +03:00
Georgi Gerganov
8b99e2aa66
llama : handle unknown utf8 bytes (#7588) 2024-05-28 13:55:35 +03:00
Brian
271ff3fc44
github: add refactor to issue template (#7561)
* github: add refactor issue template [no ci]

* Update 07-refactor.yml
2024-05-28 20:27:27 +10:00
Neo Zhang
e2b065071c
[SYCL] fix ggml_sycl_mul_mat_id() to match the change of api (#7436)
* fix mul_mat_id to match the change of api

* rm comment

* rm unused or duplicated code, rename as review comment
2024-05-28 10:53:37 +01:00
Georgi Gerganov
0548a4187f
ggml : generalize GGML_OP_CONCAT (#7563)
* ggml : generalize GGML_OP_CONCAT (WIP)

ggml-ci

* tests : add dim != 2 tests

* metal : generalize concat kernel

* tests : naming

* cuda : generalize concat kernel

ggml-ci

* sycl : add warning and assert

* ggml : fix op params handling

* metal : bugfix kernel

ggml-ci

* ggml : reimplement CPU and Metal

* cuda : add asserts

ggml-ci

* ggml : fix ptrs

ggml-ci
2024-05-28 11:04:19 +03:00
mgroeber9110
9335b969e8
server: do not remove whitespace at the start of a completion chunk (#7524) 2024-05-28 14:55:51 +10:00
Nathan Epstein
c41767154e
Markdownish code block fix (#7571)
* markdownish codeblock fix

* updating regexes
2024-05-28 14:41:14 +10:00
Ikko Eltociear Ashimine
74b239b3d5
llava : update clip.h (#7580)
overriden -> overridden
2024-05-28 12:48:16 +10:00
Yazan Agha-Schrader
efbbc95321 remove ms per token, since not relevant for most webui users and use cases 2024-05-28 02:51:47 +02:00
Yazan Agha-Schrader
6bf7ae08dd update const ModelGenerationInfo 2024-05-28 02:46:18 +02:00
Yazan Agha-Schrader
8768b4f5ea use template literals for promptFormats.js 2024-05-28 02:25:49 +02:00
Djip007
852aafb163
update HIP_UMA #7399 (#7414)
* update HIP_UMA #7399

add use of hipMemAdviseSetCoarseGrain when LLAMA_HIP_UMA is enabled.
- gets a 2x speedup on prompt eval and 1.5x on token gen with rocm6.0 on a Ryzen 7940HX iGPU (780M/gfx1103)

* simplify code, more consistent style

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-05-28 01:40:47 +02:00
kunnis
0136966daf
adding in x64 targets to cmake presets (#7574) 2024-05-28 01:40:12 +02:00
Yazan Agha-Schrader
e2d917c5f0 fix grammar field width 2024-05-28 00:29:08 +02:00
Yazan Agha-Schrader
7055d84020 fix FloatField and BoolField tooltips 2024-05-28 00:23:18 +02:00
Yazan Agha-Schrader
569cbd8bbe
Merge branch 'ggerganov:master' into server-ui-pr 2024-05-27 23:49:47 +02:00
Yazan Agha-Schrader
0f077968c0 add tooltips to the parameters with comprehensible explanations 2024-05-27 23:46:18 +02:00
Yazan Agha-Schrader
c7803876ce move API to the top, rearrange param sliders. update css 2024-05-27 22:33:18 +02:00
Johannes Gäßler
10b1e45876
make: add --device-debug to NVCC debug flags (#7542) 2024-05-27 19:34:40 +02:00
agray3
197c00681b
Allow multiple copy function pointers for CUDA graph kernel param updates (#7565)
CUDA graphs require parameter updates to kernels associated with
GGML_OP_CPY nodes. Previously the implementation only checked for a
single CUDA kernel in such nodes, but this caused a bug in cases where
2 such kernels exist. This fixes the issue by using a vector to allow
multiple function pointers to be stored and checked against.

Fixes #7942
2024-05-27 19:33:42 +02:00
AidanBeltonS
95f84d5ce8
Fix q_xxs using mul_mat_q (#7459) 2024-05-27 22:04:51 +05:30
Yazan Agha-Schrader
450471454c clean the code 2024-05-27 17:00:03 +02:00
Yazan Agha-Schrader
a2edaf48c3 Add API key CSS classes and update styling in style.css 2024-05-27 16:33:45 +02:00
Yazan Agha-Schrader
b16e10bb69 some necessary fixes 2024-05-27 16:28:09 +02:00
Yazan Agha-Schrader
5a14ef1dca add api-key css classes 2024-05-27 15:15:00 +02:00
AidanBeltonS
5487593bc7
Add freq factors (#7495) 2024-05-27 18:04:09 +05:30
Georgi Gerganov
1d8fca72ae
metal : add GGML_OP_REPEAT kernels (#7557)
ggml-ci
2024-05-27 12:10:19 +03:00
Georgi Gerganov
62bfef5194
metal : disable FA kernel for HS=256 (#7556)
ggml-ci
2024-05-27 10:38:39 +03:00
Yazan Agha-Schrader
5d455f2789 chore: Update HTML meta tags in index.html file 2024-05-27 09:20:10 +02:00
Yazan Agha-Schrader
8d49b9906a de prompts 2024-05-27 09:18:19 +02:00
Yazan Agha-Schrader
bd2c97c51a add the belonging stuff: css,favicon etc 2024-05-27 09:14:54 +02:00
Yazan Agha-Schrader
902862a505 migrate my early work 2024-05-27 08:33:36 +02:00
Georgi Gerganov
eaf6e03174
llama : add comments about experimental flags (#7544) 2024-05-27 09:24:13 +03:00
Yazan Agha-Schrader
0a30b6e082 ic 2024-05-27 07:05:08 +02:00
Brian
d6ef0e77dd
github: add self sorted issue ticket forms (#7543)
* github: add self sorted issue ticket forms [no ci]

* github: consolidate BSD in bug issue ticket

* github: remove contact from bug ticket template [no ci]

* github: remove bios from os dropdown in bug report [no ci]
2024-05-27 10:54:30 +10:00