Commit graph

3288 commits

Author SHA1 Message Date
Georgi Gerganov
f308ea7059 metal : tune soft_max number of threads (whisper/0) 2024-05-14 19:08:09 +03:00
Georgi Gerganov
c3c88f296a ggml : try fix ppc64 (whisper/0) 2024-05-14 19:08:09 +03:00
Przemysław Pawełczyk
182adefcf3 ggml : expose SSE3 and SSSE3 for MSVC when AVX is available (whisper/2128) 2024-05-14 19:08:09 +03:00
Hong Bo PENG
0d26d8ccd8 ggml : optimize for ppc64le using VSX intrinsics (ggml/784)
* optimize for ppc64le using VSX intrinsics

* 1. code clean up by removing comments about overflow concern.

2. fix typo in suffix of scaling.

* Continue to fix typo in suffix of scaling for QK_K <> 256

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-14 19:08:09 +03:00
HanishKVC
f8c0b474ec ChatON+: Rename to chaton_meta_load_json to match its semantics
Also add a simple note wrt itself and its helper.
2024-05-14 21:37:05 +05:30
HanishKVC
bd5c39e0f0 ChatOn+GroupKV: Cleanup a bit, including using debug logging 2024-05-14 21:22:48 +05:30
Steve Grubb
4f0263633b
server: free sampling contexts on exit (#7264)
* server: free sampling contexts on exit

This cleans up last leak found by the address sanitizer.

* fix whitespace

* fix whitespace
2024-05-14 16:11:24 +02:00
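Commit 4f0263633b above is about releasing per-slot sampling contexts at shutdown. A minimal sketch of that pattern, assuming the common/sampling.h API of the period (llama_sampling_context / llama_sampling_free); the slot struct and cleanup function are illustrative names, not the server's actual code.

```cpp
// Illustrative sketch: free each slot's sampling context on server exit so the
// address sanitizer no longer reports it as leaked. llama_sampling_free() is
// assumed to follow common/sampling.h as it existed at the time; the rest of
// the names here are hypothetical.
#include <vector>
#include "sampling.h"

struct server_slot_sketch {
    llama_sampling_context * ctx_sampling = nullptr;
};

static void free_all_sampling_contexts(std::vector<server_slot_sketch> & slots) {
    for (auto & slot : slots) {
        if (slot.ctx_sampling != nullptr) {
            llama_sampling_free(slot.ctx_sampling);
            slot.ctx_sampling = nullptr;
        }
    }
}
```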
HanishKVC
bb9ce52b11 ChatON+: ValidateDump dumps All, wrapped in optional LDBUG_LN
GroupKV dump adds the needed ":" separator on its own, so calling
functions can just pass the tag string they want in the log without
worrying about any demarcation.
2024-05-14 18:45:25 +05:30
Brian
1265c670fd
Revert "move ndk code to a new library (#6951)" (#7282)
This reverts commit efc8f767c8.
2024-05-14 16:10:39 +03:00
Radoslav Gerganov
5e31828d3e
ggml : add RPC backend (#6829)
* ggml : add RPC backend

The RPC backend proxies all operations to a remote server which runs a
regular backend (CPU, CUDA, Metal, etc).

* set TCP_NODELAY

* add CI workflows

* Address review comments

* fix warning

* implement llama_max_devices() for RPC

* Address review comments

* Address review comments

* wrap sockfd into a struct

* implement get_alignment and get_max_size

* add get_device_memory

* fix warning

* win32 support

* add README

* readme : trim trailing whitespace

* Address review comments

* win32 fix

* Address review comments

* fix compile warnings on macos
2024-05-14 14:27:19 +03:00
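The bullets above mention wrapping the raw sockfd in a struct and setting TCP_NODELAY. A rough sketch of those two details using POSIX sockets; the struct name is hypothetical and this is not the actual ggml-rpc code.

```cpp
// Illustrative sketch: RAII wrapper around a socket fd ("wrap sockfd into a
// struct") with Nagle's algorithm disabled via TCP_NODELAY, so the small
// request/response messages the RPC backend proxies are sent without delay.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

struct rpc_socket_sketch {
    int fd = -1;

    explicit rpc_socket_sketch(int fd_) : fd(fd_) {
        int flag = 1;
        // send small RPC messages immediately instead of coalescing them
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
    }

    ~rpc_socket_sketch() {
        if (fd != -1) {
            close(fd);
        }
    }

    rpc_socket_sketch(const rpc_socket_sketch &) = delete;
    rpc_socket_sketch & operator=(const rpc_socket_sketch &) = delete;
};
```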
slaren
541600201e
llama : disable pipeline parallelism with nkvo (#7265) 2024-05-14 17:33:42 +10:00
Elton Kola
efc8f767c8
move ndk code to a new library (#6951) 2024-05-14 17:30:30 +10:00
Haggai Nuchi
e0f556186b
Add left recursion check: quit early instead of going into an infinite loop (#7083)
* Add left recursion check: quit early instead of going into an infinite loop

* Remove custom enum, rename left recursion check and move to "grammar internal" section, add handling for edge case where a leftmost nonterminal may be empty

* Remove unnecessary declaration
2024-05-14 15:25:56 +10:00
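A hedged sketch of the general idea behind such a left-recursion check (not the actual llama.cpp grammar code): a rule is left-recursive if, walking only the leftmost symbols of its alternatives and skipping past nonterminals that can derive the empty string (the edge case noted in the bullets above), we can reach the starting rule again.

```cpp
// Illustrative sketch only; types and names are hypothetical.
#include <map>
#include <set>
#include <string>
#include <vector>

struct sym {
    bool is_rule;          // true: reference to another rule, false: terminal
    std::string name;
};
using rules_t = std::map<std::string, std::vector<std::vector<sym>>>;

// a rule can derive the empty string if one of its alternatives is empty
static bool can_be_empty(const rules_t & rules, const std::string & r) {
    for (const auto & alt : rules.at(r)) {
        if (alt.empty()) return true;
    }
    return false;
}

static bool reaches_left(const rules_t & rules, const std::string & target,
                         const std::string & cur, std::set<std::string> & visited) {
    if (!visited.insert(cur).second) return false;    // already explored this rule
    for (const auto & alt : rules.at(cur)) {
        for (const auto & s : alt) {
            if (!s.is_rule) break;                    // a terminal blocks left recursion
            if (s.name == target) return true;        // found a left-recursive path
            if (reaches_left(rules, target, s.name, visited)) return true;
            if (!can_be_empty(rules, s.name)) break;  // continue only past nullable rules
        }
    }
    return false;
}

static bool is_left_recursive(const rules_t & rules, const std::string & rule) {
    std::set<std::string> visited;
    return reaches_left(rules, rule, rule, visited);
}
```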
Ryuei
27f65d6267
docs: Fix typo and update description for --embeddings flag (#7026)
- Change '--embedding' to '--embeddings' in the README
- Update the description to match the latest --help output
- Add a caution about defining physical batch size
2024-05-14 15:20:47 +10:00
HanishKVC
28ddd2c474 ChatON: ChatParts dump returns info str rather than direct logging 2024-05-14 02:21:16 +05:30
HanishKVC
4dfd10a40d ChatON: Move core templating/tagging code into ChatTemplates class
However, still retain the wrappers, which work with a predefined
global instance of ChatTemplates.
2024-05-14 01:49:38 +05:30
HanishKVC
600653dae2 ChatON:Optional control of MsgCntBasedTagging
Use the same to bypass any msg-count-based tagging behaviour for
single-message tagging through its helper wrapper.
2024-05-14 01:27:24 +05:30
HanishKVC
6e13c0c87e ChatON:Control SystemMsgSuffix+End tags only wrt 1st system msg
Make it similar to user-begin+prefix control, i.e. only wrt the 1st msg
of the respective type.
2024-05-14 01:19:04 +05:30
HanishKVC
3fcaf19967 ChatON+:Multi4Single: applyGlobalIfAny flag wrt templating api
Given that the multi-chat templating logic itself is now used to
apply chat templating/tagging to a single chat message, give the
core tagging logic the flexibility of deciding whether global tags,
if any, should be applied.

examples/main is in turn updated to not apply global tags, if any,
wrt the system message. The user messages already don't apply
global tags, if any, as it's currently implemented to build on the
existing in-prefix/suffix and antiprompt flow.
2024-05-14 01:00:17 +05:30
HanishKVC
8165bd4035 ChatON:WIP:chaton_tmpl_apply_single build on multi msg tagging
To avoid having to duplicate any hardcoding in future, wrt any new
model/chat-template-standard, at multiple locations, replace the
single-message templating code with a wrapper which does the same
but using the multi-msg templating helper.
2024-05-14 00:44:47 +05:30
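A hedged sketch, with hypothetical names, of the wrapping described in commit 8165bd4035: the single-message helper just builds a one-element message list and delegates to the multi-message templater, so per-model tag handling lives in exactly one place. The flag mirrors the applyGlobalIfAny control mentioned above.

```cpp
// Illustrative sketch only; names and tag format are stand-ins, not the
// actual ChatON code.
#include <string>
#include <vector>

struct chat_msg_sketch {
    std::string role;
    std::string content;
};

// stand-in for the real multi-message templater (the real one looks up
// per-model/template-standard tags from the loaded metadata)
static std::string tmpl_apply_multi(const std::string & /*tmpl_id*/,
                                    const std::vector<chat_msg_sketch> & msgs,
                                    bool apply_global_tags) {
    std::string out = apply_global_tags ? "<global-begin>" : "";
    for (const auto & m : msgs) {
        out += "<" + m.role + ">" + m.content + "</" + m.role + ">";
    }
    if (apply_global_tags) out += "<global-end>";
    return out;
}

// single-message tagging is just a one-element delegation, so any new
// model/template hardcoding only ever lands in the multi-message path
static std::string tmpl_apply_single(const std::string & tmpl_id,
                                     const std::string & role,
                                     const std::string & content,
                                     bool apply_global_tags) {
    std::vector<chat_msg_sketch> msgs = { { role, content } };
    return tmpl_apply_multi(tmpl_id, msgs, apply_global_tags);
}
```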
HanishKVC
fe0c9ce646 ChatON:BasicCheck+: return a string with info, don't directly log 2024-05-14 00:25:00 +05:30
compilade
ee52225067
convert-hf : support direct Q8_0 conversion (#7234)
* convert-hf : support q8_0 conversion

* convert-hf : add missing ftype

This was messing with the checksums otherwise.

* convert-hf : add missing ftype to Baichuan and Xverse

I didn't notice these on my first pass.
2024-05-13 14:10:51 -04:00
Georgi Gerganov
614d3b914e
llama : less KV padding when FA is off (#7257)
ggml-ci
2024-05-13 17:15:15 +03:00
k.h.lai
30e70334f7
llava-cli: fix base64 prompt (#7248) 2024-05-14 00:02:36 +10:00
HanishKVC
efbb87dba6 ChatON:ChatTemplates:TmplBasicCheck 2024-05-13 17:50:15 +05:30
HanishKVC
0cfe99076d ChatON:ChatTemplates: TmplExists, TmplGetKey, TmplRoleGetKeys
ChatTemplates directly supports these now, and the existing
global-instance-based helpers depend on the same.
2024-05-13 17:30:47 +05:30
Johannes Gäßler
1c570d8bee
perplexity: add BF16 vs. FP16 results (#7150) 2024-05-13 13:03:27 +02:00
HanishKVC
184ac322e3 ChatON: Make json_get efficient and flexible wrt how it is called
Also explicitly indicate that we are looking at a chain of keys
2024-05-13 16:21:02 +05:30
Neo Zhang
948f4ec7c5
[SYCL] rm wait() (#7233) 2024-05-13 18:11:26 +08:00
Joan Fontanals
9aa672490c
llama : rename jina tokenizers to v2 (#7249)
* refactor: rename jina tokenizers to v2

* refactor: keep refactoring non-breaking
2024-05-13 11:35:14 +03:00
HanishKVC
eb7554ca3b ChatON: Avoid -> to match simpcfg as well as corresponding keys 2024-05-13 10:37:14 +05:30
Brian
b1f8af1886
convert.py: Outfile default name change and additional metadata support (#4858)
* convert.py: Outfile default name change and additional metadata support

* convert.py: don't stringify Metadata load method output

* convert.py: typo fix

* convert.py: fix metadata format to sync with LLM_KV_NAMES in llama.cpp
2024-05-13 12:56:47 +10:00
Benjamin Findley
e586ee4259
change default temperature of OAI compat API from 0 to 1 (#7226)
* change default temperature of OAI compat API from 0 to 1

* make tests explicitly send temperature to OAI API
2024-05-13 12:40:08 +10:00
Neo Zhang
cbf75894d2
[SYCL] Add oneapi runtime dll files to win release package (#7241)
* add oneapi running time dlls to release package

* fix path

* fix path

* fix path

* fix path

* fix path

---------

Co-authored-by: Zhang <jianyu.zhang@intel.com>
2024-05-13 08:04:29 +08:00
Neo Zhang
0d5cef78ae
[SYCL] update CI with oneapi 2024.1 (#7235)
Co-authored-by: Zhang <jianyu.zhang@intel.com>
2024-05-13 08:02:55 +08:00
HanishKVC
d5b0bfbaec SimpCfg: Remove now unused SC_DEBUG, rather GroupKV uses equiv
The code which was using SC_DEBUG moved to GroupKV and in turn
uses GKV_DEBUG
2024-05-13 00:33:36 +05:30
HanishKVC
857570f8f8 SimpCfgTest: Update dump usage to GKV return string semantic 2024-05-13 00:20:58 +05:30
HanishKVC
9249649fb3 ChatON+TestPrgs: Use specific log files 2024-05-12 23:59:48 +05:30
Johannes Gäßler
dc685be466
CUDA: add FP32 FlashAttention vector kernel (#7188)
* CUDA: add FP32 FlashAttention vector kernel

* fixup! CUDA: add FP32 FlashAttention vector kernel

* fixup! fixup! CUDA: add FP32 FlashAttention vector kernel

* fixup! fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
2024-05-12 19:40:45 +02:00
HanishKVC
3d33d62924 SimpCfg: Move testing code into its own file in tests
Also set functions to inline or static as appropriate
2024-05-12 22:53:48 +05:30
HanishKVC
f2dd1263fd GroupKV: Move test code into its own file in tests 2024-05-12 22:33:48 +05:30
HanishKVC
6048218383 SimpCFG: Convert to GroupKV extended version
Reuse the code already moved into GroupKV

Add explicit get and set wrt int32_t, which was added after the move
to GroupKV, wrt the basic MapOfMapOfVariant logic.
2024-05-12 21:58:59 +05:30
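A hedged sketch of what a "map of map of variant" store with typed get/set and explicit int32_t helpers could look like; names and behaviour here are assumptions, not the actual GroupKV/SimpCfg interface.

```cpp
// Illustrative sketch: values are grouped, keyed by name, and fetched back
// with a typed getter. The int32_t overloads mirror the explicit get/set
// mentioned in the commit above, layered on an int64_t variant alternative.
#include <cstdint>
#include <map>
#include <string>
#include <variant>

class group_kv_sketch {
public:
    using value_t = std::variant<bool, int64_t, double, std::string>;

    template <typename T>
    void set(const std::string & group, const std::string & key, const T & value) {
        data[group][key] = value;
    }

    template <typename T>
    T get(const std::string & group, const std::string & key, const T & def) const {
        auto g = data.find(group);
        if (g == data.end()) return def;
        auto k = g->second.find(key);
        if (k == g->second.end()) return def;
        return std::get<T>(k->second);
    }

    // explicit int32_t convenience on top of the int64_t alternative
    void set_int32(const std::string & group, const std::string & key, int32_t v) {
        set<int64_t>(group, key, v);
    }
    int32_t get_int32(const std::string & group, const std::string & key, int32_t def) const {
        return static_cast<int32_t>(get<int64_t>(group, key, def));
    }

private:
    std::map<std::string, std::map<std::string, value_t>> data;
};
```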
Georgi Gerganov
6f1b63606f
cmake : fix version cmp (#7227) 2024-05-12 18:30:23 +03:00
HanishKVC
db2ffabb18 ChatON: use templated json_get when loading bool key-value fields
With this, now even loading the chaton_meta.json file will generate
a more informative exception, so that the user can know which field
is missing, if any.
2024-05-12 18:26:58 +05:30
HanishKVC
470b8885f3 ChatON: Switch to templated json_get for str/bool/etal 2024-05-12 18:19:18 +05:30
HanishKVC
0249c07e6b ChatON:Switch to json_get_str to help identify missing keys better
The json library generates a less informative exception message,
which doesn't help one identify which key is missing, so switch to
the new json_get_str helper added in the last commit. It generates
a more informative exception message.
2024-05-12 17:44:13 +05:30
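The commits above replace raw json lookups with helpers that name the missing key. A hedged sketch of such a getter for a chain of keys, assuming nlohmann::json (the json library bundled with llama.cpp); the real json_get/json_get_str signatures may differ.

```cpp
// Illustrative sketch: walk a chain of keys and throw an exception naming the
// exact missing key, so a failure while loading chaton_meta.json points at the
// field that is absent instead of a generic library error.
#include <initializer_list>
#include <stdexcept>
#include <string>

#include "json.hpp"   // nlohmann::json, as bundled under common/ in llama.cpp

template <typename T>
T json_get_sketch(const nlohmann::json & root, std::initializer_list<std::string> keys) {
    const nlohmann::json * cur = &root;
    std::string path;
    for (const auto & k : keys) {
        path += "/" + k;
        if (!cur->contains(k)) {
            throw std::runtime_error("json_get: missing key [" + path + "]");
        }
        cur = &(*cur)[k];
    }
    return cur->get<T>();
}

// usage (key names illustrative):
//   bool b = json_get_sketch<bool>(meta, {"some-model", "system-has-suffix"});
```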
HanishKVC
4eae05a6b7 ChatON: json access helper which raises exception if key missing 2024-05-12 17:34:04 +05:30
HanishKVC
f94fed92d3 ChatON+MetaHpp: Had forgotten to convert reverse-prompt
Also, the dump was using get_value calls with fallback to default,
so it wasn't identifying the missing field.

Have fixed both of those. Also reconverted the meta json file.

Misc: interesting avesham and aattam
2024-05-12 16:20:28 +05:30
HanishKVC
4232ec1fb9 Main: Load json meta file only if specified
This should be ok, given that a version of the chat tmpl
meta data is already included with the library.

So only if the user wants to change the chat template info wrt an
existing model/template-standard, or add a new one, is there a need
to pass a json file with info for that model/standard.
2024-05-12 14:53:37 +05:30
HanishKVC
a3285e8e25 ChatON:Include auto converted ChatONMeta.hpp chat template data
This should allow for using this generic chat templating code flow
along with the included chat template data, without needing to
load any json file at runtime.

However, if the user wants to change the already included chat template
data, or add new chat template standard/model related data, one can
explicitly load a json file.

TODO: Need to cross-check this flow once, but logically it should work
2024-05-12 14:08:09 +05:30
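A hedged sketch of the "compiled-in defaults, optional runtime override" flow described in the last two commits: the generated ChatONMeta.hpp supplies default chat template metadata as a string, and a json file is parsed only when the user explicitly passes one. All names and the loader stubs below are illustrative, not the actual ChatON symbols.

```cpp
// Illustrative sketch only.
#include <string>

// stand-in for the auto-generated ChatONMeta.hpp payload
static const char * chaton_meta_builtin_sketch = R"({ "some-model": { "...": "..." } })";

// trivial stand-ins for the real parsers, so the sketch is self-contained
static bool load_meta_from_string(const std::string & /*json_text*/) { return true; }
static bool load_meta_from_file(const std::string & /*path*/)        { return true; }

static bool chaton_meta_init_sketch(const std::string & user_json_path) {
    // always start from the metadata included with the library (no file I/O)
    if (!load_meta_from_string(chaton_meta_builtin_sketch)) {
        return false;
    }
    // touch the filesystem only if the user explicitly asked to change/extend it
    if (!user_json_path.empty()) {
        return load_meta_from_file(user_json_path);
    }
    return true;
}
```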