Reuse the code already moved into GroupKV
Add explicit get and set helpers wrt int32_t; these were added to GroupKV
after the basic MapOfMapOfVariant logic was moved into it.
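Below is a rough sketch of what such int32_t get/set helpers could look like on a MapOfMapOfVariant style store; the class and member names are illustrative placeholders, not GroupKV's actual API.

```cpp
// Hypothetical sketch of explicit int32_t get/set on a map-of-map-of-variant store.
#include <cstdint>
#include <map>
#include <string>
#include <variant>

struct GroupKVSketch {
    using Variant = std::variant<bool, int32_t, int64_t, double, std::string>;
    std::map<std::string, std::map<std::string, Variant>> data;

    void set_int32(const std::string &group, const std::string &key, int32_t value) {
        data[group][key] = value;
    }

    int32_t get_int32(const std::string &group, const std::string &key, int32_t def) const {
        auto git = data.find(group);
        if (git == data.end()) return def;
        auto kit = git->second.find(key);
        if (kit == git->second.end()) return def;
        return std::get<int32_t>(kit->second); // throws if a non-int32_t was stored
    }
};
```

With an explicit int32_t overload, a plain int literal resolves directly without needing <int64_t> or an LL suffix, and std::get flags a mismatch between the stored type and the type asked for.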
The json library generates a less informative exception message, which doesn't
help one identify which key is missing, so switch to the new json_get_str
helper added in the last commit; it generates a more informative exception
message.
Also, dump was using get_value calls with a fallback to the default, so it
wasn't identifying the missing field.
Have fixed both of those. Also reconverted the meta json file.
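A minimal sketch of a json_get_str style helper, assuming the nlohmann::json library; the exact signature in the branch may differ. The point is that the exception names the missing key instead of relying on the library's generic message.

```cpp
#include <nlohmann/json.hpp>
#include <stdexcept>
#include <string>

// Fetch a string value, throwing an error message that names the missing key.
static std::string json_get_str(const nlohmann::json &obj, const std::string &key) {
    if (!obj.contains(key)) {
        throw std::runtime_error("ERRR:json_get_str:missing key:" + key);
    }
    return obj.at(key).get<std::string>();
}
```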
Misc: interesting avesham and aattam
This should be ok, given that a version of the chat template meta data is
already included with the library.
So only if the user wants to change the chat template info wrt an existing
model/template-standard, or add a new one, is there a need to pass a json
file with info for that model/standard.
This should allow using this generic chat templating code flow along with
the included chat template data, without needing to load any json file at
runtime.
However, if the user wants to change the already included chat template
data, or add new chat template standard/model related data, one can
explicitly load a json file.
TODO: Need to cross check this flow once, but logically it should work.
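A minimal sketch of that flow; chaton_meta_load_json is a placeholder name for the optional json-loading helper, not necessarily the branch's actual symbol.

```cpp
#include <iostream>
#include <string>

// Placeholder for the optional json-loading helper described above.
static bool chaton_meta_load_json(const std::string &json_path) {
    std::cout << "loading chat template overrides from " << json_path << "\n";
    return true;
}

static void setup_chat_templates(const std::string &user_json_path) {
    if (!user_json_path.empty()) {
        // only needed to change existing template info or add a new model/standard
        chaton_meta_load_json(user_json_path);
    }
    // otherwise the chat template data already included with the library is used as-is
}
```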
Rename the kv helpers to match their semantics:
* whether they work with a string or a bool value
* whether they take two keys or a single key
Add support for kv entries with a bool value; in turn add the kv boolean
pairs used in the chaton_meta.json file.
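A sketch of what such renamed helpers could look like, assuming the nlohmann::json library; the names (json_get_bool, json_get_2keys_str) are illustrative placeholders, not the branch's actual API.

```cpp
#include <nlohmann/json.hpp>
#include <stdexcept>
#include <string>
#include <utility>

// single key, bool value (for the boolean pairs in chaton_meta.json)
static bool json_get_bool(const nlohmann::json &obj, const std::string &key) {
    if (!obj.contains(key)) {
        throw std::runtime_error("ERRR:json_get_bool:missing key:" + key);
    }
    return obj.at(key).get<bool>();
}

// two keys, string values (for example a prefix/suffix pair of a template)
static std::pair<std::string, std::string> json_get_2keys_str(
        const nlohmann::json &obj, const std::string &key1, const std::string &key2) {
    return { obj.at(key1).get<std::string>(), obj.at(key2).get<std::string>() };
}
```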
Add the closing bracket
Use repr to retain the escape sequences in the read string, and in parallel
skip the single quotes that repr adds around strings.
Bring in more k-v pairs wrt chaton_meta.json
WIP:NOTE:
Initial go at converting from the json driven flow to the ChatTemplatesGroupKV
related flow is done. Needs to be tested.
An optional helper added to load ChatTemplates from a specified json file.
Need to add a compile time initialized MapOfMapOfVariants with the chat
template details of the models/standards already known to the program,
so that one can use llama.cpp and this new chat template logic even
without the json dependency, if one doesn't want it.
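A rough sketch of what such a statically initialized MapOfMapOfVariants could look like; the type aliases, template names, keys and token strings below are illustrative placeholders, not the branch's actual dataset.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <variant>

using Variant            = std::variant<bool, int32_t, int64_t, double, std::string>;
using MapOfVariant       = std::map<std::string, Variant>;
using MapOfMapOfVariants = std::map<std::string, MapOfVariant>;

// Template metadata initialized in code, so no json file is needed at runtime.
// Keys and token strings here are only examples.
static const MapOfMapOfVariants kChatTemplates = {
    { "chatml", {
        { "begin-of-msg", std::string("<|im_start|>") },
        { "end-of-msg",   std::string("<|im_end|>")   },
        { "system-has-suffix", true                    },
    } },
    { "llama3", {
        { "begin-of-msg", std::string("<|start_header_id|>") },
        { "end-of-msg",   std::string("<|eot_id|>")          },
    } },
};
```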
Manually iterate the json object items using begin/end explicitly, because
the json library's implicit range-for iteration gives only the values and
not key-value pairs.
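A small sketch of that explicit iteration, assuming the nlohmann::json library, where the object iterator exposes both key() and value():

```cpp
#include <iostream>
#include <nlohmann/json.hpp>

// Walk a json object and print each key alongside its value.
static void dump_object(const nlohmann::json &obj) {
    for (auto it = obj.begin(); it != obj.end(); ++it) {
        std::cout << it.key() << " => " << it.value() << "\n";
    }
}
```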
So there is no need to explicitly specify <int64_t> or an LL suffix for int
literals, which don't need 64 bit space by default.
This also means one shouldn't/can't mix up the type of the stored value and
the default's type specified when getting.
Rename all the log messages to use GKV and not SC.
The log messages in get_vector are made conditional on GKV_DEBUG; this was
missed out earlier in simpcfg itself.
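A minimal sketch of that compile-time gating; the macro name GKV_LOG_DEBUG is an illustrative placeholder.

```cpp
#include <cstdio>

#ifdef GKV_DEBUG
    #define GKV_LOG_DEBUG(msg) fprintf(stderr, "DBUG:GKV:%s\n", msg)
#else
    #define GKV_LOG_DEBUG(msg) // compiled out when GKV_DEBUG is not defined
#endif
```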
This should pave the way for having a default chat templates dataset in the
code, without needing to load it from a config file, if one doesn't want to.
TODO: allow for loading config from json into simpcfg, so that a program
which uses llama.cpp can decide whether it is ok with what is already there
in the internal dataset, or allow loading template info at runtime using
simpcfg's simple text file, or additionally include the json code to load
template info at runtime from a json file.
It appears that std::format is not supported by the older g++/libstdc++
still in wide use, such as on current Debian stable, so avoiding it for
direct library use.
Allow for empty __VA_ARGS__.
NOTE: However the test program mode of the same uses cout and format.
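A minimal sketch of such a logging macro, assuming GCC/Clang, whose ##__VA_ARGS__ extension swallows the trailing comma when the argument list is empty; the macro name is illustrative.

```cpp
#include <cstdio>

// printf-style logging instead of std::format, usable with or without varargs.
#define LOG_LN(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)

int main() {
    LOG_LN("no varargs here");   // empty __VA_ARGS__ is fine thanks to ##
    LOG_LN("value is %d", 42);   // normal printf-style formatting
    return 0;
}
```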
Have merged the master branch as of 20240510IST12XY with the chaton_v3
branch.
As part of the same, had to update the flow in examples/main/main.cpp wrt
the conversion related commit in the master branch and my chaton related
commits in this branch.
* Revert "Revert "llava : add support for moondream vision language model (#6899)""
This reverts commit 9da243b36a.
* Fix num_positions and embeddings initialization
* Modify mat mat mul shader for mul_mat_id, modify mat vec mul shaders for single call batch operation
* Further work towards MoE, disabled for now
* Disable MoE code (not ready yet), fix a number of bugs in shaders and Vulkan code
* Add softmax with f16 mask and pos buffer support
* Disable mul_mat_id shaders for now
* Fix flake8
* Fix validation errors caused by empty buffers on larger batch sizes
This commit changes the value assigned to llama_timings.n_p_eval when
ctx->n_p_eval is 0 to be 0 instead of 1, which is the current value.
The motivation for this change is that if session caching is enabled,
for example using the `--prompt-cache main-session.txt` command line
argument for the main example, and the same prompt is used, then on
subsequent runs the prompt tokens will not actually be passed to
llama_decode, and n_p_eval will not be updated by llama_synchronize.
But the value of n_p_eval will be set to 1 by llama_get_timings because
ctx->n_p_eval will be 0. This could be interpreted as 1 token having been
evaluated for the prompt, which could be misleading for applications
using this value.
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
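A minimal sketch of the described change, assuming the timings getter clamps the counter with std::max; the struct and field names follow the commit text, the exact code in llama.cpp may differ.

```cpp
#include <algorithm>

struct llama_timings_sketch {
    int n_p_eval; // number of prompt tokens evaluated
};

static llama_timings_sketch get_timings(int ctx_n_p_eval) {
    llama_timings_sketch t;
    // before: std::max(1, ctx_n_p_eval) reported 1 even when no prompt tokens
    // were evaluated (e.g. a fully cached prompt); report 0 in that case instead.
    t.n_p_eval = std::max(0, ctx_n_p_eval);
    return t;
}
```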
* Add special token modification capability
To be able to fix/amend special tokens in a GGUF let's add two new arguments:
* `--special-token <name> <value>` where `<name>` can be bos, eos, prefix, middle, etc. while `<value>` is the token value, f.ex. `"<|fim▁begin|>"`
* `--special-token-by-id <name> <id>` where `<id>` is the ID of the token, f.ex. 32006
So, in order to f.ex. add fill-in-middle tokens to a GGUF you would do the following:
```bash
gguf-new-metadata.py input.gguf output.gguf --special-token prefix "<|fim▁begin|>" --special-token middle "<|fim▁end|>" --special-token suffix "<|fim▁hole|>"
```
(yes, fim_end is the `middle` token, because completion is a `prefix`/`suffix`/`middle` sequence (where `middle` is unfilled))
or
```bash
gguf-new-metadata.py input.gguf output.gguf --special-token prefix "<fim_prefix>" --special-token middle "<fim_middle>" --special-token suffix "<fim_suffix>"
```
etc...
NB: The tokens have to exist already; trying to add non-existent token names/IDs will be ignored (with a warning), while non-existent values will fail (with an error).
* improve help text
* flake--
* fix multiple tokens warning
* make script executable
* switch to namedtuple, no need to dataclass
* typing++
* add progress bar
* fail on invalid token id
* opencl alignment size should be converted from bits to bytes
Reference: https://registry.khronos.org/OpenCL/specs/3.0-unified/html/OpenCL_API.html#CL_DEVICE_MEM_BASE_ADDR_ALIGN
> Alignment requirement (in bits) for sub-buffer offsets.
* Update ggml-opencl.cpp for readability using division instead of shift
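A small sketch of that conversion, assuming the standard OpenCL API; CL_DEVICE_MEM_BASE_ADDR_ALIGN reports the alignment requirement in bits.

```cpp
#include <CL/cl.h>

// Query the sub-buffer alignment requirement and convert it from bits to bytes.
static cl_uint get_mem_base_addr_align_bytes(cl_device_id device) {
    cl_uint align_bits = 0;
    clGetDeviceInfo(device, CL_DEVICE_MEM_BASE_ADDR_ALIGN,
                    sizeof(align_bits), &align_bits, nullptr);
    return align_bits / 8; // divide (rather than shift) for readability
}
```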
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
---------
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>