| Name | Last commit | Date |
| --- | --- | --- |
| baby-llama | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| batched | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| batched-bench | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| batched.swift | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| benchmark | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| convert-llama2c-to-ggml | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| cvector-generator | cvector: better prompt handling, add "mean vector" method (#8069) | 2024-06-25 13:59:54 +02:00 |
| embedding | embedding : more cli arguments (#7458) | 2024-06-24 08:30:24 +03:00 |
| eval-callback | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| export-lora | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| finetune | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| gbnf-validator | llama : return nullptr from llama_grammar_init (#8093) | 2024-06-25 15:07:28 -04:00 |
| gguf | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| gguf-split | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| gritlm | llama : allow pooled embeddings on any model (#7477) | 2024-06-21 08:38:22 +03:00 |
| imatrix | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| infill | Only use FIM middle token if it exists (#7648) | 2024-06-18 22:19:45 +10:00 |
| jeopardy | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-bench | llama-bench : fix RPC indication (#7936) | 2024-06-14 16:47:41 +03:00 |
| llama.android | android : module (#7502) | 2024-05-25 11:11:33 +03:00 |
| llama.swiftui | swiftui : enable stream updating (#7754) | 2024-06-21 08:30:58 +03:00 |
| llava | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| lookahead | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| lookup | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| main | Add chat template support for llama-cli (#8068) | 2024-06-25 21:56:49 +10:00 |
| main-cmake-pkg | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| parallel | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| passkey | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| perplexity | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| quantize | Update llama-quantize ppl/file size output from LLaMA-v1 to Llama-3 values (#8058) | 2024-06-22 15:16:10 +02:00 |
| quantize-stats | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| retrieval | llama : allow pooled embeddings on any model (#7477) | 2024-06-21 08:38:22 +03:00 |
| rpc | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| save-load-state | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| server | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| simple | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| speculative | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| sycl | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| tokenize | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| train-text-from-scratch | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| base-translate.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-persistent.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-vicuna.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| CMakeLists.txt | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| convert-legacy-llama.py | ggml : refactor rope norm/neox (#7634) | 2024-06-05 11:29:20 +03:00 |
| json-schema-pydantic-example.py | json: fix additionalProperties, allow space after enum/const (#7840) | 2024-06-26 01:45:58 +01:00 |
| json_schema_to_grammar.py | json: better support for "type" unions (e.g. nullable arrays w/ typed items) (#7863) | 2024-06-26 01:46:35 +01:00 |
| llama.vim | llama.vim : added api key support (#5090) | 2024-01-23 08:51:27 +02:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to \<F2\> (#2879) | 2023-08-30 09:50:55 +03:00 |
| Miku.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| pydantic-models-to-grammar-examples.py | examples : make pydantic scripts pass mypy and support py3.8 (#5099) | 2024-01-25 14:51:24 -05:00 |
| pydantic_models_to_grammar.py | grammars: x{min,max} repetition operator (#6640) | 2024-06-06 10:07:06 +01:00 |
| reason-act.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| regex-to-grammar.py | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |
| server-embd.py | server : refactor (#5882) | 2024-03-07 11:41:53 +02:00 |
| server-llama2-13B.sh | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| ts-type-to-grammar.sh | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |