cosmopolitan/third_party/ggml

FILES

  common.cc
  common.h
  companionai.txt
  ggml.c
  ggml.h
  ggml.mk
  LICENSE
  llama.cc
  llama.h
  llama_util.h
  main.cc
  README.cosmo

DESCRIPTION

  ggml is a machine learning library useful for LLM inference on CPUs.
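
  For orientation, here is a minimal sketch of the 2023-era ggml C API
  (one caller-sized arena, an explicit compute graph evaluated on the
  CPU). The include paths follow cosmo convention; the n_threads field
  and exact signatures match ggml.h of this vintage and may differ in
  later versions.

    #include "libc/stdio/stdio.h"
    #include "third_party/ggml/ggml.h"

    int main(void) {
      // all tensors and graph scratch live in one arena
      struct ggml_init_params params = {
          /*.mem_size   =*/ 16 * 1024 * 1024,
          /*.mem_buffer =*/ NULL,
          /*.no_alloc   =*/ false,
      };
      struct ggml_context *ctx = ggml_init(params);

      // declare a tiny graph: z = x * y (elementwise)
      struct ggml_tensor *x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
      struct ggml_tensor *y = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
      struct ggml_tensor *z = ggml_mul(ctx, x, y);

      ggml_set_f32(x, 2.0f);
      ggml_set_f32(y, 3.0f);

      // build the forward graph and evaluate it
      struct ggml_cgraph gf = ggml_build_forward(z);
      gf.n_threads = 1;
      ggml_graph_compute(ctx, &gf);

      printf("%f\n", ggml_get_f32_1d(z, 0));  // prints 6.000000
      ggml_free(ctx);
      return 0;
    }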

LICENSE

  MIT

ORIGIN

  https://github.com/ggerganov/llama.cpp
  commit 0b2da20538d01926b77ea237dd1c930c4d20b686
  Author: Stephan Walter <stephan@walter.name>
  Date:   Wed Apr 26 20:26:42 2023 +0000
  ggml : slightly faster AVX2 implementation for Q5 (#1197)

LOCAL CHANGES

  - Make it possible for loaded prompts to be cached to disk
  - Introduce -v and --verbose flags
  - Reduce batch size from 512 to 32
  - Allow --n_keep to specify a substring of the prompt
  - Don't print stats / diagnostics unless -v is passed
  - Reduce --top_p default from 0.95 to 0.70
  - Change --reverse-prompt to no longer imply --interactive
  - Permit --reverse-prompt to specify a custom EOS when non-interactive
  - Refactor headers per cosmo convention
  - Replace multi-character constants like 'ggjt' with READ32BE("ggjt")
    (see the first sketch after this list)
  - Remove C++ exceptions; use a Die() function instead (see the second
    sketch after this list)
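
  The READ32BE() change deserves a note: a multi-character constant
  like 'ggjt' has an implementation-defined value in C and C++, while
  READ32BE() composes the four bytes explicitly. The stand-in macro
  below (the real one lives in cosmopolitan's libc headers) makes the
  equivalence clear:

    #include <stdint.h>
    #include <stdio.h>

    // stand-in for cosmopolitan's READ32BE(): load 4 bytes big-endian
    #define READ32BE(S)                                  \
      ((uint32_t)(255 & (S)[0]) << 24 |                  \
       (uint32_t)(255 & (S)[1]) << 16 |                  \
       (uint32_t)(255 & (S)[2]) << 8 |                   \
       (uint32_t)(255 & (S)[3]))

    int main(void) {
      uint32_t magic = 0x67676A74;  // "ggjt" file magic, read big-endian
      // before: if (magic == 'ggjt')  -- implementation-defined value
      // after: explicit byte composition, identical on every compiler
      if (magic == READ32BE("ggjt")) {
        printf("valid ggjt magic\n");
      }
      return 0;
    }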
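
  With exceptions removed, error paths call Die() rather than throw. A
  minimal sketch of what such a helper might look like (the actual
  signature in llama.cc may differ):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    // hypothetical sketch: print a formatted message and exit,
    // in place of throwing and unwinding
    static void Die(const char *fmt, ...) {
      va_list va;
      va_start(va, fmt);
      vfprintf(stderr, fmt, va);
      va_end(va);
      fputc('\n', stderr);
      exit(1);
    }

    int main(int argc, char *argv[]) {
      if (argc < 2) Die("usage: %s MODEL", argv[0]);
      return 0;
    }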