cosmopolitan/third_party/ggml
Justine Tunney 4d629fd424
Fix stack abuse in llama.cc
This change also incorporates improvements for MODE=asan. It's been
confirmed that o/asan/third_party/ggml/llama.com will work.

Fixes #829
2023-06-08 07:12:26 -07:00
common.cc Validate privileged code relationships 2023-06-08 04:38:06 -07:00
common.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
companionai.txt Upgrade llama.cpp to e6a46b0ed1884c77267dc70693183e3b7164e0e0 2023-05-10 04:20:48 -07:00
fp16.c Prevent ftrace from misaligning functions 2023-06-06 06:00:31 -07:00
fp16.h Make more ML improvements 2023-05-16 08:07:23 -07:00
fp16.internal.h Make more ML improvements 2023-05-16 08:07:23 -07:00
ggjt.v1.c Get radpajama to build 2023-05-13 20:44:36 -07:00
ggjt.v1.internal.h Add support for new GGJT v2 quantizers 2023-05-13 08:08:32 -07:00
ggjt.v1.q4_0.c Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggjt.v1.q4_0.h Add support for new GGJT v2 quantizers 2023-05-13 08:08:32 -07:00
ggjt.v1.q4_1.c Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggjt.v1.q4_1.h Add support for new GGJT v2 quantizers 2023-05-13 08:08:32 -07:00
ggjt.v1.q4_2.c Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggjt.v1.q4_2.h Add support for new GGJT v2 quantizers 2023-05-13 08:08:32 -07:00
ggjt.v1.q5_0.c Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggjt.v1.q5_0.h Add support for new GGJT v2 quantizers 2023-05-13 08:08:32 -07:00
ggjt.v1.q5_1.c Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggjt.v1.q5_1.h Add support for new GGJT v2 quantizers 2023-05-13 08:08:32 -07:00
ggjt.v1.q8_0.c Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggjt.v1.q8_0.h Add support for new GGJT v2 quantizers 2023-05-13 08:08:32 -07:00
ggjt.v1.q8_1.c Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggjt.v1.q8_1.h Add support for new GGJT v2 quantizers 2023-05-13 08:08:32 -07:00
ggjt.v2.c Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.internal.h Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggjt.v2.q4_0.c Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q4_0.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q4_1.c Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q4_1.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q5_0.c Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q5_0.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q5_1.c Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q5_1.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q8_0.c Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q8_0.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q8_1.c Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggjt.v2.q8_1.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggml.c Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
ggml.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
ggml.mk Fix stack abuse in llama.cc 2023-06-08 07:12:26 -07:00
LICENSE Import llama.cpp 2023-04-27 14:37:14 -07:00
llama.cc Fix stack abuse in llama.cc 2023-06-08 07:12:26 -07:00
llama.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
llama_util.h Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
main.cc Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
perplexity.cc Perform some code cleanup 2023-05-15 16:32:10 -07:00
quantize.cc Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00
README.cosmo Introduce support for GGJT v3 file format 2023-06-03 15:46:21 -07:00

DESCRIPTION

  ggml is a machine learning library useful for LLM inference on CPUs

LICENSE

  MIT

ORIGIN

  https://github.com/ggerganov/llama.cpp
  d8bd0013e8768aaa3dc9cfc1ff01499419d5348e

LOCAL CHANGES

  - Maintain support for deprecated file formats
  - Make it possible for loaded prompts to be cached to disk
  - Introduce -v and --verbose flags
  - Reduce batch size from 512 to 32
  - Allow --n_keep to specify a substring of the prompt
  - Don't print stats / diagnostics unless -v is passed
  - Reduce the --top_p default from 0.95 to 0.70
  - Change --reverse-prompt to no longer imply --interactive
  - Permit --reverse-prompt to specify a custom EOS when non-interactive
  - Refactor headers per cosmo convention
  - Remove C++ exceptions; use Die() function instead
  - Remove division from matrix multiplication
  - Let the quantizer convert between ggjt formats