cosmopolitan/third_party/ggml

FILES

  common.cc
  common.h
  ggml.c
  ggml.h
  ggml.mk
  LICENSE
  llama.cc
  llama.h
  llama_util.h
  main.cc
  README.cosmo

DESCRIPTION

  ggml is a tensor library for machine learning, written in C, that is
  well suited to running LLM inference on CPUs
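
  A minimal sketch of the compute-graph API the library exposes, using
  function names declared in ggml.h as of this import; the cosmo-style
  include path, arena size, and tensor shapes are illustrative, not
  prescribed by this package:

    #include <stdio.h>
    #include "third_party/ggml/ggml.h"

    int main(void) {
      // all tensors and graph bookkeeping live in one arena
      struct ggml_init_params params = {
          .mem_size   = 16 * 1024 * 1024,
          .mem_buffer = NULL,   // let ggml allocate the arena
          .no_alloc   = false,
      };
      struct ggml_context *ctx = ggml_init(params);

      // declare c = a + b over two 4-element f32 vectors
      struct ggml_tensor *a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
      struct ggml_tensor *b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
      struct ggml_tensor *c = ggml_add(ctx, a, b);

      ggml_set_f32(a, 1.5f);  // broadcast-fill the inputs
      ggml_set_f32(b, 2.5f);

      // build the compute graph and evaluate it on the cpu
      struct ggml_cgraph gf = ggml_build_forward(c);
      gf.n_threads = 1;
      ggml_graph_compute(ctx, &gf);

      printf("%g\n", ggml_get_f32_1d(c, 0));  // prints 4
      ggml_free(ctx);
      return 0;
    }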

LICENSE

  MIT

ORIGIN

  https://github.com/ggerganov/llama.cpp
  commit 0b2da20538d01926b77ea237dd1c930c4d20b686
  Author: Stephan Walter <stephan@walter.name>
  Date:   Wed Apr 26 20:26:42 2023 +0000
  ggml : slightly faster AVX2 implementation for Q5 (#1197)

LOCAL CHANGES

  - Refactor #include lines per the cosmo convention, i.e. header
    paths written out from the repository root
  - Replace multi-character literals like 'ggjt' with READ32BE("ggjt")
    (see the first sketch below)
  - Remove C++ exceptions; call the Die() helper instead (see the
    second sketch below)
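
  To illustrate the READ32BE() change: a multi-character literal such
  as 'ggjt' has an implementation-defined value in C (and trips
  -Wmultichar), whereas READ32BE() composes the four bytes explicitly.
  The macro body below is a simplified rendition of cosmopolitan's
  definition, not the exact one:

    #include <stdint.h>
    #include <stdio.h>

    // deserialize four bytes as a big-endian 32-bit value,
    // which is well defined on every compiler and architecture
    #define READ32BE(S)                 \
      ((uint32_t)(255 & (S)[0]) << 24 | \
       (uint32_t)(255 & (S)[1]) << 16 | \
       (uint32_t)(255 & (S)[2]) << 8 |  \
       (uint32_t)(255 & (S)[3]))

    int main(void) {
      // before: magic == 'ggjt'           (implementation-defined)
      // after:  magic == READ32BE("ggjt") (always 0x67676A74)
      printf("%08x\n", READ32BE("ggjt"));  // prints 67676a74
      return 0;
    }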
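
  And a sketch of the exception removal: sites that previously threw a
  std::runtime_error call a fatal-error helper instead, so the code
  builds without C++ exception support. The exact signature of Die()
  in this port may differ from this rendition:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    // print a formatted message to stderr and terminate, replacing
    // code paths that formerly threw std::runtime_error
    static void Die(const char *fmt, ...) {
      va_list va;
      va_start(va, fmt);
      vfprintf(stderr, fmt, va);
      va_end(va);
      fputc('\n', stderr);
      exit(1);
    }

    // before: throw std::runtime_error(format("unknown magic: %x", m));
    // after:  Die("unknown magic: %x", m);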