cosmopolitan/third_party/ggml

Latest commit: 420f889ac3 by Justine Tunney, 2023-04-28 01:20:47 -07:00
Further optimize the math library

The sincosf() function is now twice as fast, thanks to ARM Limited. The
same may also be true of logf() and expm1f(), which have been updated.
  common.cc     Import llama.cpp                   2023-04-27 14:37:14 -07:00
  common.h      Import llama.cpp                   2023-04-27 14:37:14 -07:00
  ggml.c        Further optimize the math library  2023-04-28 01:20:47 -07:00
  ggml.h        Import llama.cpp                   2023-04-27 14:37:14 -07:00
  ggml.mk       Import llama.cpp                   2023-04-27 14:37:14 -07:00
  LICENSE       Import llama.cpp                   2023-04-27 14:37:14 -07:00
  llama.cc      Import llama.cpp                   2023-04-27 14:37:14 -07:00
  llama.h       Import llama.cpp                   2023-04-27 14:37:14 -07:00
  llama_util.h  Import llama.cpp                   2023-04-27 14:37:14 -07:00
  main.cc       Import llama.cpp                   2023-04-27 14:37:14 -07:00
  README.cosmo  Import llama.cpp                   2023-04-27 14:37:14 -07:00

DESCRIPTION

  ggml is a machine learning library used for LLM inference on CPUs.

LICENSE

  MIT

ORIGIN

  https://github.com/ggerganov/llama.cpp
  commit 0b2da20538d01926b77ea237dd1c930c4d20b686
  Author: Stephan Walter <stephan@walter.name>
  Date:   Wed Apr 26 20:26:42 2023 +0000
  ggml : slightly faster AVX2 implementation for Q5 (#1197)

LOCAL CHANGES

  - Refactor headers per the cosmo convention
  - Replace multi-character literals like 'ggjt' with READ32BE("ggjt")
  - Remove C++ exceptions; call the Die() function instead