Mirror of https://github.com/jart/cosmopolitan.git, synced 2025-09-10 10:43:48 +00:00
Import llama.cpp

https://github.com/ggerganov/llama.cpp 0b2da20538d01926b77ea237dd1c930c4d20b686
See third_party/ggml/README.cosmo for changes
parent f42089d5c6
commit e8b43903b2
14 changed files with 18313 additions and 2 deletions
third_party/ggml/README.cosmo (vendored, new file, 21 lines)
@@ -0,0 +1,21 @@

DESCRIPTION

  ggml is a machine learning library useful for LLM inference on CPUs

LICENSE

  MIT

ORIGIN

  https://github.com/ggerganov/llama.cpp
  commit 0b2da20538d01926b77ea237dd1c930c4d20b686
  Author: Stephan Walter <stephan@walter.name>
  Date:   Wed Apr 26 20:26:42 2023 +0000

      ggml : slightly faster AVX2 implementation for Q5 (#1197)

LOCAL CHANGES

  - Refactor headers per cosmo convention
  - Replace code like 'ggjt' with READ32BE("ggjt")
  - Remove C++ exceptions; use Die() function instead