vbatts / llama.cpp
llama.cpp / ggml (at commit bcefa03bc0)
Latest commit: bcefa03bc0 by Johannes Gäßler, "CUDA: fix MMQ stream-k rounding if ne00 % 128 != 0" (#8311), 2024-07-05 09:05:34 +02:00
Name                         | Last commit message                                                                                  | Date
cmake                        | llama : reorganize source code + improve CMake (#8006)                                              | 2024-06-26 18:33:02 +03:00
include                      | Removes multiple newlines at the end of files that is breaking the editorconfig step of CI. (#8258) | 2024-07-02 12:18:10 -04:00
src                          | CUDA: fix MMQ stream-k rounding if ne00 % 128 != 0 (#8311)                                          | 2024-07-05 09:05:34 +02:00
CMakeLists.txt               | ggml : add GGML_CUDA_USE_GRAPHS option, restore GGML_CUDA_FORCE_CUBLAS (cmake) (#8140)              | 2024-06-26 21:34:14 +02:00
ggml_vk_generate_shaders.py  | llama : reorganize source code + improve CMake (#8006)                                              | 2024-06-26 18:33:02 +03:00