vbatts/llama.cpp
llama.cpp/ggml/src/ggml-vulkan (at commit b044a0fe3c)

Latest commit b044a0fe3c by Wagner Bruna: vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (#11592), 2025-02-10 07:08:22 +01:00
cmake            fix: ggml: fix vulkan-shaders-gen build (#10448)                                               2025-01-15 14:17:42 +01:00
vulkan-shaders   vulkan: optimize coopmat2 iq2/iq3 callbacks (#11521)                                           2025-02-06 07:15:30 +01:00
CMakeLists.txt   fix: ggml: fix vulkan-shaders-gen build (#10448)                                               2025-01-15 14:17:42 +01:00
ggml-vulkan.cpp  vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (#11592)  2025-02-10 07:08:22 +01:00
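The latest commit (#11592) adds the GGML_VK_PREFER_HOST_MEMORY environment variable, which tells the Vulkan backend to allocate buffers from host memory instead of VRAM. A minimal usage sketch follows; the binary name and model path are illustrative placeholders, not part of the commit above.

```shell
# Ask the Vulkan backend to prefer host memory over VRAM allocation,
# per commit b044a0fe3c (#11592). Setting the variable to a non-empty
# value enables the behavior for the current shell session.
export GGML_VK_PREFER_HOST_MEMORY=1

# Hypothetical invocation: binary name and model path are placeholders.
./llama-cli -m ./models/model.gguf -p "Hello"
```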