vbatts/llama.cpp
Actions
All workflows: build.yml, close-issue.yml, docker.yml, editorconfig.yml, gguf-publish.yml, labeler.yml, python-check-requirements.yml, python-lint.yml, python-type-check.yml, server.yml
docs : Quantum -> Quantized (#8666)
python-lint.yml #2: commit 4b0eff3df5 pushed by vbatts to branch vbatts-finetune, 2025-05-01 17:44:33 +00:00, duration 0s
ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (#3370)
code-coverage.yml #1: commit da0400344b pushed by vbatts to branch vbatts-gguf-2023-sept, 2025-02-11 19:37:50 +00:00, duration 0s