vbatts/llama.cpp
1248 commits · 380 branches · 3056 tags · 365 MiB
Branch: custom-attention-mask-no-roped-cache

Commit graph (2 commits)

Author           SHA1        Message                                                                Date
Georgi Gerganov  1fb033fd85  ggml : ggml_rope now takes a vector with positions instead of n_past  2023-09-17 21:17:10 +03:00
Georgi Gerganov  c5df72e848  tests : verify that RoPE is "additive"                                 2023-09-17 17:55:12 +03:00
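
The "additive" property checked in the second commit follows from how RoPE works: each pair of embedding dimensions is rotated by an angle proportional to the token position, and 2D rotations compose by adding their angles, so applying RoPE at position p1 and then at p2 gives the same result as a single application at p1 + p2. This is presumably what allows positions to be adjusted after the fact with one extra RoPE application, and it pairs naturally with the first commit's switch from a single n_past offset to explicit per-token positions. Below is a minimal standalone sketch of the identity on one dimension pair; the frequency and input values are made up for illustration, and this is not the actual test code from the commit.

#include <math.h>
#include <stdio.h>

/* Rotate the 2D pair (x, y) by angle a, as RoPE does for each
   dimension pair of an embedding. */
static void rot2(float a, float *x, float *y) {
    const float c = cosf(a), s = sinf(a);
    const float x0 = *x, y0 = *y;
    *x = x0 * c - y0 * s;
    *y = x0 * s + y0 * c;
}

int main(void) {
    const float theta = 0.37f;   /* per-pair frequency (hypothetical value) */
    const int   p1 = 5, p2 = 3;  /* two positions to compose */

    /* Rotate once by (p1 + p2) * theta ... */
    float ax = 1.0f, ay = 2.0f;
    rot2((p1 + p2) * theta, &ax, &ay);

    /* ... and compare with rotating by p1 * theta, then by p2 * theta. */
    float bx = 1.0f, by = 2.0f;
    rot2(p1 * theta, &bx, &by);
    rot2(p2 * theta, &bx, &by);

    printf("once : %f %f\n", ax, ay);
    printf("twice: %f %f\n", bx, by);  /* matches up to float rounding */
    return 0;
}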