wip: llama : separate recurrent states from the KV cache
This will be necessary to support Jamba (and other models that mix recurrent layers with Attention). It doesn't compile yet, and finding a slot for recurrent states isn't done correctly yet.
parent 5fb1574c81
commit 271104c65c