Update README.md
parent 21ac3a1503
commit 73cae1e306

1 changed file with 5 additions and 0 deletions
@@ -590,6 +590,11 @@ Several quantization methods are supported. They differ in the resulting model d
 | 13B | ms/tok @ 8th | -    | 73  | 82  | 98  | 105 | 128 |
 | 13B | bits/weight  | 16.0 | 4.5 | 5.0 | 5.5 | 6.0 | 8.5 |
 
+- [k-quants](https://github.com/ggerganov/llama.cpp/pull/1684)
+- recent k-quants improvements
+  - [#2707](https://github.com/ggerganov/llama.cpp/pull/2707)
+  - [#2807](https://github.com/ggerganov/llama.cpp/pull/2807)
+
 ### Perplexity (measuring model quality)
 
 You can use the `perplexity` example to measure perplexity over a given prompt (lower perplexity is better).
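For reference, the `perplexity` tool mentioned in the context lines lives under the repo's examples. A minimal invocation might look like the sketch below; the model and dataset paths are placeholders rather than anything from this commit, so check `./perplexity --help` in your checkout for the current flags:

```bash
# Build the perplexity example, then score a raw text file with it.
# Lower perplexity means the model predicts the text better.
make perplexity
./perplexity -m ./models/13B/ggml-model-q4_0.gguf -f ./wikitext-2-raw/wiki.test.raw
```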
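One note on the table context above: the bits/weight column translates directly into the on-disk size difference that the section header refers to. As a rough back-of-the-envelope estimate (ignoring non-weight tensors and file overhead, so an approximation rather than a measured figure), a 13B-parameter model at 4.5 bits/weight occupies about 13e9 × 4.5 / 8 ≈ 7.3 GB, versus 13e9 × 16 / 8 = 26 GB at F16.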