fix: readme
parent f97c587639
commit ef61a6667b
1 changed file with 2 additions and 3 deletions
@@ -1,11 +1,10 @@
 # AWQ: Activation-aware Weight Quantization for LLM - version apply to llamacpp
 [[Paper](https://arxiv.org/abs/2306.00978)][[Original Repo](https://github.com/mit-han-lab/llm-awq)][[Easy-to-use Repo](https://github.com/casper-hansen/AutoAWQ)]
-Thank VinAI Team a lot for helping me on this project.
 
 **Supported models:**
 
-- [X] LLaMA 🦙
-- [x] LLaMA 2 🦙🦙
+- [X] LLaMA
+- [x] LLaMA 2
 - [X] MPT
 - [X] Mistral AI v0.1
 - [ ] Bloom