From ef61a6667b43495d63f1bd826e7b543ba9d2d795 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Tr=E1=BA=A7n=20=C4=90=E1=BB=A9c=20Nam?=
Date: Tue, 19 Dec 2023 14:40:12 +0700
Subject: [PATCH] fix: readme

---
 awqpy/README.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/awqpy/README.md b/awqpy/README.md
index 77445f9f3..c5c597715 100644
--- a/awqpy/README.md
+++ b/awqpy/README.md
@@ -1,11 +1,10 @@
 # AWQ: Activation-aware Weight Quantization for LLM - version apply to llamacpp
 [[Paper](https://arxiv.org/abs/2306.00978)][[Original Repo](https://github.com/mit-han-lab/llm-awq)][[Easy-to-use Repo](https://github.com/casper-hansen/AutoAWQ)]
-Thank VinAI Team a lot for helping me on this project.
 
 **Supported models:**
 
-- [X] LLaMA 🦙
-- [x] LLaMA 2 🦙🦙
+- [X] LLaMA
+- [x] LLaMA 2
 - [X] MPT
 - [X] Mistral AI v0.1
 - [ ] Bloom