From 3e31203b8df2f31325eb422c16e75ae64c7b2e2b Mon Sep 17 00:00:00 2001
From: Ziang Wu <97337387+ZiangWu-77@users.noreply.github.com>
Date: Tue, 19 Mar 2024 22:46:47 +0800
Subject: [PATCH] Update examples/llava/MobileVLM-README.md

Co-authored-by: Georgi Gerganov
---
 examples/llava/MobileVLM-README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/llava/MobileVLM-README.md b/examples/llava/MobileVLM-README.md
index 3fe38c741..c1f361d17 100644
--- a/examples/llava/MobileVLM-README.md
+++ b/examples/llava/MobileVLM-README.md
@@ -36,7 +36,7 @@ git clone https://huggingface.co/openai/clip-vit-large-patch14-336
 python ./examples/llava/llava-surgery.py -m path/to/MobileVLM-1.7B
 ```

-3. Use `convert-image-encoder-to-gguf.py` with `--projector-type ldp` (for **V2** the arg is `--projector-type ldpv2`)to convert the LLaVA image encoder to GGUF :
+3. Use `convert-image-encoder-to-gguf.py` with `--projector-type ldp` (for **V2** the arg is `--projector-type ldpv2`) to convert the LLaVA image encoder to GGUF:
 ```sh
 python ./examples/llava/convert-image-encoder-to-gguf \
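
For reference, the hunk above cuts the command off at its continuation backslash. A minimal sketch of what the full invocation in the README plausibly looks like is shown below; the `-m`, `--llava-projector`, and `--output-dir` arguments and their paths are assumptions based on the script's usual usage in llama.cpp, not part of this patch:

```sh
# Sketch of the full step-3 command (flags after the backslash are assumed,
# not shown in this patch). For MobileVLM V2, swap ldp for ldpv2.
python ./examples/llava/convert-image-encoder-to-gguf \
    -m path/to/clip-vit-large-patch14-336 \
    --llava-projector path/to/MobileVLM-1.7B/llava.projector \
    --output-dir path/to/MobileVLM-1.7B \
    --projector-type ldp
```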