From 2a77902a1d290e8f02192adc4488710bf6ffa645 Mon Sep 17 00:00:00 2001
From: Ziang Wu <97337387+ZiangWu-77@users.noreply.github.com>
Date: Thu, 28 Mar 2024 21:37:57 +0800
Subject: [PATCH] Update examples/llava/MobileVLM-README.md

Co-authored-by: Georgi Gerganov
---
 examples/llava/MobileVLM-README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/llava/MobileVLM-README.md b/examples/llava/MobileVLM-README.md
index ae21904e8..063b943ff 100644
--- a/examples/llava/MobileVLM-README.md
+++ b/examples/llava/MobileVLM-README.md
@@ -6,7 +6,7 @@ for more information, please go to [Meituan-AutoML/MobileVLM](https://github.com
 
 The implementation is based on llava, and is compatible with llava and mobileVLM. The usage is basically same as llava.
 
-Notice: The overall process of model inference for both **MobileVLM** and **MobileVLM_V2** models are the same, but the process of model conversion is a little different. Therefore, using **MobileVLM-1.7B** as an example, the different conversion step will be shown.
+Notice: The overall process of model inference for both **MobileVLM** and **MobileVLM_V2** models is the same, but the process of model conversion is a little different. Therefore, using **MobileVLM-1.7B** as an example, the different conversion step will be shown.
 
 ## Usage
 Build with cmake or run `make llava-cli` to build it.