From 08da1841477c97a9e1eee3a7b651b99c80b30790 Mon Sep 17 00:00:00 2001
From: Olivier Chafik
Date: Wed, 12 Jun 2024 11:27:01 +0100
Subject: [PATCH] add hot topic notice to README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index fc54aad18..9bcb1ddb3 100644
--- a/README.md
+++ b/README.md
@@ -22,6 +22,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
+- > [!IMPORTANT] Binaries have been renamed w/ a `llama-` prefix. `main` is now `llama-cli`, `server` is `llama-server`, etc (https://github.com/ggerganov/llama.cpp/pull/7809)
 - **`convert.py` has been deprecated and moved to `examples/convert-legacy-llama.py`, please use `convert-hf-to-gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
 - Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021
 - BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920