From 37db7ef9d0b914cd06f16bd91d03d7e7005b093e Mon Sep 17 00:00:00 2001
From: Neo Zhang Jianyu
Date: Wed, 20 Mar 2024 10:27:59 +0800
Subject: [PATCH] Update README-sycl.md

Co-authored-by: Meng, Hengyu
---
 README-sycl.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README-sycl.md b/README-sycl.md
index b207d389c..32adfda47 100644
--- a/README-sycl.md
+++ b/README-sycl.md
@@ -29,7 +29,7 @@ For Intel CPU, recommend to use llama.cpp for X86 (Intel MKL building).
 ## News
 
 - 2024.3
-  - New base line is ready: tag b2437.
+  - New base line is ready: [tag b2437](https://github.com/ggerganov/llama.cpp/tree/b2437).
   - Support multiple cards: **--split-mode**: [none|layer]; not support [row], it's on developing.
   - Support to assign main GPU by **--main-gpu**, replace $GGML_SYCL_DEVICE.
   - Support detecting all GPUs with level-zero and same top **Max compute units**.
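
For context, the news entries touched by this hunk refer to the `--split-mode` and `--main-gpu` command-line flags. A minimal usage sketch is shown below; the build directory and model path are assumptions, not taken from the patch, and the flags are combined here only to illustrate the multi-card options the entries describe.

```sh
# Sketch only, assuming a SYCL build in ./build/bin and a locally
# downloaded GGUF model (hypothetical path). Offload layers to the GPU,
# pin device 0 as the main GPU, and split layers across all detected cards.
./build/bin/main \
  -m models/llama-2-7b.Q4_0.gguf \
  -p "Building a website can be done in 10 simple steps:" \
  -n 128 -ngl 33 \
  --main-gpu 0 --split-mode layer
```

Per the patched text, `--split-mode row` is not yet supported for SYCL, and `--main-gpu` replaces the older `$GGML_SYCL_DEVICE` environment variable for selecting the primary device.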