From cbecb35619437387382d8b6ed6fb6dcbdd59f368 Mon Sep 17 00:00:00 2001
From: ochafik
Date: Wed, 29 Jan 2025 22:44:46 +0000
Subject: [PATCH] Add tool call to hot topics

---
 README.md           | 1 +
 tests/test-chat.cpp | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ff8536773..748def30f 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 - **How to use [MTLResidencySet](https://developer.apple.com/documentation/metal/mtlresidencyset?language=objc) to keep the GPU memory active?** https://github.com/ggerganov/llama.cpp/pull/11427
 - **VS Code extension for FIM completions:** https://github.com/ggml-org/llama.vscode
+- Universal tool call support in `llama-server`: https://github.com/ggerganov/llama.cpp/pull/9639
 - Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
 - Introducing GGUF-my-LoRA https://github.com/ggerganov/llama.cpp/discussions/10123
 - Hugging Face Inference Endpoints now support GGUF out of the box! https://github.com/ggerganov/llama.cpp/discussions/9669
diff --git a/tests/test-chat.cpp b/tests/test-chat.cpp
index 149eb47c8..4fecdcb41 100644
--- a/tests/test-chat.cpp
+++ b/tests/test-chat.cpp
@@ -3,8 +3,7 @@
 // Also acts as a CLI to generate a Markdown summary of the formats of Jinja templates,
 // e.g. given Minja (http://github.com/google/minja) checked out in parent dir:
 //
-// cmake -B build && cmake --build build --parallel && \
-// ./build/bin/test-chat ../minja/build/tests/*.jinja 2>/dev/null
+// cmake -B build && cmake --build build --parallel && ./build/bin/test-chat ../minja/build/tests/*.jinja 2>/dev/null
 //
 #include "chat.hpp"
 #include "chat-template.hpp"