diff --git a/docs/llguidance.md b/docs/llguidance.md
index 792d20704..cda787b14 100644
--- a/docs/llguidance.md
+++ b/docs/llguidance.md
@@ -13,13 +13,15 @@
 cmake -B build -DLLAMA_LLGUIDANCE=ON
 make -C build -j
 ```
 
+For Windows use `cmake --build build --config Release` instead of `make`.
+
 This requires the Rust compiler and the `cargo` tool to be [installed](https://www.rust-lang.org/tools/install).
 
 ## Interface
 
 There are no new command-line arguments or modifications to `common_params`. When enabled, grammars starting with `%llguidance` are passed to LLGuidance instead of the [current](../grammars/README.md) llama.cpp grammars. Additionally, JSON Schema requests (e.g., using the `-j` argument in `llama-cli`) are also passed to LLGuidance.
 
-For your existing GBNF grammars, you can use [gbnf_to_lark.py script](https://github.com/guidance-ai/llguidance/blob/main/scripts/gbnf_to_lark.py) to convert them to LLGuidance Lark-like format.
+For your existing GBNF grammars, you can use [gbnf_to_lark.py script](https://github.com/guidance-ai/llguidance/blob/main/python/llguidance/gbnf_to_lark.py) to convert them to LLGuidance Lark-like format.
 
 ## Performance
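
As a rough illustration of the workflow this patch documents: convert an existing GBNF grammar with the relocated `gbnf_to_lark.py` script, then pass the result to `llama-cli`. The exact conversion command line below is an assumption (consult the script's own usage); `--grammar-file`, `-m`, and `-p` are existing `llama-cli` options, and the file names are placeholders.

```sh
# Hypothetical invocation of the conversion script from the llguidance repo;
# the argument/output handling is assumed, so check the script before relying on it.
python python/llguidance/gbnf_to_lark.py my_grammar.gbnf > my_grammar.lark

# The converted grammar should begin with `%llguidance`, which is what makes
# llama.cpp route it to LLGuidance instead of the built-in GBNF engine.
head -n 1 my_grammar.lark   # expect something like: %llguidance {}

# Use the converted grammar for constrained generation with the cmake build above.
./build/bin/llama-cli -m model.gguf --grammar-file my_grammar.lark -p "Answer yes or no:"
```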