From 55ed36b3e4ee274842bb194fb41ee9585ef886c1 Mon Sep 17 00:00:00 2001
From: Daniel Bevenius
Date: Tue, 2 Jan 2024 08:59:28 +0100
Subject: [PATCH] finetune: fix typo in README.md

Signed-off-by: Daniel Bevenius
---
 examples/finetune/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/finetune/README.md b/examples/finetune/README.md
index a2a2c1281..a884706c5 100644
--- a/examples/finetune/README.md
+++ b/examples/finetune/README.md
@@ -61,7 +61,7 @@ For example to apply 40% of the 'shakespeare' LORA adapter, 80% of the 'bible' L
   --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
 ```
 
-The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values to big will sometimes result in worse output. Play around to find good values.
+The scale numbers don't need to add up to one, and you can also use numbers greater than 1 to further increase the influence of an adapter. But making the values too big will sometimes result in worse output. Play around to find good values.
 
 Gradient checkpointing reduces the memory requirements by ~50% but increases the runtime. If you have enough RAM, you can make finetuning a bit faster by disabling checkpointing with `--no-checkpointing`.
 
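For context, the README passage this patch touches documents applying several LORA adapters at once with per-adapter scales. A minimal sketch of the kind of invocation it describes, assuming llama.cpp's `main` binary with its `--lora` and `--lora-scaled FNAME SCALE` flags; only the final `--lora` line is visible in the hunk context, and the base model filename here is an assumption inferred from the adapter names:

```sh
# Apply 40% of the 'shakespeare' adapter, 80% of the 'bible' adapter,
# and 100% of a third adapter, matching the percentages in the hunk header.
# open-llama-3b-v2-q8_0.gguf is an assumed base model filename.
./bin/main -m open-llama-3b-v2-q8_0.gguf \
  --lora-scaled lora-open-llama-3b-v2-q8_0-shakespeare-LATEST.bin 0.4 \
  --lora-scaled lora-open-llama-3b-v2-q8_0-bible-LATEST.bin 0.8 \
  --lora lora-open-llama-3b-v2-q8_0-yet-another-one-LATEST.bin
```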