# llama.cpp/examples/training
This directory contains examples related to language model training using llama.cpp/GGML.

So far, finetuning is technically functional (for FP32 models and limited hardware setups), but the code is very much WIP.
Finetuning of Stories 260K and LLaMA 3.2 1b seems to work with 24 GB of memory.
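The F32 GGUF used below can be produced with the conversion script in the repository root. A minimal sketch, assuming a LLaMA 3.2 1b checkpoint has already been downloaded from Hugging Face to `models/llama_3.2-1b/` (an illustrative path):

``` sh
# convert the downloaded checkpoint to an F32 GGUF (input path is an assumption, adjust as needed)
python convert_hf_to_gguf.py models/llama_3.2-1b --outtype f32 --outfile models/llama_3.2-1b-f32.gguf
```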
**For CPU training, compile llama.cpp without any additional backends such as CUDA.**
**For CUDA training, use the maximum number of GPU layers.**
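Both configurations use the standard CMake build. A minimal sketch (the CUDA variant assumes a CUDA toolkit is installed):

``` sh
# CPU-only training build: no additional backends enabled
cmake -B build
cmake --build build --config Release

# CUDA training build: enable the CUDA backend and offload all layers at runtime (e.g. -ngl 999)
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```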
Proof of concept:

``` sh
export model_name=llama_3.2-1b && export quantization=f32
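# fine-tune the F32 base model on the wikitext-2 test set
# (-c, -b and -ub set the context, logical batch and micro-batch sizes)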
./build/bin/finetune --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf -c 512 -b 512 -ub 512
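# measure perplexity of the result; finetuned-model.gguf is written by the finetune run above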
./build/bin/perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model finetuned-model.gguf
```
After training on the test set for 2 epochs, the perplexity of the finetuned model should be lower than that of the original model.
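To quantify the improvement, the same perplexity command can be run on the original model before finetuning and the two values compared. A minimal sketch, reusing the file names from the example above:

``` sh
# baseline: perplexity of the original F32 model on the same data
./build/bin/perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf

# after finetuning: this value should come out lower
./build/bin/perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model finetuned-model.gguf
```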