finetune: automatically allocate all memory, and change command line options

remove the '--n_examples N' parameter, as it no longer makes sense to call the optimization process multiple times in a loop.
add the '--only_write_lora' command line option: skips tokenization and training, and only writes a llama.cpp-compatible LoRA adapter (a parsing sketch follows below).
remove the memory-buffer-related command line options, since memory is now allocated automatically.
improve iteration console output.
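
A minimal sketch of how the new flag might be wired up, in the style of the hand-rolled argument loops used by llama.cpp's examples; the struct and function names here are assumptions, and only the flag name '--only_write_lora' comes from this commit:

    #include <string>

    // Hypothetical parameter struct; the field name is an assumption,
    // only the flag itself is taken from the commit message.
    struct train_params {
        bool only_write_lora = false; // skip tokenization/training, just write the adapter
    };

    static void parse_train_args(int argc, char ** argv, train_params & params) {
        for (int i = 1; i < argc; ++i) {
            std::string arg = argv[i];
            if (arg == "--only_write_lora") {
                params.only_write_lora = true;
            }
        }
    }

With a layout like this, a true only_write_lora would presumably branch straight to the adapter-writing path at startup instead of entering the training loop.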
Author: xaedes, 2023-09-01 15:58:24 +02:00
Parent: 7e01d11a28
Commit: 5bba329e58
