Commit graph

1211 commits

Author SHA1 Message Date
xaedes
620275361d
add debug prints for training memory improvements 2023-08-16 16:23:21 +02:00
xaedes
be7e564b11
bug fixes to make finetune compile
automatic allocator does not work yet
2023-08-16 16:21:43 +02:00
xaedes
50b1e66200
remove const model and layer arguments in API functions for accessing model tensors 2023-08-16 16:21:02 +02:00
xaedes
28ee0c8583
first draft for LoRA finetune training 2023-08-16 15:31:04 +02:00
xaedes
c0a372fd3d
add API functions to access remaining model parameters:
mult, head and rot
2023-08-16 15:30:31 +02:00
xaedes
9eb1ef8653
move and remove code 2023-08-15 14:03:02 +02:00
xaedes
5e059ace25
add stub example for finetuning, based on train-text-from-scratch 2023-08-15 13:54:28 +02:00
xaedes
316b0707f4
add API functions to access llama model tensors 2023-08-15 13:53:13 +02:00
Georgi Gerganov
b5ffb2849d
scripts : add helper script to get wikitext 2023-08-15 10:05:25 +03:00
Jhen-Jie Hong
3ebb00935f
server : add missing /json-schema-to-grammar.mjs (#2616)
fixes #2611
2023-08-15 06:14:14 +08:00
xaedes
3b5515bbe0
reverse order of for loop in ggml_build_backward_expand to save memory when using gradient checkpointing and allocator
with this loop order, gradient checkpointing with the allocator saves 13% memory on a 16-layer model; on a 2-layer model it saves 2% memory.

the computation results are the same
2023-08-14 22:09:36 +02:00
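A minimal sketch of the loop-order change, assuming the ggml_cgraph fields of that period; ggml_compute_backward is ggml's internal per-node backward helper, not part of the public API:

```c
#include "ggml.h"

// Sketch (not the actual ggml source): expand the backward graph by
// visiting the forward nodes in REVERSE order. With gradient
// checkpointing and the allocator, each gradient is then created close
// to its last use, so the allocator can recycle buffers sooner.
static void build_backward_reversed(struct ggml_context * ctx,
                                    struct ggml_cgraph  * gf) {
    for (int i = gf->n_nodes - 1; i >= 0; i--) {
        struct ggml_tensor * node = gf->nodes[i];
        if (node->grad != NULL) {
            // internal ggml helper that emits the backward ops for one node
            ggml_compute_backward(ctx, node, /*inplace=*/false);
        }
    }
}
```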
xaedes
56228461c8
fix memory "leak" in optimizers
previously, each iteration allocated a new cplan with new memory for its work data.
now the cplan is created only once at the start of optimization, and each iteration reuses it and its work data.
2023-08-14 21:12:02 +02:00
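The fix amounts to hoisting the cplan out of the optimizer loop. A sketch against the ggml API of that period (ggml_graph_plan / ggml_graph_compute):

```c
#include <stdlib.h>
#include "ggml.h"

// Sketch: allocate the cplan work buffer once and reuse it across
// iterations, instead of allocating a fresh one inside every step.
// Assumes gb is the already-built backward graph.
static void optimize(struct ggml_cgraph * gb, int n_threads, int n_iter) {
    struct ggml_cplan cplan = ggml_graph_plan(gb, n_threads);
    cplan.work_data = malloc(cplan.work_size);  // reused each iteration

    for (int i = 0; i < n_iter; i++) {
        // ... set inputs, zero gradients ...
        ggml_graph_compute(gb, &cplan);
        // ... optimizer parameter update ...
    }
    free(cplan.work_data);
}
```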
xaedes
3e6468b097
fix the check for when to create the temporary backward graph
the temporary backward graph is only necessary when using checkpointing
2023-08-14 20:57:18 +02:00
xaedes
098654c277
only use ggml_allocr_alloc when the tensor has NULL data and is not a view 2023-08-14 20:57:18 +02:00
xaedes
faf3e21eaf
add debug asserts in ggml_allocr_alloc to catch some common pitfalls when using this function directly 2023-08-14 20:50:09 +02:00
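A sketch of the guard these two allocator commits describe; ggml_allocr_alloc is the real ggml-alloc entry point, while is_view here is a simplified stand-in for the internal view check:

```c
#include "ggml.h"
#include "ggml-alloc.h"

// views share the data pointer of their source tensor,
// so they must never get fresh memory of their own
static bool is_view(const struct ggml_tensor * t) {
    return t->op == GGML_OP_VIEW    || t->op == GGML_OP_RESHAPE ||
           t->op == GGML_OP_PERMUTE || t->op == GGML_OP_TRANSPOSE;
}

static void maybe_alloc(struct ggml_allocr * alloc, struct ggml_tensor * t) {
    // only allocate tensors that have no data yet and are not views;
    // the asserts added inside ggml_allocr_alloc check the same conditions
    if (t->data == NULL && !is_view(t)) {
        ggml_allocr_alloc(alloc, t);
    }
}
```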
xaedes
6e280b24dc
remove unused forward_batch function 2023-08-14 19:02:12 +02:00
xaedes
3794dceb7f
remove unused train params: mem_compute1_gb & mem_compute2_gb
mem_compute_gb is used for the compute buffer when the automatic memory allocator is not enabled; with the allocator enabled it can be very small, holding only the tensor definitions
mem_compute0_gb is used by the automatic memory allocator (as long as measurement of the max required size is not implemented)
2023-08-14 18:44:42 +02:00
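A sketch of how the two remaining parameters are meant to be used; the field names follow the commit message, while use_alloc is an assumption:

```c
#include <stddef.h>

// hypothetical subset of the training params, for illustration only
struct train_params {
    float mem_compute_gb;   // compute buffer size when the allocator is OFF
    float mem_compute0_gb;  // compute buffer size when the allocator is ON
    int   use_alloc;        // assumed flag: use the automatic allocator
};

static size_t compute_buf_size(const struct train_params * p) {
    const size_t GB = 1024ull*1024ull*1024ull;
    // with the allocator enabled the buffer only holds tensor
    // definitions (metadata), so it can be much smaller
    return (size_t)((p->use_alloc ? p->mem_compute0_gb
                                  : p->mem_compute_gb) * GB);
}
```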
xaedes
6f161c784b
remove trailing whitespace 2023-08-14 18:33:27 +02:00
xaedes
271e4d64b5
remove unused training parameters "use_scratch" and "use_unified" 2023-08-14 18:31:59 +02:00
xaedes
c954f41ca4
remove handwritten training functions 2023-08-14 18:30:50 +02:00
xaedes
fe788a1c7a
allocate graph on context using ggml_new_graph 2023-08-14 18:24:13 +02:00
xaedes
75baed230c
set names for tensors in unified train function for easier debugging 2023-08-14 18:17:14 +02:00
xaedes
3e99a8d653
format name of cloned tensors with " (clone)" suffix 2023-08-14 18:15:09 +02:00
xaedes
865c4cd3c1
integrate unified training function which may use memory allocator
the unified training function also accepts arguments controlling whether to use flash attention and/or gradient checkpointing
2023-08-14 18:12:58 +02:00
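A sketch of the flash-attention switch such a unified function implies; ggml_flash_attn is the real ggml op of that period, while naive_attention is a hypothetical stand-in for the handwritten path:

```c
#include "ggml.h"

// hypothetical fallback standing in for the handwritten attention path
struct ggml_tensor * naive_attention(struct ggml_context * ctx,
        struct ggml_tensor * q, struct ggml_tensor * k,
        struct ggml_tensor * v);

// Sketch: one code path switched by a flag, instead of separate
// handwritten train functions per configuration.
static struct ggml_tensor * attention(struct ggml_context * ctx,
        struct ggml_tensor * q, struct ggml_tensor * k,
        struct ggml_tensor * v, bool use_flash_attn) {
    return use_flash_attn
        ? ggml_flash_attn(ctx, q, k, v, /*masked=*/true)
        : naive_attention(ctx, q, k, v);
}
```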
xaedes
4ed096c6b0
add training options whether to use allocator and/or unified training function 2023-08-14 18:10:02 +02:00
xaedes
d6c5b03858
fix ASSERT to work with zero layers 2023-08-14 18:08:19 +02:00
xaedes
38f4438c32
make sure some tensors are not reallocated, by inserting new temporary nodes that depend on them:
output and parameter gradient tensors need to be available at the end of the graph execution

parameter gradient tensors also need to be available before the graph execution because they are set to zero before each optimizer iteration

checkpoint tensors are allocated all together to reduce memory allocator fragmentation

afterwards, in addition to the temporary nodes, we also need to reset the temporary leafs
2023-08-14 18:07:16 +02:00
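A sketch of the keep-alive pattern the message describes, close to what train-text-from-scratch did at the time but with assumed variable names: appending a cheap dependent node makes the allocator see a later use, so it will not reuse the tensor's memory.

```c
#include "ggml.h"

static void keep_alive(struct ggml_context * ctx, struct ggml_cgraph * gb,
                       struct ggml_tensor * output,
                       struct ggml_tensor ** grads, int n_grads) {
    struct ggml_tensor * one = ggml_new_f32(ctx, 1.0f);
    // the output must still be readable after graph execution
    ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, output, one));
    // parameter gradients are zeroed before each optimizer iteration,
    // so they must survive graph execution as well
    for (int i = 0; i < n_grads; i++) {
        ggml_build_forward_expand(gb, ggml_scale_inplace(ctx, grads[i], one));
    }
    // per the message, the temporary nodes (and leafs) are reset afterwards
}
```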
xaedes
9716eb8ef0
fix variable name and add missing boolean negation 2023-08-14 17:59:19 +02:00
xaedes
5884b43a62
add input tensors as checkpoints
so that the recursive tensor cloning used by gradient checkpointing terminates at the input tensors
2023-08-14 17:58:49 +02:00
xaedes
b2f1310196
swap arguments to commutative ops to be the same as in forward_batch_wo_cache_flash_attn 2023-08-14 17:57:13 +02:00
xaedes
5a11b75875
fix variable names 2023-08-14 17:55:51 +02:00
xaedes
345f516f7c
correctly clone view tensors by setting data pointers
without this, checkpointing would only work when used together with the memory allocator
2023-08-14 17:55:13 +02:00
xaedes
52c92c0a8c
terminate recursive tensor cloning when reaching tensor without src tensors 2023-08-14 17:53:36 +02:00
xaedes
0dd496c5e2
fix variable name and add missing type cast 2023-08-14 17:52:48 +02:00
xaedes
cfddc36be2
correctly clone reshape and permute operations by also cloning tensor->nb values 2023-08-14 17:52:15 +02:00
xaedes
d43741540b
don't use allocate hash_map on context
because the context has no_alloc=true when using the memory allocator, which results in NULL data pointers
2023-08-14 17:51:20 +02:00
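A condensed sketch tying together the cloning fixes from the checkpointing commits above; the hash map and helper names (map_get, map_put, is_checkpoint) are assumptions, not the actual API:

```c
#include <stdio.h>
#include <string.h>
#include "ggml.h"

// hypothetical map of original tensor -> clone; allocated with malloc,
// NOT on the ggml context (whose no_alloc=true would leave data NULL)
struct hash_map;
struct ggml_tensor * map_get(struct hash_map * m, struct ggml_tensor * t);
void                 map_put(struct hash_map * m, struct ggml_tensor * t,
                             struct ggml_tensor * clone);
bool                 is_checkpoint(struct ggml_tensor * t);

static bool is_view(const struct ggml_tensor * t) {
    return t->op == GGML_OP_VIEW    || t->op == GGML_OP_RESHAPE ||
           t->op == GGML_OP_PERMUTE || t->op == GGML_OP_TRANSPOSE;
}

static struct ggml_tensor * clone_rec(struct ggml_context * ctx,
        struct hash_map * map, struct ggml_tensor * t) {
    if (t == NULL)        return NULL;
    if (is_checkpoint(t)) return t;  // inputs/checkpoints end the recursion
    if (t->src[0] == NULL && t->src[1] == NULL) {
        return t;                    // no source tensors: nothing to recompute
    }
    struct ggml_tensor * c = map_get(map, t);
    if (c != NULL)        return c;  // already cloned

    c = ggml_dup_tensor(ctx, t);
    c->op = t->op;
    memcpy(c->nb, t->nb, sizeof(t->nb)); // strides, for reshape/permute clones
    c->src[0] = clone_rec(ctx, map, t->src[0]);
    c->src[1] = clone_rec(ctx, map, t->src[1]);
    if (is_view(t)) {
        c->data = t->data;               // views must share the source data
    }
    char name[GGML_MAX_NAME];
    snprintf(name, sizeof(name), "%s (clone)", ggml_get_name(t));
    ggml_set_name(c, name);
    map_put(map, t, c);
    return c;
}
```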
xaedes
fc826c8ea8
in train function replace add_inplace by regular add
because using add_inplace seems to result in different gradients
2023-08-14 17:49:22 +02:00
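A sketch of the one-line change: ggml_add allocates a fresh result tensor, while ggml_add_inplace writes into its first argument; the commit only notes that the in-place variant seemed to produce different gradients.

```c
#include "ggml.h"

// variable names are stand-ins for the accumulation in the train function
static struct ggml_tensor * accumulate(struct ggml_context * ctx,
        struct ggml_tensor * acc, struct ggml_tensor * x) {
    // before: return ggml_add_inplace(ctx, acc, x);
    return ggml_add(ctx, acc, x);
}
```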
Jhen-Jie Hong
d783f7982e
metal : return null instead of exit(1) (#2573) 2023-08-14 16:37:39 +03:00
Cheng Shao
d75561df20
server : add --numa support (#2524) 2023-08-14 16:36:42 +03:00
Kamil Tomšík
348acf188c
llama : add missing enum keyword in function signatures (#2610) 2023-08-14 16:35:16 +03:00
Johannes Gäßler
1cd06fa25e
CUDA: launch_bounds, small q4_K, q5_K mmq refactor (#2596) 2023-08-14 10:41:22 +02:00
Jhen-Jie Hong
2feb8934eb
server : fix default grammar by using an empty string in the UI (#2604) 2023-08-14 16:20:17 +08:00
Jhen-Jie Hong
5517d6e692
server : implement json-schema-to-grammar.mjs & add grammar param in the UI (#2588)
* server : implement json-schema-to-grammar.mjs, following the Python impl

* server : add grammar support in chat.mjs

* server : implement grammar param in the UI

* server : generate .hpp

* server : remove trailing whitespaces

* server : generate .hpp

* server : fix sort of prop pairs

* server : optimize regex & iteration
2023-08-14 15:16:54 +08:00
vxiiduu
f31b539714
Enhance Windows 7 and below compatibility. (#2592)
* Enhance Windows 7 compatibility.
* Clean away unnecessary preprocessor conditional
2023-08-13 20:59:16 -07:00
drbh
ee77efea2a
test : add simple grammar parsing tests (#2594)
* adds simple grammar parsing tests

* adds cassert header
2023-08-13 17:00:48 +03:00
Johannes Gäßler
f64d44a9b9
CUDA: Fixed OpenLLaMA 3b mmq, reduced compile time (#2590) 2023-08-13 00:24:45 +02:00
byte-6174
b19edd54d5
Adding support for llama2.c models (#2559) 2023-08-12 01:17:25 +02:00
Equim
53dc399472
server: fixed wrong variable name in timing json (#2579)
* server: fixed wrong variable name in timing json

* remove redundant entry
2023-08-12 00:35:14 +02:00
DannyDaemonic
9ca4abed89
Handle ENABLE_VIRTUAL_TERMINAL_PROCESSING more gracefully on earlier versions of Windows. 2023-08-10 13:11:36 -07:00
Christian Demsar
e59fcb2bc1
Add --n-predict -2 for stopping generation on full context (#2565) 2023-08-10 16:28:27 +02:00