8194cd8772 | M. Yusuf Sarıgöz | 2023-08-25 12:43:41 +03:00
gguf : export objects to user code (#2780)
* gguf export more objects to user code
* gguf export all objects to user code for now
* gguf : bump version

6bbc598a63 | Henri Vasserman | 2023-08-25 12:09:42 +03:00
ROCm Port (#1087)
* use hipblas based on cublas
* Update Makefile for the Cuda kernels
* Expand arch list and make it overrideable
* Fix multi GPU on multiple amd architectures with rocblas_initialize() (#5 )
* add hipBLAS to README
* new build arg LLAMA_CUDA_MMQ_Y
* fix half2 decomposition
* Add intrinsics polyfills for AMD
* AMD assembly optimized __dp4a
* Allow overriding CC_TURING
* use "ROCm" instead of "CUDA"
* ignore all build dirs
* Add Dockerfiles
* fix llama-bench
* fix -nommq help for non CUDA/HIP
---------
Co-authored-by: YellowRoseCx <80486540+YellowRoseCx@users.noreply.github.com>
Co-authored-by: ardfork <134447697+ardfork@users.noreply.github.com>
Co-authored-by: funnbot <22226942+funnbot@users.noreply.github.com>
Co-authored-by: Engininja2 <139037756+Engininja2@users.noreply.github.com>
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
Co-authored-by: jammm <2500920+jammm@users.noreply.github.com>
Co-authored-by: jdecourval <7315817+jdecourval@users.noreply.github.com> 
							
						 
						
3f460a2b72 | Georgi Gerganov | 2023-08-25 11:55:59 +03:00
cuda : add RoPE kernel for mode == 2 (NeoX) (#2760)
* cuda : add RoPE kernel for mode == 2 (NeoX)
* falcon : do not offload the embeddings layer

87e3733f24 | M. Yusuf Sarıgöz | 2023-08-25 09:26:05 +03:00
gguf : make gguf pip-installable
* gitignore : add dist and rm pyproject.toml
* gguf: prepare as Pip package
* gguf: prepare as Pip package
* gguf : fix line endings
* requirements : add gguf
* gguf : update readme with build notes
* gguf : update readme with build notes
* gguf : add notes for tests

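With the package installed (`pip install gguf`), the writer can be used directly from Python. A minimal sketch, assuming the `GGUFWriter` API as shipped around the time of this change (method names may differ in later releases):

```python
# pip install gguf
import numpy as np
import gguf

# Write a tiny GGUF file: one metadata key-value pair and one tensor.
writer = gguf.GGUFWriter("example.gguf", "llama")  # output path, architecture name
writer.add_name("example-model")                   # stored as the general.name KV
writer.add_tensor("tok_embeddings.weight", np.zeros((4, 8), dtype=np.float32))

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```
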
b91ad7f461 | Shouzheng Liu | 2023-08-25 08:58:00 +03:00
ggml-alloc : enlarge size of parse_seq (#2776)
Since we also store barriers in this array, we need to double its size.

2e5f70a25f | Marcus Dunn | 2023-08-24 23:49:30 +02:00
Added enum to llama_token_get_type return type (#2774)

d0f77b1353 | slaren | 2023-08-24 21:10:39 +02:00
convert.py : try to determine n_ctx automatically for CodeLlama (#2770)

0d3094f0c7 | slaren | 2023-08-24 21:04:05 +03:00
gguf : add rope_freq_base parameter for CodeLlama (#2769)

01f2224682 | Georgi Gerganov | 2023-08-24 19:58:30 +03:00
falcon : write file type

38b16dfca6 | Shouzheng Liu | 2023-08-24 19:27:25 +03:00
metal : bug-fix when enabling ggml-alloc (#2757)
* metal : better memory alloc w/ concurrency dispatch
  The ggml-alloc should only free tensors at memory barriers.
* ggml-alloc : avoid returning silently
  In certain cases, the allocate_node() function may silently return without performing any memory allocation.

8f8c28e89c | Georgi Gerganov | 2023-08-24 19:26:47 +03:00
convert : auto-determine model name based on dir + scripts update

7694adda8d | Kerfuffle | 2023-08-24 10:11:13 -06:00
Fix for main example getting stuck when -n -2 and --interactive (#2767)
* Fix for main example getting stuck when -n -2 and --interactive
* Add a comment so future generations may suffer less.

fea95c682d | slaren | 2023-08-24 17:44:11 +02:00
fix convert.py for codellama, add llama 34B to the list of recognized models (#2768)

ef955fbd23 | DannyDaemonic | 2023-08-24 15:58:02 +02:00
Tag release with build number (#2732)
* Modified build.yml to use build number for release
* Add the short hash back into the tag
* Prefix the build number with b

d67777c202 | Georgi Gerganov | 2023-08-24 16:19:57 +03:00
metal : add Q8_0 support (#2763)
* metal : add dequantize_q8_0 kernel
* metal : add mul_mat_q8_0_f32 kernel
* metal : add Q8_0 mul_mm kernel

c3e53b421a | Georgi Gerganov | 2023-08-24 12:26:01 +03:00
llama : escape all U+2581 in a string (#2750)

6e91a1b070 | Evan Jones | 2023-08-24 07:07:13 +03:00
llama : fix grammar sometimes generating null char (#2756)

44d5462b5c | Georgi Gerganov | 2023-08-23 23:44:19 +03:00
readme : fix link

c7868b0753 | Georgi Gerganov | 2023-08-23 23:43:00 +03:00
minor : fix trailing whitespace

79da24b58c | Georgi Gerganov | 2023-08-23 23:41:16 +03:00
readme : update hot topics

cf658adc83 | Georgi Gerganov | 2023-08-23 23:08:04 +03:00
llm : add Falcon support (#2717)
* llama : refactor GGUF constants into static maps
* llama : check if model architecture is known
* llama : refactor llama_model_load_internal()
* gguf : add KV constant maps
* llm : read arch-specific KVs
* convert : add dummy scores + types
* falcon : load tensor data (CPU only)
* llama : fix loading progress bar
* llama : add arch member to llama_model
* falcon : CPU inference working
* falcon : support non-40B models
* falcon : minor
* llama : minor updates
ggml-ci
* convert-falcon-hf-to-gguf.py : fix special token mapping
* llama.cpp : llama default UNK token = id 0
* llama.cpp : fix bpe tokenizer
* llama.cpp : fix the fix of bpe tokenizer
* ggml : pass eps to ggml_norm
* metal : implement RoPE (mode = 2) + avoid ggml_repeat
* ggml : ggml_repeat always creates new tensor
* falcon : copy-paste self-attention from LLaMA
* metal : print extra compute pipeline info
* falcon : minor changes (still chasing the Metal problem)
* llama.cpp : fix linefeed token
* metal : fix GELU kernel numerical stability by using precise::tanh
* metal : temporary workaround for the concurrency optimization bug
* falcon : add CUDA offloading (#2739 )
* llama : better model naming and size reporting
* llama : prep new tokenizer support
* llama : advanced BPE tokenizer based on ggllm.cpp implementation
* llama : remove obsolete comment
ggml-ci
* common : remove obsolete BPE API + disable test-tokenizer-1
* llama : revert BPE special-case in llama_byte_to_token()
* cuda : add TODOs for RoPE NeoX implementation
* llama : default special tokens based on vocab type
* perplexity : add log for start of tokenization
---------
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
Co-authored-by: slaren <slarengh@gmail.com> 
							
						 
						
a192860cfe | Georgi Gerganov | 2023-08-23 22:37:39 +03:00
minor : fix trailing whitespace

95385241a9 | Olivier Chafik | 2023-08-23 22:33:05 +03:00
examples : restore the functionality to import llama2.c models (#2685)
* Fix import of llama2.c models that don't share weights between embedding layers
* llama2c: reinstate ggmlv3 conversion output + update readme w/ gguf conv
* llama2.c: comment out legacy "load from ggml model" logic
* llama2.c: convert special-cased "<0xXX>" single byte tokens from tokenizer.bin

335acd2ffd | slaren | 2023-08-23 16:46:54 +02:00
fix convert-lora-to-ggml.py (#2738)

5290c38e6e | klosax | 2023-08-23 16:46:03 +02:00
main : insert bos if no tokens (#2727)
* main.cpp : insert bos if no tokens
* Update examples/main/main.cpp
* Update examples/main/main.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

cc34dbda96 | akawrykow | 2023-08-23 17:31:34 +03:00
gitignore : fix for windows (#2729)

7c2227a197 | Cebtenzzre | 2023-08-23 17:29:09 +03:00
chmod : make scripts executable (#2675)

f19dca04ea | JohnnyB | 2023-08-23 17:28:22 +03:00
devops : RPM Specs (#2723)
* Create llama-cpp.srpm
* Rename llama-cpp.srpm to llama-cpp.srpm.spec
  Correcting extension.
* Tested spec success.
* Update llama-cpp.srpm.spec
* Create lamma-cpp-cublas.srpm.spec
* Create lamma-cpp-clblast.srpm.spec
* Update lamma-cpp-cublas.srpm.spec
  Added BuildRequires
* Moved to devops dir

8207214b6a | Kawrakow | 2023-08-23 12:57:12 +03:00
Fix values shown in the quantize tool help (#2735)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

62959e740e | Kawrakow | 2023-08-23 12:56:42 +03:00
Strided perplexity (#2714)
* Implementing strided computation of perplexity
* Alternative way to output PPL results
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

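The idea behind the strided computation: instead of scoring disjoint n_ctx-sized chunks, the evaluation window slides forward by a smaller stride and only the newly uncovered tokens are scored, so every scored token gets close to a full context. A rough sketch of the windowing arithmetic only — `nll_of_window` is a placeholder for the model evaluation, and this is not the actual perplexity.cpp code:

```python
import math

def strided_perplexity(tokens, nll_of_window, n_ctx=512, stride=256):
    """nll_of_window(window, first_scored) is assumed to return the summed
    negative log-likelihood of window[first_scored:] given the tokens that
    precede them in the window (the model call itself is abstracted away)."""
    total_nll, n_scored = 0.0, 0
    prev_end = 0
    for begin in range(0, len(tokens), stride):
        end = min(begin + n_ctx, len(tokens))
        n_new = end - prev_end                 # tokens not scored by an earlier window
        first_scored = (end - begin) - n_new   # earlier tokens act as context only
        total_nll += nll_of_window(tokens[begin:end], first_scored)
        n_scored += n_new
        prev_end = end
        if end == len(tokens):
            break
    return math.exp(total_nll / n_scored)      # perplexity = exp(mean NLL)
```
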
7f7ddd5002 | IgnacioFDM | 2023-08-23 03:31:09 -06:00
Fix ggml to gguf conversion on Windows (#2733)
This fixes `RuntimeWarning: overflow encountered in long_scalars`
Credit: anon (not mine)

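The warning comes from NumPy integer scalars: on Windows the default NumPy integer is 32-bit (a C `long`), so size and offset arithmetic on large tensors can silently wrap. A hedged illustration of the failure mode and the usual remedy of promoting to plain Python (arbitrary-precision) integers; this is not the converter's actual code:

```python
import numpy as np

# Element count of a large tensor region and its element size, as 32-bit scalars
# (the default integer width for NumPy on Windows).
n_elems = np.int32(1_500_000_000)
item_size = np.int32(4)

offset_bad = n_elems * item_size           # wraps around; NumPy emits an overflow RuntimeWarning
offset_ok = int(n_elems) * int(item_size)  # plain Python ints do not overflow

print(offset_bad, offset_ok)               # wrapped garbage value vs. 6000000000
```
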
b8ad1b66b2 | Xiao-Yong Jin | 2023-08-23 15:12:12 +08:00
server : allow json array in prompt or content for direct token input (#2306)
* server : allow json array in prompt or content
  We accept an array of strings and numbers representing tokens, in addition to the current string-valued prompt or content. This allows direct token input, so that any special tokens can be processed and inserted on the frontend while constructing the JSON data, before sending it to the server; the server no longer needs to know or parse special tokens from textual input. With this, we can use the EOS and BOS tokens used in llama-2-chat models.
* server : use tokenizePrompt(json) and default "" if empty prompt
* server : fix prompt check
* server : tokenize endpoint no longer adds BOS

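For illustration, a client can now mix literal text with explicit token ids in a single prompt. A hypothetical request sketch, assuming a locally running llama.cpp server on the default port 8080 and its /completion endpoint; the BOS id of 1 is a placeholder for whatever the loaded model's vocabulary actually uses:

```python
import requests

# The prompt is a JSON array mixing token ids (numbers) and plain text (strings),
# so special tokens such as BOS/EOS can be injected directly by the client.
payload = {
    "prompt": [1, "[INST] Write a haiku about autumn. [/INST]"],
    "n_predict": 64,
}

resp = requests.post("http://127.0.0.1:8080/completion", json=payload)
print(resp.json()["content"])
```
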
f5fe98d11b | Evan Jones | 2023-08-22 21:01:57 -04:00
docs : add grammar docs (#2701)
* docs : add grammar docs
* tweaks to grammar guide
* rework GBNF example to be a commented grammar

777f42ba18 | Kerfuffle | 2023-08-22 17:39:39 -06:00
Improve handling of special tokens in GGML to GGUF converter (#2725)
* Improve UNK, BOS, EOS token handling when converting without metadata.
* Allow importing as a module.
* Remove some obsolete code and minor cleanups.
* Set default UNK token mapping from -1 to 0 in llama.cpp
* Try to handle overflow due to buggy Windows Python with a better error message

46ef5b5fcf | goerch | 2023-08-23 00:10:42 +03:00
llama : fix whitespace escaping in tokenizer (#2724)

c63bb1d16a | Johannes Gäßler | 2023-08-22 22:47:05 +02:00
CUDA: use mul_mat_q kernels by default (#2683)

3b6cfe7c92 | Alex Petenchea | 2023-08-22 21:58:16 +03:00
convert.py : clarifying error message (#2718)

800c9635b4 | Jiahao Li | 2023-08-22 20:27:06 +02:00
Fix CUDA softmax by subtracting max value before exp (#2665)

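The fix applies the standard numerically stable softmax: subtracting the row maximum before exponentiating leaves the result mathematically unchanged but keeps exp() from overflowing for large logits. A small NumPy sketch of the idea (illustration only, not the CUDA kernel):

```python
import numpy as np

def softmax_naive(x):
    e = np.exp(x)            # overflows to inf for large logits
    return e / e.sum()

def softmax_stable(x):
    e = np.exp(x - x.max())  # shift by the max: same result, bounded exponent
    return e / e.sum()

logits = np.array([1000.0, 1001.0, 1002.0])
print(softmax_naive(logits))   # [nan nan nan] because of inf/inf
print(softmax_stable(logits))  # well-defined probabilities
```
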
deb7dfca4b | Georgi Gerganov | 2023-08-22 20:05:59 +03:00
gguf : add ftype meta info to the model (#2710)
* llama : add ftype meta info to the model
  ggml-ci
* convert.py : add ftype when converting (does not work)
* convert.py : fix Enum to IntEnum
  ggml-ci

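The Enum-to-IntEnum change matters because GGUF stores the file type as an integer key: plain Python Enum members are not ints and fail integer-typed comparison and serialization, while IntEnum members behave as ints. A small illustration with hypothetical file-type values, not the real convert.py definitions:

```python
from enum import Enum, IntEnum

class FtypeEnum(Enum):
    ALL_F32 = 0
    MOSTLY_F16 = 1

class FtypeIntEnum(IntEnum):
    ALL_F32 = 0
    MOSTLY_F16 = 1

print(FtypeEnum.MOSTLY_F16 == 1)     # False - Enum members do not compare equal to ints
print(FtypeIntEnum.MOSTLY_F16 == 1)  # True  - IntEnum members are ints
print(int(FtypeIntEnum.MOSTLY_F16))  # 1, can be written directly as a uint32 KV
```
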
bac66994cf | Kawrakow | 2023-08-22 19:14:09 +03:00
Quantization improvements for k_quants (#2707)
* Improve LLaMA-2 2-, 3- and 4-bit quantization
* Q3_K_S: use Q5_K for 1st 2 layers of attention.wv and feed_forward.w2
* Q4_K_S: use Q6_K for 1st 2 layers of attention.wv and feed_forward.w2
* Q2_K and Q3_K_M: use Q5_K instead of Q4_K for 1st 2 layers of
  attention.wv and feed_forward.w2
This leads to a slight increase in model size, as follows:
Q2_K  : 2.684G vs 2.670G
Q3_K_S: 2.775G vs 2.745G
Q3_K_M: 3.071G vs 3.057G
Q4_K_S: 3.592G vs 3.563G
LLaMA-2 PPL for context 512 changes as follows:
Q2_K  : 6.6691 vs 6.8201
Q3_K_S: 6.2129 vs 6.2584
Q3_K_M: 6.0387 vs 6.1371
Q4_K_S: 5.9138 vs 6.0041
There are improvements for LLaMA-1 as well, but they are
way smaller than the above.
* Minor 4-bit quantization improvement
For the same model size as the previous commit, we get
PPL = 5.9069 vs 5.9138.
* Some more fine tuning
* Adding make_qkx2_quants
With it, we get PPL = 5.8828 for L2-7B Q4_K_S.
* Another minor improvement
* Q2_K improvement
Smaller model, lower perplexity.
 7B: file size = 2.632G, PPL = 6.3772 vs original 2.670G PPL = 6.8201
12B: file size = 5.056G, PPL = 5.4577 vs original 5.130G PPL = 5.7178
It is mostly Q3_K except for tok_embeddings, attention.wq, attention.wk,
which are Q2_K
* Iterating
* Revert Q5_K back to make_qkx1_quants
* Better Q6_K
* make_qkx2_quants is better for Q5_K after all
* Fix after rebasing on master
* Fix for changed tensor names
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com> 
							
						 
						
519c981f8b | slaren | 2023-08-22 16:03:12 +02:00
embedding : evaluate prompt in batches (#2713)

1123f7fbdf | slaren | 2023-08-22 15:25:19 +02:00
ggml-cuda : use graph allocator (#2684)
* use a different function for no_alloc to avoid breaking backwards compat, fixes lora
* remove 512 n_batch limit
* fixed 2048 batch size
* cleanup
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

ef3f333d37 | Georgi Gerganov | 2023-08-22 14:22:08 +03:00
ggml : sync latest (SAM + SD operators, CUDA alibi) (#2709)
* ggml : sync latest (SAM + SD operators, CUDA alibi)
  ggml-ci
* ggml : fix tabs

8e4364f2af | slaren | 2023-08-22 10:56:03 +03:00
llama-bench : minor fixes (#2695)

1e3bc523d8 | Kylin | 2023-08-22 10:14:23 +03:00
ggml : support CUDA's half type for aarch64 (#1455) (#2670)
* ggml : support CUDA's half type for aarch64 (#1455)
  support CUDA's half type for aarch64 in ggml_fp16_t definition
* ggml : use __CUDACC__ to recognise nvcc compiler

14b1d7e6f7 | Shouzheng Liu | 2023-08-22 09:18:40 +03:00
metal : add missing barriers for mul-mat (#2699)

226255b44e | Jhen-Jie Hong | 2023-08-22 08:32:00 +08:00
server : fallback to default if client param is null (#2688)
* server : fallback to default if client param is null
* server : do not overwrite 404 if status is 500 from exception_handler

930523c8e1 | Kerfuffle | 2023-08-21 18:01:34 -06:00
Fix convert-llama-ggmlv3-to-gguf.py vocab conversion (#2698)
When converting without metadata, the hex values for byte entries weren't 0-padded to 2 digits.

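In other words, raw byte tokens must always render as two hex digits (the `<0xXX>` form used elsewhere in the vocab), otherwise ids below 0x10 come out as `<0x5>` instead of `<0x05>`. A minimal Python illustration of the padding fix, using hypothetical helpers rather than the converter's actual code:

```python
def byte_token_unpadded(b: int) -> str:
    return f"<0x{b:X}>"    # 10 -> "<0xA>"  (wrong: only one hex digit)

def byte_token(b: int) -> str:
    return f"<0x{b:02X}>"  # 10 -> "<0x0A>" (always two hex digits)

print(byte_token_unpadded(10), byte_token(10))    # <0xA> <0x0A>
print(byte_token_unpadded(255), byte_token(255))  # <0xFF> <0xFF>
```
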
c8dba409e6 | Georgi Gerganov | 2023-08-21 23:40:22 +03:00
py : remove obsolete script

6381d4e110 | Georgi Gerganov | 2023-08-21 23:07:43 +03:00
gguf : new file format with flexible meta data (beta) (#2398)
* gguf : first API pass
* gguf : read header + meta data
* gguf : read tensor info
* gguf : initial model loading - not tested
* gguf : add gguf_get_tensor_name()
* gguf : do not support passing existing ggml_context to gguf_init
* gguf : simplify gguf_get_val
* gguf : gguf.c is now part of ggml.c
* gguf : read / write sample models
* gguf : add comments
* refactor : reduce code duplication and better API (#2415 )
* gguf : expose the gguf_type enum through the API for now
* gguf : add array support
* gguf.py : some code style changes
* convert.py : start a new simplified implementation by removing old stuff
* convert.py : remove GGML vocab + other obsolete stuff
* GGUF : write tensor (#2426 )
* WIP: Write tensor
* GGUF : Support writing tensors in Python
* refactor : rm unused import and upd todos
* fix : fix errors upd writing example
* rm example.gguf
* gitignore *.gguf
* undo formatting
* gguf : add gguf_find_key (#2438 )
* gguf.cpp : find key example
* ggml.h : add gguf_find_key
* ggml.c : add gguf_find_key
* gguf : fix writing tensors
* gguf : do not hardcode tensor names to read
* gguf : write sample tensors to read
* gguf : add tokenization constants
* quick and dirty conversion example
* gguf : fix writing gguf arrays
* gguf : write tensors one by one and code reuse
* gguf : fix writing gguf arrays
* gguf : write tensors one by one
* gguf : write tensors one by one
* gguf : write tokenizer data
* gguf : upd gguf conversion script
* Update convert-llama-h5-to-gguf.py
* gguf : handle already encoded string
* ggml.h : get array str and f32
* ggml.c : get arr str and f32
* gguf.py : support any type
* Update convert-llama-h5-to-gguf.py
* gguf : fix set is not subscriptable
* gguf : update convert-llama-h5-to-gguf.py
* constants.py : add layer norm eps
* gguf.py : add layer norm eps and merges
* ggml.h : increase GGML_MAX_NAME to 64
* ggml.c : add gguf_get_arr_n
* Update convert-llama-h5-to-gguf.py
* add gptneox gguf example
* Makefile : add gptneox gguf example
* Update convert-llama-h5-to-gguf.py
* add gptneox gguf example
* Update convert-llama-h5-to-gguf.py
* Update convert-gptneox-h5-to-gguf.py
* Update convert-gptneox-h5-to-gguf.py
* Update convert-llama-h5-to-gguf.py
* gguf : support custom alignment value
* gguf : fix typo in function call
* gguf : mmap tensor data example
* fix : update convert-llama-h5-to-gguf.py
* Update convert-llama-h5-to-gguf.py
* convert-gptneox-h5-to-gguf.py : Special tokens
* gptneox-main.cpp : special tokens
* Update gptneox-main.cpp
* constants.py : special tokens
* gguf.py : accumulate kv and tensor info data + special tokens
* convert-gptneox-h5-to-gguf.py : accumulate kv and ti + special tokens
* gguf : gguf counterpart of llama-util.h
* gguf-util.h : update note
* convert-llama-h5-to-gguf.py : accumulate kv / ti + special tokens
* convert-llama-h5-to-gguf.py : special tokens
* Delete gptneox-common.cpp
* Delete gptneox-common.h
* convert-gptneox-h5-to-gguf.py : gpt2bpe tokenizer
* gptneox-main.cpp : gpt2 bpe tokenizer
* gpt2 bpe tokenizer (handles merges and unicode)
* Makefile : remove gptneox-common
* gguf.py : bytesarray for gpt2bpe tokenizer
* cmpnct_gpt2bpe.hpp : comments
* gguf.py : use custom alignment if present
* gguf : minor stuff
* Update gptneox-main.cpp
* map tensor names
* convert-gptneox-h5-to-gguf.py : map tensor names
* convert-llama-h5-to-gguf.py : map tensor names
* gptneox-main.cpp : map tensor names
* gguf : start implementing libllama in GGUF (WIP)
* gguf : start implementing libllama in GGUF (WIP)
* rm binary committed by mistake
* upd .gitignore
* gguf : calculate n_mult
* gguf :  inference with 7B model working (WIP)
* gguf : rm deprecated function
* gguf : start implementing gguf_file_saver (WIP)
* gguf : start implementing gguf_file_saver (WIP)
* gguf : start implementing gguf_file_saver (WIP)
* gguf : add gguf_get_kv_type
* gguf : add gguf_get_kv_type
* gguf : write metadata in gguf_file_saver (WIP)
* gguf : write metadata in gguf_file_saver (WIP)
* gguf : write metadata in gguf_file_saver
* gguf : rm references to old file formats
* gguf : shorter name for member variable
* gguf : rm redundant method
* gguf : get rid of n_mult, read n_ff from file
* Update gguf_tensor_map.py
* Update gptneox-main.cpp
* gguf : rm references to old file magics
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : quantization is working
* gguf : proper closing of file
* gguf.py : no need to convert tensors twice
* convert-gptneox-h5-to-gguf.py : no need to convert tensors twice
* convert-llama-h5-to-gguf.py : no need to convert tensors twice
* convert-gptneox-h5-to-gguf.py : simplify nbytes
* convert-llama-h5-to-gguf.py : simplify nbytes
* gptneox-main.cpp : n_layer --> n_block
* constants.py : n_layer --> n_block
* gguf.py : n_layer --> n_block
* convert-gptneox-h5-to-gguf.py : n_layer --> n_block
* convert-llama-h5-to-gguf.py : n_layer --> n_block
* gptneox-main.cpp : n_layer --> n_block
* Update gguf_tensor_map.py
* convert-gptneox-h5-to-gguf.py : load model in parts to save memory
* convert-llama-h5-to-gguf.py : load model in parts to save memory
* convert : write more metadata for LLaMA
* convert : rm quantization version
* convert-gptneox-h5-to-gguf.py : add file_type key
* gptneox-main.cpp : add file_type key
* fix conflicts
* gguf : add todos and comments
* convert-gptneox-h5-to-gguf.py : tensor name map changes
* Create gguf_namemap.py : tensor name map changes
* Delete gguf_tensor_map.py
* gptneox-main.cpp : tensor name map changes
* convert-llama-h5-to-gguf.py : fixes
* gguf.py : dont add empty strings
* simple : minor style changes
* gguf : use UNIX line ending
* Create convert-llama-7b-pth-to-gguf.py
* llama : sync gguf-llama.cpp with latest llama.cpp (#2608 )
* llama : sync gguf-llama.cpp with latest llama.cpp
* minor : indentation + assert
* llama : refactor gguf_buffer and gguf_ctx_buffer
* llama : minor
* gitignore : add gptneox-main
* llama : tokenizer fixes (#2549 )
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* convert : update convert-new.py with tokenizer fixes (#2614 )
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* Adapt convert-new.py (and fix a clang-cl compiler error on windows)
* llama : sync gguf-llama with llama (#2613 )
* llama : sync gguf-llama with llama
* tests : fix build + warnings (test-tokenizer-1 still fails)
* tests : fix wstring_convert
* convert : fix layer names
* llama : sync gguf-llama.cpp
* convert : update HF converter to new tokenizer voodoo magics
* llama : update tokenizer style
* convert-llama-h5-to-gguf.py : add token types
* constants.py : add token types
* gguf.py : add token types
* convert-llama-7b-pth-to-gguf.py : add token types
* gguf-llama.cpp :  fix n_head_kv
* convert-llama-h5-to-gguf.py : add 70b gqa support
* gguf.py : add tensor data layout
* convert-llama-h5-to-gguf.py : add tensor data layout
* convert-llama-7b-pth-to-gguf.py : add tensor data layout
* gptneox-main.cpp : add tensor data layout
* convert-llama-h5-to-gguf.py : clarify the reverse permute
* llama : refactor model loading code (#2620 )
* llama : style formatting + remove helper methods
* llama : fix quantization using gguf tool
* llama : simplify gguf_file_saver
* llama : fix method names
* llama : simplify write_header()
* llama : no need to pass full file loader to the file saver
just gguf_ctx
* llama : gguf_file_saver write I32
* llama : refactor tensor names (#2622 )
* gguf: update tensor names searched in quantization
* gguf : define tensor names as constants
* gguf : initial write API (not tested yet)
* gguf : write to file API (not tested)
* gguf : initial write API ready + example
* gguf : fix header write
* gguf : fixes + simplify example + add ggml_nbytes_pad()
* gguf : minor
* llama : replace gguf_file_saver with new gguf write API
* gguf : streaming support when writing files
* gguf : remove obsolete write methods
* gguf : remove obsolete gguf_get_arr_xxx API
* llama : simplify gguf_file_loader
* llama : move hparams and vocab from gguf_file_loader to llama_model_loader
* llama : merge gguf-util.h in llama.cpp
* llama : reorder definitions in .cpp to match .h
* llama : minor simplifications
* llama : refactor llama_model_loader (WIP)
wip : remove ggml_ctx from llama_model_loader
wip : merge gguf_file_loader in llama_model_loader
* llama : fix shape prints
* llama : fix Windows build + fix norm_rms_eps key
* llama : throw error on missing KV pairs in model meta data
* llama : improve printing + log meta data
* llama : switch print order of meta data
---------
Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
* gguf : deduplicate (#2629 )
* gguf : better type names
* dedup : CPU + Metal is working
* ggml : fix warnings about unused results
* llama.cpp : fix line feed and compiler warning
* llama : fix strncpy warning + note token_to_str does not write null
* llama : restore the original load/save session implementation
Will migrate this to GGUF in the future
* convert-llama-h5-to-gguf.py : support alt ctx param name
* ggml : assert when using ggml_mul with non-F32 src1
* examples : dedup simple
---------
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
* gguf.py : merge all files in gguf.py
* convert-new.py : pick #2427  for HF 70B support
* examples/gguf : no need to keep q option for quantization any more
* llama.cpp : print actual model size
* llama.cpp : use ggml_elements()
* convert-new.py : output gguf (#2635 )
* convert-new.py : output gguf (WIP)
* convert-new.py : add gguf key-value pairs
* llama : add hparams.ctx_train + no longer print ftype
* convert-new.py : minor fixes
* convert-new.py : vocab-only option should work now
* llama : fix tokenizer to use llama_char_to_byte
* tests : add new ggml-vocab-llama.gguf
* convert-new.py : tensor name mapping
* convert-new.py : add map for skipping tensor serialization
* convert-new.py : convert script now works
* gguf.py : pick some of the refactoring from #2644 
* convert-new.py : minor fixes
* convert.py : update to support GGUF output
* Revert "ci : disable CI temporary to not waste energy"
This reverts commit 7e82d25f40.
* gguf : single pass for writing tensors + refactoring writer (#2644)
* gguf : single pass for writing tensors + refactoring writer
* gguf : single pass for writing tensors + refactoring writer
* gguf : style fixes in simple conversion script
* gguf : refactor gptneox conversion script
* gguf : rename h5 to hf (for HuggingFace)
* gguf : refactor pth to gguf conversion script
* gguf : rm file_type key and method
* gguf.py : fix vertical alignment
* gguf.py : indentation
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* convert-gptneox-hf-to-gguf.py : fixes
* gguf.py : gptneox mapping
* convert-llama-hf-to-gguf.py : fixes
* convert-llama-7b-pth-to-gguf.py : fixes
* ggml.h : reverse GGUF_MAGIC
* gguf.py : reverse GGUF_MAGIC
* test-tokenizer-0.cpp : fix warning
* llama.cpp : print kv general.name
* llama.cpp : get special token kv and linefeed token id
* llama : print number of tensors per type + print arch + style
* tests : update vocab file with new magic
* editorconfig : fix whitespaces
* llama : re-order functions
* llama : remove C++ API + reorganize common source in /common dir
* llama : minor API updates
* llama : avoid hardcoded special tokens
* llama : fix MPI build
ggml-ci
* llama : introduce enum llama_vocab_type + remove hardcoded string constants
* convert-falcon-hf-to-gguf.py : falcon HF --> gguf conversion, not tested
* falcon-main.cpp : falcon inference example
* convert-falcon-hf-to-gguf.py : remove extra kv
* convert-gptneox-hf-to-gguf.py : remove extra kv
* convert-llama-7b-pth-to-gguf.py : remove extra kv
* convert-llama-hf-to-gguf.py : remove extra kv
* gguf.py : fix for falcon 40b
* falcon-main.cpp : fix for falcon 40b
* convert-falcon-hf-to-gguf.py : update ref
* convert-falcon-hf-to-gguf.py : add tensor data layout
* cmpnct_gpt2bpe.hpp : fixes
* falcon-main.cpp : fixes
* gptneox-main.cpp : fixes
* cmpnct_gpt2bpe.hpp : remove non-general stuff
* Update examples/server/README.md
Co-authored-by: slaren <slarengh@gmail.com>
* cmpnct_gpt2bpe.hpp : cleanup
* convert-llama-hf-to-gguf.py : special tokens
* convert-llama-7b-pth-to-gguf.py : special tokens
* convert-permute-debug.py : permute debug print
* convert-permute-debug-master.py : permute debug for master
* convert-permute-debug.py : change permute type of attn_q
* convert.py : 70b model working (change attn_q permute)
* Delete convert-permute-debug-master.py
* Delete convert-permute-debug.py
* convert-llama-hf-to-gguf.py : fix attn_q permute
* gguf.py : fix rope scale kv
* convert-llama-hf-to-gguf.py : rope scale and added tokens
* convert-llama-7b-pth-to-gguf.py : rope scale and added tokens
* llama.cpp : use rope scale kv
* convert-llama-7b-pth-to-gguf.py : rope scale fix
* convert-llama-hf-to-gguf.py : rope scale fix
* py : fix whitespace
* gguf : add Python script to convert GGMLv3 LLaMA models to GGUF (#2682 )
* First pass at converting GGMLv3 LLaMA models to GGUF
* Cleanups, better output during conversion
* Fix vocab space conversion logic
* More vocab conversion fixes
* Add description to converted GGUF files
* Improve help text, expand warning
* Allow specifying name and description for output GGUF
* Allow overriding vocab and hyperparams from original model metadata
* Use correct params override var name
* Fix wrong type size for Q8_K
Better handling of original style metadata
* Set default value for gguf add_tensor raw_shape KW arg
* llama : improve token type support (#2668 )
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* Adapt convert-new.py (and fix a clang-cl compiler error on windows)
* Improved tokenizer test
But does it work on MacOS?
* Improve token type support
- Added @klosax code to convert.py
- Improved token type support in vocabulary
* Exclude platform dependent tests
* More sentencepiece compatibility by eliminating magic numbers
* Restored accidentally removed comment
* llama : add API for token type
ggml-ci
* tests : use new tokenizer type API (#2692 )
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* Adapt convert-new.py (and fix a clang-cl compiler error on windows)
* Improved tokenizer test
But does it work on MacOS?
* Improve token type support
- Added @klosax code to convert.py
- Improved token type support in vocabulary
* Exclude platform dependent tests
* More sentencepiece compatibility by eliminating magic numbers
* Restored accidentally removed comment
* Improve commentary
* Use token type API in test-tokenizer-1.cpp
* py : cosmetics
* readme : add notice about new file format
ggml-ci
---------
Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
Co-authored-by: goerch <jhr.walter@t-online.de>
Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com> 
							
						 
						