| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ggml-amx | add amx kernel for gemm (#8998) | 2024-10-18 13:34:36 +08:00 |
| ggml-cann | cann: fix crash when llama-bench is running on multiple cann devices (#9627) | 2024-09-25 11:30:38 +08:00 |
| ggml-cuda | increase cuda_cpy block size (ggml/996) | 2024-10-26 10:33:56 +03:00 |
| ggml-sycl | fix mul_mat_vec_q and *_vec_q error (#9939) | 2024-10-21 14:26:09 +08:00 |
| kompute@4565194ed7 | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| kompute-shaders | kompute: add mul_mat_q4_k shader (#10097) | 2024-10-31 11:09:52 +02:00 |
| llamafile | llamafile : extend sgemm.cpp support for Q5_0 models (#10010) | 2024-10-25 10:27:41 +03:00 |
| vulkan-shaders | ggml: Add POOL2D OP for GPU acceleration to the Vulkan backend in the MobileVLM model. (#9763) | 2024-10-29 09:52:56 +01:00 |
| CMakeLists.txt | llama : use smart pointers for ggml resources (#10117) | 2024-11-01 23:48:26 +01:00 |
| ggml-aarch64.c | ggml : add Q4_0_8_8 RISC-V GEMV and GEMM kernels (#10029) | 2024-10-30 09:00:40 +02:00 |
| ggml-aarch64.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| ggml-alloc.c | ggml-alloc : remove buffer_id from leaf_alloc (ggml/987) | 2024-10-16 11:28:01 +03:00 |
| ggml-amx.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend-impl.h | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-backend.cpp | llama : fix buffer checks for mamba and rwk (#10111) | 2024-10-31 22:54:23 +01:00 |
| ggml-blas.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-cann.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-common.h | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) | 2024-09-05 21:48:47 -04:00 |
| ggml-cpu-impl.h | ggml : move common CPU backend impl to new header (#9509) | 2024-09-16 16:22:07 +02:00 |
| ggml-cuda.cu | llama : fix buffer checks for mamba and rwk (#10111) | 2024-10-31 22:54:23 +01:00 |
| ggml-impl.h | fix: use `vm_allocate` to allocate CPU backend buffer on macOS (#9875) | 2024-10-17 00:36:51 +02:00 |
| ggml-kompute.cpp | kompute: add mul_mat_q4_k shader (#10097) | 2024-10-31 11:09:52 +02:00 |
| ggml-metal.m | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-metal.metal | metal : minor fixup in FA kernel (#10143) | 2024-11-03 15:18:40 +02:00 |
| ggml-quants.c | ggml : add run-time detection of neon, i8mm and sve (#9331) | 2024-09-28 15:06:16 +03:00 |
| ggml-quants.h | ggml : add run-time detection of neon, i8mm and sve (#9331) | 2024-09-28 15:06:16 +03:00 |
| ggml-rpc.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-sycl.cpp | llama : refactor model loader with backend registry (#10026) | 2024-10-30 02:01:23 +01:00 |
| ggml-vulkan.cpp | vulkan : improve ggml_vk_create_buffer error handling (#9898) | 2024-11-01 19:33:14 +01:00 |
| ggml.c | ggml : remove ggml_scratch (#10121) | 2024-11-01 12:58:45 +02:00 |