| File | Last commit | Date |
| --- | --- | --- |
| template-instances | CUDA: MMQ code deduplication + iquant support (#8495) | 2024-07-20 22:25:26 +02:00 |
| vendors | Add some minimal optimizations for CDNA (#10498) | 2024-11-27 17:10:08 +01:00 |
| acc.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| acc.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| arange.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| arange.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| argmax.cu | cuda : optimize argmax (#10441) | 2024-11-21 18:18:50 +01:00 |
| argmax.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| argsort.cu | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00 |
| argsort.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| binbcast.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| binbcast.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| clamp.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| clamp.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| CMakeLists.txt | cmake : enable warnings in llama (#10474) | 2024-11-26 14:18:08 +02:00 |
| common.cuh | Add some minimal optimizations for CDNA (#10498) | 2024-11-27 17:10:08 +01:00 |
| concat.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| concat.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| conv-transpose-1d.cu | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| conv-transpose-1d.cuh | feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| convert.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| convert.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| count-equal.cu | ggml: fix zero division in 'dne' calculation in CUDA COUNT_EQUAL operator when 'ne' is small (#10213) | 2024-11-09 08:35:46 +01:00 |
| count-equal.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| cpy.cu | cuda: add q8_0->f32 cpy operation (#9571) | 2024-09-24 02:14:24 +02:00 |
| cpy.cuh | increase cuda_cpy block size (ggml/996) | 2024-10-26 10:33:56 +03:00 |
| cross-entropy-loss.cu | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| cross-entropy-loss.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| dequantize.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| diagmask.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| diagmask.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| fattn-common.cuh | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| fattn-tile-f16.cu | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| fattn-tile-f16.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| fattn-tile-f32.cu | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| fattn-tile-f32.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| fattn-vec-f16.cuh | CUDA: remove unnecessary warp reduce in FA (ggml/1032) | 2024-12-03 20:04:49 +02:00 |
| fattn-vec-f32.cuh | CUDA: remove unnecessary warp reduce in FA (ggml/1032) | 2024-12-03 20:04:49 +02:00 |
| fattn-wmma-f16.cuh | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| fattn.cu | metal : optimize FA kernels (#10171) | 2024-11-08 13:47:22 +02:00 |
| fattn.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| getrows.cu | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00 |
| getrows.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-cuda.cu | ggml : refactor online repacking (#10446) | 2024-12-07 14:37:50 +02:00 |
| im2col.cu | CUDA: fix 1D im2col, add tests (ggml/993) | 2024-10-23 16:50:02 +03:00 |
| im2col.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| mma.cuh | CUDA: optimize and refactor MMQ (#8416) | 2024-07-11 16:47:47 +02:00 |
| mmq.cu | Add some minimal optimizations for CDNA (#10498) | 2024-11-27 17:10:08 +01:00 |
| mmq.cuh | Add some minimal optimizations for CDNA (#10498) | 2024-11-27 17:10:08 +01:00 |
| mmv.cu | CUDA: fix shared memory access condition for mmv (#10740) | 2024-12-09 20:07:12 +01:00 |
| mmv.cuh | CUDA: remove DMMV, consolidate F16 mult mat vec (#10318) | 2024-11-17 09:09:55 +01:00 |
| mmvq.cu | Add some minimal optimizations for CDNA (#10498) | 2024-11-27 17:10:08 +01:00 |
| mmvq.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| norm.cu | ggml : add epsilon as a parameter for group_norm (#8818) | 2024-08-06 10:26:46 +03:00 |
| norm.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| opt-step-adamw.cu | ggml: new optimization interface (ggml/988) | 2024-11-17 08:30:29 +02:00 |
| opt-step-adamw.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| out-prod.cu | ggml : fix builds (#0) | 2024-09-20 21:15:05 +03:00 |
| out-prod.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| pad.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pad.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pool2d.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pool2d.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| quantize.cu | cuda : optimize argmax (#10441) | 2024-11-21 18:18:50 +01:00 |
| quantize.cuh | CUDA: optimize and refactor MMQ (#8416) | 2024-07-11 16:47:47 +02:00 |
| rope.cu | ggml : move rope type enum to ggml.h (#8949) | 2024-08-13 21:13:15 +02:00 |
| rope.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| scale.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| scale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| softmax.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| softmax.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| sum.cu | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| sum.cuh | tests: add gradient tests for all backends (ggml/932) | 2024-09-08 11:05:55 +03:00 |
| sumrows.cu | sync : ggml | 2024-08-27 22:41:27 +03:00 |
| sumrows.cuh | sync : ggml | 2024-08-27 22:41:27 +03:00 |
| tsembd.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| tsembd.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| unary.cu | RWKV v6: RWKV_WKV op CUDA implementation (#9454) | 2024-09-22 04:29:12 +02:00 |
| unary.cuh | RWKV v6: RWKV_WKV op CUDA implementation (#9454) | 2024-09-22 04:29:12 +02:00 |
| upscale.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| upscale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| vecdotq.cuh | CUDA: MMQ code deduplication + iquant support (#8495) | 2024-07-20 22:25:26 +02:00 |
| wkv6.cu | Optimize RWKV6 Operator Naming and Implement Multi-core CPU/ SYCL Acceleration (#10133) | 2024-11-07 15:19:10 +08:00 |
| wkv6.cuh | Optimize RWKV6 Operator Naming and Implement Multi-core CPU/ SYCL Acceleration (#10133) | 2024-11-07 15:19:10 +08:00 |