Commit graph

888 commits

Author SHA1 Message Date
Concedo
57230b5196 upgrade all other formats 2023-05-17 16:28:20 +08:00
Concedo
00da2a5f4e neox is updated 2023-05-17 14:56:54 +08:00
Concedo
90fe9096b4 clean and refactoring pass before supporting newer models for different arch 2023-05-17 11:23:29 +08:00
Concedo
60ee00428b updated lite 2023-05-17 10:26:36 +08:00
Concedo
d8d39f1ba8 Merge branch 'master' into concedo_experimental
# Conflicts:
#	Makefile
2023-05-17 10:07:43 +08:00
Concedo
f561fe5a4a switch back to ofast for c 2023-05-17 10:04:54 +08:00
Concedo
504a2aa874 Merge remote-tracking branch 'fixmake/concedo' into concedo_experimental 2023-05-17 10:01:57 +08:00
Concedo
327763c21b Merge remote-tracking branch 'occam/opencl-dev' into concedo_experimental 2023-05-17 10:01:22 +08:00
Tom Jobbins
2b2646931b convert.py: Support models which are stored in a single pytorch_model.bin (#1469)
* Support models in a single pytorch_model.bin

* Remove spurious line with typo
2023-05-17 00:04:35 +02:00
Ilya Kurdyukov
42627421ec ~7% faster Q5_1 AVX2 code (#1477) 2023-05-16 18:36:47 +00:00
0cc4m
de10afa80f Fix tensor load to device
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2023-05-16 18:49:49 +02:00
horenbergerb
f29c25e7a1 hacky fix for linux cublas build 2023-05-16 12:29:04 -04:00
Concedo
417711be46 add more QOL 2023-05-17 00:11:28 +08:00
András Salamon
9560655409 define default model path once, sync path with readme (#1366) 2023-05-16 17:46:34 +02:00
Concedo
94ef3e81cf inc allocation 2023-05-16 23:32:35 +08:00
Concedo
954b87eb05 working checkpoint 2023-05-16 22:33:21 +08:00
Concedo
84c1bc7822 Merge remote-tracking branch 'occam/opencl-dev' into concedo_experimental 2023-05-16 22:00:15 +08:00
0cc4m
b3ff66d87f Fix error in convert f16 to f32 kernel call 2023-05-16 13:05:33 +02:00
Concedo
8e394f9913 progress 2023-05-16 17:29:55 +08:00
Concedo
196fbba527 Merge branch 'opencl-dev2' into concedo_experimental
# Conflicts:
#	CMakeLists.txt
2023-05-16 17:04:33 +08:00
sandyiscool
2a5ee023ad Add alternate include path for openblas (#1476)
In some linux distributions (fedora, for example), the include path for openblas is located at '/usr/local/include'
2023-05-16 10:30:15 +02:00
Concedo
554340f565 revert library back first 2023-05-16 15:45:05 +08:00
0cc4m
342d346c13 Generate dequant_mul_mat kernels from simple templates 2023-05-16 07:42:01 +02:00
Concedo
e4e6994353 Not working, don't use. testing a merge 2023-05-16 12:33:24 +08:00
0cc4m
1747c598fa Fix CMakeLists.txt 2023-05-15 19:51:23 +02:00
Concedo
d43b243b9a off static 2023-05-15 22:17:04 +08:00
Concedo
96c28dda4d export symbols 2023-05-15 20:38:21 +08:00
Concedo
72836d4eac fixing more compile issues 2023-05-15 20:10:54 +08:00
Concedo
6504150fac just testing cublas 2023-05-15 20:01:22 +08:00
Concedo
fce2e7e518 up version 2023-05-15 14:53:13 +08:00
Concedo
466cd21368 test cmakefile for cublas. 2023-05-15 14:50:38 +08:00
Concedo
923184f2e8 Merge branch 'master' into concedo_experimental
# Conflicts:
#	ggml.h
2023-05-15 10:55:15 +08:00
zrm
63d20469b8 fix get_num_physical_cores() (#1436)
* fix get_num_physical_cores()
had been broken on complex topologies because "cpu cores" in /proc/cpuinfo is per-"physical id"

* Add spaces to maintain consistent formatting

---------

Co-authored-by: slaren <ddevesa@gmail.com>
2023-05-15 04:25:42 +02:00
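The commit above explains the bug: on multi-socket machines, the "cpu cores" field in /proc/cpuinfo is reported once per "physical id", so a naive read undercounts or overcounts. The idea can be sketched as below; this is an illustrative Python sketch, not the actual C++ patch, and the function name `count_physical_cores` and the sample text are hypothetical:

```python
def count_physical_cores(cpuinfo_text):
    """Sum 'cpu cores' across distinct 'physical id' packages."""
    cores_per_package = {}
    physical_id = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "physical id":
            physical_id = value
        elif key == "cpu cores" and physical_id is not None:
            # Duplicate entries for the same package collapse into one.
            cores_per_package[physical_id] = int(value)
    return sum(cores_per_package.values())

# Hypothetical excerpt: two sockets, 8 cores each, with the per-package
# "cpu cores" line repeated for every logical processor.
sample = """
physical id : 0
cpu cores   : 8
physical id : 0
cpu cores   : 8
physical id : 1
cpu cores   : 8
"""
```

Keying the dictionary on the physical id is what makes the repeated per-processor lines harmless: each package's core count is recorded once, then summed.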
slaren
b5c9295eef benchmark-matmul: fix clang-tidy issues, report results in GFLOPS (#1458)
* benchmark-matmul: fix command line parsing, replace macros with functions, report results in GFLOPS
2023-05-14 22:46:00 +02:00
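For context on the GFLOPS reporting mentioned above: the conventional conversion counts roughly 2*M*N*K floating-point operations for an M×N×K matrix multiply (one multiply and one add per inner-product term) and divides by elapsed time. A minimal sketch, with the hypothetical name `matmul_gflops` standing in for whatever the benchmark actually uses:

```python
def matmul_gflops(m, n, k, seconds):
    """Convert a timed M x N x K matmul into GFLOPS (billions of FLOPs/sec)."""
    flops = 2.0 * m * n * k   # one mul + one add per inner-product term
    return flops / seconds / 1e9
```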
0cc4m
5a74dc1536 Add remaining dequant_mul_mat functions 2023-05-14 22:19:54 +02:00
0cc4m
883e587a04 Fix dequant_mul_mat kernel 2023-05-14 21:26:28 +02:00
0cc4m
8795403de3 Fix bugs in dequant_mul_mat code 2023-05-14 21:14:05 +02:00
Johannes Gäßler
eb363627fd cuda : deduplicated dequantization code (#1453) 2023-05-14 21:53:23 +03:00
xaedes
79b2d5b69d ggml : alternative fix for race condition bug in non-inplace ggml_compute_forward_diag_mask_f32 (#1454)
* fix race condition bug in non-inplace ggml_compute_forward_diag_mask_f32

memcpy needs to be synchronized across threads to avoid race conditions.
=> do it in INIT phase

* remove trailing whitespace

* Update ggml.c

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-14 18:55:02 +03:00
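The pattern behind the fix above, moving an unsynchronized memcpy into a synchronized INIT phase so no thread computes before the copy finishes, can be modeled generically. This is an illustrative Python sketch using a barrier, not ggml's actual threading code:

```python
import threading

def run(src, n_threads=4):
    """One thread copies in an INIT phase; all compute only after a barrier."""
    dst = [None] * len(src)
    results = [0] * n_threads
    barrier = threading.Barrier(n_threads)

    def worker(tid):
        if tid == 0:
            dst[:] = src          # INIT phase: single thread does the copy
        barrier.wait()            # no thread proceeds until the copy is done
        results[tid] = sum(dst)   # COMPUTE phase: dst is now safe to read

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Without the barrier, workers could read `dst` mid-copy; with it, the copy is ordered before every read, which is the same guarantee the commit gets by doing the memcpy in ggml's INIT phase.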
Georgi Gerganov
13c351ad72 ggml : various fixes (#1450)
- `ggml_rope()`
- `ggml_diag_mask_inf()` multi-threaded
- compatibility with scratch buffers
2023-05-14 18:22:50 +03:00
0cc4m
c77966524a Refactor OpenCL code to work more like the CUDA code, add missing functions 2023-05-14 17:01:46 +02:00
0cc4m
82bc517b9a Move back to C++ for OpenCL 2023-05-14 17:00:37 +02:00
Concedo
c81dd58e76 Merge commit 'f954edda93' into archive_lib
# Conflicts:
#	ggml.c
2023-05-14 18:34:56 +08:00
katsu560
60f8c361ca ggml : add AVX support based on AVX2 code (#1430) 2023-05-14 10:03:51 +00:00
Concedo
b692e4d2a4 wip 2023-05-14 17:21:07 +08:00
Georgi Gerganov
601a033475 ggml : add GGML_QNT_VERSION to track quantization format changes
https://github.com/ggerganov/ggml/issues/150#issuecomment-1546625668
2023-05-14 10:20:19 +03:00
Concedo
e01e373e63 Merge branch 'master' into concedo_experimental
# Conflicts:
#	Makefile
#	ggml.c
#	llama.cpp
2023-05-14 11:34:41 +08:00
Concedo
9cd5b9a769 up ver 2023-05-14 11:10:26 +08:00
Concedo
8a5fe628df recognize q8_0 as an older format as the new clblast doesnt work correctly with it 2023-05-14 11:06:23 +08:00
Concedo
49d6334dc1 try fix kernel 2023-05-14 00:41:26 +08:00