Max Krasnyansky
204377a0a8
threadpool: update threadpool resume/pause function names
2024-08-27 06:37:58 -07:00
Max Krasnyansky
49ac51f2a3
threadpool: simplify threadpool init logic and fix main thread affinity application
...
Most of the init code is now exactly the same between threadpool and openmp.
2024-08-27 06:37:58 -07:00
Max Krasnyansky
8008463aee
threadpool: replace checks for compute_thread ret code with proper status check
2024-08-27 06:37:58 -07:00
Max Krasnyansky
c506d7fc46
threadpool: enable --cpu-mask and other threadpool related options only if threadpool is enabled
2024-08-27 06:37:58 -07:00
Max Krasnyansky
f64c975168
threadpool: fix swift wrapper errors due to n_threads int type cleanup
2024-08-27 06:37:58 -07:00
Max Krasnyansky
40648601f1
threadpool: fix apply_priority() function name
2024-08-27 06:37:58 -07:00
Max Krasnyansky
31541d7427
threadpool: move typedef into ggml.h
2024-08-27 06:37:57 -07:00
Max Krasnyansky
c4452edfea
threadpool: add support for ggml_threadpool_params_default/init
...
Also removes the need for explicit mask_specified param.
An all-zero cpumask means use the default (usually inherited) CPU affinity mask.
2024-08-27 06:37:57 -07:00
Max Krasnyansky
4a4d71501b
threadpool: consistent use of int type for n_threads params
2024-08-27 06:37:57 -07:00
Max Krasnyansky
2358bb364b
threadpool: better naming for thread/cpumask related functions
2024-08-27 06:37:57 -07:00
Max Krasnyansky
63a0dad83c
threadpool: remove abort_callback from threadpool state
2024-08-27 06:37:57 -07:00
Max Krasnyansky
307fece5d7
threadpool: use relaxed order for chunk sync
...
A full memory barrier is overkill for this, since each thread works on a different chunk.
2024-08-27 06:37:57 -07:00
Max Krasnyansky
db45b6d3a9
threadpool: do not clear barrier counters between graphs computes (fixes race with small graphs)
...
This fixes the race condition with very small graphs where the main thread happens to
start a new graph while the workers are just about to exit from barriers.
2024-08-27 06:37:57 -07:00
Max Krasnyansky
538bd9f730
threadpool: remove special-casing for disposable threadpools
...
With the efficient hybrid polling there is no need to make disposable pools any different.
This simplifies the overall logic and reduces branching.
Include n_threads in the debug print for disposable threadpools.
Declare the pause and stop flags as atomic_bool. This doesn't actually
generate any memory barriers; it simply informs the thread sanitizer
that these flags can be written and read by different threads without
locking.
2024-08-27 06:37:57 -07:00
Max Krasnyansky
9d3e78c6b8
threadpool: reduce the number of barriers required
...
New work is now indicated with an atomic counter that is incremented for
each new graph that needs to be computed.
This removes the need for extra barrier for clearing the "new_work" and
removes the special case for trivial graphs.
2024-08-27 06:37:57 -07:00
Max Krasnyansky
b630acdb73
threadpool: add support for hybrid polling
...
poll params (--poll, ...) now specify a "polling level", i.e. how aggressively we poll before waiting on the cond. var.
poll=0 means no polling, 1 means poll for 128K rounds then wait, 2 for 256K rounds, ...
The default value of 50 (i.e. 50x128K rounds) seems like a decent default across modern platforms.
We can tune this further as things evolve.
2024-08-27 06:37:57 -07:00
Max Krasnyansky
494e27c793
threadpool: reduce pause/resume/wakeup overhead in common cases
...
We now start the threadpool in the paused state only if we have two threadpools.
The resume is now implicit (i.e. new work), which allows for reduced locking and context-switch overhead.
2024-08-27 06:37:57 -07:00
Max Krasnyansky
48aa8eec07
threadpool: do not create two threadpools if their params are identical
2024-08-27 06:37:57 -07:00
fmz
2e18f0d4c9
fix potential race condition in check_for_work
2024-08-27 06:37:57 -07:00
Max Krasnyansky
dfa63778bd
threadpool: do not wakeup threads in already paused threadpool
2024-08-27 06:37:57 -07:00
Max Krasnyansky
3b62f7c145
threadpool: make polling the default to match openmp behavior
...
All command line args now allow for setting poll to 0 (false).
2024-08-27 06:37:57 -07:00
Max Krasnyansky
6fcc780b5f
atomics: always use stdatomics with clang and use relaxed memory order when polling in ggml_barrier
...
This also removes sched_yield() calls from ggml_barrier() to match OpenMP behavior.
2024-08-27 06:37:57 -07:00
Max Krasnyansky
2953441563
bench: create fresh threadpool for each test
...
For benchmarking it's better to start a fresh pool for each test with the exact number of threads
needed for that test. Having larger pools is suboptimal (causes more load, etc).
2024-08-27 06:37:57 -07:00
Max Krasnyansky
96d6603dc7
threadpool: use cpu_get_num_math to set the default number of threadpool threads
...
This way we avoid using E-Cores and Hyperthreaded siblings.
2024-08-27 06:37:57 -07:00
Faisal Zaghloul
3008b31b17
fix deadlock for cases where cgraph.n_nodes == 1
...
and fix --poll case
2024-08-27 06:37:57 -07:00
Faisal Zaghloul
57637326c4
fix more race conditions
2024-08-27 06:37:57 -07:00
Faisal Zaghloul
817eaf0c00
Fix Android build issue
2024-08-27 06:37:57 -07:00
Faisal Zaghloul
82224f84d7
fixed a harmless race condition
2024-08-27 06:37:57 -07:00
Faisal Zaghloul
d5c9c14dea
fixed a use-after-release bug
2024-08-27 06:37:57 -07:00
Faisal Zaghloul
a0aae528bb
Minor fixes
2024-08-27 06:37:57 -07:00
Faisal Zaghloul
130adf8415
Introduce ggml_compute_threadpool
...
- OpenMP functional: check
- Vanilla ggml functional: Check
- ggml w/threadpool functional: Check
- OpenMP no regression: No glaring problems
- Vanilla ggml no regression: No glaring problems
- ggml w/threadpool no regression: No glaring problems
2024-08-27 06:37:54 -07:00
Xie Yanbo
3246fe84d7
Fix minicpm example directory ( #9111 )
2024-08-27 14:33:08 +02:00
compilade
78eb487bb0
llama : fix qs.n_attention_wv for DeepSeek-V2 ( #9156 )
2024-08-27 13:09:23 +03:00
Xuan Son Nguyen
a77feb5d71
server : add some missing env variables ( #9116 )
...
* server : add some missing env variables
* add LLAMA_ARG_HOST to server dockerfile
* also add LLAMA_ARG_CONT_BATCHING
2024-08-27 11:07:01 +02:00
CausalLM
2e59d61c1b
llama : fix ChatGLM4 wrong shape ( #9194 )
...
This should fix THUDM/glm-4-9b-chat-1m and CausalLM/miniG
2024-08-27 09:58:22 +03:00
Carsten Kragelund Jørgensen
75e1dbbaab
llama : fix llama3.1 rope_freqs not respecting custom head_dim ( #9141 )
...
* fix: llama3.1 rope_freqs not respecting custom head_dim
* fix: use potential head_dim for Exaone
2024-08-27 09:53:40 +03:00
arch-btw
ad76569f8e
common : Update stb_image.h to latest version ( #9161 )
...
* Update stb_image.h to latest version
Fixes https://github.com/ggerganov/llama.cpp/issues/7431
* Update .ecrc
2024-08-27 08:58:50 +03:00
slaren
7d787ed96c
ggml : do not crash when quantizing q4_x_x with an imatrix ( #9192 )
2024-08-26 19:44:43 +02:00
Georgi Gerganov
06658ad7c3
metal : separate scale and mask from QKT in FA kernel ( #9189 )
...
* metal : separate scale and mask from QKT in FA kernel
* metal : ne01 check no longer necessary
* metal : keep data in local memory
2024-08-26 18:31:02 +03:00
Georgi Gerganov
fc18425b6a
ggml : add SSM Metal kernels ( #8546 )
...
* ggml : add ggml_ssm_conv metal impl
* ggml : add ssm_scan metal impl
ggml-ci
2024-08-26 17:55:36 +03:00
Georgi Gerganov
879275ac98
tests : fix compile warnings for unreachable code ( #9185 )
...
ggml-ci
2024-08-26 16:30:25 +03:00
Georgi Gerganov
7a3df798fc
ci : add VULKAN support to ggml-ci ( #9055 )
2024-08-26 12:19:39 +03:00
Georgi Gerganov
e5edb210cd
server : update deps ( #9183 )
2024-08-26 12:16:57 +03:00
slaren
0c41e03ceb
metal : gemma2 flash attention support ( #9159 )
2024-08-26 11:08:59 +02:00
slaren
f12ceaca0c
ggml-ci : try to improve build time ( #9160 )
2024-08-26 11:03:30 +02:00
Justine Tunney
436787f170
llama : fix time complexity of string replacement ( #9163 )
...
This change fixes a bug where replacing text in a very long string could
cause llama.cpp to hang indefinitely. This is because the algorithm used
was quadratic, due to memmove() when s.replace() is called in a loop. It
seems most search results and LLM responses actually provide the O(n**2)
algorithm, which is a great tragedy. Using a builder string fixes this.
2024-08-26 09:09:53 +03:00
Herman Semenov
93bc3839f9
common: fixed non-working argument lookup for --n-gpu-layers-draft ( #9175 )
2024-08-26 00:54:37 +02:00
Johannes Gäßler
f91fc5639b
CUDA: fix Gemma 2 numerical issues for FA ( #9166 )
2024-08-25 22:11:48 +02:00
Johannes Gäßler
e11bd856d5
CPU/CUDA: Gemma 2 FlashAttention support ( #8542 )
...
* CPU/CUDA: Gemma 2 FlashAttention support
* apply logit_softcap to scale in kernel
* disable logit softcapping tests on Metal
* remove metal check
2024-08-24 21:34:59 +02:00
João Dinis Ferreira
8f824ffe8e
quantize : fix typo in usage help of quantize.cpp ( #9145 )
2024-08-24 09:22:45 +03:00