ggml : add NUMA support (#1556)
* detect NUMA systems and pin work threads to nodes (linux)
* disable mmap prefetch/readahead for NUMA systems
* avoid sending finalize op to thread pool if it does nothing
* silence robot
* fix args
* make --numa a param
* recommendation that n_nodes evenly divide n_threads did not warrant such aggressive enforcement
* lower synchronization overhead
* statically allocate
* move numa state to g_state
* add description for --numa
* ggml : minor style changes
* ggml : minor style + try fix sanitizer build
* llama : allow to initialize backend with NUMA support
* llama : avoid ggml include in llama-util.h
* ggml : style / formatting
* ggml : fix handling of ops with n_threads > n_tasks > 1
* server : utilize numa parameter

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
parent 9225baef71
commit b853d45601
14 changed files with 339 additions and 236 deletions
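The first two items in the commit message describe the core mechanics: detect how many NUMA nodes the machine has, pin worker threads to a node so their allocations stay node-local, and stop the kernel from reading ahead through the mmap'd model file (readahead faults pages in on whichever node touches them first, defeating the pinning). Below is a minimal Linux sketch of those pieces; the sysfs parsing and the helper names (`count_numa_nodes`, `pin_thread_to_node`, `disable_readahead`) are my own illustration, not the commit's actual code.

```c
// Sketch only: the ideas behind NUMA detection, thread pinning, and
// readahead suppression on Linux. Helper names are hypothetical.
#define _GNU_SOURCE
#include <dirent.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

// Count NUMA nodes by scanning sysfs for node0, node1, ... entries.
static int count_numa_nodes(void) {
    int n = 0;
    DIR * dir = opendir("/sys/devices/system/node");
    if (!dir) {
        return 1; // no node info exposed: treat the system as non-NUMA
    }
    struct dirent * ent;
    while ((ent = readdir(dir)) != NULL) {
        unsigned id;
        if (sscanf(ent->d_name, "node%u", &id) == 1) {
            n++;
        }
    }
    closedir(dir);
    return n > 0 ? n : 1;
}

// Pin the calling thread to the CPUs of one node. The node's cpulist file
// holds a comma-separated list of ranges, e.g. "0-7,16-23".
static void pin_thread_to_node(int node) {
    char path[96];
    snprintf(path, sizeof(path), "/sys/devices/system/node/node%d/cpulist", node);
    FILE * f = fopen(path, "r");
    if (!f) {
        return;
    }
    cpu_set_t set;
    CPU_ZERO(&set);
    unsigned lo, hi;
    int matched;
    while ((matched = fscanf(f, "%u-%u", &lo, &hi)) > 0) {
        if (matched == 1) {
            hi = lo; // a single CPU, not a range
        }
        for (unsigned c = lo; c <= hi && c < CPU_SETSIZE; c++) {
            CPU_SET(c, &set);
        }
        if (fgetc(f) != ',') {
            break; // end of the list
        }
    }
    fclose(f);
    sched_setaffinity(0, sizeof(set), &set); // pid 0 = the calling thread
}

// Hint the kernel not to prefetch/read ahead in an mmap'd region; on NUMA
// systems readahead would pull pages onto a single node.
static void disable_readahead(void * addr, size_t len) {
    posix_madvise(addr, len, POSIX_MADV_RANDOM);
}
```

The note that n_nodes evenly dividing n_threads is a recommendation rather than a hard requirement fits a round-robin assignment, i.e. pinning worker thread `i` to node `i % count_numa_nodes()`.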
llama.h | 3 ++- (2 additions, 1 deletion)

```diff
@@ -140,8 +140,9 @@ extern "C" {
 
     // TODO: not great API - very likely to change
     // Initialize the llama + ggml backend
+    // If numa is true, use NUMA optimizations
     // Call once at the start of the program
-    LLAMA_API void llama_init_backend();
+    LLAMA_API void llama_init_backend(bool numa);
 
     LLAMA_API int64_t llama_time_us();
```
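The diff above is the API half of the change: `llama_init_backend()` now takes a flag. A caller wired to the new `--numa` parameter would look roughly like the sketch below; the flag parsing is illustrative, not the repository's actual option handling.

```c
#include <stdbool.h>
#include <string.h>

#include "llama.h"

int main(int argc, char ** argv) {
    bool numa = false;
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "--numa") == 0) {
            numa = true; // enable NUMA optimizations in the backend
        }
    }
    // Per the header comment: call once at the start of the program.
    llama_init_backend(numa);
    // ... load a model, create a context, run inference ...
    return 0;
}
```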