Documented CUDA reproducibility, added warning (#1346)
parent e1295513a4
commit 1f48b0abcf
3 changed files with 6 additions and 1 deletion
@@ -257,6 +257,8 @@ Building the program with BLAS support may lead to some performance improvements
 cmake --build . --config Release
 ```

+Note: Because llama.cpp uses multiple CUDA streams for matrix multiplication, results [are not guaranteed to be reproducible](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility). If you need reproducibility, set `GGML_CUDA_MAX_STREAMS` in the file `ggml-cuda.cu` to 1.
+
 ### Prepare Data & Run

 ```bash
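For illustration, here is a minimal sketch of the kind of setting the added note refers to, assuming `GGML_CUDA_MAX_STREAMS` is a compile-time constant in `ggml-cuda.cu` that sizes a pool of CUDA streams. The assumed default value, the array name, and the init helper below are placeholders for the sketch, not the upstream source:

```cpp
// Sketch only: limiting the stream pool to a single stream serializes the cuBLAS
// matrix multiplications, fixing their execution and accumulation order so the
// floating-point results are reproducible across runs.
#include <cuda_runtime.h>

// #define GGML_CUDA_MAX_STREAMS 8   // assumed default: several concurrent streams
#define GGML_CUDA_MAX_STREAMS 1      // single stream: deterministic ordering

// Hypothetical stream pool; the real variable names in ggml-cuda.cu may differ.
static cudaStream_t g_cuda_streams[GGML_CUDA_MAX_STREAMS];

static void ggml_cuda_init_streams(void) {
    for (int i = 0; i < GGML_CUDA_MAX_STREAMS; ++i) {
        cudaStreamCreateWithFlags(&g_cuda_streams[i], cudaStreamNonBlocking);
    }
}
```

With a single stream there is no overlap between multiplications, so some throughput is traded for determinism.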