Diego Devesa 2025-01-22 17:21:46 +01:00 committed by GitHub
parent b00f23ef78
commit 9006401b2a


@@ -312,7 +312,7 @@ These options help improve the performance and memory usage of the LLaMA models.
 - `-ub N`, `--ubatch-size N`: Physical batch size. This is the maximum number of tokens that may be processed at a time. Increasing this value may improve performance during prompt processing, at the expense of higher memory usage. Default: `512`.
-- `-b N`, `--batch-size N`: Logical batch size. Increasing this value above the value of the physical batch size may improve prompt processing performance when using multiple GPUs with pipeline parallelism. Default: `2048`
+- `-b N`, `--batch-size N`: Logical batch size. Increasing this value above the value of the physical batch size may improve prompt processing performance when using multiple GPUs with pipeline parallelism. Default: `2048`.
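As a minimal sketch of how the two flags combine (assuming the `llama-cli` binary and a placeholder model path), the logical batch size `-b` only needs to exceed the physical batch size `-ub` when pipeline parallelism across multiple GPUs is in play:

```bash
# Minimal sketch (model path is a placeholder): -ub caps how many tokens are
# processed at once, while -b can be set higher so prompt processing can be
# pipelined across multiple GPUs.
./llama-cli -m ./models/model.gguf -ub 512 -b 2048 -p "Hello"
```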
### Prompt Caching