Merge branch 'master' into custom-attention-mask

Georgi Gerganov 2023-09-28 15:19:57 +03:00
commit 25856900db
GPG key ID: 449E073F9DC10735
36 changed files with 730 additions and 239 deletions


@@ -1,3 +1,21 @@
# embedding
# llama.cpp/example/embedding
TODO
This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.
## Quick Start
To get started right away, run the following command, making sure to use the correct path for the model you have:
### Unix-based systems (Linux, macOS, etc.):
```bash
./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null
```
### Windows:
```powershell
embedding.exe -m ./path/to/model --log-disable -p "Hello World!" 2>$null
```
The above command will output space-separated float values.
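If you want to inspect the result, one minimal sketch (reusing the command above; `hello.embedding` is just an illustrative file name) is to redirect the output to a file and count the values:

```bash
# Capture the space-separated embedding values to a file
# (the model path is a placeholder; adjust it to your setup)
./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null > hello.embedding

# Count the values to check the embedding dimension (e.g. 4096 for a 7B LLaMA model)
wc -w < hello.embedding
```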


@@ -1,3 +1,4 @@
#include "build-info.h"
#include "common.h"
#include "llama.h"