From a402d3cf74b02ebdcdbc61759b3d6020be9cb468 Mon Sep 17 00:00:00 2001
From: GangT <52822318+TruongGiangBT@users.noreply.github.com>
Date: Tue, 27 Feb 2024 14:27:36 +0700
Subject: [PATCH] Update issues.feature

---
 examples/server/tests/features/issues.feature | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/examples/server/tests/features/issues.feature b/examples/server/tests/features/issues.feature
index bf5a175a3..3dccab706 100644
--- a/examples/server/tests/features/issues.feature
+++ b/examples/server/tests/features/issues.feature
@@ -1,4 +1,16 @@
 # List of ongoing issues
 @bug
-Feature: Issues
+Feature: llama.cpp server issues
   # No confirmed issue at the moment
+  Background: Server startup
+    Given a server listening on localhost:8080
+    And a model with n_embd=4096
+    And n_ctx=32768
+    And 8 slots
+    And embeddings extraction
+    Then the server is starting
+    Then the server is healthy
+
+  Scenario: Embedding
+    When 8 identical inputs of 1000 tokens each are computed simultaneously
+    Then embeddings are generated, but they are different
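
A minimal standalone reproducer for the Scenario above, as a sketch. It assumes
the server's POST /embedding endpoint of this era, which takes a JSON body
{"content": ...} and returns {"embedding": [...]}; the endpoint path and field
names should be checked against the server README for the build under test, and
the 1000-token input is approximated here with a repeated word. Python, stdlib
only:

    import concurrent.futures
    import json
    import urllib.request

    URL = "http://localhost:8080/embedding"  # server from the Background above
    CONTENT = "hello " * 1000                # stand-in for a ~1000-token input

    def fetch_embedding(_):
        # One POST /embedding request; all 8 workers send the same content.
        req = urllib.request.Request(
            URL,
            data=json.dumps({"content": CONTENT}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["embedding"]

    # 8 concurrent requests, matching the 8 server slots in the Background.
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        embeddings = list(pool.map(fetch_embedding, range(8)))

    # Identical inputs should yield identical embeddings; the reported bug
    # is that concurrent batches come back with different vectors.
    reference = embeddings[0]
    for i, emb in enumerate(embeddings[1:], start=1):
        if emb != reference:
            print(f"embedding {i} differs from embedding 0")

If the vectors agree when the same requests are sent one at a time, the
divergence is isolated to the concurrent path exercised by the 8 slots.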