Tested spec success.

This commit is contained in:
JohnnyB, 2023-08-22 15:16:09 +01:00, committed by GitHub
parent: 4cc6943d56
commit: 489177caf6


@@ -12,17 +12,13 @@
 # 4. OpenCL/CLBLAST support simply requires the ICD loader and basic opencl libraries.
 # It is up to the user to install the correct vendor-specific support.
 
-Name: llamacpp
+Name: llama.cpp
 Version: master
 Release: 1%{?dist}
 Summary: CPU Inference of LLaMA model in pure C/C++ (no CUDA/OpenCL)
 License: MIT
 Source0: https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
 BuildRequires: coreutils make gcc-c++ git
-Requires(pre): shadow-utils
-Requires(post):
-Requires(preun):
-Requires(postun):
 URL: https://github.com/ggerganov/llama.cpp
 
 %define debug_package %{nil}
@@ -35,6 +31,7 @@ CPU inference for Meta's Lllama2 models using default options.
 %autosetup
 
 %build
+tree
 make -j
 
 %install
@@ -48,7 +45,9 @@
 rm -rf %{buildroot}
 rm -rf %{_builddir}/*
 
 %files
-%{_bindir}/%{name}
+%{_bindir}/llamacpp
+%{_bindir}/llamacppserver
+%{_bindir}/llamacppsimple
 
 %pre
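The %files change appears tied to the Name change above: once Name becomes llama.cpp, the macro path %{_bindir}/%{name} would expand to /usr/bin/llama.cpp, while the build presumably still installs binaries with the literal names llamacpp, llamacppserver, and llamacppsimple, so the spec now lists them explicitly. A minimal sketch of the mismatch, mimicking RPM macro expansion in plain shell (bindir value and binary names are assumptions taken from the diff, not verified against the build):

```shell
# Sketch only: emulate how %{_bindir}/%{name} would expand after the rename.
bindir="/usr/bin"
name="llama.cpp"              # the new Name: tag from the diff
echo "${bindir}/${name}"      # path the old macro-based %files entry would claim
echo "${bindir}/llamacpp"     # binary name the new %files entry lists literally
```

The first line prints /usr/bin/llama.cpp and the second /usr/bin/llamacpp; since the two differ, packaging would fail with unpackaged-file errors unless the binaries are listed by their literal names.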