Introduce support for GGJT v3 file format

llama.com can now load weights that use the new file format, which was
introduced upstream a few weeks ago. Unlike llama.cpp, we will keep
support for old file formats in our tool, so you don't need to
reconvert your weights whenever the upstream project makes a breaking
change. Note also that GGJT v3 makes AVX2 inference about 5% faster
for me.
Justine Tunney 2023-06-03 13:48:52 -07:00
parent 6ae18a10ba
commit 8fdb31681a
GPG key ID: BE714B4575D6E328
33 changed files with 3829 additions and 371 deletions
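
Keeping every past format loadable mostly comes down to dispatching on
the magic number (and, for versioned containers, the version field) at
the head of the weights file. The sketch below is illustrative only and
is not llama.com's actual loader; the magic constants match the ones
llama.cpp used at the time, but sniff_format() and the WeightsFormat
enum are hypothetical names.

    // Illustrative sketch, not llama.com's actual loader. The magic
    // values are the ones llama.cpp used: 'ggml' (unversioned),
    // 'ggmf' (v1), and 'ggjt' (v1 through v3).
    #include <cstdint>
    #include <cstdio>
    #include <stdexcept>

    enum class WeightsFormat { GGML, GGMF_V1, GGJT_V1, GGJT_V2, GGJT_V3 };

    static WeightsFormat sniff_format(FILE *f) {
        uint32_t magic, version;
        if (fread(&magic, 4, 1, f) != 1)
            throw std::runtime_error("short read");
        if (magic == 0x67676d6cu)                   // 'ggml', no version field
            return WeightsFormat::GGML;
        if (fread(&version, 4, 1, f) != 1)
            throw std::runtime_error("short read");
        if (magic == 0x67676d66u && version == 1)   // 'ggmf'
            return WeightsFormat::GGMF_V1;
        if (magic == 0x67676a74u && version >= 1 && version <= 3)  // 'ggjt'
            return static_cast<WeightsFormat>(
                static_cast<int>(WeightsFormat::GGJT_V1) + (int)(version - 1));
        throw std::runtime_error("unknown weights file format");
    }

A loader that keeps a switch over this enum exhaustive can go on
reading v1 and v2 files even after v3 becomes the default output.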

@@ -55,9 +55,10 @@ static const std::map<std::string, llama_ftype> LLAMA_FTYPE_MAP = {
 // ./quantize models/llama/ggml-model.bin models/llama/ggml-model-quant.bin type [nthreads]
 //
 int main(int argc, char ** argv) {
     MakeProcessNice();
     ShowCrashReports();
-    ggjt_v2();
+    ggjt_v3();
     ggml_time_init();
     if (argc < 3) {
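
The ggjt_v2()/ggjt_v3() calls are part of llama.com's compatibility
layer, and the hunk above just flips the startup default. The real
implementation isn't reproduced here, but since the quantized block
layouts changed between GGJT versions, a plausible shape, assuming the
call selects per-version quantization kernels through a global table,
is:

    // Hypothetical sketch, not llama.com's actual code: select the
    // quantization kernels matching the chosen GGJT version, since the
    // Q4/Q8 block layouts changed between container versions.
    #include <cstddef>

    struct QuantKernels {
        size_t (*quantize_q4_0)(const float *src, void *dst, int n);
        void (*dequantize_q4_0)(const void *src, float *dst, int n);
    };

    // Per-version kernel tables (definitions elided in this sketch).
    extern const QuantKernels kKernelsV1, kKernelsV2, kKernelsV3;
    static const QuantKernels *g_kernels = &kKernelsV3;

    void ggjt_v1(void) { g_kernels = &kKernelsV1; }
    void ggjt_v2(void) { g_kernels = &kKernelsV2; }
    void ggjt_v3(void) { g_kernels = &kKernelsV3; }

Whatever the real mechanism, the effect of the one-line change is that
quantize now targets v3 containers by default.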
@@ -69,11 +70,7 @@ int main(int argc, char ** argv) {
     }
-    // needed to initialize f16 tables
-    {
-        struct ggml_init_params params = { 0, NULL, false };
-        struct ggml_context * ctx = ggml_init(params);
-        ggml_free(ctx);
-    }
+    llama_init_backend();
     const std::string fname_inp = argv[1];
     const std::string fname_out = argv[2];
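
The deleted block existed only to force ggml to build its F16
conversion tables by creating and immediately freeing a dummy context;
upstream folded that dance into llama_init_backend(). A minimal
definition, consistent with the lines removed here, would be:

    // Minimal sketch consistent with the deleted block: initializing
    // any ggml context (even an empty, no-alloc one) populates the
    // f16 lookup tables as a side effect.
    #include "ggml.h"

    void llama_init_backend(void) {
        struct ggml_init_params params = { 0, NULL, false };
        struct ggml_context * ctx = ggml_init(params);
        ggml_free(ctx);
    }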
@@ -95,7 +92,7 @@ int main(int argc, char ** argv) {
         ftype = (enum llama_ftype)atoi(argv[3]);
     }
-    int nthread = argc > 4 ? atoi(argv[4]) : 0;
+    int nthread = argc > 4 ? atoi(argv[4]) : std::min(20, std::max(1, _getcpucount() >> 1));
     const int64_t t_main_start_us = ggml_time_us();
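
The old default of 0 left the thread-count choice to the quantizer; the
new expression picks half the logical cores, clamped to the range
[1, 20]. _getcpucount() is Cosmopolitan Libc's CPU-count call; the
standalone sketch below substitutes std::thread::hardware_concurrency()
to restate the same policy portably.

    // Portable restatement of the new default: half the logical cores,
    // clamped to [1, 20]. std::thread::hardware_concurrency() stands in
    // for Cosmopolitan's _getcpucount().
    #include <algorithm>
    #include <thread>

    static int default_quantize_threads(void) {
        int ncpu = static_cast<int>(std::thread::hardware_concurrency());
        return std::min(20, std::max(1, ncpu >> 1));
    }

On an 8-core machine this yields 4 threads; on a 64-core machine it
caps at 20, which keeps quantization from oversubscribing big boxes.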