Commit graph

6 commits

Author SHA1 Message Date
Justine Tunney
3a9cac4892
Fix small matters and improve sysconf()
- Fix mkdeps.com out of memory error
- Remove static memory from __get_cpu_count()
- Add support for passing hyphen to cat in cocmd
- Change more ZipOS errors from ENOTSUP to EROFS
- Specify mem_unit in sysinfo() output on BSD OSes (see the usage sketch after this entry)
2023-08-17 00:32:11 -07:00
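The mem_unit field matters because sysinfo() reports its memory counters in units of mem_unit bytes rather than raw bytes, so callers must scale them. Below is a minimal usage sketch, assuming the Linux-style struct sysinfo that Cosmopolitan exposes; it is illustrative, not code from the commit:

    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void) {
      struct sysinfo si;
      if (sysinfo(&si) == -1) return 1;
      // totalram is expressed in units of mem_unit bytes, so the product
      // below is the physical memory size in bytes.
      unsigned long long totalbytes =
          (unsigned long long)si.totalram * si.mem_unit;
      printf("total ram: %llu bytes\n", totalbytes);
      return 0;
    }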
Justine Tunney
e6b7c16a53
Make changes needed for new demo
2023-06-15 23:22:49 -07:00
Justine Tunney
8fdb31681a
Introduce support for GGJT v3 file format
llama.com can now load weights that use the new file format, which was
introduced upstream a few weeks ago. Unlike llama.cpp, we will keep
support for the old file formats in our tool, so you don't need to
convert your weights whenever the upstream project makes a breaking
change. Using GGJT v3 also makes AVX2 inference about 5% faster in my
testing. (A format-detection sketch follows this entry.)
2023-06-03 15:46:21 -07:00
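Keeping the old formats loadable mostly comes down to branching on the magic word and version at the start of the weights file. The sketch below is illustrative rather than the commit's code; the magic constants follow the upstream llama.cpp convention of packing the ASCII tags 'ggml', 'ggmf', and 'ggjt' into little-endian 32-bit words, and the code assumes a little-endian host:

    #include <stdint.h>
    #include <stdio.h>

    #define MAGIC_GGML 0x67676d6cu  // original unversioned format
    #define MAGIC_GGMF 0x67676d66u  // first versioned successor
    #define MAGIC_GGJT 0x67676a74u  // mmap-friendly format (v1..v3)

    // Reads the container magic (and version, if any) from an open
    // weights file and reports which variant it is.
    int identify_weights(FILE *f) {
      uint32_t magic, version;
      if (fread(&magic, sizeof(magic), 1, f) != 1) return -1;
      if (magic == MAGIC_GGML) {
        printf("ggml (unversioned)\n");
        return 0;
      }
      if (magic == MAGIC_GGMF || magic == MAGIC_GGJT) {
        if (fread(&version, sizeof(version), 1, f) != 1) return -1;
        printf("%s v%u\n", magic == MAGIC_GGJT ? "ggjt" : "ggmf",
               (unsigned)version);
        return 0;
      }
      return -1;  // unknown container
    }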
Justine Tunney
1422e96b4e
Introduce native support for macOS ARM64
There's a new program named ape/ape-m1.c that will be used to build an
embeddable binary capable of loading APE and ELF executables. The
support is mostly working so far, but we're still chasing down ABI
issues. (A magic-detection sketch follows this entry.)
2023-05-20 04:17:03 -07:00
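A loader like this typically decides how to run a file by peeking at its first bytes. The following is a minimal sketch, not the code in ape/ape-m1.c, assuming the standard "\177ELF" magic for ELF images and the "MZqFpD" prefix that APE binaries begin with:

    #include <stdio.h>
    #include <string.h>

    enum exekind { EXE_UNKNOWN, EXE_ELF, EXE_APE };

    // Classifies an executable by its leading magic bytes so the loader
    // can pick the appropriate code path.
    enum exekind classify_executable(const char *path) {
      char buf[8] = {0};
      FILE *f = fopen(path, "rb");
      if (!f) return EXE_UNKNOWN;
      size_t n = fread(buf, 1, sizeof(buf), f);
      fclose(f);
      if (n >= 4 && !memcmp(buf, "\177ELF", 4)) return EXE_ELF;
      if (n >= 6 && !memcmp(buf, "MZqFpD", 6)) return EXE_APE;
      return EXE_UNKNOWN;
    }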
Justine Tunney
e7eb0b3070
Make more ML improvements
- Fix UX issues with llama.com
- Do housekeeping on libm code
- Add more vectorization to GGML
- Get GGJT quantizer programs working well
- Have the quantizer keep the output layer as f16c
- Prefetching improves performance by 15% if you use fewer threads (see the sketch after this entry)
2023-05-16 08:07:23 -07:00
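The prefetching item above refers to software prefetch hints issued a fixed distance ahead of a streaming loop; the benefit tends to be largest when only a few threads are competing for memory bandwidth. Here is a generic illustration using GCC/Clang's __builtin_prefetch, not GGML's actual kernels; the 64-element lookahead is an arbitrary example value:

    #include <stddef.h>

    // Streaming dot product with software prefetch hints. The prefetch
    // distance (64 floats, i.e. 256 bytes ahead) is a tunable example value.
    float dot(const float *a, const float *b, size_t n) {
      float sum = 0.0f;
      for (size_t i = 0; i < n; ++i) {
        if (i + 64 < n) {
          __builtin_prefetch(&a[i + 64], 0, 1);  // read, low temporal locality
          __builtin_prefetch(&b[i + 64], 0, 1);
        }
        sum += a[i] * b[i];
      }
      return sum;
    }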
Justine Tunney
210187cf77
Perform some code cleanup
2023-05-15 16:32:10 -07:00