Commit graph

10 commits

Justine Tunney
a0237a017c
Get llama.com working on aarch64
2023-05-10 04:20:47 -07:00
Justine Tunney
4c093155a3
Get llama.com building as an aarch64 native binary
2023-05-10 04:20:47 -07:00
Justine Tunney
d04430f4ef
Get LIBC_MEM and LIBC_STDIO building with aarch64
2023-05-10 04:20:47 -07:00
Justine Tunney
2b73e72d59
Make more code aarch64 friendly
2023-05-10 04:20:46 -07:00
Justine Tunney
3dac9f8999
Use Companion AI in llama.com by default
2023-04-30 23:08:15 -07:00
Justine Tunney
d9e27203d4
Incorporate some fixes and updates for GGML
2023-04-28 20:24:55 -07:00
Justine Tunney
b31ba86ace
Introduce prompt caching so prompts load instantly
This change also introduces an ephemeral status line in non-verbose mode
that displays load percentage while slow operations are happening.
2023-04-28 16:15:26 -07:00
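The ephemeral status line described above is sketched below in C, assuming
the common carriage-return technique: each progress update overwrites the
previous one, and the line is erased when the slow operation completes.
Illustrative only, not the actual llama.com implementation.

    #include <stdio.h>

    // Redraw the status line in place: '\r' returns the cursor to
    // column 0 so each update overwrites the previous one.
    static void show_progress(size_t done, size_t total) {
        fprintf(stderr, "\rloading: %3zu%%", done * 100 / total);
        fflush(stderr);
    }

    // Erase the status line once the slow operation finishes, so it
    // leaves no trace in the terminal ("\33[K" clears to end of line).
    static void clear_progress(void) {
        fprintf(stderr, "\r\33[K");
        fflush(stderr);
    }

    int main(void) {
        for (size_t i = 0; i <= 100; i += 20) show_progress(i, 100);
        clear_progress();
        return 0;
    }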
Justine Tunney
1c2da3a55a
Make shell usability improvements to llama.cpp
- Introduce -v and --verbose flags
- Don't print stats / diagnostics unless -v is passed
- Reduce --top_p default from 0.95 to 0.70
- Change --reverse-prompt to no longer imply --interactive
- Permit --reverse-prompt specifying custom EOS if non-interactive
2023-04-28 02:54:11 -07:00
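A hypothetical C sketch of what this flag handling could look like, using
getopt_long. The flag names mirror the commit message (-v/--verbose, --top_p
with a 0.70 default), but the code is illustrative, not the actual llama.cpp
argument parser.

    #include <getopt.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[]) {
        bool verbose = false;  // set by -v / --verbose
        float top_p = 0.70f;   // reduced default, per the commit message
        static const struct option longopts[] = {
            {"verbose", no_argument,       0, 'v'},
            {"top_p",   required_argument, 0, 'p'},
            {0, 0, 0, 0},
        };
        int opt;
        while ((opt = getopt_long(argc, argv, "v", longopts, NULL)) != -1) {
            switch (opt) {
            case 'v': verbose = true; break;
            case 'p': top_p = strtof(optarg, NULL); break;
            default: return 1;
            }
        }
        // Stats and diagnostics are printed only when -v is passed.
        if (verbose) fprintf(stderr, "top_p = %g\n", top_p);
        return 0;
    }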
Justine Tunney
420f889ac3
Further optimize the math library
The sincosf() function is now twice as fast, thanks to ARM Limited. The
same might also be true of logf() and expm1f(), which have been updated.
2023-04-28 01:20:47 -07:00
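For reference, sincosf() computes the sine and cosine of its argument in a
single call, sharing the argument reduction between the two results; on
glibc it is a libm extension exposed by _GNU_SOURCE. A small usage sketch
(link with -lm):

    #define _GNU_SOURCE  /* sincosf() is a libm extension on glibc */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float s, c;
        // One call yields both results; the argument reduction is done
        // once instead of separately in sinf() and cosf().
        sincosf(0.5f, &s, &c);
        printf("sin = %f, cos = %f\n", s, c);
        return 0;
    }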
Justine Tunney
e8b43903b2
Import llama.cpp
https://github.com/ggerganov/llama.cpp
0b2da20538d01926b77ea237dd1c930c4d20b686
See third_party/ggml/README.cosmo for changes
2023-04-27 14:37:14 -07:00