李为 | b86cdedb7e | remove iostream header | 2024-12-03 15:03:55 +08:00
李为 | 07c7ff3e4a | Merge branch 'weili/dev' of github.com:NexaAI/llama.cpp into weili/dev | 2024-12-03 15:00:34 +08:00
李为 | ca7e8ef19e | fix clip_n_patch() allocation size error for 81-series omni-vlm models | 2024-12-03 15:00:23 +08:00
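
The fix above concerns an image-embedding buffer that was sized with the wrong patch count for the 81-series omni-vlm models. A minimal sketch of how such a buffer is typically sized in llava-style code, assuming the upstream llama.cpp helpers clip_n_patches() and clip_n_mmproj_embd(); the fork's clip_n_patch() may differ:

```cpp
// Hypothetical sketch only; clip_n_patches()/clip_n_mmproj_embd() are the
// upstream llama.cpp CLIP helpers, declared here to keep the example
// self-contained.
#include <cstdlib>

struct clip_ctx;                                      // opaque CLIP context
int clip_n_patches(const struct clip_ctx * ctx);      // patches per image
int clip_n_mmproj_embd(const struct clip_ctx * ctx);  // projector output dim

static float * alloc_image_embed(const struct clip_ctx * ctx) {
    // Undersizing this product (e.g. using a hard-coded patch count instead
    // of the model-specific one) causes out-of-bounds writes for models whose
    // patch grid differs, which is the kind of error the commit above fixes.
    const size_t n = (size_t) clip_n_patches(ctx) * clip_n_mmproj_embd(ctx);
    return (float *) malloc(n * sizeof(float));
}
```
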
李为 | be54cb02ff | bug fix | 2024-12-03 11:47:28 +08:00
liwiii | 97267e60bd | bug fix in common-nexa.cpp: gguf_free(ctx_gguf) was called twice at L155; this typo does not appear in the apollo repos, so it is just a tiny but fatal typo. | 2024-12-03 11:36:59 +08:00
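
The bug described above is a plain double free. A hedged sketch of the pattern; gguf_init_from_file() and gguf_free() are the real ggml/llama.cpp API, while the surrounding code is illustrative and not the actual common-nexa.cpp:

```cpp
// In this era of the tree the gguf API is declared in ggml.h.
#include "ggml.h"

static void load_and_release(const char * fname) {
    struct gguf_init_params params = { /*no_alloc =*/ true, /*ctx =*/ nullptr };
    struct gguf_context * ctx_gguf = gguf_init_from_file(fname, params);
    if (ctx_gguf == nullptr) return;

    // ... read metadata / tensor info ...

    gguf_free(ctx_gguf);
    // gguf_free(ctx_gguf);  // a second call frees the same pointer again:
                             // undefined behaviour, typically a crash --
                             // the "tiny but fatal typo"
}
```
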
李为 | 71b563ec9a | Merge branch 'weili/dev' of github.com:NexaAI/llama.cpp into weili/dev | 2024-12-03 11:26:26 +08:00
T | 0b15d2d745 | fix conflicts (#32) | 2024-12-02 15:41:12 +08:00
Zack Li | 661b3f718c | Merge pull request #31 from NexaAI/teliu/dev: Upgrade to llama.cpp 74d73dc | 2024-12-01 22:40:13 -08:00
Te993 | a2c53052bd | merge from master | 2024-12-02 14:38:20 +08:00
Te993 | 809db95990 | upgrade to llama.cpp 74d73dc | 2024-12-02 14:24:50 +08:00
Yicheng Qian | 3479f516ea | update prompt template in wrapper | 2024-11-22 14:01:42 -08:00
Zack Li | 43f41a4c00 | Merge pull request #28 from NexaAI/zack/vlm: Zack/vlm | 2024-11-22 01:50:10 -08:00
zack Zhiyuan Li | fe8c7b45fd | revert CMakeList | 2024-11-22 09:08:44 +00:00
zack Zhiyuan Li | 460212ac2a | change template for inference | 2024-11-22 09:06:15 +00:00
zack Zhiyuan Li | bbf1aaa7ed | Merge remote-tracking branch 'origin' into zack/vlm | 2024-11-22 09:04:28 +00:00
李为 | 7589158595 | expose omni_context_params struct | 2024-11-21 20:44:49 +08:00
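
A hedged sketch of what exposing such a parameter struct through the C interface might look like; omni_context_params is named in the commit, but the field names and the default-params helper below are assumptions modeled on llama.cpp's llama_context_params pattern:

```cpp
// Hypothetical header fragment; field names are assumptions.
#ifdef __cplusplus
extern "C" {
#endif

struct omni_context_params {
    const char * model_path;    // path to the language-model GGUF
    const char * mmproj_path;   // path to the multimodal projector GGUF
    int          n_gpu_layers;  // layers to offload to the GPU
    int          n_ctx;         // context size in tokens
};

// Returning a fully initialised struct by value keeps the ABI simple for
// bindings (Python ctypes, etc.).
struct omni_context_params omni_context_default_params(void);

#ifdef __cplusplus
}
#endif
```
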
李为 | fd2c58286a | remove reference interface from extern C in qwen2audio examples | 2024-11-21 20:10:27 +08:00
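
C++ reference parameters cannot appear in a header consumed by C callers or ctypes-style bindings, which is presumably why the reference interface was removed; the usual replacement is a pointer. The names below are illustrative, not the fork's actual API:

```cpp
// Hypothetical example of the change: reference parameter -> pointer.
struct omni_params;

#ifdef __cplusplus
extern "C" {
#endif

// before: compiles as C++, but a C compiler (and ctypes) cannot use it
// void omni_apply_params(omni_params & params);

// after: plain C compatible
void omni_apply_params(struct omni_params * params);

#ifdef __cplusplus
}
#endif
```
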
Zack Li | fe792d62b1 | Merge pull request #26 from NexaAI/master: master -> master release | 2024-11-14 18:16:50 -08:00
Zack Li | 25190fefa2 | Merge pull request #25 from NexaAI/weili/master-release: fix all mem leaks of qwen2audio example | 2024-11-14 17:49:34 -08:00
李为 | e4ca946c48 | free omni_ctx heap-allocated space in the omni_free() API; the memory leaks in qwen2audio are now almost all fixed. | 2024-11-15 08:31:01 +08:00
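
A hedged sketch of what an omni_free() teardown that releases the heap-allocated context might look like; llama_free() and llama_free_model() are real llama.cpp calls, while the omni_context layout is an assumption:

```cpp
// Illustrative teardown: every allocation made during init gets a matching
// release here, including the wrapper struct itself.
#include <cstdlib>
#include "llama.h"

struct omni_context {
    struct llama_model   * model;
    struct llama_context * ctx_llama;
    // ... audio encoder, projector, scratch buffers, etc.
};

void omni_free(struct omni_context * ctx_omni) {
    if (ctx_omni == nullptr) return;
    llama_free(ctx_omni->ctx_llama);    // per-session KV cache and state
    llama_free_model(ctx_omni->model);  // model weights
    free(ctx_omni);                     // the heap-allocated wrapper itself --
                                        // the space this commit stops leaking
}
```
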
李为 | 8e2e630405 | fix mem leakage based on leaks tool (still WIP) | 2024-11-14 22:04:01 +08:00
李为 | aad0167bc3 | audio embedding free() (but still memory leakage detected) | 2024-11-14 14:50:49 +08:00
Zack Li | b9845b4f63 | Merge pull request #24 from NexaAI/weili/master-release: [memory leakage] fixed a leakage by projector free | 2024-11-13 17:16:39 -08:00
李为 | fc25544867 | [memory leakage] fixed a leakage by projector free | 2024-11-14 08:32:55 +08:00
Zack Li | bb33473f08 | Merge pull request #23 from NexaAI/david/vulkan2: fix vulkan build bug for external build | 2024-11-11 23:47:03 -08:00
Zack Li | 98297afbd5 | Merge pull request #22 from NexaAI/david/vulkan2: fix vulkan build bug for external build | 2024-11-11 23:46:51 -08:00
Yicheng Qian | 4e80184c32 | fix vulkan build bug for external build | 2024-11-11 23:35:11 -08:00
Zack Li | 89bcf5a6d9 | Merge pull request #21 from NexaAI/master: update master-release | 2024-11-11 23:17:58 -08:00
Zack Li | 82dbdbdb40 | Merge pull request #20 from NexaAI/weili/master-release: [omni-vlm] fixed the segmentation fault issue in nano-vlm-instruct (WIP) | 2024-11-11 22:36:57 -08:00
李为 | 55953d35a4 | [omni-vlm] fixed the segmentation fault issue in nano-vlm-instruct (WIP, current solution is still not perfect) | 2024-11-12 14:17:42 +08:00
Zack Li | 5f2d958492 | Merge pull request #19 from NexaAI/master: include latest vlm and audio lm changes | 2024-11-11 12:25:51 -08:00
Zack Li | 362bdf3292 | Merge pull request #18 from NexaAI/weili/master-release: [omni-vlm example] reset the model in every inference step to avoid nonsense output. | 2024-11-11 12:24:12 -08:00
李为 | 7cf07df5e2 | reset the model in every inference step to avoid nonsense output. | 2024-11-11 19:41:26 +08:00
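
One common way to reset a llama.cpp-based model between independent inference calls is to clear the KV cache so stale tokens cannot bleed into the next prompt; whether this fork's reset does more than that is an assumption. A minimal sketch:

```cpp
#include "llama.h"

// Clears cached tokens so a new prompt starts from a clean state;
// llama_kv_cache_clear() is the real llama.cpp call for this.
void reset_for_next_inference(struct llama_context * ctx) {
    llama_kv_cache_clear(ctx);
    // sampler / grammar state would also be re-created per request
}
```
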
Zack Li | 6f0e8c3ee6 | Create CODEOWNERS | 2024-11-09 18:56:57 -08:00
Zack Li | 21bc833273 | Merge pull request #17 from NexaAI/weili/master-release: fix OCR template error. | 2024-11-09 10:03:46 -08:00
李为 | d04e354f2f | fix OCR template error. | 2024-11-09 20:35:55 +08:00
Perry Cheng | 667a6d9838 | Merge pull request #16 from NexaAI/perry/android-dev: changed download models and nlen | 2024-11-08 15:23:54 -08:00
zhycheng614 | ecfe0b487f | changed download models and nlen | 2024-11-08 23:22:26 +00:00
Zack Li | d5df53658f | Merge pull request #14 from NexaAI/teliu/android/dev: Add submodule llava for android sample | 2024-11-08 13:25:00 -08:00
Zack Li | 8c417282d5 | Merge pull request #15 from NexaAI/weili/master-release: support all omni-vlm models in one omni-vlm/ folder. | 2024-11-08 13:23:46 -08:00
李为 | eb6d54679e | update README.md | 2024-11-08 22:05:57 +08:00
李为 | 3d9c63a3ff | remove omni-vlm-v2/ | 2024-11-08 21:00:42 +08:00
李为 | 16c22471e8 | remove redundant omni-vlm-v2/ folder, all omni-vlm examples will be added to omni-vlm/ folder. | 2024-11-08 20:59:23 +08:00
liute110 | b17684efb3 | add include llava.h | 2024-11-08 16:07:50 +08:00
liute110 | 400fc2a4b0 | add one more model | 2024-11-08 16:06:37 +08:00
liute110 | 86c2233a38 | add submodule llava for android | 2024-11-08 16:02:45 +08:00
Zack Li | df5841b6b8 | Merge pull request #13 from NexaAI/weili/master-release: add omni-vlm-v2 implementations (C++ & python) | 2024-11-07 00:48:21 -08:00
李为 | 3dfac7817f | add returned string type (const char*) for nexa-omni-audio | 2024-11-07 16:13:53 +08:00
Zack Li | 20b9f02cee | Merge pull request #12 from NexaAI/weili/master-release: add returned string type (const char*) for nexa-omni-audio | 2024-11-06 19:28:46 -08:00
李为 | 5edadffd88 | add returned string type (const char*) for nexa-omni-audio | 2024-11-07 11:19:50 +08:00
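
Returning const char* from an extern "C" inference call keeps bindings simple, but the pointer must stay valid after the call returns. A hedged sketch of one common ownership scheme (a per-context string that lives until the next call); the function name omni_process_full and the struct layout are assumptions:

```cpp
#include <string>

// Hypothetical context holding the last generated text.
struct omni_context {
    std::string last_output;
    // ... model handles, etc.
};

extern "C" const char * omni_process_full(struct omni_context * ctx,
                                          const char * prompt) {
    ctx->last_output.clear();
    // ... run inference, appending generated text to ctx->last_output ...
    (void) prompt;
    // Ownership stays with the library; the pointer is valid until the next
    // call on the same context, so bindings (e.g. Python ctypes) just copy it.
    return ctx->last_output.c_str();
}
```
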