forked from ggml-org/llama.cpp
Pull requests: tetherto/qvac-ext-lib-llama.cpp
Open pull requests:
#16 Quality and Speed tuning scripts
Labels: python, script, tier1
Opened Sep 17, 2025 by jesusmb1995
#11 WIP: llama: Vulkan: Fix Adreno Q8_0 issues.
Labels: examples, ggml, Nvidia GPU, testing, Vulkan
Opened Aug 29, 2025 by infinitalo
#6 Draft: Save resume lora ckpt
Labels: examples, ggml, Nvidia GPU, Vulkan
Opened Aug 26, 2025 by makaveli10