
Commit b9f1fb6

Document the new max GPU layers default in help

This is a key change, just letting users know.

Signed-off-by: Eric Curtin <[email protected]>

1 parent 2c8dac7

File tree: 1 file changed, +1 −1 lines changed


common/arg.cpp (1 addition, 1 deletion)
@@ -2466,7 +2466,7 @@ common_params_context common_params_parser_init(common_params & params, llama_ex
     ).set_examples({LLAMA_EXAMPLE_SPECULATIVE, LLAMA_EXAMPLE_SERVER}).set_env("LLAMA_ARG_N_CPU_MOE_DRAFT"));
     add_opt(common_arg(
         {"-ngl", "--gpu-layers", "--n-gpu-layers"}, "N",
-        "number of layers to store in VRAM",
+        string_format("number of layers to store in VRAM (default: %d, 999 = max layers)", params.n_gpu_layers),
         [](common_params & params, int value) {
             params.n_gpu_layers = value;
             if (!llama_supports_gpu_offload()) {
