
llama.cpp: server-cuda-b6503 (Public, Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b6503
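Once pulled, the server image can be started with GPU access. The following is a minimal sketch, not taken from this page: it assumes the NVIDIA Container Toolkit is installed, that a GGUF model exists on the host at /path/to/models/model.gguf, and that the server should listen on port 8080.

$ docker run --gpus all \
    -v /path/to/models:/models \
    -p 8080:8080 \
    ghcr.io/ggml-org/llama.cpp:server-cuda-b6503 \
    -m /models/model.gguf --host 0.0.0.0 --port 8080 --n-gpu-layers 99

The flags after the image name are passed to llama-server inside the container; --n-gpu-layers 99 offloads all layers to the GPU, which is the usual choice for the CUDA image.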

Recent tagged image versions

  • Published about 23 hours ago · Digest sha256:016b9ba707007400cc5959f61981a864e696f28f657d7e4d492c54fe765db4ab · 535 version downloads
  • Published about 23 hours ago · Digest sha256:0857e00d1df3d6a4b28ca337421cd14bd0249e2e9ccafe889997b439122f3089 · 6 version downloads
  • Published about 23 hours ago · Digest sha256:c0d1a878083a62d5c6ab77c64cc59651df05b93fbcead480e481827e127b73fc · 131 version downloads
  • Published about 23 hours ago · Digest sha256:ceb1a02d8cf4187564036402d3fb991ad036d54e22b461aca89c61ddf751d5aa · 6 version downloads
  • Published about 23 hours ago · Digest sha256:ce4907627407d98a9bc55e05133b4d03a93a9866657d3921290c058c19eb2ce9 · 0 version downloads
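
For reproducible deployments, an image can also be pulled by digest instead of by tag, so the exact build is pinned even if the tag later moves. As an illustration using the first digest listed above:

$ docker pull ghcr.io/ggml-org/llama.cpp@sha256:016b9ba707007400cc5959f61981a864e696f28f657d7e4d492c54fe765db4ab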


Details

  • Last published: 23 hours ago
  • Discussions: 2.58K
  • Issues: 887
  • Total downloads: 481K