
Conversation

infinitalo

This PR is a work in progress.

The current commits get Q8_0 inference working on the Adreno 830 (Samsung S25), but finetuning still crashes.

We're currently working on a fix for LoRA finetuning on the Adreno 830, but you can use this branch for testing in the meantime.

makaveli10 and others added 20 commits August 19, 2025 10:07
This fixes the vkDeviceLostError on Mali
@infinitalo infinitalo force-pushed the italo/tether/adreno_q8_inference branch 2 times, most recently from cbea88f to 208747f on September 1, 2025 14:13
@infinitalo
Author

infinitalo commented Sep 1, 2025

Steps to run the backend-ops test suite:

  1. Set up your Android environment for testing llama.cpp. You can use this comment as a reference if you haven't built it already: Add initial LoRA finetuning support; vulkan OUT_PROD; vulkan cross-entropy-backward #5 (comment)
  2. Configure your build with: cmake -B build -DGGML_VULKAN=1 -DCMAKE_BUILD_TYPE=Debug -DBUILD_TESTING=ON
  3. Build llama.cpp: cmake --build build --config Debug -j2
  4. Run the backend-ops tests: ./build/bin/test-backend-ops
  5. You can also run tests for specific operators with the -o option, for example: ./build/bin/test-backend-ops -o MUL_MAT (see the consolidated snippet below).
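
For convenience, steps 2–5 are collected here as a single shell snippet (assuming the Android environment from step 1 is already set up and you are in the llama.cpp source tree; adjust paths as needed):

```sh
# Configure a Debug build with the Vulkan backend and tests enabled
cmake -B build -DGGML_VULKAN=1 -DCMAKE_BUILD_TYPE=Debug -DBUILD_TESTING=ON

# Build llama.cpp
cmake --build build --config Debug -j2

# Run the full backend-ops suite, or restrict it to a single operator
./build/bin/test-backend-ops
./build/bin/test-backend-ops -o MUL_MAT
```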

This PR includes a commit that disables several tests for quantized data types that do not currently work properly on the Adreno 830.

If you run the test suite as described above on this branch, it should report 2/2 backends passing at the end, with no failing tests on the A830, as the attached log shows.

test_adreno_q8_inf2.txt
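
To capture the full suite output to a file like the one attached above, a plain redirect is enough (the log filename here is just an example):

```sh
# Save stdout and stderr from the full run for sharing/attaching
./build/bin/test-backend-ops > test_adreno_q8_inference.log 2>&1
```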

jpgaribotti pushed a commit that referenced this pull request Sep 10, 2025
* oai moe

* compat with new checkpoint

* add attn sink impl

* add rope scaling yarn

* logits match with latest transformers code

* wip chat template

* rm trailing space

* use ggml_scale_bias

* rm redundant is_swa_all

* convert interleaved gate_up

* graph : fix activation function to match reference (#7)

* vocab : handle o200k_harmony special tokens

* ggml : add attention sinks support (#1)

* llama : add attn sinks

* ggml : add attn sinks

* cuda : add attn sinks

* vulkan : add support for sinks in softmax

remove unnecessary return

* ggml : add fused swiglu_oai op (#11)

* ggml : add fused swiglu_oai op

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Georgi Gerganov <[email protected]>

* update CUDA impl

* cont : metal impl

* add vulkan impl

* test-backend-ops : more test cases, clean up

* llama : remove unfused impl

* remove extra lines

---------

Co-authored-by: Georgi Gerganov <[email protected]>

---------

Co-authored-by: slaren <[email protected]>

* repack mxfp4 upon conversion

* clean up a bit

* enable thinking

* add quick hack to render only some special tokens

* fix bf16 conversion

* remove vocab hack

* webui ok

* support chat parsing for gpt-oss

* fix webui

* direct mapping mxfp4, FINALLY

* force using mxfp4

* properly use lazy tensor

* ggml : add mxfp4

ggml : use e8m0 conversion instead of powf

Co-authored-by: Diego Devesa <[email protected]>

change kvalues_mxfp4 table to match e2m1 (#6)

metal : remove quantization for now (not used)

cuda : fix disabled CUDA graphs due to ffn moe bias

vulkan : add support for mxfp4

cont : add cm2 dequant

* ggml : add ggml_add_id (#13)

* ggml : add ggml_add_id

* add cuda impl

* llama : add weight support check for add_id

* perf opt

* add vulkan impl

* rename cuda files

* add metal impl

* allow in-place ggml_add_id

* llama : keep biases on CPU with --cpu-moe

* llama : fix compile error

ggml-ci

* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw

ggml-ci

* cleanup

ggml-ci

* sycl : fix supports_op for MXFP4

ggml-ci

* fix Unknown reasoning format

* ggml-cpu : fix AVX build

ggml-ci

* fix hip build

ggml-ci

* cuda : add mxfp4 dequantization support for cuBLAS

ggml-ci

* ggml-cpu : fix mxfp4 fallback definitions for some architectures

ggml-ci

* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw

---------

Co-authored-by: Xuan Son Nguyen <[email protected]>
Co-authored-by: slaren <[email protected]>