Description
OpenVINO Version
2025.2.0
Operating System
Other (Please specify in description)
Device used for inference
GPU
Framework
ONNX
Model used
https://github.com/Peterande/D-FINE
Issue description
Inference on GPU using OpenVINO produces identical/blank results. However, if I run the same model on the CPU, the results are valid. No matter what image I use as input, the outputs stay the same. I'm running on Ubuntu 22.04.1 using the Docker image openvino/ubuntu22_runtime:2025.2.0.
Am I doing something wrong?
Step-by-step reproduction
- Run the Docker container (a quick device-visibility check is sketched after these steps):
sudo docker run --privileged -it --rm -v /dev:/dev --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -u $(id -u):$(id -g) openvino/ubuntu22_runtime:2025.2.0 bash
- Clone the repo and install requirements.
- Download and export the D-FINE model (checked m and l sizes), .pth -> ONNX (or .pth -> ONNX -> IR, .xml + .bin; see the conversion sketch after these steps), using the script:
python3 ./tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_m_obj2coco.yml --simplify -r dfine_m_obj2coco.pth
- Run inference, setting device_name to GPU, using the script (a standalone CPU-vs-GPU comparison is sketched after these steps):
python3 ./tools/inference/openvino_inf.py --ov_model ./dfine_m_obj2coco/dfine_m_obj2coco.onnx --image img.png
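
Since the failure is GPU-specific, it may be worth confirming first that the GPU plugin actually sees the device inside the container. A minimal sketch using the standard OpenVINO Python API (no assumptions beyond the openvino package shipped in the image):

```python
import openvino as ov

# List the devices the OpenVINO runtime can see inside the container.
# If "GPU" is missing, the /dev/dri mapping or group permissions from the
# docker run command above are the first thing to check.
core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU']
if "GPU" in core.available_devices:
    print(core.get_property("GPU", "FULL_DEVICE_NAME"))
```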
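
For the optional ONNX -> IR step, a minimal sketch using OpenVINO's model conversion API (the path matches the one in the inference command; everything else is the stock API):

```python
import openvino as ov

# Convert the exported ONNX model to OpenVINO IR (.xml + .bin).
ov_model = ov.convert_model("./dfine_m_obj2coco/dfine_m_obj2coco.onnx")

# Note: save_model compresses weights to FP16 by default; pass
# compress_to_fp16=False to keep FP32 weights.
ov.save_model(ov_model, "./dfine_m_obj2coco/dfine_m_obj2coco.xml")
```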
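
To rule out the D-FINE inference script itself, here is a standalone sketch that compiles the same model for CPU and GPU and compares the raw outputs on one fixed feed. The input names and shapes ("images", "orig_target_sizes", 1x3x640x640) are assumptions about the exported model; adjust them to whatever model.inputs reports:

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("./dfine_m_obj2coco/dfine_m_obj2coco.onnx")
print([(i.get_any_name(), i.get_partial_shape()) for i in model.inputs])

# Assumed inputs (hypothetical names/shapes -- check model.inputs above).
feeds = {
    "images": np.random.rand(1, 3, 640, 640).astype(np.float32),
    "orig_target_sizes": np.array([[640, 640]], dtype=np.int64),
}

# The same feed should produce matching outputs on both devices; constant
# scores on GPU only (as in the logs below) would point at the GPU plugin.
for device in ("CPU", "GPU"):
    compiled = core.compile_model(model, device)
    result = compiled(feeds)
    for out in compiled.outputs:
        print(device, out.get_any_name(), np.asarray(result[out]).ravel()[:6])
```

As a side note, the GPU plugin may execute in f16 by default; compiling with core.compile_model(model, "GPU", {"INFERENCE_PRECISION_HINT": "f32"}) is one way to check whether precision accounts for the difference.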
Relevant log output
On CPU:
scores: [0.95673555 0.93977463 0.88885874 0.62019074 0.4028598 ...]
labels: [1 16 7 2 3 58 58 1 58 3 0 58 ...]
On GPU:
scores: [0.05419922 0.05419922 0.05419922 0.05419922 0.05419922 0.05419922 ...]
labels: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...]
Issue submission checklist
- I'm reporting an issue. It's not a question.
- I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
- There is reproducer code and related data files such as images, videos, models, etc.