Conversation

yuejiaointel (Contributor) commented Jan 28, 2025

Description

Add a comprehensive description of proposed changes

List associated issue number(s) if they exist: #6 (for example)

Documentation PR (if needed): #1340 (for example)

Benchmarks PR (if needed): IntelPython/scikit-learn_bench#155 (for example)


The PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are checked.
This approach ensures that reviewers don't spend extra time asking for standard requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a docs-only PR doesn't require the performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes or created a separate PR with the update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least a summary table with the measured data, if a performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended the benchmarking suite and provided a corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

icfaust (Contributor) commented Jan 28, 2025

@yuejiaointel you were on the right path before the reverts. The design pattern test issues are to be expected. This was a PR towards which I did a little bit of testing: https://github.com/uxlfoundation/scikit-learn-intelex/pull/2213/files. Please message me with questions to save time.

icfaust (Contributor) commented Jan 28, 2025

The design pattern tests key off of to_table to show that onedal rather than daal4py was run, but the design of kneighbors is so wrong that those tests now show that an incorrect ordering of operations is occurring with validate_data etc. You'll have to either deselect them and make a new ticket, or fix them in this PR.
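
For readers unfamiliar with these tests, here is a toy sketch of the pattern they rely on: patch the backend conversion function, run the estimator call, and assert the patch was reached. FakeBackend, kneighbors, and test_backend_conversion_ran are made-up names for illustration only; this is not the project's actual test code.

```python
# Toy illustration of "keying off" a backend conversion function in a test.
# FakeBackend and kneighbors below are stand-ins, not sklearnex code.


class FakeBackend:
    @staticmethod
    def to_table(X):
        # Stand-in for the real onedal table conversion.
        return [list(row) for row in X]


def kneighbors(X, backend=FakeBackend):
    # The dispatch point a design-pattern test wants to observe.
    return backend.to_table(X)


def test_backend_conversion_ran(monkeypatch):
    calls = []
    original = FakeBackend.to_table

    def tracking_to_table(X):
        calls.append(len(X))  # record that the conversion was reached
        return original(X)

    monkeypatch.setattr(FakeBackend, "to_table", tracking_to_table)
    kneighbors([[0.0, 1.0], [1.0, 0.0]])
    assert calls, "backend conversion never ran (fallback path used instead?)"
```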

codecov bot commented Jan 28, 2025

Codecov Report

❌ Patch coverage is 94.73684% with 1 line in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| onedal/neighbors/neighbors.py | 94.73% | 0 Missing and 1 partial ⚠️ |

| Flag | Coverage Δ |
|---|---|
| azure | 80.31% <94.73%> (-0.35%) ⬇️ |
| github | 72.91% <94.73%> (-0.24%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
|---|---|
| onedal/neighbors/neighbors.py | 76.90% <94.73%> (-5.84%) ⬇️ |

... and 4 files with indirect coverage changes


yuejiaointel (Contributor, Author) commented:

> The design pattern tests key off of to_table to show that onedal rather than daal4py was run, but the design of kneighbors is so wrong that those tests now show that an incorrect ordering of operations is occurring with validate_data etc. You'll have to either deselect them and make a new ticket, or fix them in this PR.

Hi Ian,
Thx for the pointers, I can replicate the error locally and I plan to fix it in this PR, and will ask questions later : D
Best
Yue

yuejiaointel marked this pull request as ready for review January 31, 2025 06:11
yuejiaointel marked this pull request as draft January 31, 2025 06:27
yuejiaointel (Contributor, Author) commented:

/intelci: run ml_benchmarks

yuejiaointel (Contributor, Author) commented:

/intelci: run

yuejiaointel (Contributor, Author) commented:

/intelci: run

yuejiaointel (Contributor, Author) commented:

/intelci: run

yuejiaointel (Contributor, Author) commented:

/intelci: run

yuejiaointel (Contributor, Author) commented:

/intelci: run

yuejiaointel marked this pull request as draft September 22, 2025 16:00
yuejiaointel force-pushed the replace_daal4py_with_pybind11_obj_knn branch from dca1be3 to 480dd6f on September 24, 2025 22:03
yuejiaointel (Contributor, Author) commented:

/intelci: run

yuejiaointel marked this pull request as ready for review September 25, 2025 00:43
yuejiaointel (Contributor, Author) commented:

CI failures look like they come from this PR:

self = array([[5, 3, 1, 0],
       [4, 3, 1, 0],
       [4, 3, 1, 0],
       [4, 3, 1, 0],
       [5, 3, 1, 0],
       [5, 3,...
       [6, 3, 5, 2],
       [6, 2, 5, 1],
       [6, 3, 5, 2],
       [6, 3, 5, 2],
       [5, 3, 5, 1]], dtype=uint8)
dtype = dtype('float64')

    def __array__(self, dtype=None, /, *, copy=None):
>       raise TypeError(
            "Implicit conversion to a NumPy array is not allowed. "
            "Please use `.asnumpy()` to construct a NumPy array explicitly."
        )
E       TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.asnumpy()` to construct a NumPy array explicitly.

/usr/share/miniconda/envs/CB/lib/python3.11/site-packages/dpnp/dpnp_array.py:189: TypeError
_ test_special_estimator_patching[NearestNeighbors(algorithm='brute')-kneighbors_graph-dpnp-SyclQueue_CPU-uint16] _

Thanks for reviewing it! Fixed now. I added some validate_data check tests to the violation array; these will be added back based on array API rules in the follow-up PR.
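
For reference, a minimal sketch of this failure mode and the explicit-conversion fix (assuming dpnp is installed and a SYCL device is available; this is an illustration, not the failing test itself):

```python
import dpnp
import numpy as np

# A small dpnp array standing in for the uint8 data in the failing test.
x = dpnp.arange(6, dtype=dpnp.uint8).reshape(3, 2)

# np.asarray(x) would hit dpnp's __array__ guard and raise:
#   TypeError: Implicit conversion to a NumPy array is not allowed. ...

# The explicit conversion the error message asks for:
x_np = dpnp.asnumpy(x)  # equivalently: x.asnumpy()
assert isinstance(x_np, np.ndarray)
```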

yuejiaointel (Contributor, Author) commented:

/intelci: run

icfaust (Contributor) left a comment

You'll need to temporarily re-add check_feature_names to the various sklearnex estimators. It'll get replaced with validate_data in the array API PR. This is for scikit-learn conformance in private CI.

So far this PR looks great, and it looks like it's going to be merged quite soon.
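
For context, a rough sketch of what such a feature-name check does and where the call sits is below; check_feature_names here is a hypothetical stand-in written for illustration, not the helper actually used in sklearnex (whose import path and exact behavior are not shown).

```python
import numpy as np


def check_feature_names(estimator, X, *, reset):
    """Record feature names at fit time (reset=True), verify them afterwards."""
    names = getattr(X, "columns", None)  # present for pandas-like inputs
    if reset:
        if names is not None:
            estimator.feature_names_in_ = np.asarray(names, dtype=object)
        return
    seen = getattr(estimator, "feature_names_in_", None)
    if seen is not None and names is not None and list(seen) != list(names):
        raise ValueError(
            "The feature names should match those that were passed during fit."
        )


# Usage pattern inside an estimator (hypothetical):
#   def fit(self, X, y=None):
#       check_feature_names(self, X, reset=True)
#       ...
#   def kneighbors(self, X=None, ...):
#       if X is not None:
#           check_feature_names(self, X, reset=False)
#       ...
```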

yuejiaointel (Contributor, Author) commented:

/intelci: run

yuejiaointel (Contributor, Author) commented:

> You'll need to temporarily re-add check_feature_names to the various sklearnex estimators. It'll get replaced with validate_data in the array API PR. This is for scikit-learn conformance in private CI.
>
> So far this PR looks great, and it looks like it's going to be merged quite soon.

thx, added!
