Conversation

@Pranaykarvi (Contributor) commented Aug 5, 2025

📝 Description

This PR addresses Issue #2579, where calling the reciprocal MIL op with int32 inputs causes an unexpected crash during SSA conversion.

🔧 Problem

Using mb.reciprocal(x=int32_tensor) leads to a runtime crash because the reciprocal operation is invalid for int32 types. The crash occurs deep within MIL SSA conversion without a clear error message, making it hard to debug.


✅ Fix

  • Added an explicit ValueError in the type_inference method of the reciprocal op for int32 inputs (a sketch follows this list).
  • This change provides clear, early feedback instead of an opaque internal MIL crash.
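
A minimal sketch of the added guard, assuming the op exposes its input as self.x and uses MIL's types.is_int helper (illustrative only; see the diff for the exact code):

```python
# Illustrative sketch of the guard in reciprocal's type_inference
# (coremltools/converters/mil/mil/ops/defs/elementwise.py); the
# surrounding class is elided and the exact wording may differ.
def type_inference(self):
    if types.is_int(self.x.dtype):
        raise ValueError(
            "reciprocal does not support int32 inputs; cast the input "
            "to a float dtype before calling reciprocal."
        )
    return self.x.sym_type
```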

🧪 Tests

  • ✅ New unit test added in test_inverse_conversion.py to verify the ValueError is raised correctly (a sketch follows this list).
  • ✅ Existing tests pass with pytest coremltools/test/test_inverse_conversion.py.
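
For context, a hedged sketch of the test's shape (the actual body in test_inverse_conversion.py may differ; mb.reciprocal follows the op name used in the description above):

```python
# Hypothetical sketch of the new test; the real assertions may differ.
import pytest
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.mil import types

def test_reciprocal_rejects_int32():
    with pytest.raises(ValueError):
        # Building the program runs type_inference, which should now
        # raise for the int32 input instead of crashing later in SSA
        # conversion. Op name per the PR description; the registered
        # MIL op may be spelled `inverse`.
        @mb.program(input_specs=[mb.TensorSpec(shape=(4,), dtype=types.int32)])
        def prog(x):
            return mb.reciprocal(x=x)
```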

🧠 Files Changed

  • coremltools/converters/mil/mil/ops/defs/elementwise.py
    ➤ Added type check and exception for int32 inputs in reciprocal.type_inference.

  • coremltools/test/test_inverse_conversion.py
    ➤ Added test case to ensure invalid int32 input raises ValueError.


🔗 Related Issue

Closes #2579

@Pranaykarvi (Contributor, Author) commented

Hey @junpeiz!
Could you please review my PR #2580? Thanks! 😊

@Pranaykarvi changed the title from "Fix reciprocal op crash on int32 input by raising ValueError (Fixes #2047)" to "Fix reciprocal op crash on int32 input by raising ValueError (Fixes #2579)" on Aug 5, 2025
@junpeiz (Collaborator) commented Aug 5, 2025

Thank you @Pranaykarvi for contributing! I will let this week's oncaller (@abhishek-patel6) take a look.

@junpeiz requested a review from @abhishek-patel6 on August 5, 2025 at 20:19
@abhishek-patel6 (Collaborator) commented

Hi @Pranaykarvi, thank you for the patch! I think the conversion already throws an error. The original issue expects the torch compiler to implicitly convert to float. Probably something like:

```python
def log1p(context, node):
    inputs = _get_inputs(context, node, expected=1)
    x = inputs[0]
    if types.is_int(x.dtype):
        x = mb.cast(x=x, dtype="fp32")
    # mb.log(x, epsilon=1.0) computes log(x + 1.0), i.e. log1p
    context.add(mb.log(x=x, epsilon=1.0, name=node.name))
```

@Pranaykarvi (Contributor, Author) commented

Thanks @abhishek-patel6!

I checked ops.py, and unlike log1p, there’s no frontend handler for inverse or reciprocal. The op is introduced internally during MIL lowering (e.g., for 16 / x.shape[0]), so casting can't happen in the frontend.

That’s why I added the check directly in the MIL op to raise a clear ValueError — it’s the only reliable place to catch this issue. Let me know if you’d prefer an alternate approach!

@abhishek-patel6 (Collaborator) commented

> Thanks @abhishek-patel6!
>
> I checked ops.py, and unlike log1p, there’s no frontend handler for inverse or reciprocal. The op is introduced internally during MIL lowering (e.g., for 16 / x.shape[0]), so casting can't happen in the frontend.
>
> That’s why I added the check directly in the MIL op to raise a clear ValueError — it’s the only reliable place to catch this issue. Let me know if you’d prefer an alternate approach!

I think there is a handler for inverse:

```python
@register_torch_op
def reciprocal(context, node):
    inputs = _get_inputs(context, node, expected=1)
    context.add(mb.inverse(x=inputs[0], name=node.name))
```

Anyway, the linked issue requests implicit conversion to float, IIUC. Throwing an error earlier still wouldn't fix their original issue.
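
For reference, here is a minimal sketch of that suggestion applied to the reciprocal handler, mirroring the log1p cast pattern quoted earlier (illustrative only, not code from this PR):

```python
# Hypothetical variant of the torch reciprocal handler that implicitly
# promotes integer inputs to fp32 before mb.inverse, as the linked
# issue requests. Not part of this PR's diff.
@register_torch_op
def reciprocal(context, node):
    inputs = _get_inputs(context, node, expected=1)
    x = inputs[0]
    if types.is_int(x.dtype):
        x = mb.cast(x=x, dtype="fp32")
    context.add(mb.inverse(x=x, name=node.name))
```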
