Conversation

@amd-hhashemi commented on Feb 22, 2025

Please direct your PRs to the upstream vLLM repository (https://github.com/vllm-project/vllm.git).

Accepting PRs into the ROCm fork (https://github.com/ROCm/vllm) will require a clear, previously communicated exception.

@gshtras merged commit fd70f59 into ROCm:main on Feb 28, 2025
1 check passed

gshtras pushed a commit that referenced this pull request on Mar 12, 2025:
* fix weight shuffle
* remove ck_moe_fused_stage2's weight_scale expand
* rename
* Properly disabling aiter components (see the sketch after this list)
* Aiter whl fix branch (#452)
  * Using proposed fix from ROCm/aiter#115
  * Build fix
* tuning adjustment for quantized skinny gemm (#444)
  * tuning adjustment for quantized skinny gemm
  * lint fix
* Flipping the default switch once more
* Revert "Merge branch 'aiter_integration_final' into aiter_integration_ck_fused_moe"
  This reverts commit df5f297, reversing changes made to cdeb54e.
* Reapply "Merge branch 'aiter_integration_final' into aiter_integration_ck_fused_moe"
  This reverts commit f28b89b.
* aiter 2stage moe branch modify
* fix for Mixtral-8x7B-Instruct-v0.1-FP8-KV on MI325
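
Several entries above revolve around switching the aiter kernels on and off ("Properly disabling aiter components", "Flipping the default switch once more"). Below is a minimal sketch of what such an opt-in toggle typically looks like; the flag name `VLLM_USE_AITER`, the helper names, and the fallback dispatch are assumptions for illustration, not the fork's actual API.

```python
# Minimal sketch, not the fork's actual implementation: gate an optional
# aiter-backed kernel path behind an environment flag with a safe fallback.
# The flag name VLLM_USE_AITER and all helper names are assumptions.
import os
from functools import lru_cache


@lru_cache(maxsize=1)
def aiter_enabled() -> bool:
    """True only when aiter is explicitly requested and importable."""
    if os.environ.get("VLLM_USE_AITER", "0") != "1":  # assumed flag name
        return False
    try:
        import aiter  # noqa: F401  # optional ROCm kernel library
        return True
    except ImportError:
        return False


def fused_moe(hidden_states, w1, w2, topk_weights, topk_ids):
    """Dispatch to the aiter path when enabled, else the stock path."""
    if aiter_enabled():
        return _aiter_fused_moe(hidden_states, w1, w2, topk_weights, topk_ids)
    return _default_fused_moe(hidden_states, w1, w2, topk_weights, topk_ids)


def _aiter_fused_moe(hidden_states, w1, w2, topk_weights, topk_ids):
    raise NotImplementedError("placeholder for the aiter-backed kernel call")


def _default_fused_moe(hidden_states, w1, w2, topk_weights, topk_ids):
    raise NotImplementedError("placeholder for the stock implementation")
```

Caching the flag check and falling back when the import fails keeps the default build unaffected when the optional dependency is absent, which matches the pattern of repeatedly enabling and disabling the components seen in the commit list.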