Update fp8 paged attention #592
Draft: amd-xiaoyu12 wants to merge 890 commits into ROCm:main from amd-xiaoyu12:fp8-paged-attention
Conversation
* Enabling ROCm CI on MI250 machines: correct build target; correct queue. Signed-off-by: Alexei V. Ivanov <[email protected]>
* Optimization for quantized gemm skinny sizes: add support for bf16/fp16; move the logic into tuned gemm to preserve API compatibility; lint fixes and code cleanup. Co-authored-by: Gregory Shtrasberg <[email protected]>
* Removing gfx940 and gfx941 targets (deprecated in favor of gfx942 for MI300X), including from the custom kernels. Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
* Advance the torch commit past pytorch/pytorch#144942 to fix tunable ops; use the submodule commit compatible with the main aiter commit.
Signed-off-by: Sage Moore <[email protected]>
Upstream merge 2025 02 24
* Use an aiter branch that can be built into a whl with PREBUILD_KERNELS=1; fail fast on the aiter build to surface compilation errors in the log, since the build otherwise fails silently; check for build success without installing the whl.
* Use the proposed fix from ROCm/aiter#115; build fix.
* Tuning adjustment for quantized skinny gemm; lint fix.
)" This reverts commit 8294773.
Upstream merge 2025 03 03
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: Sage Moore <[email protected]>
Upstream merge 2025 06 23
Upstream merge 2025 06 25
Upstream merge 2025 06 30
* Updated README.md for the June 24 Docker release; added additional throughput results; fixed some throughput results.
Please direct your PRs to the upstream vLLM repository (https://github.com/vllm-project/vllm.git). Accepting PRs into the ROCm fork (https://github.com/ROCm/vllm) requires a clear, previously communicated exception.
Summary:
Support full fp8 MFMA with warp-level dynamic query quantization to improve fp8 performance on MI308; this also benefits other MI300-series accelerators and newer hardware.
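For readers unfamiliar with the technique, the sketch below shows the general idea of warp-level dynamic quantization: each warp reduces one query row to its absolute maximum with shuffle intrinsics, derives a per-row scale at runtime, and casts the scaled row to fp8 e4m3 so the attention matmul can run entirely in fp8. This is a minimal CUDA illustration, not the HIP kernel from this PR; the kernel name, `FP8_E4M3_MAX` constant, and row-per-warp layout are all assumptions for the example.

```cuda
// Minimal sketch of warp-level dynamic fp8 query quantization.
// Assumptions: quantize_query_fp8 and the row-per-warp layout are
// illustrative only, not the kernel used in this PR.
#include <cuda_fp8.h>
#include <cuda_runtime.h>
#include <cstdio>

constexpr float FP8_E4M3_MAX = 448.0f;  // largest finite e4m3 magnitude
constexpr int   WARP_SIZE    = 32;

__global__ void quantize_query_fp8(const float* __restrict__ q,
                                   __nv_fp8_e4m3* __restrict__ q_fp8,
                                   float* __restrict__ scales,
                                   int head_dim) {
  const int row  = blockIdx.x;   // one query row per block (one warp)
  const int lane = threadIdx.x;  // blockDim.x == WARP_SIZE
  const float* row_ptr = q + (size_t)row * head_dim;

  // Per-lane abs-max over a strided slice of the row.
  float amax = 0.0f;
  for (int i = lane; i < head_dim; i += WARP_SIZE)
    amax = fmaxf(amax, fabsf(row_ptr[i]));

  // Butterfly shuffle reduction: every lane ends with the row abs-max.
  for (int off = WARP_SIZE / 2; off > 0; off >>= 1)
    amax = fmaxf(amax, __shfl_xor_sync(0xffffffffu, amax, off));

  // Dynamic per-row scale so the row just fits the e4m3 range.
  const float scale = fmaxf(amax, 1e-10f) / FP8_E4M3_MAX;
  if (lane == 0) scales[row] = scale;  // kept for dequant after the matmul

  // Quantize: divide by the scale, convert to fp8 e4m3.
  for (int i = lane; i < head_dim; i += WARP_SIZE)
    q_fp8[(size_t)row * head_dim + i] = __nv_fp8_e4m3(row_ptr[i] / scale);
}

int main() {
  const int rows = 4, head_dim = 128;
  float *q, *scales; __nv_fp8_e4m3 *q_fp8;
  cudaMallocManaged(&q, rows * head_dim * sizeof(float));
  cudaMallocManaged(&q_fp8, rows * head_dim * sizeof(__nv_fp8_e4m3));
  cudaMallocManaged(&scales, rows * sizeof(float));
  for (int i = 0; i < rows * head_dim; ++i) q[i] = 0.01f * (i % 97) - 0.3f;

  quantize_query_fp8<<<rows, WARP_SIZE>>>(q, q_fp8, scales, head_dim);
  cudaDeviceSynchronize();
  for (int r = 0; r < rows; ++r) printf("row %d scale %g\n", r, scales[r]);
  return 0;
}
```

The point of computing the scale dynamically per row, rather than using a static per-tensor scale, is that each query row uses the full e4m3 range regardless of outliers elsewhere in the batch; the cost is one warp reduction per row and one extra scale value carried through to dequantization.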