Port xenia-canary vulkan resolution scaling to master #2315
Conversation
Largely similar to the D3D12 implementation, but with simpler buffer management and no mip scaling. Tested on Linux with 2x2 and 3x3 running smoothly.
- EDRAM scaling and tile dimension scaling
- Resolution scaling inversion and coordinate division by the scale factor
- Added a center pixel check to prevent duplicate exports
- Reverted the texture_cache.cc change (don't need to touch that file)
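To make the coordinate division and the center pixel check concrete, here is a minimal sketch of the idea (hypothetical helper names, not the actual PR code): with an NxN scale factor each guest pixel covers NxN host pixels, so only one host pixel per guest pixel, roughly the center one, should perform the memory export, and exported coordinates are divided by the scale factor to get back to guest space.

```cpp
#include <cstdint>

// Hypothetical helper: with an NxN resolution scale, a single guest pixel
// covers scale_x * scale_y host pixels. To avoid exporting the same guest
// value multiple times, only the host pixel closest to the center of the
// guest pixel performs the memory export.
bool IsCenterHostPixel(uint32_t host_x, uint32_t host_y,
                       uint32_t scale_x, uint32_t scale_y) {
  // Position of this host pixel inside its guest pixel.
  uint32_t sub_x = host_x % scale_x;
  uint32_t sub_y = host_y % scale_y;
  // Pick the host pixel at (or just past) the center of the guest pixel.
  return sub_x == scale_x / 2 && sub_y == scale_y / 2;
}

// Hypothetical inverse mapping used for exports: host coordinates are
// divided by the scale factor to recover the guest coordinate that the
// exported value belongs to.
void HostToGuest(uint32_t host_x, uint32_t host_y,
                 uint32_t scale_x, uint32_t scale_y,
                 uint32_t& guest_x, uint32_t& guest_y) {
  guest_x = host_x / scale_x;
  guest_y = host_y / scale_y;
}
```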
This is interesting, thanks! Though I wonder, what happens if a single render target resolve or a texture load operation crosses the 256 MB boundary? In the Direct3D 12 backend, the buffers are used as a sliding window, with each buffer overlapping the memory of half of the previous buffer and half of the next buffer (the actual memory is shared between them, mapped to two buffers at once via sparse binding, also known as tiled resources in Direct3D). The first buffer is mapped to 0…2 GB, the second to 1…3 GB, the third to 2…4 GB. So if a texture begins at 1.98 GB and ends at 2.02 GB, it's accessed through the 1…3 GB buffer as one continuous binding.
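As a rough illustration of that sliding-window layout (numbers taken from the description above, names hypothetical, not actual Xenia code): each buffer spans 2 GB and starts 1 GB after the previous one, so any range no longer than the 1 GB overlap fits entirely in the window that starts at or just below its offset.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of the sliding-window layout described above:
// buffer i is mapped to scaled addresses [i * 1 GB, i * 1 GB + 2 GB), so
// consecutive buffers overlap by 1 GB and share the underlying memory via
// sparse binding.
constexpr uint64_t kWindowSizeBytes = 2ull << 30;    // 2 GB per buffer.
constexpr uint64_t kWindowStrideBytes = 1ull << 30;  // 1 GB between starts.

// Returns the index of a window buffer that fully contains
// [offset, offset + length). Any range no longer than the 1 GB overlap is
// guaranteed to fit in the window starting at or just below `offset`.
uint64_t ChooseWindowBuffer(uint64_t offset, uint64_t length) {
  assert(length <= kWindowSizeBytes - kWindowStrideBytes);
  uint64_t window = offset / kWindowStrideBytes;
  assert(offset + length <= window * kWindowStrideBytes + kWindowSizeBytes);
  return window;
}

// Example from the comment: a texture spanning 1.98 GB .. 2.02 GB maps to
// window 1 (1 GB .. 3 GB) and is accessed as one continuous binding.
```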
I'm thinking about implementing resolution scaling without sparse binding across the board at some point, but it appears that it's going to need very careful splitting of render target resolves and texture loads to ensure that they don't cross the resolution-scaled memory "page" boundaries, and also that both the source and the target bindings stay within a single buffer.
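A purely illustrative sketch of what such splitting could look like, under the assumption that ranges are simply clamped at fixed page boundaries in the scaled address space (not code from either backend):

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>

// Splits a memory range into pieces that never cross a fixed "page"
// boundary in the resolution-scaled address space, so each piece can be
// served by a single non-sparse binding.
void ForEachPageLocalRange(
    uint64_t offset, uint64_t length, uint64_t page_size,
    const std::function<void(uint64_t piece_offset, uint64_t piece_length)>&
        callback) {
  while (length) {
    // Distance from `offset` to the end of the page it starts in.
    uint64_t page_remaining = page_size - (offset % page_size);
    uint64_t piece = std::min(length, page_remaining);
    callback(offset, piece);
    offset += piece;
    length -= piece;
  }
}
```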
Thank you for the quick review! You're absolutely right, the 256 MB boundary is a fundamental constraint. The reason it ends up working in practice is that it basically brute-forces the problem: the expected resolved texture size should never(?) get that large, even with 3x3 scaling. If my math is right, even 1920x1080 at 4 bytes per pixel with 3x3 scaling would be ~71 MB, and we're limited by EDRAM anyway, where even a full dump at 3x3 scale would be at most 90 MB, so in practice the boundary should never be an issue with these buffer sizes. And if the boundary is somehow crossed, a larger 512 MB buffer is allocated, so it should not be a hard failure.

All that being said, this implementation is of course much less memory-efficient and elegant than the D3D12 one; it works but is definitely not ideal. If you think it's worth the effort, I can try generalizing the D3D12 code so that it could be reused here. That would be somewhat more invasive, though, especially if you plan on further refactors.
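For reference, the size estimate spelled out (assuming "4bpp" means 4 bytes per pixel):

```cpp
#include <cstdint>

// Worked numbers behind the estimate above, for a 3x3 resolution scale.
constexpr uint64_t kBytesPerPixel = 4;
constexpr uint64_t kScale = 3 * 3;  // 3x3 scaling multiplies the area by 9.

// 1920x1080 surface at 4 bytes per pixel, scaled 3x3: ~75 million bytes
// (~71 MiB), well under 256 MiB.
constexpr uint64_t kScaledSurfaceBytes =
    1920 * 1080 * kBytesPerPixel * kScale;
static_assert(kScaledSurfaceBytes == 74'649'600, "");

// Full 10 MiB EDRAM dump at 3x3: 90 MiB, also well under 256 MiB.
constexpr uint64_t kScaledEdramBytes = 10ull * 1024 * 1024 * kScale;
static_assert(kScaledEdramBytes == 90ull * 1024 * 1024, "");
```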
On second thought, maybe the straightforward linear allocation approach is actually more reasonable for how Vulkan handles resource management. I'm not an expert by any means, but if my understanding is correct there are no direct Vulkan equivalents of D3D12's ID3D12Resource, heap management, and tile mapping, so this simpler approach may be more appropriate for Vulkan's memory model.
I initially sent this change to xenia-canary but was told there that GPU changes should still go to master.
It's basically very similar to the D3D12 implementation, with simpler buffer management and no mip scaling.