* Initial implementation of migration between memory heaps (see the sketch after this list)
- Missing OOM handling
- Missing `_map` data safety when remapping
- Copy may not have completed yet (needs some kind of fence)
- Map may be unmapped before it is done being used (needs scoped access)
- SSBO accesses are all treated as "writes" - maybe pass that info in another way.
- Map type is not kept when resizing buffers (should it be?)
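The list above boils down to an ordering problem: the copy to the new heap has to complete before the backing is swapped and before existing references are retargeted. A very rough, self-contained sketch of that ordering, using placeholder types rather than the real backend objects (a fence-like wait stands in for the missing GPU fence):

```csharp
using System;
using System.Threading;

// Placeholder backing type; the real code tracks a backend allocation per heap.
class Backing
{
    public byte[] Data;
    public bool DeviceLocal;
}

static class Migration
{
    // Copy first, wait for completion (the "fence"), then swap the backing.
    // Existing Auto<> references would still need to be retargeted afterwards.
    public static Backing Migrate(Backing source, bool toDeviceLocal)
    {
        var destination = new Backing
        {
            Data = new byte[source.Data.Length],
            DeviceLocal = toDeviceLocal
        };

        using var copyDone = new ManualResetEventSlim(false);

        // Stand-in for the copy recorded on a command buffer.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            source.Data.CopyTo(destination.Data, 0);
            copyDone.Set();
        });

        // Stand-in for waiting on the fence before swapping the backing.
        copyDone.Wait();

        return destination;
    }
}
```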
* Ensure migrated data is in place before flushing.
* Fix issue where old waitable would be signalled.
- There is a real issue where existing Auto<> references need to be replaced.
* Swap bound Auto<> instances when swapping buffer backing
* Fix conversion buffers
* Don't try to move buffers if the host has shared memory.
* Make GPU methods return PinnedSpan with scope
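A minimal sketch of what a scoped PinnedSpan could look like, assuming a disposable scope that keeps the backing memory mapped while the data is in use; this is illustrative only, not the actual Ryujinx type:

```csharp
using System;

// Illustrative sketch only: a span that carries a disposable scope so the
// backing memory stays mapped/pinned until Dispose is called.
public ref struct PinnedSpan<T> where T : unmanaged
{
    private readonly ReadOnlySpan<T> _span;
    private readonly IDisposable _scope;

    public PinnedSpan(ReadOnlySpan<T> span, IDisposable scope)
    {
        _span = span;
        _scope = scope;
    }

    public ReadOnlySpan<T> Get() => _span;

    // Disposing releases the scope, allowing the backing to be unmapped or swapped.
    public void Dispose() => _scope?.Dispose();
}
```

Callers would then consume GPU data inside a `using` scope, which also covers the "map may be unmapped before it is done being used" point above.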
* Storage Hint
* Fix stupidity
* Fix rebase
* Tweak rules
Attempt to sidestep BOTW slowdown
* Remove line
* Migrate only when command buffers flush
* Change backing swap log to debug
* Address some feedback
* Disallow backing swap when the flush lock is held by the current thread
* Make PinnedSpan from ReadOnlySpan explicitly unsafe
* Fix some small issues
- Index buffer swap fixed
- Allocate DeviceLocal buffers using a block list separate from the one used for images.
* Remove alternative flags
* Address feedback
* Add MVK basics.
* Use appropriate output attribute types
* 4KB vertex alignment, plus a bunch of fixes
* Add reduced shader precision mode for MVK.
* Disable ASTC on MVK for now
* Only request robustness2 when it is available.
* It's just the one feature actually
* Add triangle fan conversion
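Metal has no triangle fan primitive, so MoltenVK needs fans expressed as triangle lists. A minimal CPU-side sketch of the usual index rewrite; the real conversion may well run on the GPU, so treat the names here as illustrative:

```csharp
using System;

static class TriangleFan
{
    // For a fan of vertexCount vertices, triangle i is (0, i + 1, i + 2).
    public static int[] ToTriangleListIndices(int vertexCount)
    {
        int triangleCount = Math.Max(vertexCount - 2, 0);
        int[] indices = new int[triangleCount * 3];

        for (int i = 0; i < triangleCount; i++)
        {
            indices[i * 3 + 0] = 0;
            indices[i * 3 + 1] = i + 1;
            indices[i * 3 + 2] = i + 2;
        }

        return indices;
    }
}
```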
* Allow NullDescriptor on MVK for some reason.
* Force safe blit on MoltenVK
* Use ASTC only when formats are all available.
* Disable multilevel 3d texture views
* Filter duplicate render targets (on backend)
* Add Automatic MoltenVK Configuration
* Do not create color attachment views with formats that are not RT compatible
* Make sure that the host format matches the vertex shader input types for invalid/unknown guest formats
* Fix rebase for Vertex Attrib State
* Fix 4b alignment for vertex
* Use asynchronous queue submits for MVK
* Ensure color clear shader has correct output type
* Update MoltenVK config
* Always use MoltenVK workarounds on macOS
* Make MVK supersede all vendors
* Fix rebase
* Various fixes on rebase
* Get portability flags from extension
* Fix some minor rebasing issues
* Style change
* Use LibraryImport for MVKConfiguration
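`LibraryImport` is the .NET source-generated replacement for `DllImport`. A minimal sketch of the pattern as it might apply to the MoltenVK configuration entry point; the library name and native signature are assumptions based on MoltenVK's public header, not taken from the actual bindings:

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch of the LibraryImport (source-generated P/Invoke) pattern; assumed
// entry point and signature, not copied from the codebase.
internal static unsafe partial class MvkNative
{
    [LibraryImport("libMoltenVK.dylib")]
    public static partial int vkGetMoltenVKConfigurationMVK(
        IntPtr instance,
        void* configuration,
        nuint* configurationSize);
}
```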
* Rename MoltenVK vendor to Apple
Intel and AMD GPUs on MoltenVK report with those vendors - only Apple Silicon reports with vendor 0x106B.
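One possible reading of the rule that follows from this (illustrative only, not the actual selection code): MoltenVK handling is keyed off the backend/OS rather than the reported vendor ID, and it takes precedence over vendor-specific paths:

```csharp
static class MoltenVkQuirks
{
    private const uint VendorIdApple = 0x106B;

    // Hypothetical sketch: Intel and AMD GPUs keep their own vendor IDs under
    // MoltenVK, so the MoltenVK check has to come before any vendor check.
    public static string SelectProfile(bool isMoltenVk, uint vendorId)
    {
        if (isMoltenVk)
        {
            return "MoltenVK"; // MVK supersedes all vendors.
        }

        return vendorId == VendorIdApple ? "Apple" : "Other";
    }
}
```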
* Fix features2 rebase conflict
* Rename fragment output type
* Add missing check for fragment output types
Might have caused the crash in MK8
* Only do fragment output specialization on MoltenVK
* Avoid copy when passing capabilities
* Self feedback
* Address feedback
Co-authored-by: gdk <gab.dark.100@gmail.com>
Co-authored-by: nastys <nastys@users.noreply.github.com>
* Fix various issues caused by #3679
- The arguments for the 0th dummy vertex buffer were incorrect - it was given an offset of 16 rather than a size of 16.
- The wrong size was used when doing `autoBuffer.Get` on a converted vertex buffer.
- A vertex buffer being disposed and then rebound could cause the rebinding to find a different buffer where the current range is out of bounds. Avoid binding when out of range to prevent validation errors.
- The above also affects generation of converted buffers, which was a bit more fatal. Conversion functions now clamp the input offset/size to the buffer's bounds (see the sketch below).
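A minimal sketch of the kind of bounds clamp described above, assuming a simple (offset, size) range against a backing buffer of known length; names are illustrative:

```csharp
using System;

static class RangeClamp
{
    // Keep the requested range inside the backing store so a stale binding
    // cannot read or convert past the end of the buffer.
    public static (int Offset, int Size) Clamp(int offset, int size, int bufferSize)
    {
        int clampedOffset = Math.Min(offset, bufferSize);
        int clampedSize = Math.Min(size, bufferSize - clampedOffset);

        return (clampedOffset, clampedSize);
    }
}
```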
* Fix offset for converted buffer
* Vertex Buffer Alignment part 1
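Alignment here presumably means the usual power-of-two round-up; a minimal sketch (the 4096-byte figure comes from the "4KB vertex alignment" entry above, the helper name is illustrative):

```csharp
static class Alignment
{
    // Round value up to the next multiple of alignment (alignment must be a
    // power of two, e.g. 4096 for the 4KB vertex alignment).
    public static ulong AlignUp(ulong value, ulong alignment)
    {
        return (value + alignment - 1) & ~(alignment - 1);
    }
}
```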
* Update CacheByRange
* Add Stride Change compute shader, fix storage buffers in helpers
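A CPU-side sketch of what a stride-change pass does: repack each vertex from the source stride into a larger, aligned target stride, zero-padding the gap. The compute shader performs the same per-vertex copy on the GPU; the code below is illustrative and assumes targetStride >= sourceStride:

```csharp
using System;

static class StrideChange
{
    public static byte[] ChangeStride(ReadOnlySpan<byte> source, int sourceStride, int targetStride)
    {
        int vertexCount = source.Length / sourceStride;
        byte[] result = new byte[vertexCount * targetStride]; // Zero-initialized padding.

        for (int i = 0; i < vertexCount; i++)
        {
            source.Slice(i * sourceStride, sourceStride)
                  .CopyTo(result.AsSpan(i * targetStride, sourceStride));
        }

        return result;
    }
}
```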
* An AMD exclusive
* Reword
* Change rules - stride conversion when attrs misalign
* Fix stupid mistake
* Fix background pipeline compile
* Improve a few things.
* Fix some feedback
* Address Feedback
(the shader binary didn't change when I changed the source to use the subgroup size)
* Fix bug where rewritten buffer would be disposed instantly.