[Feature request] Mega Textures AKA Sparse Virtual Textures #25
Description
Mega textures (sparse virtual textures) are an older rendering technique, heavily used in the 2011 game Rage and by id Software in id Tech titles as recent as Doom (2016). The technique is especially useful on legacy hardware and on systems with small amounts of video memory.
It seems like Unreal Engine uses a more advanced version of this for its "virtual textures", which lets it stream huge numbers of ultra-HD textures into a scene without exceeding video memory.
How would it work
Mega texturing is a process whereby the engine creates a single very large texture "atlas" in video memory, of a pre-determined size chosen by the user, typically targeting a hard VRAM budget. It is split into tiles. Larger sizes take longer to update and use more VRAM, so smaller sizes suit lower-end systems with tighter VRAM restrictions. Instead of loading textures at full resolution, the engine uses the screen-space size of a texel to figure out the optimal resolution for each texture, and constantly swaps different mip levels into the atlas so that everything visible on screen fits into video memory. A scene can therefore reference far more texture data than memory would normally allow, because that data is streamed in in real time. This would require a robust texture streaming and caching system; a rough sketch of the mip-selection math is below.
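To make the core idea concrete, here is a minimal, hypothetical sketch (all names and the 128-texel page size are made up for illustration) of the per-sample math: given the screen-space derivatives of a texture coordinate, pick the mip level whose texels map to roughly one pixel, then work out which fixed-size tile (page) of that mip is needed. A real implementation would gather these requests in a GPU feedback pass rather than on the CPU.

```cpp
// Hypothetical sketch: compute which virtual-texture page a sample needs.
#include <algorithm>
#include <cmath>
#include <cstdint>

struct PageRequest {
    uint32_t textureId;  // which virtual texture
    uint32_t mipLevel;   // which mip of that texture
    uint32_t pageX;      // tile coordinates inside that mip
    uint32_t pageY;
};

constexpr uint32_t kPageSize = 128;  // tile size in texels (arbitrary choice)

PageRequest ComputePageRequest(uint32_t textureId,
                               uint32_t baseWidth, uint32_t baseHeight,
                               float u, float v,        // texture coords in [0,1)
                               float dudx, float dvdx,  // screen-space derivatives
                               float dudy, float dvdy)
{
    // Texels covered per screen pixel at mip 0 (same footprint math GPUs use for LOD).
    float px  = std::hypot(dudx * baseWidth, dvdx * baseHeight);
    float py  = std::hypot(dudy * baseWidth, dvdy * baseHeight);
    float rho = std::max(px, py);

    // log2 of the footprint gives the mip whose texels are ~1 pixel on screen.
    float lod    = std::max(0.0f, std::log2(std::max(rho, 1.0f)));
    uint32_t mip = static_cast<uint32_t>(lod);

    // Size of that mip, then which fixed-size page the sample falls in.
    uint32_t mipW   = std::max(1u, baseWidth >> mip);
    uint32_t mipH   = std::max(1u, baseHeight >> mip);
    uint32_t texelX = std::min(mipW - 1, static_cast<uint32_t>(u * mipW));
    uint32_t texelY = std::min(mipH - 1, static_cast<uint32_t>(v * mipH));

    return { textureId, mip, texelX / kPageSize, texelY / kPageSize };
}
```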
In some cases you will see texture pop-in while the mega texture is updated, since the update is an asynchronous process that runs on its own thread.
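A rough sketch of what that background streaming thread could look like (reusing the hypothetical PageRequest from the sketch above): the render thread pushes the page requests it discovered this frame, the worker loads the cached tiles, and the render thread later drains the ready list to upload them. Pop-in is simply the window while a request is still in flight.

```cpp
// Hypothetical sketch of an asynchronous tile-streaming worker.
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct LoadedTile { PageRequest request; std::vector<uint8_t> pixels; };

class TileStreamer {
public:
    void Push(const PageRequest& r) {
        { std::lock_guard<std::mutex> lock(m_); pending_.push(r); }
        cv_.notify_one();
    }

    // Runs forever on its own thread in this sketch; a real engine would add shutdown.
    void Run() {
        for (;;) {
            PageRequest r;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !pending_.empty(); });
                r = pending_.front();
                pending_.pop();
            }
            LoadedTile tile{ r, LoadTileFromCache(r) };  // blocking disk read
            std::lock_guard<std::mutex> lock(m_);
            ready_.push_back(std::move(tile));           // drained by the render thread
        }
    }

private:
    // Placeholder: a real engine would read the pre-built mip tile from its on-disk cache.
    std::vector<uint8_t> LoadTileFromCache(const PageRequest&) {
        return std::vector<uint8_t>(128 * 128 * 4, 0);  // one RGBA tile's worth of bytes
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<PageRequest> pending_;
    std::vector<LoadedTile> ready_;
};
```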
On the engine side, this would entail pre-computing mip maps for every texture in the game. For a 2048x2048 texture, for example, we would shrink it to 1024x1024, 512x512, 256x256, and so on, down to maybe 16x16, and cache all of these in an organized way. The engine can then request a specific mip level and load it into the mega texture without doing any additional processing. This uses more storage space and lengthens build times, but it drastically increases the scale of games possible on low-end hardware. It could be especially useful for targeting Android, iOS, or the Nintendo Switch; it allowed MachineGames, Bethesda, and id Software to release some ground-breaking titles on legacy hardware by working around video memory limits that would otherwise restrict texture sizes.
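As a rough illustration of that offline bake step (the raw RGBA output and the file naming are placeholder assumptions), the build tool could box-filter each texture down to 16x16 and write every level into the tile cache:

```cpp
// Hypothetical sketch of the offline mip-chain bake.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct Image {
    uint32_t width = 0, height = 0;
    std::vector<uint8_t> rgba;  // width * height * 4 bytes
};

// Halve an image with a simple 2x2 box filter.
static Image Downsample2x(const Image& src) {
    Image dst;
    dst.width  = std::max(1u, src.width / 2);
    dst.height = std::max(1u, src.height / 2);
    dst.rgba.resize(dst.width * dst.height * 4);
    for (uint32_t y = 0; y < dst.height; ++y)
        for (uint32_t x = 0; x < dst.width; ++x)
            for (uint32_t c = 0; c < 4; ++c) {
                uint32_t sx = x * 2, sy = y * 2;
                uint32_t sx1 = std::min(sx + 1, src.width - 1);
                uint32_t sy1 = std::min(sy + 1, src.height - 1);
                uint32_t sum = src.rgba[(sy  * src.width + sx)  * 4 + c]
                             + src.rgba[(sy  * src.width + sx1) * 4 + c]
                             + src.rgba[(sy1 * src.width + sx)  * 4 + c]
                             + src.rgba[(sy1 * src.width + sx1) * 4 + c];
                dst.rgba[(y * dst.width + x) * 4 + c] = static_cast<uint8_t>(sum / 4);
            }
    return dst;
}

// Write raw RGBA for every mip level down to 16x16, e.g. "cache/rock_mip3.raw".
static void BakeMipChain(Image level, const std::string& cacheDir, const std::string& name) {
    for (uint32_t mip = 0; level.width >= 16 && level.height >= 16; ++mip) {
        std::string path = cacheDir + "/" + name + "_mip" + std::to_string(mip) + ".raw";
        if (FILE* f = std::fopen(path.c_str(), "wb")) {
            std::fwrite(level.rgba.data(), 1, level.rgba.size(), f);
            std::fclose(f);
        }
        level = Downsample2x(level);
    }
}
```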
We may want to figure out a good image format for the mip tiles. We'd probably want something the GPU can decode directly (a block-compressed format) with a small file size, so tiles can be streamed in quickly and copied into the mega texture with minimal processing.
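For instance, if we settled on a block-compressed format like BC1/DXT1 (purely an assumption; BC7, ETC2, or ASTC would be the equivalent on other targets), uploading a streamed tile into the mega texture becomes a single copy with no CPU decode. A sketch using OpenGL as an example backend, assuming a GL 1.3+ context is already set up:

```cpp
// Hypothetical sketch: upload one pre-compressed BC1 tile into the atlas.
#include <GL/gl.h>
#include <cstdint>

#ifndef GL_COMPRESSED_RGB_S3TC_DXT1_EXT
#define GL_COMPRESSED_RGB_S3TC_DXT1_EXT 0x83F0
#endif

// Copies a pageSize x pageSize BC1 tile into the atlas at the given tile slot.
// The data is already GPU block-compressed, so this is a straight memory upload.
void UploadTile(GLuint atlasTexture, uint32_t slotX, uint32_t slotY,
                uint32_t pageSize, const uint8_t* bc1Data)
{
    // BC1 stores each 4x4 texel block in 8 bytes.
    GLsizei imageSize = static_cast<GLsizei>((pageSize / 4) * (pageSize / 4) * 8);

    glBindTexture(GL_TEXTURE_2D, atlasTexture);
    glCompressedTexSubImage2D(GL_TEXTURE_2D, /*level*/ 0,
                              slotX * pageSize, slotY * pageSize,
                              pageSize, pageSize,
                              GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                              imageSize, bc1Data);
}
```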
Like I said, it does have some tradeoffs. We definitely don't want this on by default, since it dramatically increases storage requirements and build times and introduces artifacts like texture pop-in. But it would be very beneficial for anyone targeting old PCs, low-end hardware, or even legacy 6th- and 7th-generation consoles. If implemented well, it could also unlock very high resolution texturing on high-end systems, similar to Unreal Engine.
Resources
Here is a video by James Lambert showcasing an extreme example of it on the Nintendo 64.
Here is an article with a high-level overview of its implementation in Doom (2016).
A more detailed breakdown of SVTs