I’m building an application prototype (C++, Vulkan graphics backend) where my plan is to create a USD importer/exporter for geometry, materials, transforms, etc. While exploring the USD documentation, I’ve come to realize that USD/OpenUSD is much more than just a file format and could be useful in a number of other ways.
I have a couple of questions I’m hoping to get some feedback on before I go too far with OpenUSD, though.
If my application is based on a Vulkan backend and my renderer will be a real-time DI raytracer, is Hydra still of use? I’ve already built both the Release and Debug configurations of OpenUSD (MSVC v143, VS 17 2022) for a Vulkan backend, but afterward I realized Hydra might only be for raster/preview rendering. Also, even if Hydra is used to get vertex data onto the GPU (which I would still need for the raytracer), would it still be of benefit if I’m using indirect draw and uploading all vertex data into buffers that aren’t copied to the GPU every frame (only when assets are loaded into or unloaded from the scene)?
My intention with the application is to use an Entity Component System (ECS) for all data to maximize throughput/OPS and be able to render large scenes/simulations in real time. Since the USD API exposes its data as a scene graph, the data in .usd files is organized hierarchically. Are there any thoughts on whether I should load data as USD and then convert it to my own format/layout, or stick with the USD data structures?
Hydra does a few different things. It’s an integration point: if your application calls into Hydra, it can support USD scenes (via the UsdImaging library), any other Hydra-enabled scene graph, and any Hydra-enabled renderer, of which there are quite a few. Hydra’s usdImaging library does quite a bit of resolution of the USD scene, including things like parsing instancing, flattening the scene into renderable objects, etc., and you probably want to take advantage of that if you’re loading USD, since the semantics can get quite complex.
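For context, here is roughly what the "manual" path looks like if you bypass usdImaging and pull geometry straight off the stage. This is only a sketch of the trivial case; everything it ignores (instancing, purpose/visibility resolution, primvars, material bindings) is exactly the resolution work usdImaging would do for you. The function name and structure are just illustrative:

```cpp
#include <string>

#include <pxr/usd/usd/stage.h>
#include <pxr/usd/usd/primRange.h>
#include <pxr/usd/usdGeom/mesh.h>
#include <pxr/usd/usdGeom/xformCache.h>

PXR_NAMESPACE_USING_DIRECTIVE

// Naive stage traversal: no instancing, purpose, visibility, or material handling.
void LoadMeshes(const std::string& usdFile)
{
    UsdStageRefPtr stage = UsdStage::Open(usdFile);
    if (!stage) return;

    UsdGeomXformCache xformCache(UsdTimeCode::Default());

    for (const UsdPrim& prim : stage->Traverse()) {
        if (!prim.IsA<UsdGeomMesh>()) continue;

        UsdGeomMesh mesh(prim);
        VtVec3fArray points;
        VtIntArray faceVertexCounts, faceVertexIndices;
        mesh.GetPointsAttr().Get(&points);
        mesh.GetFaceVertexCountsAttr().Get(&faceVertexCounts);
        mesh.GetFaceVertexIndicesAttr().Get(&faceVertexIndices);

        // Flattened local-to-world transform for this prim.
        GfMatrix4d localToWorld = xformCache.GetLocalToWorldTransform(prim);

        // ... triangulate and upload to Vulkan buffers here ...
        (void)localToWorld;
    }
}
```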
Hydra can run through many renderers, but we ship with support for Storm and RenderMan. Storm is a preview renderer that currently does all rasterization through GL or Metal; we’re working on a Vulkan port as we speak. Our architecture sounds similar to yours: Storm does minimal data invalidation on edits/frame changes and stores all of the mesh data in cache-friendly striped buffers. In GL, draw calls are batched into big indirect draw buffers. We don’t do any raytracing right now, though, and while folks have extended Storm with their own backends, it’s a bit complicated at the moment.
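For what it’s worth, the same "one big indirect buffer" idea maps directly onto Vulkan on your side. This is an assumed sketch of application code (not Storm’s implementation): one command per renderable, rebuilt only when assets load or unload, drawn with a single indirect call each frame.

```cpp
#include <cstdint>
#include <vector>
#include <vulkan/vulkan.h>

// Hypothetical per-asset record produced at load time.
struct Renderable {
    uint32_t indexCount;
    uint32_t firstIndex;
    int32_t  vertexOffset;
};

// Rebuilt only on asset load/unload, then copied once into a GPU buffer.
std::vector<VkDrawIndexedIndirectCommand>
BuildIndirectCommands(const std::vector<Renderable>& renderables)
{
    std::vector<VkDrawIndexedIndirectCommand> cmds;
    cmds.reserve(renderables.size());
    uint32_t instance = 0;
    for (const Renderable& r : renderables) {
        cmds.push_back({r.indexCount, 1u, r.firstIndex, r.vertexOffset, instance++});
    }
    return cmds;
}

// Per frame, with commandBuffer/indirectBuffer created elsewhere:
// vkCmdDrawIndexedIndirect(commandBuffer, indirectBuffer, 0,
//                          static_cast<uint32_t>(drawCount),
//                          sizeof(VkDrawIndexedIndirectCommand));
```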
Probably the easiest way to take advantage of usdImaging (and any other plugins in the future) is to write a render delegate integration for your renderer.
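As a rough orientation, that integration starts with an HdRenderDelegate subclass. The sketch below is heavily abridged and the class/type names in it (e.g. MyVulkanRenderDelegate) are hypothetical: the real interface has many more required overrides (Sprims, Bprims, instancers, a resource registry, render pass creation), and the exact signatures vary between OpenUSD releases, so treat this as a map rather than a working delegate.

```cpp
#include <pxr/imaging/hd/renderDelegate.h>
#include <pxr/imaging/hd/tokens.h>

PXR_NAMESPACE_USING_DIRECTIVE

class MyVulkanRenderDelegate : public HdRenderDelegate {  // hypothetical name
public:
    // Tell Hydra which renderable prim types this backend understands.
    const TfTokenVector& GetSupportedRprimTypes() const override {
        static const TfTokenVector types = { HdPrimTypeTokens->mesh };
        return types;
    }

    // Hydra calls this when a renderable prim appears in the render index;
    // return your own HdMesh subclass that uploads to Vulkan buffers in Sync().
    HdRprim* CreateRprim(const TfToken& typeId, const SdfPath& rprimId) override;
    void DestroyRprim(HdRprim* rprim) override;

    // Called once per Hydra execution so the delegate can flush pending
    // GPU uploads before render passes run.
    void CommitResources(HdChangeTracker* tracker) override;

    // ... remaining pure-virtual overrides omitted for brevity ...
};
```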
As for your second question, the USD object model implies a lot of derived computation (like flattening transforms, applying collection bindings, and aggregating instances). UsdImaging does this computation, as well as dependency tracking, so that it can, for example, intelligently invalidate and recompute transforms for a subtree when an ancestor transform is edited. If you’d like to take advantage of the UsdImaging code to track and recompute those dependencies, authoring directly to USD would make sense. If you’d like the application to edit the resolved objects without triggering hierarchical updates, or want to use your own computation system (e.g. as in Omniverse, where resolved objects are cached on the GPU and compute shaders resolve updates), that would point toward using your own scene graph.
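If you do go the own-scene-graph route, the building block on the USD side is stage change notification. A minimal sketch, assuming your listener does its own coarse invalidation (this covers only a small subset of the dependency tracking UsdImaging gives you):

```cpp
#include <pxr/base/tf/notice.h>
#include <pxr/base/tf/weakBase.h>
#include <pxr/usd/usd/notice.h>
#include <pxr/usd/usd/stage.h>

PXR_NAMESPACE_USING_DIRECTIVE

class StageChangeListener : public TfWeakBase {  // hypothetical class
public:
    explicit StageChangeListener(const UsdStageRefPtr& stage) {
        _key = TfNotice::Register(TfCreateWeakPtr(this),
                                  &StageChangeListener::_OnObjectsChanged,
                                  UsdStageWeakPtr(stage));
    }
    ~StageChangeListener() { TfNotice::Revoke(_key); }

private:
    void _OnObjectsChanged(const UsdNotice::ObjectsChanged& notice,
                           const UsdStageWeakPtr& sender) {
        for (const SdfPath& path : notice.GetResyncedPaths()) {
            // Structural change: rebuild the resolved data for this subtree.
        }
        for (const SdfPath& path : notice.GetChangedInfoOnlyPaths()) {
            // Value-only change: e.g. recompute cached world transforms.
        }
    }

    TfNotice::Key _key;
};
```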
To your ECS question: I usually recommend having your components reference into the USD system, or copy the data into them wholesale if you’re trying to avoid indirection. That, of course, varies by what the component is representing.
You can still author to USD and split the data into components for runtime use. E.g., a USD Mesh prim may carry both the transform and the mesh vertices, but your ECS will likely prefer them as separate components. The Mesh component could still point to the USD data if you want to avoid memory duplication.
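A sketch of that split, with hypothetical component types; note that VtArray is copy-on-write, so holding the points array by value shares storage with the USD data until either side writes to it:

```cpp
#include <pxr/base/gf/matrix4f.h>
#include <pxr/base/vt/types.h>
#include <pxr/usd/sdf/path.h>

PXR_NAMESPACE_USING_DIRECTIVE

// One UsdGeomMesh prim split into two ECS components.
struct TransformComponent {
    GfMatrix4f localToWorld;     // flattened, recomputed on ancestor edits
};

struct MeshComponent {
    SdfPath sourcePrim;          // back-pointer into the USD stage
    VtVec3fArray points;         // shared (copy-on-write) with the authored USD data,
    VtIntArray   indices;        // or replaced with flat GPU-ready arrays
};
```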
You can also use custom USD prims, as RealityKit does, to store component data for things that USD doesn’t have a mapping for.
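A lightweight version of that, if you don’t want to define a full custom schema yet, is custom namespaced attributes. The "myApp:velocity" name below is hypothetical; a proper custom schema (closer to what RealityKit defines) is the more structured route.

```cpp
#include <pxr/base/gf/vec3f.h>
#include <pxr/usd/sdf/types.h>
#include <pxr/usd/usd/attribute.h>
#include <pxr/usd/usd/prim.h>

PXR_NAMESPACE_USING_DIRECTIVE

// Round-trip ECS-only data through a custom, namespaced attribute on a prim.
void WriteVelocityComponent(const UsdPrim& prim, const GfVec3f& velocity)
{
    UsdAttribute attr = prim.CreateAttribute(
        TfToken("myApp:velocity"),        // hypothetical attribute name
        SdfValueTypeNames->Float3,
        /* custom = */ true);
    attr.Set(velocity);
}
```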
It sounds like the best strategy is to use as much USD functionality as possible, especially for loading and authoring. I’ll look into what it will take to write a usdImaging render delegate integration for my renderer.
I like the idea of persistent USD objects for scene management/editing, pointing to mesh data or copying it directly into flattened arrays for my components. I’ll have to experiment with what works best for my use case and how that interacts with usdImaging and a render delegate.