I’m prepping a pipeline for a science visualization production whose topic is largely space-based. Since our previous productions were mostly created with Houdini and Mantra, we’re hoping this time to transition some of the heavy lifting into USD and Solaris, where we plan to replace Mantra with Karma and possibly Arnold.
Our productions typically have long camera paths that transition over orders-of-magnitude scale, and being able to use double precision for both transforms and geometry data has been essential. Mostly this is a workflow benefit: we don’t have to create complex setups to avoid precision issues (although we still need to do that occasionally).
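To make the scale problem concrete, here is a minimal sketch (plain Python standard library, no USD API; the 1 AU figure is just an example distance) of how much position error single-precision storage introduces at astronomical magnitudes:

```python
import struct

def to_float32(x):
    """Round-trip a Python float (float64) through IEEE-754 float32 storage."""
    return struct.unpack("f", struct.pack("f", x))[0]

# A camera positioned roughly 1 AU from the origin, in metres.
one_au = 1.496e11

# At this magnitude, adjacent float32 values are 2**14 = 16384 m apart,
# so storing world-space positions in single precision can shift a point
# by kilometres.
error = abs(to_float32(one_au) - one_au)
print(f"float32 round-trip error at 1 AU: {error} m")
```

The spacing of representable float32 values grows with magnitude, which is exactly why hierarchies of double-precision transforms (rather than baked world-space float coordinates) matter for these camera paths.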
I’ve been researching USD capabilities in this regard and have found a couple of concerns, testing with USD 23.08:
Many of the base classes seem to define points specifically as floats rather than a generalized float/double; the UsdGeom schema.usda declares the attribute as point3f[] points. If I instead feed double-precision data to tools that use the underlying schema, will I get double-precision outputs? Will it work at all?
Xformable transforms can be specified in any precision, but in testing there seems to be a hard-coded upper limit on scale-matrix values at ~6.95e12, either within Hydra or USD. This limit only appears with husk-based rendering, though, so it may be a Houdini-specific limitation; I can see large transforms in OpenGL views.
Because we’re a small team, it’s much easier to flip switches from single- to double-precision floats and let the software do its thing than to design complex processes to work around bit-depth limitations, so I would love to find some solutions during our testing phase.
Hi @parker, and welcome! The UsdGeom and UsdPhysics schemas were definitely designed around the practicalities of large VFX datasets and efficient computation for film and game needs. These practicalities include:
long sequences (thousands of samples) of animated, high point-count meshes, curves, and pointclouds
potentially many of the above in a single scene
many scenes being rendered at once in a network-distributed “render farm”, and thus bandwidth concerns
no production renderer that can even accept double-precision geometry
So UsdGeom’s current precision compromise is that “leaf node geometry” is all single precision, while all affine transformations applied hierarchically to the leaves can be double-precision.
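The practical effect of that compromise can be sketched in plain Python (no USD API here; struct round-tripping stands in for float32 point storage, and the 1 AU translation is just an illustrative value):

```python
import struct

def to_float32(x):
    """Round-trip a Python float (float64) through IEEE-754 float32 storage."""
    return struct.unpack("f", struct.pack("f", x))[0]

# A small mesh positioned 1 AU from the solar-system origin, in metres.
translate = 1.496e11            # large offset carried in a double transform
local_pts = [0.0, 1.25, 2.5]    # leaf geometry stays near its own origin

# The UsdGeom approach: float32 leaf points, double-precision hierarchy.
good = [to_float32(p) + translate for p in local_pts]

# The failure mode: baking world-space positions into float32 points.
baked = [to_float32(p + translate) for p in local_pts]

worst = max(abs(g - b) for g, b in zip(good, baked))
print(f"worst-case error from baking world positions into float32: {worst} m")
```

Small local coordinates survive float32 exactly, while baked world-space coordinates at that magnitude collapse onto a ~16 km grid, which is why keeping leaves local and transforms double works so well in practice.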
That said, we do want to make USD useful in science and engineering domains as well, and the precision issue has come up before. Attributes in USD are strongly typed, so you cannot simply provide double-precision data for the UsdGeomPointBased.points attribute. There are, however, ways we could accommodate this going forward. One would be to add a point3d[] pointsd attribute to PointBased, similarly to how we are adding a quatf[] orientationsf attribute to our PointInstancer schema. Or it might make sense to have different schemas for select types of double-precision geometry.
In that vein, are Mesh primitives what you actually want to be using, or are there more natural (for your domain) geometry representations that UsdGeom currently does not support?
Thank you for clarifying USD’s typing system a bit. Meshes are still the primary geometry type (along with curves) that would need any sort of higher precision. For the point3d[] pointsd data type, is the intention that if a mesh has that property defined, it will override the point3f[] points attribute? In Houdini, for example, the point P attribute can be worked with transparently as either a 64-bit or 32-bit value. It sounds like the application would need to be aware of this extra attribute. Even so, I think that solution would work for us, as 64-bit is still a special case for geometry and is not the majority of our data.
There are some volumetric mesh types we work with that don’t exist in USD (hexahedral grids and unstructured tetrahedral grids) and that would be nice to have formal definitions of; currently we handle those separately through the Mantra/Karma renderers’ ability to read meshes at render time. Specific to the double-precision question, those typically remain 32-bit structures, but I could envision cases where 64 bits would help there as well: for example, an adaptive mesh simulation of a protoplanetary disk embedded in a larger interstellar molecular cloud, where the scale magnitude changes within a single simulation are quite large.
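For reference, a common encoding for such unstructured tet grids is shared vertex positions plus one 4-tuple of vertex indices per tetrahedron, analogous to how UsdGeomMesh stores faceVertexIndices. A minimal sketch (this is just a generic encoding, not any actual or proposed USD schema):

```python
# A single unit tetrahedron: shared points plus per-tet vertex indices.
points = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
tet_indices = [(0, 1, 2, 3)]

def tet_volume(pts, tet):
    """Signed volume of one tetrahedron via the scalar triple product / 6."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz), (dx, dy, dz) = (pts[i] for i in tet)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    wx, wy, wz = dx - ax, dy - ay, dz - az
    det = (ux * (vy * wz - vz * wy)
           - uy * (vx * wz - vz * wx)
           + uz * (vx * wy - vy * wx))
    return det / 6.0

print(tet_volume(points, tet_indices[0]))  # 1/6 for the unit tet
```

Because the connectivity is pure indices, only the points array would need a double-precision variant for the large-scale-range simulations described above.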
Thanks, @parker! In our predecessor to USD we did allow variable precision for all numeric attribute types, and wound up paying a fairly heavy complexity and performance cost for it in our pipeline. That certainly had something to do with how we implemented the support, and I trust that SideFX has done a great job with it! But with USD we’d rather be more judicious about where we allow those choices.
Btw, support for a UsdGeomPointBased-derived TetMesh schema is slated for an upcoming USD release.
Hi @spiff, great to hear about tet meshes being officially supported at some point! They’re obviously useful for more domains than just cosmological simulations.