Hi, first post!
I’m prepping a pipeline for a science visualization production where the topic is largely space-based. Since our previous productions were mostly created with Houdini and Mantra, we’re hoping this time to transition some of the heavy lifting into USD and Solaris, where we plan to replace Mantra with Karma and possibly Arnold.
Typically our productions have long camera paths that transition over orders of magnitude in scale, and being able to use double precision for both transforms and geometry data has been essential, mostly from a workflow standpoint: we don’t have to create complex setups to avoid precision issues (although we still need to do that occasionally).
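To make the precision concern concrete, here’s a minimal, self-contained illustration (plain Python, no USD involved) of the kind of loss we’re trying to avoid: round-tripping a coordinate through IEEE-754 single precision is harmless near the origin, but at planetary-scale distances the float32 spacing grows to tens of units and sub-unit detail is dropped entirely. The numbers here are illustrative, not from our scenes:

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python double through IEEE-754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# A point a few units from the origin: float32 error is negligible.
near = 1.234567891234

# The same sub-unit offset expressed in a frame ~1e9 units out,
# roughly the kind of coordinate a planetary-scale shot produces.
far = 1.0e9 + 1.234567891234

# Near the origin the round-trip error is ~1e-8.
print(abs(to_float32(near) - near))

# At 1e9 the float32 spacing is 64 units, so the fractional
# offset is lost entirely and the point snaps back to 1e9.
print(abs(to_float32(far) - far))
```

This is exactly why we prefer to keep doubles end-to-end rather than re-centering geometry per shot.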
I’ve been researching USD’s capabilities in this regard and have found a couple of concerns while testing with USD 23.08:
- Many of the base classes seem to define points as specifically floats, rather than a generalized float/double. The UsdGeom `baseschema.json` defines points as being of type `point3f[] points`. If I instead feed double-precision data to tools that use the underlying schema, will I get double-precision outputs? Will it work at all?
- Xformable transforms can be specified in any precision, but in testing there seems to be a hard-coded upper limit on a scale matrix at ~6.95e12, either within Hydra or USD. This limit only seems to appear using husk-based rendering, though, so it may be a Houdini-specific limitation, as I can see large transforms in OpenGL views.
Because we’re a small team, it’s much easier to flip switches from single to double precision and let the software do its thing than to design complex processes to work around bit-depth limitations, so I’d love to find some solutions during our testing phase.