Double support in geometry data

Hi, first post!

I’m prepping a pipeline for a science visualization production, where the topic is largely space-based. Since the previous productions are mostly created with Houdini and Mantra, we’re hoping this time to transition some of the heavy lifting into USD and Solaris where we plan to replace Mantra with Karma + possibly Arnold.

Typically our productions have long camera paths that transition over orders-of-magnitude scale, and being able to use double-precision for both transforms and geometry data has been essential, mostly from a workflow standpoint where we don’t have to create complex setups to avoid precision issues (although we still need to do that occasionally).

I’ve been researching USD capabilities in this regard and have found a couple of concerns, testing with USD 23.08:

  • many of the base classes seem to define points as specifically floats, rather than a generalized float/double. The UsdGeom base schema.usda defines points as point3f[] points. If I instead feed double-precision data to tools that use the underlying schema, will I get double-precision outputs? Will it work at all?
  • Xformable transforms can be specified in any precision, but in testing there seems to be a hard-coded upper limit on scale matrices at ~6.95e12, either within Hydra or USD. This limit only seems to appear with husk-based rendering, though, so it may be a Houdini-specific limitation, as I can see large transforms in OpenGL views.

Because we’re small, it’s much easier to just flip switches from single to double float to avoid workflow issues and let the software do its thing, rather than design complex processes to work around bit-depth limitations, so I would love to find some solutions during our testing phase.

Hi @parker, and welcome! The UsdGeom and UsdPhysics schemas were definitely designed around the practicalities of large VFX datasets and efficient computation for film and game needs. These practicalities include:

  • long sequences (thousands of samples) of animated, high point-count meshes, curves, and pointclouds
  • potentially many of the above in a single scene
  • many scenes being rendered at once in a network-distributed “render farm”, and thus bandwidth concerns
  • no production renderer that can even accept double-precision geometry

So UsdGeom’s current precision compromise is that “leaf node geometry” is all single precision, while all affine transformations applied hierarchically to the leaves can be double-precision.
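To see why that compromise usually holds up, here’s a small pure-Python sketch (the numbers are invented for illustration, and struct is used only to emulate float32 rounding) of keeping a large offset in the double-precision transform versus baking it into single-precision points:

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a Python double through IEEE 754 binary32."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# A leaf point near the origin, in local (object) space: float precision is fine.
local_x = 1.5

# A large hierarchical translation, e.g. an object 1.5e11 mm from the sun.
translate_x = 1.5e11

# USD-style: the offset lives in the double-precision xform; float points stay exact.
kept_in_xform = to_f32(local_x) + translate_x

# Naive: bake the world-space position into float32 points, losing sub-ulp detail.
baked = to_f32(local_x + translate_x)

print(kept_in_xform - baked)  # nonzero: baking destroys the fine detail
```

The gap between the two results is on the order of the float32 spacing at 1.5e11 (thousands of millimeters), which is exactly the class of error the xform-on-leaf-geometry design avoids.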

That said, we do want to make USD useful in science and engineering domains as well, and the precision issue has come up before. Attributes in USD are strongly typed, so you cannot just provide double-precision data for the UsdGeomPointBased.points attribute. There are, however, ways we could accommodate this going forward. One way would be to add a point3d[] pointsd attribute to PointBased, similar to how we are adding a quatf[] orientationsf attribute to our PointInstancer schema. Or it might make sense to have different schemas for select types of double-precision geometry.

In that vein, are Mesh primitives what you actually want to be using, or are there more natural (for your domain) geometry representations that UsdGeom currently does not support?

Cheers,
–spiff

Hi @spiff,

Thank you for the reply!

Thank you for clarifying USD’s typing system a bit. Meshes are still the primary geometry type (along with curves) that would need any sort of higher precision. For the point3d[] pointsd data type, is the intention there that if a mesh has that property defined, it will override the point3f[] points attribute? In Houdini for example, the point P attribute can be worked with transparently as either a 64bit or 32bit value. It sounds like the application would need to be aware of this extra attribute. Even so, I think that solution would work for us, as 64bit is still a special case for geometry and is not the majority of our data.

There are some volumetric mesh data types that don’t exist in USD (hexahedral grids and unstructured tetrahedral grids) that we work with and would be nice to have formal definitions for, but currently we deal with those separately through the Mantra / Karma renderer’s ability to read meshes at render time. Specific to the double-precision question, those typically remain 32bit structures, but I could envision cases where 64bits would help there as well: for example, an adaptive mesh simulation of a protoplanetary disk embedded in a larger interstellar molecular cloud, where the scale magnitude changes within a single simulation are quite large.

Thanks, @parker ! In our predecessor to USD, we did allow variable precision for all numeric attribute types, and wound up paying a fairly heavy complexity and performance cost for it in our pipeline. That definitely had something to do with how we provided the support, and I trust that SideFX has done a great job with it! But with USD we’d rather be more judicious about where we allow those choices.

Btw, support for a UsdGeomPointBased-derived TetMesh schema is slated for an upcoming USD release.


Hi @spiff, great to hear about tet meshes being officially supported at some point! They’re obviously useful for more domains than just cosmological simulations.

Hi, has there been any more discussion on this point internally?

Also, in regards to the hard upper scale limit on Xformables at ~6.95e12, is this something that can be worked around / addressed somehow?

Hi @parker , not yet, but it is queued for discussion. I’ll post an update if we get clarity on what might make the most sense.

To your question about hard upper scale limits… that’s definitely in the rendering system. I don’t recall whether the Hydra “internals” clamp to 32 bit precision (I doubt it), but both Storm and Prman limit to 32 bits for both geometry and transforms, and I wouldn’t be surprised if the Houdini renderers did, as well; I think it’s the norm for VFX and game renderers, for performance and scalability. I thought there was a way to enable 64 bit transforms for Storm, but I can’t find it atm.

OK, so the env var is HD_ENABLE_DOUBLE_MATRIX, although I’m advised that Metal doesn’t support the feature, so it won’t have any effect in Storm on a Mac.
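For anyone following along, a sketch of how one might enable it when launching usdview (the scene filename is just a placeholder):

```shell
# Enable double-precision transform matrices in Storm.
# No effect with the Metal backend on macOS; "large_scene.usda" is a placeholder.
export HD_ENABLE_DOUBLE_MATRIX=1
usdview large_scene.usda
```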

Which also makes me want to ask… what level of double support is really enough for your needs? Is it enough that USD can carry the precision when needed, or do you require Hydra renderers to support double-precision meshes, as the latter seems pretty unlikely?

Hi @spiff ,

Thank you for the replies.

I can see why most renderers would limit themselves to 32bits, although I’m fairly sure SideFX’s Mantra, at the very least, supported 64bit transform matrices.

For rendering, sure, there aren’t a lot of specific reasons to keep geometry bits at 64bit, although I can imagine potential trouble with things like LIDAR scans of large regions.

For our specific production needs where precision matters, we are visualizing large astronomical phenomena. When using Houdini with native geometry and rendering, the addition of 64bit support for point attributes was a huge productivity boost, since we no longer had to manage our scene-level transforms so extremely carefully. We also piggyback off of a NASA / JPL utility called SPICE to manage our global transforms, so we can dynamically scale things as needed. But if, say, a graphic sphere that encompasses the solar system were scaled in meters to be visible behind a spacecraft, it would suddenly disappear given the transform limit in Hydra. In short, we have an extremely small staff and can’t afford the time to set up a system to always carefully manage scales; 64bit transforms are an instant win in that respect.

We also receive datasets where the values are natively 64bits, and it would be useful to be able to store everything in a usable way at its native precision in USD.

So to answer your specific question: simply providing the ability to store positions natively as 64bit in USD (passed into the renderer as 32bit values), along with renderer support for large transforms, should be good enough for us.

Having said that though, I’m not sure why, even with 32bit precision, you can’t have objects larger than the ~6.95e12 limit I posted above? Maybe I’m misunderstanding the details of the implementation, but floats can represent magnitudes up to roughly 3.4e38 without trouble, aside from the obvious loss in precision.

I’m reminded of the old fable of Bill Gates saying “640K ought to be enough for anybody”…

PS: thanks for the env var, unfortunately usdview didn’t seem to enjoy having it enabled:

ERROR: Usdview encountered an error while rendering.
        Error in 'pxrInternal_v0_23__pxrReserved__::HdSt_DrawBatch::_GetDrawingProgram' at line 280 in file /home/prisms/builder-new/WeeklyDevToolsHEAD/dev_tools/src/usd/usd-23.08/USD-py3.10/pxr/imaging/hdSt/drawBatch.cpp : 'Failed to compile shader for prim /sphere1.'
        Error in 'pxrInternal_v0_23__pxrReserved__::HdSt_DrawBatch::_GetDrawingProgram' at line 299 in file /home/prisms/builder-new/WeeklyDevToolsHEAD/dev_tools/src/usd/usd-23.08/USD-py3.10/pxr/imaging/hdSt/drawBatch.cpp : 'Failed verification: ' res ' -- Failed to compile with fallback material network'

Thanks, @parker - that context is all helpful!

You’re right, and maybe this question would be best asked in the “Hydra” category… my only guess is that you may be running afoul of far clip-plane defaults, though my inspection of the usdview code indicates you’d run into problems way before then, as the default farClip distance is just 2e6, to work around older NVidia architecture limitations!

This is surprising, as we do seem to have unit tests enabled that test this setting. If you’re able to construct a simple test scene that runs afoul of the compile errors, would you be willing to file a GitHub Issue? Seems important to address, but we do not run with the setting enabled in our productions, so it is quite possible something could atrophy.

Adding some notes from an internal discussion about this, for further discussion towards an eventual solution… all thoughts welcome!

  • If we were to make a point3d[] pointsd optional sibling attribute in UsdGeomPointBased, would double-precision versions of velocities and accelerations also be required to encode, e.g., topologically-varying animation at cosmological scales?
  • The super-super-class of PointBased is UsdGeomBoundable, which is where float3[] extent is defined; this is how UsdGeom computes and stores object-space extents for geometry, used for culling, camera-framing, etc. If some double-precision geometry were to exceed the magnitude of what can be stored in a float, what would we expect its extent to be? It would be a more involved enterprise, we think, to allow for a double3 extentd encoding as well.
  • The only alternative that would faithfully represent double-precision extents would be to recreate the entire class hierarchy from Boundable down, as BoundableD, etc., which would be a pretty big addition to the system and its clients.

I realized belatedly that we already have the “insufficiently precisioned extent” problem in UsdGeom, as the geometry properties for the intrinsic gprims (Sphere, Cube, Cone, Cylinder, Capsule) are already double-precision. Unlikely in VFX, but if a cosmological dataset used units that necessitated a Sphere of radius greater than roughly 1.7e+38, we’d already be unable to bound it. Of course, all of our rendering pipelines would also have other problems with geometry values outside of float range, and compensating in Xformable transforms would not help.
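That edge is easy to poke at in pure Python: struct raises OverflowError when a value exceeds binary32 range, so corner values near 1.8e38 still fit while the box width 2r does not (the radius value here is just an illustration):

```python
import struct

def fits_in_float32(x: float) -> bool:
    """Check whether a double can be stored as IEEE 754 binary32 without overflowing."""
    try:
        struct.pack("<f", x)
        return True
    except OverflowError:
        return False

radius = 1.8e38  # hypothetical double-precision Sphere radius

print(fits_in_float32(radius))      # True: corners of the extent are representable
print(fits_in_float32(2 * radius))  # False: the box width 2r exceeds FLT_MAX (~3.4e38)
```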

Hi @spiff ,

Although in theory something could be bigger than your limits, the visible universe itself is estimated to be roughly 9e29 mm wide, so it is within the 32bit limit!

A system that supports 64bit transforms + point positions + velocities should satisfy our requirements as even 32bit bounds are large enough to support the units, unless, for example, one uses picometers to measure the visible universe… and I should quote myself quoting Bill Gates above when mentioning this!
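The back-of-envelope check in Python (values approximate, light-year length rounded):

```python
LY_IN_MM = 9.4607e18           # one light-year is about 9.4607e18 mm
universe_diameter_ly = 9.3e10  # observable universe is roughly 93 billion light-years

diameter_mm = universe_diameter_ly * LY_IN_MM
FLOAT32_MAX = 3.4028235e38

print(f"{diameter_mm:.2e}")       # about 8.80e+29 mm
print(diameter_mm < FLOAT32_MAX)  # True: the magnitude fits; precision is the real issue
```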

I’ll have to think on the points you made, as it is certainly a tricky issue to address.

Bounding the whole thing may not be a problem, but you’d run into inaccuracies in combining extents long before you get to that size. For example, for the number 1 billion represented in IEEE 32 bit encoding, the next closest representable number is 64 units away, and the spacing grows as the exponent climbs. Gemini tells me Jupiter’s radius is 7e10 millimeters, so depending on your needs for extents, there’d be noticeable imprecision in bounding even single (small on the scale of stars!) celestial objects if you needed to measure in millimeters (is that common?).
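That spacing is easy to verify with a pure-Python sketch, bit-twiddling the binary32 encoding via struct:

```python
import struct

def float32_ulp(x: float) -> float:
    """Gap between float32(x) and the next-larger representable binary32 value."""
    packed = struct.pack("<f", x)
    f32 = struct.unpack("<f", packed)[0]  # x rounded to binary32
    bits = struct.unpack("<I", packed)[0]
    next_up = struct.unpack("<f", struct.pack("<I", bits + 1))[0]
    return next_up - f32

print(float32_ulp(1.0e9))   # 64.0: neighboring float32 values are 64 apart at 1 billion
print(float32_ulp(7.0e10))  # 8192.0: ~8 m gaps near Jupiter's radius measured in mm
```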

Thanks @spiff for pointing out that potential issue. It’s not common but I was just presenting an extreme case by using millimeters… but your point about combining extents would become an issue in meter scales as well!