Thanks spiff, yes, absolutely.
Requirements
So, based on my initial post: when rendering with motion blur, lighters need to be able to control the number of samples required for good motion blur, with a higher count on specific shots and fewer on shots where that's not necessary. And they need to control it globally, with per-shape overrides.
Per-shape overrides are important so that (actual movie example use case:) a bee in full screen can have high-speed flapping wings (33 motion blur samples) next to a heavy fur body that barely moves in camera (3 motion blur samples are enough).
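To give an idea of what a per-shape override can look like in USD today, here's a rough sketch using UsdGeomMotionAPI's motion:nonlinearSampleCount (assuming a recent-enough USD; paths and values are made up). It's only an analogue, but it's the kind of per-prim sample-count control I mean:

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateInMemory()
bee   = UsdGeom.Xform.Define(stage, "/bee")        # hypothetical asset
wings = UsdGeom.Mesh.Define(stage, "/bee/wings")
body  = UsdGeom.Mesh.Define(stage, "/bee/body")

# Creature-wide default (the attribute is meant to resolve down namespace)...
UsdGeom.MotionAPI.Apply(bee.GetPrim()).CreateNonlinearSampleCountAttr(3)
# ...with a per-shape override for the fast-flapping wings.
UsdGeom.MotionAPI.Apply(wings.GetPrim()).CreateNonlinearSampleCountAttr(33)
```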
Linear Interpolation limitations
With linear interpolation, as described earlier, you can either:
- bake a specific number of subsamples (say 33) on every creature for every shot, so the data is always there
  - produces too much data
  - makes baking tasks longer (e.g. creatures with wardrobe pieces, hair, etc.)
- bake a low number of subsamples by default, and bake more when needed
  - re-baking triggers reprocessing of the data in upstream departments
  - longer iterations and waiting times
This is like having a polygonal mesh and publishing it with subdivision levels baked into the vertices: when more vertices are required, you have to request a republish of the model with more subdiv levels.
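To make that trade-off concrete, here's a minimal sketch (made-up path, placeholder points) of what the first option means: 33 subframe samples baked into a points attribute for a single frame, published whether or not a given shot actually needs them:

```python
from pxr import Usd, UsdGeom, Gf, Vt

stage = Usd.Stage.CreateInMemory()
wing = UsdGeom.Mesh.Define(stage, "/bee/wing")   # hypothetical path
points_attr = wing.GetPointsAttr()

frame, subsamples = 1001, 33                     # density decided at publish time
for i in range(subsamples):
    t = frame + i / float(subsamples - 1)        # subframe times 1001.0 .. 1002.0
    # placeholder deformation; in production this comes from the rig / sim cache
    pts = Vt.Vec3fArray([Gf.Vec3f(0, 0, 0), Gf.Vec3f(1, 0, 0), Gf.Vec3f(0, 1, 0)])
    points_attr.Set(pts, Usd.TimeCode(t))

print(points_attr.GetNumTimeSamples())           # 33 samples baked for one frame
```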
New Catmull-Rom interpolation
To avoid baking all subdiv levels into the vertices, today we can use Subdiv attributes: the settings and the amount of subdivision used to enrich a mesh with more information can be driven by camera distance, tessellation requirements, etc., and are not encoded in the data itself; subdivision is an operation executed “on the fly”.
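For reference, this is roughly what that looks like on a mesh today (path is made up): only the control cage and the scheme token get authored, and the refined surface is generated by each consumer, e.g. via OpenSubdiv, at whatever level it needs:

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateInMemory()
body = UsdGeom.Mesh.Define(stage, "/asset/body")   # hypothetical path

# Only the control cage plus the scheme token are published; the refinement
# level is chosen by the consumer at evaluation time, not baked into the data.
body.CreateSubdivisionSchemeAttr(UsdGeom.Tokens.catmullClark)
```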
Having a similar approach for timesampled primvars, by introducing a different “interpolation” (Catmull-Rom), would make it easier for artists to request the better/smoother interpolation when needed.
Catmull-Rom interpolation for timesampled primvars would provide more “detail” (smoother motion) for the motion-blurred geometry itself, without necessarily requiring a rebake.
And just like a Subdiv mesh, the data would be “consumed” with a different interpolation, while being baked with only the “control” data.
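To illustrate what such an on-the-fly evaluation would do with per-frame samples, here's a minimal sketch of a uniform Catmull-Rom evaluation at a subframe shutter time (the actual basis, parameterization and boundary handling of a real proposal might well differ):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom value between p1 and p2 at parameter t in [0, 1]."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)

# Per-frame authored samples (e.g. one coordinate of one point), no subsamples baked:
samples = {1000.0: 0.0, 1001.0: 1.0, 1002.0: 4.0, 1003.0: 9.0}
p0, p1, p2, p3 = (samples[k] for k in sorted(samples))

# A renderer asks for a shutter time of 1001.4:
t = (1001.4 - 1001.0) / (1002.0 - 1001.0)
print(catmull_rom(p0, p1, p2, p3, t))   # smooth value, ~1.96
print(p1 + t * (p2 - p1))               # piecewise-linear value, ~2.2
```

The point is that the smoother value comes purely from the four neighbouring frame samples that are already baked; no extra subsamples need to exist in the file.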
Should this be done in delegates/renderers or directly in OpenUSD?
Having subdivision be an operation that the renderer takes care of means that every time subdivision is required outside of a specific renderer, it needs to be evaluated in the same way the renderer would do it (yes, OpenSubdiv helps here).
Say you need to extract a mesh from USD for a water sim, or you need to dress a terrain with assets: you need to do it on the subdivided mesh.
The same applies to different ways of interpolating timesampled primvars.
If it is done in the delegates/renderers, then every other consumer has to re-implement the interpolation to reproduce the same results.
If OpenUSD can internally provide data interpolated in different ways (just like it does for linear interpolation), the result will be the same everywhere, whether that is in a “rebaking” process that freezes the interpolation, or in all delegates/renderers.
And as a stage-level setting, every single prim type (meshes, curves, points, transforms, cameras, etc.) would respect the same interpolation method, with the ability to simply choose how many samples to request per prim.
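For what it's worth, OpenUSD already has a stage-level interpolation switch today (held vs. linear), so conceptually this would just be one more mode. A sketch, with the extra token being purely hypothetical:

```python
from pxr import Usd

stage = Usd.Stage.CreateInMemory()

# What exists today: a stage-wide choice between held and linear interpolation.
stage.SetInterpolationType(Usd.InterpolationTypeLinear)

# What the proposal amounts to (hypothetical token, does not exist today):
# stage.SetInterpolationType(Usd.InterpolationTypeCatmullRom)
```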
And frankly, I think that even subdiv meshes should do the same, but that's another story.
Conclusion
Sorry for the long post, and I hope I’ve explained our requirements and use cases (and, in fact, our current implementation in Atlas&Friends).
What do others think about this? Do you have similar issues, or could you? Did you find your own solution? Is rebaking actually not a big deal for you all?
Cheers,
Paolo