New interpolation methods for attributes

Hi,

I’ve noticed that the latest 24.03 release contains a first version of the Ts library, which will (in the future) take care of the Spline Animation proposal.

From the proposal I was mainly interested in one section in particular.

Am I reading the proposal correctly (and not just seeing what I really want in it!) that time-sampled values will be able to benefit from new interpolation methods, or at least that there will be a way to describe time-varying data with something other than linear samples?

We were discussing this internally, and whether we need to look at extending the interpolation types to add a new one that matches what we currently have in-house.

What I’m trying to solve is cases like the following, with deforming meshes.
This is a rotating, deforming mesh with animated vertices (think of waving your arms).

At shutter-open the mesh looks like this:
[image]

and at shutter-close like this:
[image]

When enabling motion blur with only those two samples in the USD file, you’ll end up with this:
[image]

So to get better-looking motion blur, you currently have to add more samples to the USD file; with just one extra subsample you get a better result, but still not a smooth one:
[image]

And to get properly smooth blur, you need to bake even more samples into the USD file:
[image]

For large meshes, extra subsamples can be quite expensive.
So, keeping the 3-sample USD file but using a different interpolation method, we could potentially obtain results similar to the multi-sampled example.
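To make the problem concrete, here is a minimal sketch (plain Python with a made-up circular motion, not any USD API) of why linearly interpolating just two shutter samples flattens a rotation into a straight trail:

```python
import math

def vertex_on_circle(angle_deg, radius=1.0):
    """Position of a vertex rotating on a circle (our made-up motion)."""
    a = math.radians(angle_deg)
    return (radius * math.cos(a), radius * math.sin(a))

# Baked samples at shutter-open (0 degrees) and shutter-close (90 degrees).
p_open = vertex_on_circle(0.0)
p_close = vertex_on_circle(90.0)

for i in range(5):
    t = i / 4.0
    # Linear interpolation: the blur trail is the straight chord...
    lerp = tuple((1 - t) * a + t * b for a, b in zip(p_open, p_close))
    # ...while the true motion stays on the arc.
    true = vertex_on_circle(90.0 * t)
    print(f"t={t:.2f}  linear=({lerp[0]:.3f}, {lerp[1]:.3f})"
          f"  actual=({true[0]:.3f}, {true[1]:.3f})")
```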

Cheers,
Paolo

Hi Paolo, there are two parts to the answer.

The first is that to get the advantages of splines for time-sampled data, you’d need to copy the time series over to the new representation; you could then bias or subdivide the splines as necessary to distribute samples where you want them.

As to the second part, which is generalization of interpolation, please keep an eye out for the next round of refined proposals that will show the current state of play. It would be good to get a read on how the latest revisions map to your needs.


Thank you @nporcino,
looking forward to the new proposals!

But just to set expectations… we had a lot of discussion around making timeSamples fancier, and ultimately decided to try really hard to keep them simple and efficient, and also to keep splines as simple as possible (e.g. with regard to allowable datatypes).

I understand.

I do agree that it potentially adds extra complexity, but in a way you might look at it like the “complexity” setting in RenderParams: if needed, you can change how you handle timeSamples to achieve better quality.

You can add more detail to a mesh with subdivision, or you can re-bake the mesh with more data.
I think timeSamples are similar: you can add more detail by changing the interpolation (“subdividing” them), or you can re-bake them.
Re-baking, in both cases, adds more work to the pipeline.

OpenUSD and Hydra are going to be used both in viewports, for fast interactive renders, and in offline renders, for better/final quality, and we want to be able to decide on the quality/complexity trade-off; e.g. whether to run subdivision on the geometry or to “subdivide” the timeSamples, if you’ll allow me the analogy.

Thanks, @paoloemilioselva; it’s always worth digging into issues where tradeoffs between rebaking and adding complexity to the runtime are concerned, as there’s almost never a simple, clear answer!

It would be great to have a community-wide discussion on the general ideas involved here before you go to the trouble of writing an official proposal - could you outline what you’re thinking, and then we can debate whether a “core USD” solution or a “Compute/Hydra-based” solution might be more appropriate?

Thanks!


Thanks spiff, yes, absolutely.

Requirements

So, based on my initial post: when rendering with motion blur, lighters need to be able to control the number of samples required for good motion blur, with a higher count on specific shots compared to other shots where that isn’t necessary. And they need to control it globally, with per-shape overrides.
Per-shape overrides are important to be able to have (actual movie use case incoming ->) a bee in fullscreen with high-speed flapping wings (33 motion-blur samples) and a heavy furry body that barely moves in camera (3 motion-blur samples are enough).
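As a sketch of what that control could look like, the global value plus per-shape override might be authored as plain custom attributes. Note that the attribute name motion:blurSampleCount and the resolution helper are hypothetical, not an existing USD schema:

```python
from pxr import Usd, Sdf

stage = Usd.Stage.CreateInMemory()

# Global default on a top-level prim that every shape inherits from.
root = stage.DefinePrim("/World", "Xform")
root.CreateAttribute("motion:blurSampleCount", Sdf.ValueTypeNames.Int).Set(3)

# Per-shape override: the bee's wings want many more samples.
wings = stage.DefinePrim("/World/Bee/Wings", "Mesh")
wings.CreateAttribute("motion:blurSampleCount", Sdf.ValueTypeNames.Int).Set(33)

def resolved_sample_count(prim):
    """Walk up the namespace to find the closest authored override."""
    while prim:
        attr = prim.GetAttribute("motion:blurSampleCount")
        if attr and attr.HasAuthoredValue():
            return attr.Get()
        prim = prim.GetParent()
    return 2  # fall back to shutter-open/close only

print(resolved_sample_count(wings))                          # 33
print(resolved_sample_count(stage.GetPrimAtPath("/World")))  # 3
```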

Linear Interpolation limitations

With linear interpolation, as described earlier, you can either:

  • bake a specific number of subsamples (say 33) on every creature for every shot, so the data is there
    • produces too much data
    • makes baking tasks longer (e.g. creatures with wardrobe pieces, hair, etc.)
  • bake few subsamples by default, and bake more when needed
    • re-baking triggers reprocessing of the data in upstream departments
    • longer iterations and waiting times

This is like having a polygonal mesh and publishing it with subdivision levels baked into the vertices.
When more vertices are required, you have to request a republish of the model with more subdiv levels.

New Catmull-Rom interpolation

To avoid baking all subdiv levels into the vertices, we can today use subdiv attributes, so that the settings and the number of subdivision levels used to enrich a mesh with more information can be based on camera distance, tessellation requirements, etc., and are not encoded in the data itself; it’s an operation executed “on the fly”.
Having a similar approach for time-sampled primvars, introducing a different “interpolation” (Catmull-Rom), would make it easier for artists to request better/smoother interpolation when needed.
Catmull-Rom interpolation for time-sampled primvars would provide more “detail” (smoothness) for the motion-blurred geometry itself, without necessarily needing to rebake.
And, just like a subdiv mesh, the data would be “consumed” with a different interpolation but baked as the “control” data.
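To illustrate the idea, here is a minimal sketch (plain Python, not an existing OpenUSD API) of evaluating baked samples with uniform Catmull-Rom, so that three baked samples can answer queries at many more shutter subsamples:

```python
def catmull_rom(samples, t):
    """Evaluate a uniform Catmull-Rom fit of baked {time: value} samples
    at time t. Endpoint neighbors are clamped; uneven sample spacing is
    ignored for brevity."""
    times = sorted(samples)
    # Find the segment [t1, t2] containing t.
    i = max(j for j, tm in enumerate(times) if tm <= t)
    i = min(i, len(times) - 2)
    t1, t2 = times[i], times[i + 1]
    t0 = times[max(i - 1, 0)]
    t3 = times[min(i + 2, len(times) - 1)]
    p0, p1, p2, p3 = (samples[k] for k in (t0, t1, t2, t3))
    u = (t - t1) / (t2 - t1)
    # Standard uniform Catmull-Rom basis.
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * u
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * u**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * u**3)

# Three baked samples, as in the 3-sample file above...
baked = {0.0: 0.0, 0.5: 1.0, 1.0: 0.0}
# ...queried at nine shutter subsamples, with no rebake needed.
print([round(catmull_rom(baked, i / 8.0), 3) for i in range(9)])
```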

Should this be done in delegates/renderers or directly in OpenUSD?

Having subdivision be an operation that the renderer takes care of means that every time subdivision is required outside of a specific renderer, it needs to be evaluated the same way the renderer would do it (yes, OpenSubdiv helps here).
Say you need to extract the mesh from USD for a water sim, or you need to dress a terrain with assets: you need to do it on the subdivided mesh.
The same applies to different ways of interpolating time-sampled primvars.
If it is done in the delegates/renderers, then different consumers might have to reproduce the same results.
If OpenUSD can internally provide data interpolated in different ways (just as it does for linear interpolation), the result will be the same everywhere, whether that is in a “rebaking” process that freezes the interpolation, or in all delegates/renderers.
And, being a stage-level setting, every single prim type (mesh, curves, points, transforms, cameras, etc.) would respect the same interpolation method, with the ability to choose how many samples to request per prim.
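For example, such a “rebaking” pass that freezes the interpolation might look like the following sketch, which assumes a float-valued UsdAttribute and reuses the hypothetical catmull_rom helper from the sketch above (GetTimeSamples/Get/Set are the real UsdAttribute calls):

```python
def freeze_catmull_rom(attr, subsamples_per_segment=4):
    """Densify an attribute's baked samples in place so that plain linear
    interpolation of the result approximates the Catmull-Rom curve."""
    times = attr.GetTimeSamples()
    baked = {t: attr.Get(t) for t in times}
    for t1, t2 in zip(times, times[1:]):
        for k in range(1, subsamples_per_segment):
            t = t1 + (t2 - t1) * k / subsamples_per_segment
            attr.Set(catmull_rom(baked, t), t)
```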

And frankly, I think that even subdiv meshes should do the same, but this is another story 🙂

Conclusion

Sorry for the long post, and I hope I’ve explained our requirements and use cases (and our actual current implementation in Atlas&Friends).
What do others think about this? Do you have similar issues, or could you? Did you find your own solution? Is rebaking actually not a big deal for you all?

Cheers,
Paolo

Hi @paoloemilioselva, thanks for laying out the case and workflow constraints! I think I’m missing something with the bee example, as I can’t imagine, even with catrom interpolation, getting anything reasonable with “standard” one or two samples per frame, so you’re going to need to bake those attributes more densely anyway, no?

In Presto we have prim metadata that’s inherited down namespace that tells the export/bake process how many samples to export, so that we never need to uniformly super-sample a whole scene. There is a rebake cost associated with that when a lighter/renderer first notices a problem, but as I’ve mentioned before, we are possibly the only studio that rebakes all active shots every night no matter what.

UsdGeomMotionAPI’s motion:nonlinearSampleCount attribute is inherited down namespace also, and informs Hydra and other clients how many samples to compute for geometry properties that are non-linear, such as points with velocities and accelerations, and rotate xformOps. That doesn’t come close to providing the general interpolation control you’re advocating here, but does hint at a pattern we could deploy, if it’s acceptable for this to be a non-core behavior.
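For reference, a short sketch of that pattern, using UsdGeomMotionAPI as documented (worth double-checking the calls against your USD version):

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateInMemory()
bee = UsdGeom.Xform.Define(stage, "/Bee")
wings = UsdGeom.Mesh.Define(stage, "/Bee/Wings")
body = UsdGeom.Mesh.Define(stage, "/Bee/Body")

# A default for the whole bee...
UsdGeom.MotionAPI.Apply(bee.GetPrim()).CreateNonlinearSampleCountAttr(3)
# ...overridden for the fast-flapping wings.
UsdGeom.MotionAPI.Apply(wings.GetPrim()).CreateNonlinearSampleCountAttr(33)

# ComputeNonlinearSampleCount resolves the value, inheriting down namespace.
print(UsdGeom.MotionAPI(body.GetPrim()).ComputeNonlinearSampleCount())   # 3
print(UsdGeom.MotionAPI(wings.GetPrim()).ComputeNonlinearSampleCount())  # 33
```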

Before exploring further, can you clarify one thing for me: when you advocate for “global number of samples, with local per-prim overrides”, are you talking about:

  1. number of adjacent timeSamples to pull into the catrom fit to evaluate any one sample
  2. number of samples that a renderer should be given for a prim’s properties, regardless of how the interpolation is computed

Or both?

Thanks for your reply spiff.

The bee example is meant to explain the granularity you need in terms of how many samples you want to send to the renderer, independently of how you baked them (so, to answer your last question, I’d say it’s option 2).

Yes, with your workflow of rebaking every night there is probably less overhead, even though I can’t stop thinking about how much that costs with 20 shows running at the same time, and how artists manage not to have their creatures changing every morning… O_O (but I can’t compare two different pipelines 😅)

Having a different interpolation for the already-baked samples provides the ability to query an “infinite” number of subsamples where needed, to progressively refine the look of the blur.
I keep pointing at subdiv on meshes because that’s exactly what this compares to.

Yes, there could be situations in which subframe motion has to be captured at baking time. But missing it might not be visually annoying: you are really just going to miss some movement, while the look of the blur stays smooth, because you can refine it by increasing the samples you send to the renderer.

Aside from baking enough samples to capture the movement of the (deforming) wings in a shot like this

[image]

you need to provide a beautifully smooth arc on that blur.

And maybe you can bake just enough samples to capture the motion, say like this

[image]

and with only linear interpolation you are going to get a segmented blur, even if you increase the number of samples you send to the renderer

[image]

whereas with Catmull-Rom interpolation, with the same baked samples in the file, you can query more subsamples to send to the renderer

[image]

This way you decouple how many samples you really have to bake, to capture the nuance of the motion, from the number of samples you have to send to the renderer to get a soft, smooth blur.