Thanks for the answer @spiff. I realize how late to the party I have arrived, sorry about that.
> You don’t control all the DCCs that are contributing layers, and therefore can’t guarantee they’ve done everything properly
Yes, this is very true, but how does the proposal help with that? The API can be omitted just like the layer metadata can, so am I missing something?
> Even today, you can only assume you don’t need to traverse a stage if you 100% trust the asset you are referencing. And the traversing you’d need to do with the new proposal is far easier than with the old, because the information you need to check is already cached for you at the stage level, and easily accessible, vs. needing to construct a UsdPrimCompositionQuery for every prim (not cheap at all).
This is true; however, the advantage of the current layer metadata information is, again, that I can:
- Validate that each layer does have the metadata: it is easy to check, easy to require and document, and there is one fixed place where this information lives.
- The UsdPrimCompositionQuery is expensive, yes, but I don’t need to run it in most cases: if the units match, I don’t have to construct the queries at all. As a pre-step I can easily get all participating layers in the stage and just check their units, so the expensive queries only run when they have to (see the sketch below). With the new approach I always have to traverse the whole stage; there is no early-exit option for me.
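Roughly, the early-exit pre-step I mean (a minimal sketch; it assumes units are expressed via the standard metersPerUnit layer metadata and reads it straight off each layer’s pseudo-root, without composing or traversing anything):

```python
from pxr import Usd, UsdGeom

def layers_with_divergent_units(stage):
    """Return the used layers whose authored metersPerUnit differs from the stage's.

    Cheap pre-step: only layer metadata is inspected, no prim traversal and no
    UsdPrimCompositionQuery. If this returns an empty list we can stop early.
    """
    stage_mpu = UsdGeom.GetStageMetersPerUnit(stage)
    divergent = []
    for layer in stage.GetUsedLayers():
        pseudo_root = layer.pseudoRoot
        # Only consider layers that actually author the metadata.
        if pseudo_root.HasInfo(UsdGeom.Tokens.metersPerUnit):
            layer_mpu = pseudo_root.GetInfo(UsdGeom.Tokens.metersPerUnit)
            if abs(layer_mpu - stage_mpu) > 1e-9:
                divergent.append(layer)
    return divergent

# Hypothetical usage:
# stage = Usd.Stage.Open("shot.usda")
# if not layers_with_divergent_units(stage):
#     pass  # units agree everywhere, nothing more to do
```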
> I’d like to challenge that until proven otherwise? Especially if you continue making the assumption you are making currently, that each asset you reference is already internally self-consistent. Under those assumptions, then I think (but please correct me if I’m wrong here) your work changes from examining the contents of the layer metadata of the layer you are adding a reference to, to:
> - Adding the reference and then examining the UsdTypeInfo of the resulting composed prim to see if there is a UsdLinearMetricsAPI applied
> - For a sub-root reference, if no metrics API is applied, then opening the targeted reference masked to the referenced prim (with no child prims) so that you can discover the coordinate system in which the prim is defined by examining its ancestor prims
>
> Unless there is a compelling reason to do the analysis prior to adding the reference, then in the “root reference” case in an authoring workflow, I don’t see any significant added expense. The sub-root reference case is substantially more expensive, but does not scale with the size of the scene, or even with the amount of data/scene being referenced… only with the depth of the hierarchy you’re referencing into. I would hazard that with a modest number of cores, even that check would be in the hundreds of milliseconds max, commonly. Are there authoring workflows that can’t afford that at reference-adding time?
The traversal cost is something that is bothering me, yes. The traversal itself is usually fast, but querying applied API schemas is not, from what I have seen so far. Maybe I am doing something wrong, but on a larger hierarchy this is expensive and can easily add seconds.
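For concreteness, the kind of per-prim check I am timing looks roughly like this (a sketch only; the "LinearMetricsAPI" schema name is taken from the proposal and does not exist in a shipped USD yet, so I match on the applied-schema name):

```python
from pxr import Usd

# Assumed name of the proposed applied schema; adjust to whatever actually ships.
METRICS_API_NAME = "LinearMetricsAPI"

def prims_with_metrics_api(stage):
    """Walk the entire composed stage and collect prims with the metrics API applied.

    This is the part that worries me: GetAppliedSchemas() has to be consulted
    for every prim, and unlike the layer-metadata check there is no early exit.
    """
    hits = []
    for prim in stage.Traverse():
        # GetAppliedSchemas() returns the applied API schema names on the composed prim.
        if METRICS_API_NAME in prim.GetAppliedSchemas():
            hits.append(prim)
    return hits
```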
> Just to note again, I believe there should be no difference in how much of the stage you need to process, in either case. But perhaps I don’t understand how you are resolving differing units?
I think there is a huge difference when I get a stage and have to check whether there are divergent units on it.
Let me try to write up what I do for units now, and work through how that would change.
Let’s assume the easier case, where there is an already existing stage with fully consistent units and we try to add a reference to a new layer.
Currently this is what I do (very simplified):
- I get the stage root layer metadata and the referenced layer metadata; if the units match, we are done
- If the units don’t match, we run the metrics assembler to resolve the divergences
- I traverse the hierarchy and run a UsdPrimCompositionQuery to find the two divergent layers contributing to that prim (sketched below)
- Based on the units difference, corrections are applied
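Roughly what that composition-query step looks like (a sketch under the assumptions above; it only locates the prims and arcs that pull in the divergent layer, and the actual unit correction is left out):

```python
from pxr import Usd

def find_divergent_arcs(stage, referenced_layer):
    """For each prim, use UsdPrimCompositionQuery to find the composition arcs
    whose layer stack includes the layer with differing units, so corrections
    can be targeted at the right place.

    Only called after the cheap layer-metadata comparison has already told us
    the units differ, so the expensive queries run as rarely as possible.
    """
    divergent = []
    for prim in stage.Traverse():
        query = Usd.PrimCompositionQuery(prim)
        for arc in query.GetCompositionArcs():
            # Each arc targets a node in the prim index; check its layer stack.
            layer_stack = arc.GetTargetNode().layerStack
            if referenced_layer in layer_stack.layers:
                divergent.append((prim, arc))
    return divergent
```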
Now with this new pattern:
- I need to traverse the added layer to find the applied metrics API and see whether there is a divergent API anywhere; this means I can never exit early, the whole layer always has to be traversed
- If the units don’t match, run the metrics assembler
- So instead of running the composition query I could indeed get the applied API and compare it against the parent’s applied API (see the sketch below). This sounds OK, I guess, if we can trust the API values, as those can be authored and mutated by various layers, but I suppose that is fine
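A minimal sketch of that compare-against-parent pass (both the schema name and the metrics:metersPerUnit attribute name are placeholders I made up for illustration, not the proposal’s final spelling):

```python
from pxr import Usd

# Placeholder names for the proposed schema and its attribute; both are assumptions.
METRICS_API_NAME = "LinearMetricsAPI"
METRICS_ATTR_NAME = "metrics:metersPerUnit"

def _metrics_value(prim):
    """Return the prim's authored metrics value, or None if the API is not applied."""
    if METRICS_API_NAME not in prim.GetAppliedSchemas():
        return None
    attr = prim.GetAttribute(METRICS_ATTR_NAME)
    return attr.Get() if attr and attr.HasAuthoredValue() else None

def divergent_against_parent(stage):
    """Full-stage pass: find prims whose metrics value differs from the nearest
    ancestor that carries one. Note there is no early exit; every prim is visited."""
    divergent = []
    for prim in stage.Traverse():
        value = _metrics_value(prim)
        if value is None:
            continue
        # Walk up to the nearest ancestor that also carries the API.
        parent = prim.GetParent()
        while parent and not parent.IsPseudoRoot():
            parent_value = _metrics_value(parent)
            if parent_value is not None:
                if abs(parent_value - value) > 1e-9:
                    divergent.append((prim, parent))
                break
            parent = parent.GetParent()
    return divergent
```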
Now the benefit here would be that I can mutate the metrics attributes on the right layers to indicate that the metrics now match across the whole stage; that is interesting.
So I guess my main issue remains: if we allow this metrics API on non-root prims, the validation part becomes non-trivial and possibly expensive.
Ales