Hi Spiff,
I’ve recently had a similar need to condense large, inter-linked production USD stacks into a more compact and self-contained folder. I was able to achieve a desirable result for the assets I tested, such as the USD Working Group Teapot (which has nested variant sets) and the OpenChessSet pieces. I also tested on some production assets that I can’t share here, but a more complex character with many wardrobe variations should* condense from 517 layers down to 149.
The following shows the condensed/restructured result for the Teapot:
usd-wg/assets/full_assets/Teapot
├── DrawModes.usd
├── geo
│   ├── FancyTeapot.usd
│   └── UtahTeapot.usd
├── Teapot_Geometry.usd
├── Teapot_Materials.usd
├── Teapot_Payload.usd
└── Teapot.usd
Condenses down to:
Assets/usdwg/test.teapot
├── Teapot
│   └── Geometry
│       ├── FancyPorcelainFlowers.usda ({modelVariation=Fancy})
│       └── Utah.usda ({modelVariation=Utah})
└── Teapot.usda
I would like to understand more about what you mean by the difficulty in ‘teas[ing] out the “preserved” contributions from those that are getting flattened’. My baseline assumption is that I can safely ignore “variant” opinions for any prim that is a descendant of a variant; this lets me de-duplicate redundant prims. I then merge all the variant prims back into a single variant set by re-aligning each variant’s prim stack; that’s the “merge_variants” function in the attached “load” notebook. Is that a safe assumption? Or at least a safe-ish assumption?
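For concreteness, here’s a minimal sketch of the kind of re-assembly I mean, with hypothetical names rather than the actual notebook code: each variant’s flattened, de-duplicated prim gets copied back under a freshly authored variant of a single variant set.

```python
from pxr import Sdf

def merge_variants_sketch(stage, target_path, vset_name, flattened):
    """Re-assemble per-variant flattened prims into one variantSet.

    `flattened` maps variant name -> Sdf.Path of the flattened prim
    holding that variant's de-duplicated opinions (assumed to live on
    the stage's current edit-target layer). Hypothetical signature;
    the real merge_variants also re-aligns each variant's prim stack
    before copying.
    """
    prim = stage.DefinePrim(target_path)
    vset = prim.GetVariantSets().AddVariantSet(vset_name)
    src_layer = stage.GetEditTarget().GetLayer()
    for variant, src_path in flattened.items():
        vset.AddVariant(variant)
        vset.SetVariantSelection(variant)
        with vset.GetVariantEditContext():
            # Inside the context the edit target remaps target_path to
            # the spec path within this variant, e.g.
            # /Teapot{modelVariation=Fancy}.
            target = stage.GetEditTarget()
            Sdf.CopySpec(src_layer, src_path,
                         target.GetLayer(), target.MapToSpecPath(target_path))
```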
For additional context, the basic approach I took to achieve this result was to record every possible prim within the composed stage (extract), perform the various merging/pruning graph operations needed to form a plan (transform), and then flatten, merge, and re-link the resulting layers (load). I admit this approach may require a tinfoil hat, but I’m not well-versed enough in the composition engine’s subtleties to restructure the layers as I need them pre-composition. I have a feeling there’s an interesting point somewhere in the middle of scene composition where the condense/restructure could take place, which would avoid this rather brute-force and perhaps excessively analytical approach.
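To make the extract step concrete, here’s a stripped-down version of the sweep, assuming selections are routed to the session layer so the source asset is never modified (the real notebook records composition arcs and spec sites per prim, not just paths):

```python
from pxr import Usd

def record_all_prims(stage, seen=None):
    """Record every prim path reachable under any variant selection by
    flipping selections and re-traversing; exponential in the worst
    case, which is part of why I call the approach brute force."""
    seen = set() if seen is None else seen
    new_paths = []
    for prim in stage.Traverse(Usd.PrimAllPrimsPredicate):
        if prim.GetPath() not in seen:
            seen.add(prim.GetPath())
            new_paths.append(prim.GetPath())
    # For each newly seen prim, sweep its variant sets so prims that
    # only exist under other selections get recorded too.
    for path in new_paths:
        prim = stage.GetPrimAtPath(path)
        if not prim:
            continue  # prim no longer exists under the current selections
        vsets = prim.GetVariantSets()
        for set_name in vsets.GetNames():
            vset = vsets.GetVariantSet(set_name)
            original = vset.GetVariantSelection()
            for variant in vset.GetVariantNames():
                vset.SetVariantSelection(variant)
                record_all_prims(stage, seen)
            vset.SetVariantSelection(original)
    return seen

stage = Usd.Stage.Open("Teapot.usd")
# Author variant selections on the session layer so the asset on disk
# stays untouched.
stage.SetEditTarget(Usd.EditTarget(stage.GetSessionLayer()))
all_paths = record_all_prims(stage)
```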
For excessive context, I need to rework the condense algorithm to discover components and then recursively perform an isolated condense per component kind, re-linking each result into the composite character/environment that instances that component. I may not be able to circle back on that right away, since we only needed the leaf components for now, but I’ll hopefully be testing (in due time) on large production assemblies, the OpenChessSet scene, the Pixar Kitchen, ALab, etc.
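For reference, the discovery half of that should be straightforward via kind metadata; something like the sketch below (the recursive per-component condense and re-link is the part I haven’t built yet):

```python
from pxr import Kind, Usd

def find_components(stage):
    """Collect component-model prims; each would get its own isolated
    condense pass before being re-linked into the composite that
    instances it. Components don't nest under model-hierarchy rules,
    so a flat traversal is enough here (ignoring instancing for
    simplicity)."""
    components = []
    for prim in stage.Traverse():
        kind = Usd.ModelAPI(prim).GetKind()
        if Kind.Registry.IsA(kind, Kind.Tokens.component):
            components.append(prim.GetPath())
    return components
```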
(I hope the journey from .ipynb to .pdf wasn’t too rough, but this is the complete set of code for reference; the extract and load notebooks are the main USD touchpoints, and the transform notebook is the one that may require a tinfoil hat.)
01_condense_extraction.pdf (193.7 KB)
02_condense_transform.pdf (1.3 MB)
03_condense_load.pdf (209.4 KB)