We had an initial chat on the team about this last week, and came up with an ordering of actions we’d like to pursue, if this makes sense to y’all:
- We have some new ideas to pursue on increasing performance in dealing with large numbers of sibling prims and properties, and if they pan out, it would benefit everyone. To this end, can you all get specific about where you’re having problems and what the general patterns are (e.g. using schemas, basic `UsdObject` APIs, low-level `Sdf`)? Also, on the reading/consumption side, we believe the only thing that should be slow for large numbers of properties is enumerating properties (e.g. `GetAttributes()`, `GetPropertiesInNamespace()`) - if that’s not your experience, can you describe what’s demonstrably scaling poorly?
- Adding a metadata type `dictionary[]` that would be embeddable inside other `dictionary` metadata. Seems like this would go fairly far in facilitating the approach @CalvinGu is taking, which still suffers from the inability to easily describe/encode the structure of the data, but otherwise is pretty flexible.
- Provide `dictionary`- and `dictionary[]`-valued attributes. Our understanding currently is that the main thing this provides over option 2 is just the ease of creating such attributes, since new dictionary-valued metadatum fields need to be declared in `plugInfo.json`s, and it’s not awesome for discoverability that Calvin is needing to stick all complex data into `customData` right now… I almost hate myself for suggesting this, but even having just a single canonical piece of metadata (like `structuredData` or something) could still be “organized” into schemas by adding `opaque`-type attributes whose purpose is to name and host the dictionary that contains the complex data. Dictionary-valued attributes are the most complicated solution on the table because they bring in timeSamples and dictionary-style value resolution through them… not to mention code sites that may get tripped up by having a new datatype to handle.
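To make the options a bit more concrete, here’s a rough `.usda` sketch. The prim name, field names, and nested keys (`Robot`, `rigInfo`, `structuredData`, etc.) are invented for illustration; the second block in particular is speculative syntax for a feature that doesn’t exist yet, not anything USD currently accepts:

```usda
#usda 1.0

def Xform "Robot"
{
    # Status quo: complex data crammed into customData (keys are hypothetical).
    # customData is a dictionary-valued metadatum that already exists today,
    # but nothing describes/enforces the structure of what goes inside it.
    customData = {
        dictionary rigInfo = {
            string solver = "ik"
            int jointCount = 42
        }
    }
}
```

Under option 2, a nested `dictionary[]` (a list of dictionaries embeddable inside other `dictionary` metadata) might hypothetically let the same data carry repeated, ordered records - e.g. one entry per joint - rather than forcing everything into uniquely named sub-dictionaries. Option 3 would move data like this out of metadata and into first-class attributes, which is what drags timeSamples and attribute-style value resolution into the picture.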
Eager to hear your thoughts - thanks!
–spiff