Apologies for the multiple posts on this forum – please let me know if it’s an issue.
I’m still looking into the implementation of SdfPath. When a path is deleted, both its primPart and its propPart are released. The primPart is reference-counted: the Sdf_PathNodeHandleImpl destructor calls _DecRef() if the handle is valid. Skipping ahead, if the reference count of the managed object drops to 1, _Destroy is called, which in turn invokes the overridden delete operator for Sdf_PrimPathNode. This accomplishes two things:
It calls Sdf_PathPrimPartPool::Free(), which, as I understand it, places the handle back on a free list.
It triggers the destructor of the instance (~Sdf_PrimPathNode()), which in turn calls _Remove. That function seems responsible for removing the node from the map. (A simplified standalone sketch of this teardown follows below.)
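To make sure I'm reading this right, here is a tiny standalone sketch of that two-step teardown as I understand it, using hypothetical stand-ins (PoolAllocator, Node), not the actual USD classes:

#include <cstdio>
#include <cstdlib>
#include <set>

// Hypothetical stand-in for Sdf_PathPrimPartPool.
struct PoolAllocator {
    static void *Allocate(std::size_t n) { std::puts("pool: allocate"); return std::malloc(n); }
    static void Free(void *p) { std::puts("pool: free"); std::free(p); }
};

// Hypothetical stand-in for Sdf_PrimPathNode.
struct Node {
    static std::set<Node *> table; // stand-in for the map in _MapAndMutex

    static void *operator new(std::size_t n) { return PoolAllocator::Allocate(n); }
    static void operator delete(void *p) { PoolAllocator::Free(p); } // step 1: storage back to the free list

    Node() { table.insert(this); }
    ~Node() { table.erase(this); std::puts("removed from table"); } // step 2: like _Remove
};

std::set<Node *> Node::table;

int main() {
    Node *n = new Node;
    delete n; // the destructor (table removal) runs first, then the overridden operator delete (pool free)
    return 0;
}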
So far, it all makes sense. However, I’m having an issue with propPart. Since propPart elements are not reference-counted, I can’t find where their instances are actually destroyed. In my debugging, ~Sdf_PropPartPathNode never seems to be called. And since the map in _MapAndMutex only holds PoolHandle values, I’m wondering how their destruction is handled.
Given the thoroughness of the code design, I’m likely missing something and would appreciate any help. Could it be that the method should be modified to:
inline void _DecRef() const {
    if (Counted) {
        intrusive_ptr_release(get());
    }
    else {
        get()->_Destroy(); // _Destroy public for the sake of the test
    }
}
Any insights would be helpful. -ms
PS: is it better to post such a question here, or is it more appropriate for the GitHub repo?
Hey Mark – currently we’ve chosen to make property nodes immortal. We’ve made this choice based on empirical evidence that the number of unique property nodes we tend to see is vastly smaller than the number of prim nodes, and that if we reference-count them the same way that we do prim nodes, we see thrashing and poor performance, since calling code doesn’t typically retain property paths in USD.
Long term we’ll want to find a better solution here, but this is how things stand currently.
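To add a bit of concreteness: since the Counted template flag is false for property parts, _DecRef deliberately does nothing there, so those nodes are simply never reclaimed. A minimal standalone sketch of the pattern (hypothetical Handle/Node types, not the actual code):

#include <atomic>
#include <cstdio>

struct Node {
    mutable std::atomic<int> refCount{0};
};

template <bool Counted>
struct Handle {
    Node *node = nullptr;

    void _IncRef() const {
        if (Counted) node->refCount.fetch_add(1);
    }
    void _DecRef() const {
        if (Counted) {
            // Prim parts: the last release destroys the node.
            if (node->refCount.fetch_sub(1) == 1) {
                std::puts("destroying node");
                delete node;
            }
        }
        // Property parts (Counted == false): intentionally a no-op,
        // so the node lives for the life of the process, i.e. it is immortal.
    }
};

int main() {
    Handle<true> prim{new Node};
    prim._IncRef();
    prim._DecRef(); // destroyed here

    Handle<false> prop{new Node};
    prop._IncRef();
    prop._DecRef(); // intentionally never freed: the node stays alive
    return 0;
}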
I understand that continuously loading new files could be impacted by this approach, though your point suggests that properties are largely shared across scenes and thus reused. Still, property names and parent-child relationships can vary from scene to scene.
Is there a performance concern with deleting the nodes? While I see your point, the idea of leaving objects in memory, never destroying them even on exit (assuming the deletion cost is minimal), is unsettling.
We’ll leave it at that for the time being, then. Thanks a lot.
BTW, this leads me to a question about scale, something you hinted at in your response. In typical, or even extreme, cases, how many prim nodes are involved, and what’s the average ratio between prim and prop nodes? My experience with complex scenes at large studios is limited, but I’d guess that even the most intricate scenes have at most around 100,000 paths. Is that a significant underestimate? Are there scenarios where a scene contains millions of paths?
The reason there are usually dramatically fewer property nodes than prim nodes is that property nodes begin their own new prefix tree. Consider a scene with 100,000 Mesh prims. Each will have its own unique prim path, so there will be 100,000 prim path nodes. But even though each mesh has a .points attribute, there will be exactly one .points property path node. This is because each unique attribute is identified by an SdfPath that pairs the unique Mesh prim path node with the single .points property path node.
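You can actually see that split with the public API. The node sharing itself isn’t directly observable, but the prim-part/property-part pairing is (a quick sketch, assuming a USD build to compile against):

#include <pxr/usd/sdf/path.h>
#include <iostream>

PXR_NAMESPACE_USING_DIRECTIVE

int main() {
    SdfPath a("/Set/Mesh_00000.points");
    SdfPath b("/Set/Mesh_99999.points");

    // Two unique prim parts...
    std::cout << (a.GetPrimPath() != b.GetPrimPath()) << "\n";   // prints 1
    // ...each paired with one and the same "points" property name.
    std::cout << (a.GetNameToken() == b.GetNameToken()) << "\n"; // prints 1
    return 0;
}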
Most properties are analogous to this case. We typically see a small fixed-size set of unique property names, so this is why the current arrangement of immortal property nodes hasn’t been a problem yet. That said I share your unsettled feeling about it and as I mentioned, long term we’ll probably want a better solution.
Regarding scale, we routinely have scenes with millions of prims. Going back to the film “Coco” from 7 years ago, the Santa Cecilia town set has ~1.6 million prims composed over ~3000 layers. USD native instancing lets us push into the 10s of millions of effective prims, and with UsdGeom point instancing we get to 100s of millions of objects.
Also, just a bit more under-the-hood context on your question about overhead… there is always some overhead to allocating/deallocating, and it goes up in a highly multi-threaded context, which is our common case.
Secondly, if you consider a single attribute query from a client (like pointsAttr.Get()), the number of property paths that the USD engine needs to create dynamically depends on the number of composition arcs (not including subLayers) that contribute to the prim hosting the property. In our pipeline that’s typically somewhere between a half-dozen and a dozen, but in some cases it can be much, much more!
So to render a scene with 100K prims in it, we’d be making tens to hundreds of millions of property paths on the fly, and for a scene with a million prims, it’s easily in the billions.
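To unpack that arithmetic a little (illustrative numbers, not measurements): 100,000 prims × ~10 contributing arcs × a few dozen attributes queried per prim already lands in the tens of millions, and re-querying over the course of a render multiplies that further.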
I’m super impressed. I hope most of those layers were generated procedurally. Nonetheless, it’s an incredible feat. And yes, I now understand your explanation about properties being a collection of standard names shared across files, like .color, .radius, and .points. I appreciate your detailed response.
It might sound like a lot, but with 300 artists, 3000 layers is just 10 layers per artist. It’s not as simple as that, of course! But our filmmakers are great at building lush worlds. If you blur your eyes and wave your hands, the 3000 number roughly corresponds to the number of unique objects in a scene (modulo variation by variants).