I have been experimenting with the USD Python API on a proof-of-concept system in which I populate an in-memory USD stage from two different data sources, and while inserting the data I'd like to render that stage. The renderer could be as simple as UsdView or as complex as Omniverse; it doesn't matter, as long as it renders.
My main client, in which I create the in-memory USD stage, pulls geometry data from two live systems. These live systems collect data and put it in a queue from which it can be pulled; after the data is pulled, the live system deletes it. The data could be any type of geometry, e.g. point clouds, triangular meshes, or B-spline surfaces, and the live system tells me what kind of geometry it is sending. Unfortunately, the data is not in USD format, so it needs to be converted to the USD spec when the main client receives it. Apart from the geometry, the live system also provides metadata, and this data is expected to be added as attributes on the geometry prims.
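To give an idea of the conversion step, here is a simplified sketch of how one pulled triangle mesh plus its metadata could become a prim (the field names on the pulled item are placeholders, not the live system's actual API):

```python
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateInMemory()

def add_mesh(stage, path, item):
    # item is a placeholder dict: points, face counts/indices, and metadata.
    mesh = UsdGeom.Mesh.Define(stage, path)
    mesh.CreatePointsAttr(item["points"])                  # [(x, y, z), ...]
    mesh.CreateFaceVertexCountsAttr(item["face_counts"])   # e.g. [3, 3, ...]
    mesh.CreateFaceVertexIndicesAttr(item["face_indices"])
    # Metadata from the live system, stored as custom attributes on the prim.
    prim = mesh.GetPrim()
    for key, value in item["metadata"].items():
        prim.CreateAttribute(f"live:{key}", Sdf.ValueTypeNames.String).Set(str(value))
    return mesh
```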
The final USD file becomes very big; some of the USD files I generated were up to 1 GB.
In my case, there are multiple live systems to which my main client connects. With a single live system connection, stage generation is very fast, but when I add the second live system, doubling the amount of data, scene generation slows down a lot and rendering performance suffers as well.
I am looking for tips and tricks on how to improve the speed of the stage generation. I am running my experiment with the USD Python API at this time. I was thinking of using SdfChangeBlock to push multiple prims to the stage at once, e.g. call the pull method 10 times per live system and author the resulting 20 prims (10 + 10 from the two live systems, with the metadata as attributes) inside a single Sdf.ChangeBlock() block, roughly as sketched below.
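Something along these lines (a rough sketch; live1, live2 and their pull() method stand in for the actual live-system clients, and the prim naming is made up):

```python
from pxr import Usd, Sdf

stage = Usd.Stage.CreateInMemory()
layer = stage.GetRootLayer()

def pull_batch(live1, live2, layer, start_index, batch_size=10):
    # Pull 10 items from each live system, then author all 20 prims in one
    # change block so listeners get a single combined change notification.
    items = [live1.pull() for _ in range(batch_size)]
    items += [live2.pull() for _ in range(batch_size)]
    with Sdf.ChangeBlock():
        for i, item in enumerate(items, start=start_index):
            spec = Sdf.CreatePrimInLayer(layer, f"/Root/geom_{i}")
            spec.specifier = Sdf.SpecifierDef
            spec.typeName = "Mesh"
    return start_index + len(items)
```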
Do you have any other suggestions on improving the stage generation under the circumstances described above?
Using a ChangeBlock is usually the go-to recommendation for live editing like this, to reduce the update pressure.
However, if you’re editing really large data, you could do something like putting the heavier data behind a payload that is unloaded by default, while authoring an extent on the prim that payloads it in.
That might give you a workflow speed boost: the data isn’t represented right away, but the user gets a bounding box visualization and can have the data loaded asynchronously from when you are doing the edits. It’s hard to say without knowing more, unfortunately.
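Roughly, that could look something like this (a sketch only; the prim path and extent values are placeholders, and the extent would normally come from your metadata or a precomputed bound):

```python
from pxr import Usd, UsdGeom, Gf, Vt, Kind

stage = Usd.Stage.CreateNew("main.usd")
holder = stage.DefinePrim("/World/Live1", "Xform")

# Keep the heavy data behind a payload so it can stay unloaded by default.
holder.GetPayloads().AddPayload("./live1.usd")

# Give the prim a model kind and an extents hint so viewers can draw a
# bounding box for it while the payload is unloaded.
Usd.ModelAPI(holder).SetKind(Kind.Tokens.component)
UsdGeom.ModelAPI(holder).SetExtentsHint(
    Vt.Vec3fArray([Gf.Vec3f(-10, -10, -10), Gf.Vec3f(10, 10, 10)]))

stage.GetRootLayer().Save()
```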
Thank you very much for your response @dhruvgovil. I really appreciate you.
I think it would be possible to introduce some delay in the presentation of the data, but it would not be practical to wait until the stage is fully generated (that might take hours in my case).
I was thinking of something like this:
Create a main.usd file and open it with the renderer
Create live1.usd and live2.usd and add both as references in main.usd (see the sketch below)
Populate live1.usd and live2.usd on separate threads (Python threads don’t run truly in parallel, but that is not the main concern for now)
I believe Python’s asyncio, or something like uvloop, could be layered on top of the thread structure, but I am not sure it is really necessary at this time.
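For the main.usd setup, I have something like this in mind (a minimal sketch; the prim paths and the defaultPrim name are just what I am using for now):

```python
from pxr import Usd, Sdf

# Create the two live layers with a defaultPrim so they can be referenced
# without an explicit target prim path.
for name in ("live1.usd", "live2.usd"):
    layer = Sdf.Layer.CreateNew(name)
    root = Sdf.CreatePrimInLayer(layer, "/Root")
    root.specifier = Sdf.SpecifierDef
    layer.defaultPrim = "Root"
    layer.Save()

# Create main.usd and pull both live layers in as references.
stage = Usd.Stage.CreateNew("main.usd")
for i, name in enumerate(("live1.usd", "live2.usd"), start=1):
    prim = stage.DefinePrim(f"/World/Live{i}", "Xform")
    prim.GetReferences().AddReference(f"./{name}")
stage.GetRootLayer().Save()
```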
In the above use case, I need to save live1.usd and live2.usd from time to time to make the new data visible in the renderer. I believe the OS notification about the file modification is what triggers the renderer to refresh.
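The per-layer writer loop would then look roughly like this (live.pull() and author_item() are placeholders for the live-system client and the geometry conversion):

```python
from pxr import Sdf

def populate(layer_path, live, flush_every=100):
    layer = Sdf.Layer.FindOrOpen(layer_path)
    count = 0
    while True:
        item = live.pull()               # placeholder: pull one item from the live system
        if item is None:
            break
        author_item(layer, item, count)  # placeholder: convert the item into prim specs
        count += 1
        if count % flush_every == 0:
            layer.Save()                 # flush to disk so the renderer can see the new data
    layer.Save()
```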
The above works okay with some lag. I haven’t implemented SdfChangeBlock yet, to be honest, but I will in a couple of days. The lag could also be due to my computer (or OS); I’d like to rule that out as soon as possible.
I was also wondering if a change from Reference to Payload would make any difference in the above case. Payload gives us more flexibility in loading, but would it make a difference in stage generation?
I am guessing the renderer needs to implement asynchronous loading for Payloads; would that be correct?
Yes, payloads would make a difference in threaded authoring ability, but only if all-but-one are unloaded while the authoring is taking place. That is because UsdStage subscribes to a “many readers, single writer” policy, so you cannot author data to two different layers in different threads simultaneously if those two layers feed the same UsdStage.
Also, please keep in mind that when using SdfChangeBlock it is only considered safe to use the SdfLayer and the various SdfSpec APIs to author data, not the UsdStage, UsdPrim, or UsdProperty APIs.
Once all layers are present/composed on the stage, you can only author to a single layer at a time.
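To make that concrete, here is a minimal sketch of the safe pattern (the layer, prim path, and attribute name are placeholders):

```python
from pxr import Usd, Sdf

stage = Usd.Stage.Open("live1.usd")
layer = stage.GetEditTarget().GetLayer()

# Unsafe inside a change block: stage.DefinePrim(), UsdPrim.CreateAttribute(), etc.
# Safe: stay at the Sdf level until the block closes.
with Sdf.ChangeBlock():
    spec = Sdf.CreatePrimInLayer(layer, "/Root/mesh_0")
    spec.specifier = Sdf.SpecifierDef
    spec.typeName = "Mesh"
    attr = Sdf.AttributeSpec(spec, "live:sourceSystem", Sdf.ValueTypeNames.String)
    attr.default = "live1"
# Change notifications go out once the block is destroyed, so the stage
# recomposes once for the whole batch instead of once per edit.
```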
Thank you for your reply @spiff! It is very helpful, and I really appreciate it.
To be a bit more direct and referring to the use case above, I need to create the main stage (main.usd) and two other stages (live1.usd and live2.usd) before running the renderer. The two other stages need to be added as payloads (not references) to the main stage. It might require some reading and testing to start using the SdfSpec API, but it sounds like this might be part of the solution, and I’ll definitely work on it.
If I understand correctly, I need to open the main stage in the renderer with InitialLoadSet.LoadNone, which seems to be one of the optional arguments to UsdStage.Open. That way the payloads added to the main stage are not loaded up front and can be loaded later on demand. I have never used the InitialLoadSet option, never needed it for sure, but if it could be part of the solution I’ll definitely give it a try.
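i.e., something like this, if I am reading the docs correctly:

```python
from pxr import Usd

# Open the composed stage without loading any payloads.
stage = Usd.Stage.Open("main.usd", Usd.Stage.LoadNone)

# Later, load the payload-bearing prims on demand (prim paths are the ones
# used in the earlier sketch).
for path in ("/World/Live1", "/World/Live2"):
    stage.Load(path)
```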