Hello!
(I did read this thread, but since no updates were posted I’ll try to resuscitate it - sorry about that - and make it more comprehensive.)
I need to load a host-agnostic C library into different DCCs that each ship their own USD build. Concern: USD’s C++ ABI isn’t stable across versions, so two USDs in one process can clash.
The solutions I could think of:
- Per-host adapters: keep the core USD-free (pure C ABI, POD/opaque handles). Ship tiny adapters per host that parse that host’s USD stage and translate it to plain C descriptors/buffer views for my C library; for write-back, either provide output buffers the core writes into or apply a compact delta stream back to the stage.
- Out-of-process USD: run USD in a helper process and communicate via IPC/shared memory to avoid in-process symbol conflicts entirely.
- In-process isolation (I see this as risky): keep USD out of the public API, hide symbols (`-fvisibility=hidden` + version scripts), and on Linux consider `dlmopen`/private namespaces or static linking - but I think this is still fragile and not portable.
So I’d like to ask a few questions after trying to do my homework:
- Is this the recommended set of patterns for avoiding in-process USD version clashes?
- Any guidance on not exposing USD types across the FFI boundary (opaque handles, symbol visibility, version checks)?
- For high-throughput write-back (think many transforms for USD prims), any experiences with adapter-owned output buffers vs. delta streams?
- Pointers to reference implementations showing this separation?
- If embedding USD in the core is unavoidable, are there supported isolation techniques beyond symbol hiding, or is that discouraged?
Also, links to docs/talks would be much appreciated.
Thanks guys for the work you’re doing!