How to access subdivided geometry in HdGp?

Is there a way to access subdivided geometry in HdGp procedurals?
We’re mostly interested in sharing data between procedurals and the render delegate. Is there a good way to avoid calculating it twice?

On a similar note, is it possible to layer procedural evaluation, i.e. access data generated in one procedural from another one?

Thank you

I’m not sure there’s a generic answer to “interested in sharing data between procedurals and render delegate”. You could subdivide the geometry in your procedural and write that back to the Hydra prim and then your render delegate would do no further subdivision. Or you could subdivide the geometry in your procedural, stuff it into a memory buffer somewhere, and replace the Hydra mesh with a custom Hydra prim that advertises the memory location to your delegate. And as for how you subdivide … I suspect for consistency you’d probably want to leverage your renderer’s internal tech (I’m assuming you’re still at AL and talking about Glimpse).
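To make the first option concrete, here’s a toy sketch (plain Python, not real Hydra API) of the “subdivide once in the procedural, write it back” idea: replace the prim’s points with the refined result and mark the subdivision scheme as `none` so a downstream delegate won’t refine again. The refiner, field names, and `bake_subdivision` helper are all illustrative stand-ins, not anything from the USD codebase:

```python
# Toy sketch: bake subdivision into the prim so the delegate skips it.
# All names here are hypothetical; a real implementation would rewrite
# the Hydra mesh's topology/points data sources instead of a dict.

def midpoint_subdivide_line(points):
    """Stand-in refiner: insert midpoints between consecutive values."""
    out = []
    for a, b in zip(points, points[1:]):
        out.append(a)
        out.append((a + b) / 2.0)
    out.append(points[-1])
    return out

def bake_subdivision(prim):
    """Return a copy of the prim with refined points and no further subdiv."""
    refined = midpoint_subdivide_line(prim["points"])
    return {**prim, "points": refined, "subdivisionScheme": "none"}

mesh = {"points": [0.0, 2.0, 4.0], "subdivisionScheme": "catmullClark"}
baked = bake_subdivision(mesh)
print(baked["points"])             # [0.0, 1.0, 2.0, 3.0, 4.0]
print(baked["subdivisionScheme"])  # none
```

The key property is the last line: once the scheme is `none`, any consumer that respects it has no reason to subdivide a second time.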

I’ve not verified that this works, but my mental model for how I’d approach layered procedurals is to use multiple resolving scene indexes, each with a different targetPrimTypeName (see the HdGpGenerativeProceduralResolvingSceneIndex class reference): the first resolver would target hydraGenerativeProcedural prims as it does by default, the second would target hydraGenerativeProcedural_layer2 prims, and so on. This would probably require another upstream scene index to parse some ordering metadata on the prims and re-type them for the resolvers.
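As a sanity check on the layering idea, here’s a toy model (plain Python, not Hydra) where each resolver pass only expands prims whose type matches its target, so the second pass operates on the output of the first. The class and field names are illustrative; only the prim type names mirror the ones above:

```python
# Toy model of chained HdGp-style resolvers (not real Hydra API).
# Each pass expands only prims of its target type, so "layer2"
# procedurals see the scene produced by the first pass.

class ToyResolvingSceneIndex:
    def __init__(self, input_scene, target_prim_type="hydraGenerativeProcedural"):
        self.input_scene = input_scene
        self.target = target_prim_type

    def prims(self):
        out = []
        for prim in self.input_scene.prims():
            if prim["type"] == self.target:
                # "Evaluate" the procedural: emit its generated prims.
                out.extend(prim["generate"]())
            else:
                out.append(prim)
        return out

class ToyScene:
    def __init__(self, prim_list):
        self._prims = prim_list
    def prims(self):
        return list(self._prims)

# A layer-1 procedural generates a mesh; a layer-2 procedural runs after it.
layer1 = {"type": "hydraGenerativeProcedural",
          "generate": lambda: [{"type": "mesh", "name": "grass"}]}
layer2 = {"type": "hydraGenerativeProcedural_layer2",
          "generate": lambda: [{"type": "mesh", "name": "grassClumps"}]}

scene = ToyScene([layer1, layer2])
pass1 = ToyResolvingSceneIndex(scene)  # default target: layer-1 prims
pass2 = ToyResolvingSceneIndex(pass1, "hydraGenerativeProcedural_layer2")

print([p["name"] for p in pass2.prims()])  # ['grass', 'grassClumps']
```

Note that the layer-2 procedural is left untouched by the first pass, which is exactly why the re-typing step matters: without distinct type names, both would be consumed by the first resolver and see the same input.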

The docs are pretty clear that, within a given resolver, all the procedurals will see the same input.

Thank you for the reply. That’s consistent with our understanding as well.
We were wondering how displacement would fit into this.
At the moment we’re using a direct connection to the renderer to get subdivided and displaced surfaces for procedurals to operate on, but we would like to avoid such coupling if possible.
I understand that the best way to go would be to have a scene index that subdivides and displaces surfaces before HdGp, right?
Also, how would view-dependent subdivision work in such a scenario? Do you have any experience with that?

I think the answers to what you’re asking here depend heavily on the details of your technology and pipeline. Trying to propose what I think is “right” or “best” (and what makes sense in terms of the split of responsibility between the components of the renderer vs the procedurals) isn’t really possible without knowing a lot more about the general architecture of your renderer.

Sorry, I know that’s not immediately helpful. I could propose a hypothetical pipeline, but I don’t want it to be a wasted effort if it’s built on assumptions that are incompatible with the capabilities/interfaces of your internal tech.