Updating nested asset paths - recommended workflow

I wanted to ask about updating nested asset paths. I initially thought this was up to an Asset Resolver to help pick and update paths at resolve time, but after a discussion today with some people I’m not so sure anymore.


Consider this particular case: a versioned pipeline where each output generates a new, unique version. There’s no hero/master/latest file that gets replaced; a new version is published each time. It’s a pull pipeline where everyone picks when to update what they’ve loaded.

I create an asset with a model and a look.

    - look.usd
    - model.usd

Each of these is versioned.

  • So modeling pushes v001, then v002, then v003.
  • Same for the look department, pushes their v001, v002, v003, v004, etc.
    - look_v002.usd
    - model_v003.usd

Now the layout or animation department are bringing this asset v006 into a shot.
All of these are also versioned - I’ve just omitted version numbers for readability

    - layout.usd
        - references: asset_v006.usd

The lighting department works on the lighting for the shot downstream:

    - lighting.usd
    - layout.usd
        - references: asset_v006.usd

Now the asset’s model or look gets updated so we might be seeing updates to the version of the asset, e.g. an asset_v007.usd or higher. I would expect the lighting department to be able to say “update the asset version” without having to go to the layout department and update it there to override its authored reference opinion.

    - lighting.usd
        - update the asset reference to latest, e.g. asset_v007.usd
    - layout.usd
        - references: asset_v006.usd

In this example the ‘concept’ of replacing a path refers to references, but I’m looking to also support it for payloads and sublayers. I just want to say “update” a nested version that was loaded elsewhere.

The same issue also occurs with e.g. nested referencing of an asset into an assembly which gets loaded into a layout into a shot, etc. Say the “lamp” asset on the “cabinet” assembly is loaded into the layout and the lighting department wants to update the version of the lamp. How do they do that? (It’s essentially the exact same question, just more nesting.)


The question now is: “What are the options to update this nested reference’s asset path?”, preferably with some reasoning about why or why not to use a specific approach. What is the recommended workflow?

So: how do I update and pick which asset version I want, without having to load a newer version in layout and then updating my loaded shot with the new layout? Basically, avoiding the “recursive”/“nested” rebuilding.


Asset Resolver Remapping

I initially thought this was up to a USD Asset Resolver. In Lighting department the resolver context would say ‘remap asset_v006 → asset_v007’.


Pros:

  • It’s “non-destructive”.


Cons:

  • Not sure whether the lighting department can write this resolver context in a way that makes it clear that lighting ‘updated’ this particular path.
  • Not very explicit from the perspective of the USD data itself.
  • You’d likely remap a particular path to a new path (without context of which prim you’re operating on?), so if the asset was referenced more than once, you can’t set the version for only ONE of the references. You’d always be remapping all of them?
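To make the last point concrete, here is a tiny pure-Python simulation of a remapping resolver (names hypothetical; a real implementation would subclass ArResolver in C++). The remap is keyed only on the path string, so every reference to that path is affected:

```python
# Minimal pure-Python simulation of a version-remapping resolver context.
# VERSION_REMAP stands in for data a lighting-department context might hold.

VERSION_REMAP = {
    # "authored path" -> "remapped path"
    "asset_v006.usd": "asset_v007.usd",
}

def resolve(asset_path: str, remap: dict) -> str:
    """Resolve an asset path, applying any context remapping."""
    return remap.get(asset_path, asset_path)

# Every occurrence of the path is remapped: Resolve() only sees the path
# string, not the prim that referenced it, so you cannot remap just one
# of several references to the same asset.
assert resolve("asset_v006.usd", VERSION_REMAP) == "asset_v007.usd"
assert resolve("asset_v005.usd", VERSION_REMAP) == "asset_v005.usd"
```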

Deleting references, prepending new

We could also delete the existing reference, and prepend a new one. For example:

over "ModelA" (
    delete references = @./ModelPublish/ModelA-v001.usd@</AssetA>
    prepend references = @./ModelPublish/ModelA-v002.usd@</AssetA>
)
{
}


Pros:

  • The layer’s authored opinion clearly shows an explicit removal of the original.


Cons:

  • You can’t really “replace” the original reference. If the original isn’t at the front or the end of the reference list, you can’t insert the replacement at the same index, so the replacement ends up at a different strength than the original, giving unintended results.
  • If at some point the shot.usd file updates its reference to a newer layout.usd that no longer contains ModelA-v001.usd but a newer version, then the “replacement” logic will not delete the original (it no longer exists) and you’ll suddenly have two references to the model: the one you prepended plus the one from layout.

Explicit reference list

The lighting department authors an explicit list override for the references.


Pros:

  • You can replace the path at the correct index, preserving the strength ordering.


Cons:

  • Quite destructive, in that you’re authoring far more changes than just updating the one asset path. You’re basically redefining all references completely.
  • You’re completely overriding all upstream opinions on the references, and even downstream ones if those only prepended/appended, since (I believe) an explicit list opinion is stronger.
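For illustration, such an explicit (non-list-editing) override might look like this (paths and prim names hypothetical):

```usda
over "ModelA" (
    references = [
        @./SetPublish/Set-v003.usd@</Set>,
        @./ModelPublish/ModelA-v002.usd@</AssetA>
    ]
)
{
}
```

Because this is an explicit list opinion, it replaces, rather than edits, the composed references list, which is exactly the destructiveness described above.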

Side notes

Preferably there’s a way to do this in a mostly DCC-agnostic manner. For example, explicit override lists might make more sense in Houdini than in Maya: Houdini Solaris is more procedural in nature, so ‘update to latest’ behavior that swaps only a single path might be more stable there, since reopening the lighting scene could just re-run that Python LOP node. In Maya we might need very different logic to update any remapping.

Of course, a lot of effort could also go into ensuring that whenever a new model is published, all downstream uses are automatically updated and new versions published; however, those recursive updates can easily grow to tons of uses.

In short, I have a feeling I might be ‘missing’ a core USD concept here on how to do this - or there’s just no reasonable recommendation to give for this workflow.


Hi @BigRoyNL , this is a great topic, and I’m eager to hear how others would approach it, since you are definitely not alone.

So firstly, I’ll confirm that any attempt to “override” the references by reauthoring them in shot/sequence layers is doomed to failure; in addition to the problems you noted, anything that’s nested across any arc is utterly untouchable.

There are maybe two exceptions to that, i.e. ways you could approach this without leveraging an ArResolver.

Version VariantSet

In the first pipeline we built for Presto, we attempted to encode asset version control using essentially an “assetVersion” VariantSet. Basically, each time a new version of an asset was pushed, a titular, “table of versions” layer was updated, adding a new Variant to the assetVersion VariantSet it defined (which referenced the newly published version), and it was that ToV file that always got referenced into assemblies and shots. I thought I remembered more discussion of it in usd-interest, but I at least found this post that describes why we moved away from it.


Pros:

  • You actually can select different versions for different instances. Note that we actually found this to be more of an anti-feature than a feature, because “switching all” was what we were already used to, and generally what we wanted. And, given that we set up our model assets with inherits to </_class_MODEL>, it means that any inherited “global broadcast overrides” you do at the shot level may not be meaningful (or may have a different interpretation) for instances of different versions.
  • Version selection is explicit in the scene, and you don’t need an Ar plugin to provide the behavior.
  • Most DCCs have good support for VariantSet selections already.


Cons:

  • In addition to those mentioned in the usd-interest post, it’s limited to “model assets”. In your examples you’re versioning the subLayers of each model independently as well, and there’s no easy way to capture/specify that with variantSets.
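A minimal sketch of what such a “table of versions” layer could look like (file and prim names are hypothetical, not Pixar’s actual encoding):

```usda
#usda 1.0

def "AssetA" (
    variants = {
        string assetVersion = "v003"
    }
    prepend variantSets = "assetVersion"
)
{
    variantSet "assetVersion" = {
        "v001" (prepend references = @./AssetA-v001.usd@</AssetA>) {
        }
        "v002" (prepend references = @./AssetA-v002.usd@</AssetA>) {
        }
        "v003" (prepend references = @./AssetA-v003.usd@</AssetA>) {
        }
    }
}
```

Assemblies and shots reference this table-of-versions layer rather than a specific version file, and switching versions becomes an ordinary variant selection.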

Variable Expressions

You might be able to do pretty general versioning without a resolver by leveraging the recent work on Variable Expressions, though it would impose some possibly unacceptable authoring constraints. I’ll throw it out there anyway… It relies on each versionable asset having an acceptably unique name in your environment, since we’ll be constructing a variable name from it.

So you’d start by authoring all your asset paths like:

references = @`"./ModelPublish/${MODEL_A}.usd"`@

Let’s say you were adding that reference into mySet.usd. Then, in the root layer of mySet (which here is mySet.usd itself), you’d add this to the layer metadata:

    expressionVariables = {
        # Define the version the published set uses
        string MODEL_A = "ModelA-v001"
    }

… and you’d do that for every asset you introduce into that assembly.

Now when we are at the shot-level, and lighting wants to change the consumed version of that nested asset (or any other), it can simply assign a new value to the MODEL_A variable in the shot’s root layer. (you can read in the feature proposal why variables cannot override/compose through a layerStack, but can override values set in “referenced root layers” (including payloads)). I can’t think of a way to indicate that it was the lighting department that made that version selection that doesn’t introduce redundant data encoding.
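For example (a hedged sketch; the variable name and version string are carried over from the earlier example), the shot’s root layer metadata could pin the nested asset to a new version:

```usda
#usda 1.0
(
    expressionVariables = {
        string MODEL_A = "ModelA-v002"
    }
)
```

Because the shot’s root layer is stronger than the referenced mySet.usd root layer for expression variable resolution, this value wins over the one authored in mySet.usd.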


Pros:

  • Works generally (assuming unique asset names), and no Ar plugin required!
  • Gives pretty reasonable “at a glance” discoverability of which versions each shot has updated, and it is also packaged as part of the shot.


Cons:

  • You need to set a lot of variables in the assemblies where you bring in most of your models.
  • It’s a recent feature, and I don’t think most DCCs have great support for it yet, so you’d be rolling your own GUI controllers.
  • While it’s pretty good at facilitating version overrides at the assembly (for nested assemblies!) and shot level, it doesn’t allow you to do “sequence-level” overrides (i.e. sequence layers shared by all the shots in a sequence).
  • As mentioned above, all departments must record their overrides in the same shot root layer… there could be contention/merge conflicts.

Which brings us back to…

Custom ArResolver

This is what we do… again I thought I’d find more discussion of it in usd-interest, but couldn’t. Here’s one thread with a bit more info. Our pipeline is a push pipeline with authored references defaulting to a “stable” pin, but artists can pin assets to any version (including, sweepingly, a “latest” version) at the sequence or shot level, and depending on show-configuration, even the assembly-level. We encode the “pins” as either symlinks in the filesystem, or a sidecar manifest file - this is an encoding that long predated USD and Presto, and benefits from distributed filesystem performance characteristics, though if we were redesigning it today, maybe we would choose to encode it in usd.

But whether it’s a push or pull system probably doesn’t affect the number of overrides a shot accrues too substantially? So if you want your “remapping resolver” to work without any sidecar files or data, we did some work a year or so ago that makes it more robust to encode your “resolver context data” directly in your USD scene, such that when you call RefreshContext() on the ResolverContext associated with your Stage, the Stage will update appropriately if versions have changed (assuming your resolver implements all the needed update logic for the context-based remappings).

You could go simple here, and just say that the context is encoded as a single dictionary in the layer metadata of the root layer, which, as with variable expressions, may prove somewhat limiting. Or you could allow your context to be clever and use the SdfLayer APIs to “walk the nested subLayer stack” of layers rooted at the layer provided to it, accumulating version remappings from all of them.
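Here is a pure-Python sketch of that “walk the nested subLayer stack” idea. A stand-in Layer class replaces SdfLayer, so all names are hypothetical; in a real resolver you would read Sdf.Layer.customLayerData and Sdf.Layer.subLayerPaths. Remappings are accumulated strongest-first, so a stronger layer’s pin wins:

```python
from dataclasses import dataclass, field

# Stand-in for SdfLayer; real code would use customLayerData/subLayerPaths.
@dataclass
class Layer:
    remap: dict = field(default_factory=dict)      # version remappings
    sublayers: list = field(default_factory=list)  # strongest-first

def accumulate_remappings(layer: Layer) -> dict:
    """Depth-first, strongest-first walk; the strongest opinion wins."""
    result = {}
    def visit(l: Layer):
        for src, dst in l.remap.items():
            result.setdefault(src, dst)  # keep the first (strongest) opinion
        for sub in l.sublayers:
            visit(sub)
    visit(layer)
    return result

# Lighting pins asset_v006 -> v008; a weaker department layer pins v007.
lighting = Layer(remap={"asset_v006.usd": "asset_v008.usd"})
fx = Layer(remap={"asset_v006.usd": "asset_v007.usd",
                  "lamp_v001.usd": "lamp_v002.usd"})
root = Layer(sublayers=[lighting, fx])

merged = accumulate_remappings(root)
assert merged == {"asset_v006.usd": "asset_v008.usd",
                  "lamp_v001.usd": "lamp_v002.usd"}
```

The merged dictionary would then be cached on the ResolverContext and consulted in Resolve().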


Pros:

  • Very (most?) general in what you can override.
  • Allows overrides at the sequence and shot level.
  • Also allows each department to make their overrides in their own layer.


Cons:

  • Does not allow you to record overrides at the assembly level, because you can’t very well walk the composed stage to find the references to those things inside your ResolverContext…
  • Results may not fully match expectations when it comes to layer muting on your stage… e.g. you might expect that if you mute a layer, its overrides would no longer be consumed. But the Context won’t know about stage state, so they still will, even after RefreshContext().

OK, if you’re still awake, that’s what I have for now!


Very interesting thread.

I think the ability to override the version of an asset path associated with a specific prim would be desirable for most pipelines. Prior to reading this thread I assumed it was possible for an AR to know about the prim it is resolving a path for, but upon subsequent reading of the AR docs I see that is not the case.

With this approach – if I understand correctly – any such accumulated overrides in sublayers would end up applying to all matching asset paths on that stage, yeah?

When you say “assembly level”, do you mean an override recorded at the “final” shot level for a specific asset instance nested in an assembly? Or are you referring to the assembly publish itself holding instance-specific overrides? Either way, I assume you are confirming here that an AR can’t know anything about the prim it is resolving a path for?

I guess the fallback is that any assembly that needs instance-specific overrides must get its own new publish with that version baked in.

Thanks for the insight!

Correct. You’d have the ability to merge/override them amongst the layers as you see fit, but whatever you come up with would apply uniformly to the whole stage.

Correct, assemblies enter the stage from across a reference/payload on a particular prim, which puts them out of range for an ArResolverContext


Very interesting thread here. Thanks to roy for starting it.

Do you mind me asking a bit about how you would implement the “custom resolver” option?
To give a bit of context: I’m currently working on a resolver for AYON, and I’m breaking my head over setting versions from higher-order layers for a specific asset or sublayer, so that I can say assetA and assetB point at path A, and then tell only assetA to be version 001 from now on.

If I understood your text right, you would have a reference or a sublayer and you would redirect it to a defined place in order to “pin/set” the version: this portion is clear to me.
But I would love to have a function/option where I could say the asset at SdfPath /path/assetA should be version 002 instead of version 001, for example.

I thought about having the SdfPath in the URI path, but that’s not optimal or workable. Then I saw the portion where you suggested putting it in the custom layer data, and that sounds very cool.

With all of that said (sorry for the long text btw, I’m horrible at explaining), might I ask how you would advise doing this via a resolver in a way that is very agnostic?

I would imagine having a key/value array in the layer metadata and then putting the SdfPath at the end of the URI, so that the resolver looks at the layer metadata when resolving and then gets the redirected path. But then, how would I set and access that data in a way that writes it into the right layer?
So that if lighting wants to override a version, it’s actually in the lighting layer.

I hope what I write makes sense. And thanks for reading.

Apologies, let me try to make it more clear: there’s no viable approach I can think of to use an ArResolver for versioning that can robustly take SdfPath (i.e. the location of a referencing prim on the UsdStage) into account. Even if you redundantly burned the scene-path of the referencing prim into the reference itself as a URI argument, you could only use those paths reliably as a key for your resolver-mappings for references added in the scene’s root layerStack. Any references that were added from a referenced asset (what the OP called “nested references”) will alias for all instances of that referenced asset in the scene, so you’d be right back where you started.

ArResolver::Resolve() must be able to resolve the versioning, and it is a much lower level of the system than UsdStage, or even SdfLayer, and therefore can know nothing about layer paths or scene paths, or whether the “asset” represents a reference, subLayer, or texture (though it might be able to infer the latter in many cases by looking at file extensions).

I see. So using the SdfPath would be a bad idea because, on the one hand, you can’t really access it, and on the other hand, if an asset moves to a new SdfPath it would break. To make it worse, if the resolver stores a list of SdfPaths and redirection URIs, and I place a new asset onto an SdfPath that is already known to the resolver, it would overwrite it. That makes a lot of sense.

But with that said, you had this line in your first comment:

You could go simple here, and just say that the context is encoded as a single dictionary in the layer metadata of the root layer, which, as with variable expressions, may prove somewhat limiting. Or you could allow your context to be clever and use the SdfLayer APIs to “walk the nested subLayer stack” of layers rooted at the layer provided to it, accumulating version remappings from all of them.

Reading this, I asked myself the question:

  • Let’s say I give every reference and sublayer that is created a URI and a UUID, e.g. “uri://1(id:112)”.
  • Then I use the layer metadata, and if I want to redirect this URI I say id:112=uri://2.
    - A layer with a higher strength will be resolved earlier than one with a lower strength, so I would always take only the first redirection of this UUID.
  • I would then also have some information in the UUID that lets me know whether I’m resolving a layer or a reference (could be as easy as having an “l” or an “r” at the beginning of the UUID).
    - This information could then be used when the resolve() function is called: I use it to read the custom layer metadata in the USD I’m resolving, and if no “higher/stronger” layer already redirects this UUID, I then use this redirection information to, well, redirect the resolution.

Problems I see:

  • Encoding the data directly in the USD layer would mean I probably have overhead when reading this data. This might become significant if I have a lot of layers that create a lot of overrides (I’m thinking of a lighting layer with 20 sublayers for all the artists, where every artist overrides something).
  • I’m also not sure whether this will create conflicts and annoying things in the workflow.
    - I’m thinking: someone says “redirect the lighting layer to v001 instead of v002”, and all the redirections they created in the v002 layer disappear, not only the Sdf overrides. This might be very strange to an artist.

I hope what I wrote makes sense outside of my scattered brain.
But that’s what I think might be an idea, if the SdfLayer walking you described works the way I imagine it does.
Btw, many thanks for the messages before; they are very helpful for understanding the ideas behind the system.

Yes, I realized last night you could embed GUIDs in your references to achieve “per-prim/reference pinning ability”, if you have sufficient GUI control to hide the GUIDs from the users. I think this should work.

Hmmm… my intuition says allowing people to pin different appearances of a subLayer differently is going to lead to difficult-to-debug inconsistencies, so why not just elide the GUID entirely from your subLayer assetPaths, so that your resolver would automatically just retain only one override per-layer?

The overhead of extracting dictionaries from layer metadata from 20 or so layers should be quite small compared to the cost of opening a stage, and even so if the number of layers is 100 or more (as are our shot layerStacks). However, you absolutely do not want to be doing that per-Resolve() call! Instead, read that data only upon ArResolverContext creation and in your RefreshContext() override, caching the merged results in an unordered_map or something, for use during Resolve().
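That caching pattern could be sketched roughly like this (a pure-Python simulation; the real thing would be a C++ ArResolverContext subclass, and all names here are hypothetical):

```python
class VersionRemapContext:
    """Caches merged remappings once, instead of re-reading per Resolve()."""

    def __init__(self, root_layer_metadata: dict):
        self._source = root_layer_metadata  # e.g. merged customLayerData
        self._cache = {}
        self.refresh()

    def refresh(self):
        # Called from your RefreshContext() override: re-read the (possibly
        # changed) pinning data and rebuild the lookup table once.
        self._cache = dict(self._source.get("versionRemap", {}))

    def resolve(self, asset_path: str) -> str:
        # Cheap dictionary lookup on the hot Resolve() path.
        return self._cache.get(asset_path, asset_path)

meta = {"versionRemap": {"asset_v006.usd": "asset_v007.usd"}}
ctx = VersionRemapContext(meta)
assert ctx.resolve("asset_v006.usd") == "asset_v007.usd"

# Simulate versions changing upstream, then a RefreshContext() call:
meta["versionRemap"]["asset_v006.usd"] = "asset_v008.usd"
ctx.refresh()
assert ctx.resolve("asset_v006.usd") == "asset_v008.usd"
```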

Maybe… but it’s not even clear that the data contained in lighting v001 would have the same effect on the versions of assets pinned in v002? Lots of stuff will potentially change when moving between versions. But yeah, that’s the tradeoff for not managing pinning via an external database or unversioned sidecar mechanism…

It’s all making sense now, but one thing isn’t quite clear: Are you saying that an ArResolverContext can’t affect asset resolution across the reference/payload barrier because the context isn’t applied at that point, or that one shouldn’t assume it will be applied due to aliasing? (i.e. the first context used wins)

No, the Context will apply consistently throughout. It’s just that, even if all references have GUID’s in them, “nested references” will still alias.

So let’s say I build a “table assembly” that references in an Apple asset on a table, and I publish that assembly as an asset. Now I reference that table assembly asset into two different rooms in my house assembly. Both rooms reference the Apple via the exact same nested reference, and there’s no way to combine the “unique ancestral reference of the table assembly” into the Apple reference, so you will have nothing to latch onto to be able to pin those two Apples differently.

I should have mentioned I’m thinking about the case of different stages accessing the same assembly, and those stages having different contexts. As in this example from the docs:


ArDefaultResolverContext shotACtx({"/ShotA/assets"});
UsdStageRefPtr shotA = UsdStage::Open("ShotA.usd", shotACtx);

ArDefaultResolverContext shotBCtx({"/ShotB/assets"});
UsdStageRefPtr shotB = UsdStage::Open("ShotB.usd", shotBCtx);

My understanding is that Sdf layers are shared across stages. In theory an assembly intended to be shared between shots (i.e. published at the /Global level) shouldn’t be authored such that it would be affected by data in /ShotA or /ShotB. But just to complete my mental model of how things work, the ShotB stage will see the assembly as it was resolved from shotACtx, yeah?

No, by a not well-publicly-described mechanism, cached layers depend not just on the asset path identifying them, but also on the Context in effect when the identifier is resolved. So shotA and shotB should be able to resolve the references inside the shared assembly differently, based on their unique contexts.


Just wanted to share a possible edge case to consider when using asset resolver contexts. I wouldn’t let this discourage you, as it’s an edge case that doesn’t seem to cause many people problems in practice.

Document dangling layers through new test by nvmkuruc · Pull Request #2801 · PixarAnimationStudios/OpenUSD (github.com)

Many thanks for the answer. I find the approach of reading recursively at context creation very interesting, and I will see when and how I’ll get around to doing a test implementation.

I’m not yet sure how to handle software packages adding sublayers (Maya, Houdini, …). I guess I will have to write some functions that wrap their internal functions, so that I can register the change with the resolver.

With that said, sorry for my long silence, and many thanks for all the info. When I ultimately get around to implementing this, I believe I will have a lot of fun with it. And realistically, I will be back in this chat because I broke something beyond belief. Until then, have a great time; I appreciate the information and the insights. Cheers.