We don’t do it for deduplication, per se. We found that Houdini in particular is very dynamic and triggers a lot of re-resolution whenever it reconstructs its scene.
Setting up the resolver with a cache object helped quite a bit.
The cache object is also shared, so we don’t have to care if something creates a new resolver, because even a new resolver will pick up the “global” cache while still being able to opt out of doing so.
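To make that a bit more concrete, here is a minimal sketch of the pattern in plain C++. The names (`GlobalResolverCache`, `MyResolver`, `ResolveFromServer`) are made up for illustration, not our actual code:

```cpp
#include <memory>
#include <mutex>
#include <optional>
#include <string>
#include <unordered_map>

// One process-wide cache that every resolver instance can share.
class GlobalResolverCache {
public:
    static std::shared_ptr<GlobalResolverCache> Get() {
        static std::shared_ptr<GlobalResolverCache> cache =
            std::make_shared<GlobalResolverCache>();
        return cache;
    }

    std::optional<std::string> Lookup(const std::string& uri) {
        std::lock_guard<std::mutex> lock(_mutex);
        auto it = _entries.find(uri);
        if (it == _entries.end()) return std::nullopt;
        return it->second;
    }

    void Store(const std::string& uri, const std::string& resolved) {
        std::lock_guard<std::mutex> lock(_mutex);
        _entries[uri] = resolved;
    }

private:
    std::mutex _mutex;
    std::unordered_map<std::string, std::string> _entries;
};

// Each resolver picks up the global cache by default but may opt out,
// in which case it gets a private cache of its own.
class MyResolver {
public:
    explicit MyResolver(bool useGlobalCache = true)
        : _cache(useGlobalCache ? GlobalResolverCache::Get()
                                : std::make_shared<GlobalResolverCache>()) {}

    std::string Resolve(const std::string& uri) {
        if (auto hit = _cache->Lookup(uri)) return *hit;
        std::string resolved = ResolveFromServer(uri);
        _cache->Store(uri, resolved);
        return resolved;
    }

private:
    std::string ResolveFromServer(const std::string& uri) {
        // Stub: the real implementation would query our asset server.
        return uri;
    }

    std::shared_ptr<GlobalResolverCache> _cache;
};
```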
If I remember correctly, Luca Scheller went the other way around: every resolver has its own cache, and he used a global std::map to avoid creating unneeded resolvers.
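Roughly, that alternative looks something like this (a purely illustrative reconstruction of the pattern as I understood it, not his actual code):

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Each resolver owns its own private cache.
class CachingResolver {
    std::map<std::string, std::string> _ownCache;
    // ... resolve logic ...
};

// A global map hands back an existing resolver for a given context
// instead of constructing a new one every time.
std::shared_ptr<CachingResolver> GetResolverFor(const std::string& context) {
    static std::mutex mutex;
    static std::map<std::string, std::shared_ptr<CachingResolver>> resolvers;
    std::lock_guard<std::mutex> lock(mutex);
    auto& slot = resolvers[context];
    if (!slot) slot = std::make_shared<CachingResolver>();
    return slot;
}
```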
There is also a second advantage: we are currently working on pinning support for farms, where we write out a file that gets loaded at startup so render jobs avoid communicating with the server. Having a single cache shared by all resolvers (that don’t opt out) makes this relatively simple to implement.
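The pinning idea boils down to serializing the cache and preloading it. Here is a hedged sketch; the file format (one tab-separated uri/path pair per line, assuming neither contains tabs or newlines) and function names are invented for illustration:

```cpp
#include <fstream>
#include <string>
#include <unordered_map>

using PinningTable = std::unordered_map<std::string, std::string>;

// Write every known uri -> resolved-path pair, one pair per line.
void SavePinningFile(const PinningTable& table, const std::string& path) {
    std::ofstream out(path);
    for (const auto& [uri, resolved] : table) {
        out << uri << '\t' << resolved << '\n';
    }
}

// Preload the table at startup so resolution never touches the server.
PinningTable LoadPinningFile(const std::string& path) {
    PinningTable table;
    std::ifstream in(path);
    std::string uri, resolved;
    while (std::getline(in, uri, '\t') && std::getline(in, resolved)) {
        table.emplace(uri, resolved);
    }
    return table;
}
```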
Oh, and before I forget, there is even a third advantage: we aren’t finished developing the feature on the USD side yet, but our server allows us to resolve large numbers of URIs in one batch, which is a lot more performant than resolving them one by one. We want a system where a “file” or “Db-Info” knows all the things related to the scene, so that we can do one big server request instead of many small ones.
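Conceptually, the batch step would look something like the sketch below, where `BatchResolveOnServer` is a stand-in for our real endpoint and the rest is illustrative:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

std::unordered_map<std::string, std::string>
BatchResolveOnServer(const std::vector<std::string>& uris) {
    // Stub: the real implementation would send all URIs in a single request.
    std::unordered_map<std::string, std::string> result;
    for (const auto& uri : uris) result[uri] = uri;
    return result;
}

// Gather everything the scene manifest says it needs, ask the server once,
// and seed the shared cache so later lookups are pure cache hits.
void PreResolveScene(const std::vector<std::string>& sceneUris,
                     std::unordered_map<std::string, std::string>& cache) {
    std::vector<std::string> missing;
    for (const auto& uri : sceneUris) {
        if (!cache.count(uri)) missing.push_back(uri);
    }
    if (missing.empty()) return;
    for (auto& [uri, resolved] : BatchResolveOnServer(missing)) {
        cache[uri] = std::move(resolved);
    }
}
```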
There are more ways to do the same job, but this one was simple and worked well, so I opted for it.
PS: there is also the idea of doing “redirection” for nested version updates via the resolver, and having our “own” data structure that we can easily modify is quite handy for that, too. (I am not sure if you remember this thread: Updating nested asset paths - recommended workflow.)
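The redirection trick is really just overwriting a cache entry in place, so the next resolve of a URI lands on a new version without touching the layers that reference it. A tiny sketch (names illustrative):

```cpp
#include <string>
#include <unordered_map>

// Remap an asset URI to a different resolved path. Anything that resolves
// `uri` afterwards gets the new version, with no edits to referencing layers.
void RedirectVersion(std::unordered_map<std::string, std::string>& cache,
                     const std::string& uri,
                     const std::string& newResolvedPath) {
    cache[uri] = newResolvedPath;
}
```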
Sorry for the lengthy answer; I couldn’t make it any shorter. I hope this all makes sense and sounds reasonable.