Seeking Advice on Optimizing Equirectangular Panorama Map Rendering Code

Hello everyone,

I’m currently working on a project that involves generating an equirectangular panorama occlusion mask for a 3D position in the current scene. My goal is to render a mask array where a value of 0.0 indicates that a given direction (theta, phi) is occluded, and a value of 1.0 means there is no obstruction in that direction.
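
For concreteness, the (theta, phi) convention I have in mind maps each mask pixel to a unit direction roughly like this (a sketch with my own naming; your axis conventions may differ):

    import numpy as np

    def mask_pixel_direction(row, col, height, width):
        # theta: polar angle from +Z in [0, pi]; phi: azimuth in [0, 2*pi).
        theta = (row + 0.5) / height * np.pi
        phi = (col + 0.5) / width * 2.0 * np.pi
        # Z-up, matching the scene's coordinate system.
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])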

My current implementation renders a cube map as six 90° FOV images and interpolates them into the equirectangular projection.

I was hoping this could run quickly enough to update the mask on every frame.

My rendering approach is as follows:

    # Assumed context: `stage`, `position`, `width`, `height`, and
    # `device` are defined elsewhere.
    from pxr import Gf, UsdImagingGL
    import torch

    # Create a renderer object
    renderer = UsdImagingGL.Engine()


    # Set up the render parameters
    render_params = UsdImagingGL.RenderParams()
    render_params.frame = stage.GetStartTimeCode()
    # Keep the rendering complexity at its lowest setting; this reduces
    # tessellation quality but increases speed.
    render_params.complexity = 1.0

    # Turn off expensive features like ID rendering and alpha-to-coverage.
    render_params.enableIdRender = False
    render_params.enableSampleAlphaToCoverage = False

    # Use simpler drawing modes.
    render_params.drawMode = UsdImagingGL.DrawMode.DRAW_GEOM_FLAT
    render_params.enableSceneLights = False
    render_params.enableSceneMaterials = False
    render_params.enableUsdDrawModes = False
    render_params.enableLighting = False


    camera = Gf.Camera()

    # Square aspect ratio, 90 degree horizontal FOV: one camera per cube face
    camera.SetPerspectiveFromAspectRatioAndFieldOfView(
        1.0, 90.0, Gf.Camera.FOVHorizontal)

    # Face order: back, down, front, left, right, up (Z-up scene)
    look_vectors = [
        [-1, 0, 0],  # back
        [0, 0, -1],  # down
        [1, 0, 0],   # front
        [0, -1, 0],  # left
        [0, 1, 0],   # right
        [0, 0, 1],   # up
    ]

    
    # One single-channel depth image per cube face
    batch = torch.zeros(6, 1, width, height, device=device)
    renderer.SetRenderViewport(Gf.Vec4d(0, 0, width, height))
    renderer.SetRendererAov('depth')

    for i, look in enumerate(look_vectors):
        look_vec = Gf.Vec3d(*look)
        look_at = position + look_vec
        # Z-up coordinate system: +Z is up for the horizontal faces;
        # the up/down faces need a horizontal up vector instead.
        if look_vec[2] == 0:
            up_dir = Gf.Vec3d(0, 0, 1)
        else:
            up_dir = Gf.Vec3d(-1, 0, 0)
        view_matrix = Gf.Matrix4d().SetLookAt(position, look_at, up_dir)
        camera.SetFromViewAndProjectionMatrix(
            view_matrix, camera.frustum.ComputeProjectionMatrix())

        renderer.SetCameraState(
            camera.frustum.ComputeViewMatrix(),
            camera.frustum.ComputeProjectionMatrix())

        # Render the image
        renderer.Render(stage.GetPseudoRoot(), render_params)

        # Get the frame as a numpy array...

I would greatly appreciate any advice, tips, or insights on how to speed up this process. Are there any specific techniques, algorithms, render parameters, or optimizations that are particularly effective for this type of task? Perhaps there are some USD-specific features or optimizations I might be overlooking?

Thank you in advance for your help!

Best regards,
Nick

You are close to the correct solution, assuming you simply omitted the filtering step from your example.

Rearranging your loop may help marginally, but Python threading being what it is, I wouldn’t expect much from that.

A missing step in your example is converting the data into an equirectangular map. I’ll add some notes here, just in case; if you don’t need them, skip to (A) :slight_smile: The solid angle covered by a texel in a cube map is smaller in the corners than in the center of a face, so you’ll need to take that into account when filtering the faces into the map.
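
As a sketch of that weighting, for face coordinates u, v in [-1, 1] on a face at unit distance (my own helper, not anything from USD):

    def texel_solid_angle_weight(u, v):
        # Differential solid angle of a cube-face texel at unit distance:
        # d_omega = du * dv / (1 + u^2 + v^2)^(3/2).
        # Largest at the face center (u = v = 0), smallest in the corners.
        return (1.0 + u * u + v * v) ** -1.5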

Raytracing the cube map with a bit of bespoke code may also be faster than a filtering step, but that simply moves the resampling problem into a cube sampler instead of a complicated filter.
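
A bare-bones cube sampler is only a few lines. Here’s a sketch where the face indices and (u, v) orientations are placeholders you’d have to match to your own layout:

    def sample_cube(x, y, z):
        # Pick the face of the dominant axis, then project the other two
        # components into [-1, 1] face coordinates.
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:
            return (0 if x > 0 else 1), y / ax, z / ax
        if ay >= az:
            return (2 if y > 0 else 3), x / ay, z / ay
        return (4 if z > 0 else 5), x / az, y / az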

(A)
It’s possible that it is faster to use a raytracer to generate an equirectangular map these days, although I’m not sure that enough machinery is available from Hydra to do it directly.

Hope this helps!

Thank you for the feedback.

I think I have a working cube map to equirectangular mapping function.

Are there any render parameters you would change?
Is “depth” the right AOV to use? Perhaps “primId” would be faster?

Raytracing is interesting, I may look deeper into the HdEmbree or Aurora renderers.

Since the AOV buffer doesn’t need to run material-based color shading, it’ll be very fast. In the case of the id buffer, I imagine your occlusion test would be checking pixels for a sentinel value, so that comparison is a wash compared to a depth threshold check. Since the depth AOV version has to write to a color buffer, and the id-writing shader is trivial, I would guess the performance would not be measurably different.
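
Either way, extracting the mask is one vectorized comparison. A sketch of the depth variant, assuming the far plane clears to 1.0 (check your renderer’s actual clear value):

    import numpy as np

    def occlusion_mask_from_depth(depth, far=1.0):
        # Unoccluded (1.0) where nothing wrote a depth nearer than the
        # far-plane clear value; occluded (0.0) everywhere else.
        return (depth >= far).astype(np.float32)

    # The primId variant is the same test against the empty-pixel sentinel:
    # mask = (prim_id == sentinel).astype(np.float32)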

Thank you for all your help! I found my issue: I was instantiating a new renderer on each evaluation.
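
In case it helps anyone else, the fix was just hoisting the engine out of the per-evaluation code:

    from pxr import UsdImagingGL

    # Before: constructing a fresh UsdImagingGL.Engine() inside the update
    # loop threw away Hydra's cached state on every evaluation.
    # After: construct the engine once and reuse it across frames.
    renderer = UsdImagingGL.Engine()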
