
Improve surface point offsetting to prevent self-intersection#1871

Open
dongclin wants to merge 1 commit intomitsuba-renderer:masterfrom
dongclin:adaptive_offsetting
Conversation


@dongclin dongclin commented Apr 13, 2026

Description

This PR proposes a robust, scale-invariant offsetting technique based on Ray Tracing Gems, Chapter 6 ("A Fast and Robust Method for Avoiding Self-Intersection", 6.6.2.4), as an improvement to the current ray offset mechanism.

The Problem: Static RayEpsilon

Currently, Mitsuba's ray offset calculation uses a fairly generous static RayEpsilon constant to displace the origins of spawned rays. This one-size-fits-all approach frequently causes issues in high-precision use cases, where important intersections with nearby surfaces may be missed.
A concrete example is rendering a realistic human eye: the distance between the inner eye structures and the cornea is often smaller than the default RayEpsilon, causing the renderer to skip critical intersections and produce noticeable rendering artifacts. Other renderers (e.g., Blender Cycles) appear to handle this case more robustly.
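For context, the Ray Tracing Gems technique nudges the intersection point along the geometric normal by a fixed number of ULPs (units in the last place) of its coordinates, so the offset automatically scales with the magnitude of the point; only very close to the origin does it fall back to a small fixed floating-point offset. Below is a minimal NumPy sketch of the published method, using the constants from the book. This is an illustration only, not the actual Dr.Jit-based implementation in this PR:

```python
import numpy as np

# Constants as published in Ray Tracing Gems, Ch. 6
ORIGIN = 1.0 / 32.0        # below this magnitude, use a fixed float offset
FLOAT_SCALE = 1.0 / 65536.0
INT_SCALE = 256.0          # number of ULPs to nudge by, per unit of normal

def offset_ray(p, n):
    """Offset point p along geometric normal n by a few ULPs.

    Far from the origin, the offset is applied in integer (ULP) space
    and therefore scales with the magnitude of p; near the origin, a
    small fixed floating-point offset is used instead.
    """
    p = np.asarray(p, dtype=np.float32)
    n = np.asarray(n, dtype=np.float32)
    of_i = (INT_SCALE * n).astype(np.int32)
    # Reinterpret the float bits as integers and nudge them by of_i,
    # flipping the sign so the point always moves away from the surface.
    p_i = (p.view(np.int32) + np.where(p < 0.0, -of_i, of_i)).view(np.float32)
    return np.where(np.abs(p) < ORIGIN, p + FLOAT_SCALE * n, p_i)
```

Because the nudge happens in ULP space, a point at distance 1000 from the origin receives a proportionally larger offset than a point at distance 1, which is exactly the scale invariance the static RayEpsilon lacks.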

Visual Comparison

To demonstrate the precision of the adaptive offsetting, I place two concentric spheres at the origin:

  • Red Sphere: Radius = 1.0
  • Dielectric Sphere: Radius = 1.0 + 4e-5 (a gap of 4e-5, smaller than mi.math.RayEpsilon ≈ 8.9e-5)

In the current version, the renderer fails to find intersections with the inner sphere due to the generous epsilon, leading to light leaking. With adaptive offsetting, both spheres are rendered correctly.

import mitsuba as mi
mi.set_variant("llvm_ad_rgb")  # or any other available variant

scene_dict = {
    "type": "scene",
    "integrator": {"type": "path"},
    "red_sphere": {
        "type": "sphere",
        "radius": 1.0,
        "center": [0, 0, 0],
        "bsdf": {
            "type": "diffuse",
            "reflectance": {
                "type": "rgb",
                "value": [1.0, 0.1, 0.1]
            }
        }
    },
    "outer_sphere": {
        "type": "sphere",
        "radius": 1.0 + 4e-5, # gap of 4e-5, smaller than mi.math.RayEpsilon (approx. 8.9e-5)
        "center": [0, 0, 0],
        "bsdf": {
            "type": "dielectric",
        }
    },
    "sensor": {
        'type': 'perspective',
        'to_world': mi.ScalarTransform4f().look_at(origin=(0.0, 4.0, 4.0), target=(0.0, 0.0, 0.0), up=(0, 0, 1)),
        'film': {'type': 'hdrfilm', 'width': 300, 'height': 300},   
    },
    "emitter": {'type': 'constant', 'radiance': 0.1},
    "light": {'type': 'cube', 'to_world': mi.ScalarTransform4f().translate([0, 0, 5]), 'emitter': {'type': 'area'}},
}

scene = mi.load_dict(scene_dict)

image = mi.render(scene=scene, spp=512)
mi.Bitmap(image).convert(mi.Bitmap.PixelFormat.RGB, mi.Struct.Type.UInt8, True)
[Side-by-side comparison images: left — current version (light leaking); right — adaptive offsetting]

Testing

The existing test suite was executed. I observed some failures during the process; however, these appear to be related to the CUDA environment/drivers on the test machine rather than to the core logic of this implementation.

Checklist

  • My code follows the style guidelines of this project
  • My changes generate no new warnings
  • My code also compiles for cuda_* and llvm_* variants. If you can't test this, please leave a note below
  • I have commented my code
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • I cleaned the commit history and removed any "Merge" commits
  • I give permission that the Mitsuba 3 project may redistribute my contributions under the terms of its license
