Abstract: We develop a simple and efficient method for soft shadows from planar area light sources, based on explicit occlusion calculation by raytracing, followed by adaptive image-space filtering. Because the method is based on Monte Carlo sampling, it is accurate; because the filtering is performed in image space, it adds minimal overhead and can run at real-time frame rates. We obtain interactive speeds using the OptiX GPU raytracing framework. Our technical approach derives from recent work on frequency analysis and sheared pixel-light filtering for offline soft shadows. While that work reduces sample counts dramatically, its sheared filtering step is slow, adding minutes of overhead. We instead develop the theoretical analysis for axis-aligned filtering, deriving the required sampling rates and filter sizes. We also show how the filter size can be reduced as the number of samples increases, ensuring a consistent result that converges to ground truth as in standard Monte Carlo rendering.
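As a rough illustration of the pipeline sketched in the abstract — noisy per-pixel Monte Carlo visibility estimates, followed by an image-space filter whose width shrinks as the sample count grows — here is a minimal 1D sketch in NumPy. The function names, the box-filter choice, and the 1/sqrt(n) shrinkage rule are illustrative assumptions, not the talk's actual derivation.

```python
import numpy as np

def estimate_visibility(occluded_prob, n_samples, rng):
    """Per-pixel Monte Carlo visibility: average of n_samples binary
    shadow-ray outcomes, where each sample is blocked with the given
    probability (a stand-in for tracing rays to the area light)."""
    hits = rng.random((occluded_prob.shape[0], n_samples)) >= occluded_prob[:, None]
    return hits.mean(axis=1)

def axis_aligned_filter(noisy, base_radius, n_samples):
    """Axis-aligned (box) filter in image space. The radius shrinks as
    the sample count grows (assumed ~1/sqrt(n) here), so the output
    converges to the raw Monte Carlo estimate at high sample counts."""
    radius = max(1, int(round(base_radius / np.sqrt(n_samples))))
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(noisy, kernel, mode="same")

# Toy scanline: pixels 20..39 are fully occluded, the rest fully lit.
occluded_prob = np.zeros(64)
occluded_prob[20:40] = 1.0
rng = np.random.default_rng(1)
visibility = estimate_visibility(occluded_prob, n_samples=16, rng=rng)
filtered = axis_aligned_filter(visibility, base_radius=8.0, n_samples=16)
```

Increasing `n_samples` here both reduces the variance of `visibility` and shrinks the filter radius, mirroring the consistency argument in the abstract.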
Speaker: Saurabh Gupta/EECS - UC Berkeley
Title: Perceptual Organization and Recognition of Indoor Scenes from RGBD Images
Abstract: We address the problems of contour detection, bottom-up grouping, and semantic segmentation using RGBD data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset. We propose algorithms for object boundary detection and hierarchical segmentation that generalize the gPb-ucm approach by making effective use of depth information. We show that our system can learn to detect specific types of geometric boundaries, such as depth discontinuities. We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We then turn to the problem of semantic segmentation and propose a random forest approach that classifies superpixels into the 40 dominant object categories in NYUD2. We use both generic and class-specific features to encode the appearance and geometry of objects. We also show how our approach can be used for scene classification, and how this contextual information in turn improves object recognition. In all of these tasks, we report relative improvements of more than 10% over the state of the art.
This is joint work with Pablo Arbelaez and Jitendra Malik.
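The superpixel-classification stage of the abstract above can be pictured with a small synthetic sketch: one feature vector per superpixel (random stand-ins here for the appearance and geometry features mentioned), fed to a random forest that predicts one of the 40 NYUD2 categories. scikit-learn's `RandomForestClassifier` is assumed as an off-the-shelf substitute for the talk's classifier; the feature set and data are entirely synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical 6-D per-superpixel features, e.g. mean color (3 dims),
# mean depth, depth variance, height above ground — illustrative
# stand-ins, not the actual descriptors used in the work.
n_superpixels, n_features, n_classes = 300, 6, 40
X = rng.random((n_superpixels, n_features))
y = rng.integers(0, n_classes, size=n_superpixels)

# Random forest mapping each superpixel to one of the 40 categories.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
labels = clf.predict(X)
```

In the real system the labels would come from annotated NYUD2 superpixels rather than random draws, and the per-class predictions could then feed the scene-classification and context stages described above.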