Understanding and generating appearance are increasingly interdependent challenges. Shape, texture, and reflectance together define how we see the world—and how models should represent it. Advances in intrinsic decomposition, material capture and recognition, and generative rendering now let us analyze and synthesize appearance with unprecedented fidelity.
Yet a key question remains: how can a deeper understanding of surface appearance support material perception and interaction, while enabling controllable, identity-preserving editing and consistent multimodal generation?
The Appearance Understanding and Generation (APPX) workshop unites researchers in vision, graphics, and generative AI to explore representations, datasets, and methods that bridge analysis and synthesis, working toward more interpretable and controllable appearance modeling.
Columbia University: Foundational contributions to computational photography and appearance modeling; co-developer of the Oren–Nayar reflectance model.
UCSD: Leading researcher in physics-based vision and rendering, known for light transport theory, inverse rendering, and neural radiance fields.
NVIDIA Research: Focuses on realistic material appearance, inverse material estimation, and high-resolution material synthesis.
Zhejiang University: Known for material and reflectance acquisition, SVBRDF datasets, and learning-based high-fidelity material reconstruction.
College of William & Mary: Pioneering work on multi-view material capture, relighting, appearance reproduction, and generative appearance modeling.
Google DeepMind: Focuses on geometry-, physics-, and graphics-informed methods for understanding and synthesizing 3D scenes.
Meta Reality Labs: Expert in 3D object and scene reconstruction, differentiable rendering, and material and lighting modeling.
Date: To be announced (CVPR 2026)