Estimating camera intrinsics from video is key to 3D reconstruction, but most methods assume the intrinsics stay fixed over the whole video. What if the camera keeps zooming and refocusing?

Meet InFlux, the first benchmark with per-frame ground truth for videos with dynamic intrinsics. 🧵1/5 pic.twitter.com/ckOznEXXkP

— Princeton Vision & Learning Lab (@PrincetonVL) December 1, 2025
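As a concrete illustration of what "dynamic intrinsics" means, the Python sketch below builds a separate pinhole matrix K for each frame of a hypothetical zoom. The focal-length ramp, principal point, and 3D point are made up for exposition and are not data from the InFlux benchmark.

```python
# A minimal sketch (not InFlux's actual data format) of per-frame intrinsics:
# the focal length changes frame to frame as the camera zooms, so each frame
# gets its own 3x3 matrix K_t.
import numpy as np

def intrinsics(fx, fy, cx, cy):
    """Build a pinhole intrinsic matrix K."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Hypothetical zoom: focal length ramps from 500 px to 1500 px over 90 frames.
num_frames = 90
focals = np.linspace(500.0, 1500.0, num_frames)
cx, cy = 960.0, 540.0  # assumed principal point for a 1920x1080 video

per_frame_K = [intrinsics(f, f, cx, cy) for f in focals]

# Projecting the same 3D point with different K_t gives different pixels,
# which is why a single fixed K cannot explain a zooming video.
X = np.array([0.2, -0.1, 3.0])  # point in camera coordinates (meters)
for t in (0, num_frames - 1):
    uvw = per_frame_K[t] @ X
    u, v = uvw[:2] / uvw[2]
    print(f"frame {t:2d}: focal {focals[t]:7.1f} px -> pixel ({u:.1f}, {v:.1f})")
```

A benchmark with per-frame ground truth supplies one such K_t per frame, so methods can be scored on how well they track the changing focal length rather than on a single averaged estimate.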

Major update to Infinigen Articulated (formerly Infinigen-Sim)! You can now generate articulated 3D objects in 18 categories, simulation-ready with physics parameters, with improved efficiency. Also available: 20k pre-generated objects. Download links below👇 pic.twitter.com/XOZNjKoIOp

— Princeton Vision & Learning Lab (@PrincetonVL) November 17, 2025
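To show how simulation-ready articulated assets are typically consumed, here is a minimal PyBullet sketch that loads an articulated object and reads its joints and physics parameters. The file name cabinet.urdf is a placeholder, not an actual Infinigen Articulated download; the pybullet calls themselves are standard.

```python
# A minimal sketch of loading an articulated asset into PyBullet and inspecting
# its articulation. "cabinet.urdf" is a placeholder path for illustration.
import pybullet as p

p.connect(p.DIRECT)                      # headless physics server
body = p.loadURDF("cabinet.urdf", useFixedBase=True)

# Enumerate the joints to see the articulation and the physics parameters
# (joint type, limits, damping, friction) shipped with the asset.
for j in range(p.getNumJoints(body)):
    info = p.getJointInfo(body, j)
    name = info[1].decode()
    joint_type = info[2]                 # e.g. p.JOINT_REVOLUTE, p.JOINT_PRISMATIC
    damping, friction = info[6], info[7]
    lower, upper = info[8], info[9]      # joint limits from the URDF
    print(f"{name}: type={joint_type} limits=({lower:.2f}, {upper:.2f}) "
          f"damping={damping} friction={friction}")

p.disconnect()
```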

🧵 Working on SLAM or Novel View Synthesis but need a new challenge? Try Princeton365, our new video benchmark built to push your model to the limit. It features reflective and transparent scenes, wild camera motion, night-time shots, flashing lights, video within video, and… pic.twitter.com/Axn2rJJuPD

— Princeton Vision & Learning Lab (@PrincetonVL) October 16, 2025

Depth models struggle with transparent surfaces. They may see a glass window, or what is behind it, but not both. Worse, they are often confused and inconsistent. How do we make them see the glass and see through it? Check out our ICCV 2025 paper “Seeing and Seeing Through the… pic.twitter.com/eFDDXNL90q

— Princeton Vision & Learning Lab (@PrincetonVL) October 15, 2025
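Purely as an illustration of "seeing and seeing through," the sketch below represents each pixel with two depth layers, one for the glass surface and one for whatever lies behind it. This two-layer representation is an assumption made here for exposition, not necessarily the formulation used in the ICCV 2025 paper.

```python
# Illustrative two-layer depth for transparent surfaces (assumed representation).
import numpy as np

H, W = 4, 6
# Depth of the scene behind the glass (meters), made up for illustration.
behind_depth = np.full((H, W), 5.0)
# Depth of the glass pane itself; NaN where there is no transparent surface.
glass_depth = np.full((H, W), np.nan)
glass_depth[:, 2:5] = 2.0                # a window pane 2 m away over some columns

# A single-layer depth model must pick one surface per pixel; keeping both
# layers lets downstream code choose whether it wants the glass or the scene.
single_layer = np.where(np.isnan(glass_depth), behind_depth, glass_depth)
print("single-layer depth:\n", single_layer)
print("glass layer:\n", glass_depth)
print("behind-glass layer:\n", behind_depth)
```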

Want to convert sparse depth to dense depth, but need a model that works well out of the box? Check out OMNI-DC, our new model that targets zero-shot depth completion. OMNI-DC is up to 43% more accurate than existing methods!🧵1/4 pic.twitter.com/cUK5HsP7IL

— Princeton Vision & Learning Lab (@PrincetonVL) October 14, 2025
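For readers new to the task, the sketch below sets up generic depth completion: sample a sparse subset of ground-truth depth, fill in the missing pixels, and score the dense prediction with RMSE. The complete_depth function is a trivial stand-in, not OMNI-DC's API, and the data here is random placeholder content.

```python
# A minimal sketch of the depth-completion setup: given an RGB frame and sparse
# depth measurements, predict a dense depth map and evaluate it.
import numpy as np

rng = np.random.default_rng(0)
H, W = 240, 320
rgb = rng.random((H, W, 3), dtype=np.float32)            # placeholder image
gt_depth = 2.0 + rng.random((H, W), dtype=np.float32)    # placeholder ground truth (m)

# Keep ~1% of pixels as "sparse" measurements (e.g., LiDAR or SfM points).
mask = rng.random((H, W)) < 0.01
sparse_depth = np.where(mask, gt_depth, 0.0)

def complete_depth(rgb, sparse_depth, mask):
    """Stand-in for a depth-completion model: fill missing pixels with the mean
    of the sparse points. A real model would use the RGB image as guidance."""
    fill = sparse_depth[mask].mean()
    return np.where(mask, sparse_depth, fill)

pred = complete_depth(rgb, sparse_depth, mask)
rmse = np.sqrt(np.mean((pred - gt_depth) ** 2))
print(f"sparse points: {mask.sum()}  RMSE: {rmse:.3f} m")
```

Zero-shot depth completion means the same trained model is evaluated on datasets and sparsity patterns it was not trained on, which is why an out-of-the-box accuracy gain is the headline number here.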

(1/n) Robots must learn to interact with articulated objects. While training in simulation is promising, high-quality articulated assets remain scarce. We present Infinigen-Sim: procedurally generated articulated simulation assets for robot learning. 🧵 pic.twitter.com/Fa4GmzDzqi

— Princeton Vision & Learning Lab (@PrincetonVL) August 7, 2025