Updated: Aug 13, 2025
We published our project on building dynamics models from vision and contact-rich robot interaction at Robotics: Science and Systems (RSS) 2025 in Los Angeles. My co-authors are Penn postdoc Minghan Zhu, our advisor Professor Michael Posa, and three other collaborators: Mengti Sun, Bowen Jiang, and Professor Camillo J. Taylor. Our project presents Vysics, a vision-and-physics framework that lets a robot build an expressive geometry and dynamics model of a single rigid body from a seconds-long RGBD video and the robot's proprioception.