Robust SFM and SLAM in Challenging Environments
Although SFM and SLAM have achieved great success in the past decade, several critical issues remain inadequately addressed, which greatly restricts their practical application. For example, how can we efficiently obtain long, accurate feature tracks and close complex loops across multiple sequences? How can we efficiently perform global bundle adjustment on large datasets with limited memory? How can we perform robust SLAM in dynamic environments? How can we handle fast motion and strong rotation? In this talk, I will introduce our recent work addressing these key issues. A live AR demo on a mobile device and a set of applications will be presented.
Dr. Hujun Bao is a professor at the State Key Laboratory of Computer Aided Design and Computer Graphics, and the dean of the Faculty of Information Technology at Zhejiang University. He graduated from Zhejiang University in 1987 with a B.Sc. degree in mathematics, and obtained his Ph.D. degree in applied mathematics from the same university in 1993. In August 1993, he joined the laboratory. He currently leads the virtual reality group in the lab, which conducts research on computer graphics and mixed reality aimed at achieving convincing visual perception of mixed environments. He has published more than 100 papers in major journals and conferences on geometry, graphics, and 3D vision computing. Many of these algorithms have been successfully integrated into the group's virtual reality and augmented reality systems.
Animating Characters: Data-driven simulation and biped motion capture
The animation and simulation of human behavior, as well as of garments and hair for humanoid characters, are important problems in computer animation and in interactive applications such as games and social VR. On the simulation side, the challenges largely concern computational cost and intuitive control. While algorithms exist that can realistically simulate highly complex fabric/garment or hair behaviors (even at yarn or strand granularity), those approaches are typically computationally expensive and cannot run in real time. Over the last 5-7 years, we have investigated data-driven proxy models that approximate the behavior of arbitrarily complex simulators using lower-dimensional learned sub-spaces, as well as similar techniques coupled with perceptual experiments to re-parametrize traditional simulators in terms of simpler, more intuitive controls.
On the biped humanoid behavior capture side, we have experimented with a variety of prototypes that capture human motion in (largely) unconstrained environments, with little to no instrumentation of the actor. Our solutions rely on various physics-based models and control strategies to alleviate fundamental ambiguities and undesirable artifacts (e.g., ground or segment penetration) of traditional video-based motion capture methods. In this talk, I will present an overview of these approaches and the findings we made along the way.
Leonid Sigal is an Associate Professor in the Department of Computer Science at the University of British Columbia. Prior to this, he was a Senior Research Scientist at Disney Research Pittsburgh and an adjunct faculty member at Carnegie Mellon University. He completed his Ph.D. at Brown University in 2008; he received his B.Sc. degrees in Computer Science and Mathematics from Boston University in 1999, his M.A. from Boston University in 1999, and his M.S. from Brown University in 2003. Leonid's research interests lie in computer vision, machine learning, and computer graphics, with an emphasis on machine learning and statistical approaches for visual recognition, understanding, analytics, and simulation. He has published more than 70 papers in venues and journals in these fields (including PAMI, IJCV, CVPR, ICCV, ECCV, NIPS, and ACM SIGGRAPH).