We are delighted to announce the fourth London Computer Vision meetup, which will take place on Wednesday, 28 March, at 6pm.
Location: One Alfred Place - 2nd Floor / WC1E 7EB
18:00: Doors open
19:00 - 19:45: 1st talk
20:00 - 20:45: 2nd talk
A limited amount of pizza and beer will be provided on a first-come, first-served basis, and food will be available to order on the day.
Remember to sign up here.
Abstract: Spatial perception is a core enabler of exciting mobile robotics applications – from home robots to drones. Recent years have seen tremendous progress in Simultaneous Localisation and Mapping (SLAM) and 3D scene understanding, increasingly influenced by deep learning. Stefan will present recent progress in the model-based world, e.g. dense SLAM enabling model-predictive drone control; at the same time, he will argue that we should incorporate modern learning-based methods into robotic perception, since the two are complementary – in order to evolve SLAM into general 3D scene understanding that includes semantics and object hierarchies, which can be used by the next generation of mobile robots to meaningfully and safely interact with a potentially unknown environment.
Bio: Stefan Leutenegger is a Lecturer in Robotics at Imperial College London, where he leads the Smart Robotics Lab. He also closely collaborates with Prof. Andrew Davison as the deputy director of the Dyson Robotics Lab. Before coming to London, Stefan received his PhD degree from ETH Zurich in 2014, working in Prof. Roland Siegwart’s group on “Unmanned Solar Airplanes: Design and Algorithms for Efficient and Robust Autonomous Operation”. In the space of vision for mobile robotics, Stefan has worked on a range of challenges, mostly centred around spatial perception – from keypoint detection, description and matching (BRISK) to visual-inertial odometry (OKVIS), and more recently dense mapping (ElasticFusion) and the evolution towards general 3D understanding, including joint semantic and geometric reconstruction (SemanticFusion).
For more information see: http://wp.doc.ic.ac.uk/sleutene
Abstract: In this talk I will present recent work on establishing dense correspondences between 2D images and 3D surface models “in the wild”, namely in the presence of background, occlusions, and multiple objects. I will start by describing DenseReg and DensePose, two recently introduced systems for this goal which operate at multiple frames per second on a single GPU. Time permitting, I will cover more recent work on extending such techniques to unsupervised learning.
Bio: Iasonas Kokkinos is a Senior Lecturer in the Department of Computer Science of University College London and a Research Scientist at Facebook AI Research (FAIR). His research interests are at the intersection of computer vision and deep learning, aiming at the development of models that unify problems of structured prediction with deep learning, as well as multi-task learning. He publishes and reviews regularly in the major computer vision conferences (CVPR, ICCV, ECCV); he has served multiple times as Area Chair and has been Associate Editor for CVIU and IJCV.