Meet #2 - 3D Vision and Volumetric Capture

Posted on the 16th of November, 2017

We are delighted to announce the second London Computer Vision meetup, which will take place on Wednesday the 29th of November at 6pm.

Location: One Alfred Place - 2nd Floor / WC1E 7EB

18:00: Doors open
19:00 - 19:45: 1st talk
20:00 - 20:45: 2nd talk
23:00: Close

A limited amount of pizza and beer will be provided on a first-come, first-served basis, and food will be available to order on the day.

Remember to sign up here.

Speaker 1: Steve Jelley - Director & Founder, Dimension Studio & Hammerhead VR

Title: "Dimension, the first Microsoft volumetric capture studio in London"

Abstract: Steve will tell us about Dimension, a world-leading volumetric video and 3D capture studio and the first Microsoft Mixed Reality Capture studio partner. Dimension captures life in wonderful volumetric detail, providing a step change in the realism that can be achieved for the creation of virtual humans and environments, and injecting new life into stories, games and experiences for both immersive and 2D media.

Bio: Steve Jelley is a producer and entrepreneur specialising in immersive technology and the creative industries. After a career at Electronic Arts, the Curtis Brown Group and Apple, Steve founded the online video platform Videojuicer, then co-founded the augmented reality platform String and the technology innovation company TMRW, before co-founding the immersive content company Hammerhead VR in 2014.

Speaker 2: Andy Davison - Professor of Robot Vision, Imperial College; co-founder of SLAMcore

Title: "Real-Time 3D Scene Perception Using Vision”

Abstract: Research in robotics and computer vision is leading us towards generic real-time 3D scene understanding, which will enable the next generation of smart robots and mobile devices.

SLAM (Simultaneous Localisation and Mapping) is the problem of jointly estimating a robot's motion and the shape of the environment it moves through, and cameras of various types are now the main outward-looking sensors used to achieve this. While early visual SLAM systems concentrated on real-time localisation as their main output, the latest systems are capable of dense and detailed 3D reconstruction and, increasingly, semantic labelling and object awareness. Andy will describe and connect the research that he and others have conducted in this field in recent years, with examples from some of the key breakthrough systems.
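As a concrete illustration of the joint-estimation idea in the abstract, here is a minimal toy sketch (our own, not from Andy's talk) of EKF-SLAM in one dimension: a robot position and a single landmark position share one state vector and covariance matrix, so each measurement of the landmark improves both estimates. The motion step, noise variances and landmark layout are all illustrative assumptions.

```python
import numpy as np

# Toy 1D EKF-SLAM: the joint state holds [robot position, landmark position].
# All numbers here (noise variances, motion step) are illustrative assumptions.
state = np.array([0.0, 0.0])   # robot at 0; landmark position unknown
P = np.diag([0.0, 1e6])        # robot pose certain, landmark very uncertain

Q = 0.1    # motion noise variance (assumed)
R = 0.05   # range-measurement noise variance (assumed)

true_robot, true_landmark = 0.0, 5.0
rng = np.random.default_rng(0)

for _ in range(20):
    # Predict: the robot moves forward by u; the landmark is static,
    # so only the robot's own uncertainty grows.
    u = 0.2
    true_robot += u
    state[0] += u
    P[0, 0] += Q

    # Update: measure the range to the landmark, z = m - x + noise.
    z = (true_landmark - true_robot) + rng.normal(0.0, np.sqrt(R))
    H = np.array([[-1.0, 1.0]])      # Jacobian of z w.r.t. [x, m]
    y = z - (state[1] - state[0])    # innovation
    S = (H @ P @ H.T + R).item()     # innovation variance (scalar)
    K = P @ H.T / S                  # Kalman gain, shape (2, 1)
    state = state + K.ravel() * y    # correct robot AND landmark jointly
    P = (np.eye(2) - K @ H) @ P

print(f"robot estimate:    {state[0]:.2f} (true {true_robot:.2f})")
print(f"landmark estimate: {state[1]:.2f} (true {true_landmark:.2f})")
```

The SLAM property visible here is the cross-covariance between robot and map: once the landmark has been observed, later measurements of it also pull the robot estimate back towards the truth, which is what distinguishes SLAM from pure odometry.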

Bio: Since 1994 Andy has worked almost continuously on SLAM using vision, with a particular emphasis on methods that work in real time with commodity cameras. His background includes many world firsts, including:

• (2003) The first real-time single-camera SLAM system (MonoSLAM), widely acknowledged as one of the key prototypes for recent commercial projects and products in low-cost mobile robotics (e.g. Dyson) and mobile phone/tablet/wearable 3D localisation and sensing (e.g. Google Project Tango).

• (2016) The first real-time 3D SLAM system using an event camera.

Among many other published advances since then, Andy has collaborated with long-term colleagues and PhD students at Imperial College, Oxford, Zaragoza and elsewhere on important algorithms, studies and systems such as Inverse Depth Features (2006), Active Matching (2008), "Why Filter?" (2010), DTAM and KinectFusion (2011), and SLAM++ (2013).

Alongside this academic research, Andy has a longstanding relationship with Dyson Ltd. in the UK, having worked for them as a consultant on robot vision technology since 2005. This collaboration led to the creation in 2014 of the Dyson Robotics Laboratory at Imperial College, of which he is the Director and Founder.




Please get in touch if you would like to speak, offer support or just find out more.