Paper Reading #5: KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera
- Title - KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera
- Reference Information - ACM Classification: H5.2 [Information Interfaces and Presentation]: User Interfaces. I4.5 [Image Processing and Computer Vision]: Reconstruction. I3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
General terms: Algorithms, Design, Human Factors. Keywords: 3D, GPU, Surface Reconstruction, Tracking, Depth Cameras, AR, Physics, Geometry-Aware Interactions
- Author Bios - Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, Andrew Fitzgibbon
Most of this group works together at Microsoft Research Cambridge in Cambridge, United Kingdom; Newcombe and Davison are from Imperial College London in London, United Kingdom, and Freeman is from the University of Toronto in Canada. They all do research on input sensor devices, display technologies, computer graphics, and human-computer interaction.
Summary:
KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real-time. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.
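The core of the pipeline fuses each incoming depth frame into a volumetric truncated signed distance function (TSDF) on the GPU while the camera pose is tracked against the growing model. The snippet below is only a minimal CPU sketch of that fusion idea in Python/NumPy, not the authors' GPU implementation; the grid size, voxel size, intrinsics, pose, and depth frame are illustrative placeholders, not values from the paper.

import numpy as np

def integrate_depth(tsdf, weight, depth, K, T_cam_from_world,
                    voxel_size=0.01, trunc=0.03):
    """Fuse one depth frame (in metres) into a global TSDF volume."""
    res = tsdf.shape[0]
    # World coordinates of every voxel centre (volume anchored at the world origin).
    idx = np.arange(res)
    gx, gy, gz = np.meshgrid(idx, idx, idx, indexing='ij')
    pts_w = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centres into the camera frame.
    pts_c = (T_cam_from_world[:3, :3] @ pts_w.T).T + T_cam_from_world[:3, 3]
    z = pts_c[:, 2]
    valid = z > 1e-6
    # Project voxels in front of the camera into the image with pinhole intrinsics K.
    u = np.zeros(z.shape, dtype=int)
    v = np.zeros(z.shape, dtype=int)
    u[valid] = np.round(K[0, 0] * pts_c[valid, 0] / z[valid] + K[0, 2]).astype(int)
    v[valid] = np.round(K[1, 1] * pts_c[valid, 1] / z[valid] + K[1, 2]).astype(int)
    h, w = depth.shape
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Signed distance along the viewing ray, truncated to the band [-1, 1].
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    valid &= (d - z) > -trunc
    # Weighted running average (Curless & Levoy-style fusion).
    flat_t, flat_w = tsdf.reshape(-1), weight.reshape(-1)
    flat_t[valid] = (flat_t[valid] * flat_w[valid] + sdf[valid]) / (flat_w[valid] + 1)
    flat_w[valid] += 1

A toy call might look like the following, with a made-up flat-wall depth frame and roughly Kinect-like intrinsics; in the real system this step runs per frame on the GPU and is paired with ICP camera tracking and raycasting of the volume:

res = 64
tsdf = np.ones((res, res, res), dtype=np.float32)     # 1.0 = far / empty
weight = np.zeros((res, res, res), dtype=np.float32)
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])                        # roughly Kinect-like intrinsics
depth = np.full((480, 640), 0.5, dtype=np.float32)     # fake frame: flat wall 0.5 m away
integrate_depth(tsdf, weight, depth, K, np.eye(4))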
Related Works:
- Poisson surface reconstruction - Michael Kazhdan, Matthew Bolitho, Hugues Hoppe
- Towards Urban 3D Reconstruction from Video - Frahm, J.-M.; Mordohai, P.; Clipp, B.; Engels, C.; Gallup, D.; Merrell, P.; Phelps, M.; Sinha, S.; Talton, B.; Wang, L.; Yang, Q.; Stewenius, H.; Yang, R.; Welch, G.; Towles, H.; Nister, D.; Pollefeys, M.
- High-quality surface splatting on today's GPUs - Botsch, M.; Hornung, A.; Zwicker, M.; Kobbelt, L.
- Data-Parallel Octrees for Surface Reconstruction - Kun Zhou, Minmin Gong, Xin Huang, Baining Guo
- Parallel Tracking and Mapping for Small AR Workspaces - G. Klein, D. Murray
- Real-time vision-based camera tracking for augmented reality applications - Dieter Koller, Gudrun Klinker, Eric Rose, David Breen, Ross Whitaker
- Scene modelling, recognition and tracking with invariant image features - I. Skrypnyk, D.G. Lowe
- A muscle model for animating three-dimensional facial expression - Keith Waters
- An image-based approach to three-dimensional computer graphics - Leonard McMillan Jr.
- Merging virtual objects with the real world: seeing ultrasound imagery within the patient - Michael Bajura, Henry Fuchs, Ryutarou Ohbuchi
These areas of study -- active sensors, passive cameras, unordered 3D points, etc. -- are heavily researched in computer graphics and vision, but KinectFusion differentiates itself by combining interactive rates, no explicit feature detection, high-quality reconstruction of geometry, and dynamic interaction. KinectFusion is also infrastructure-less, supporting whole-room reconstruction and interaction.
Evaluation:
Evaluation for this project was not very systematic. There were no quantitative or formal subjective measures. Rather than an explicit evaluation process, the paper describes how the system was built and offers demonstrations as proof that it works as claimed. Each of KinectFusion's features was tested, but there were no hard results beyond whether it worked and what problems the authors ran into. This can be seen as a qualitative assessment of the system's completeness. That is appropriate, though, as there is little other applicable way to test this work beyond having users try it out and give their subjective opinions.
Discussion:
I think this project is quite well done. Its uses are broad, as KinectFusion has many functions. I thought the physics interaction was the best part: interacting with a 3D reconstruction of a room is unique, and the way they did it is novel. The evaluation could have been more in-depth, but the authors were still at the stage of proving the system worked and preparing it, rather than building a prototype that could be tested on specific functions. Overall, I think it is a very worthy contribution, and it will be interesting to see where it goes from here.