PhD thesis (TU Graz)
Augmented reality (AR) has been demonstrated to be an effective way of presenting many types of tutorials and user guides. However, creating 3D content for AR is usually costly and requires specially trained technical authors. The research in this thesis aims to accelerate the authoring process of AR instructions by providing interactive authoring techniques for re-targeting conventional, two-dimensional content into three-dimensional AR tutorials. Unlike previous work, we do not simply overlay images or video but synthesize 3D-registered motion from the 2D input. Since the information in the resulting AR tutorial is registered to 3D objects, the user can freely change the viewpoint without degrading the experience. Our approaches can be applied to many styles of video tutorials. In this work, we concentrate on assembly and disassembly tutorials, body motion, and tutorials showing tools in surface contact, e.g., painting instructions.
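To make the retargeting idea concrete: lifting a 2D image trajectory into 3D-registered motion can be sketched, under the simplifying assumption of a calibrated pinhole camera and a known planar object surface, as back-projecting each tracked pixel onto that plane. The function and the numbers below are illustrative, not the pipeline developed in the thesis.

```python
import numpy as np

def backproject_to_plane(uv, K, plane_n, plane_d):
    """Back-project pixel (u, v) onto the plane n . X = d (camera at origin)."""
    # Viewing ray through the pixel in camera coordinates
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    # Scale the ray so that it hits the plane
    t = plane_d / (plane_n @ ray)
    return t * ray

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.,   0., 320.],
              [  0., 800., 240.],
              [  0.,   0.,   1.]])

# Object plane z = 2 m in camera coordinates: n = (0, 0, 1), d = 2
p = backproject_to_plane((320., 240.), K, np.array([0., 0., 1.]), 2.0)
```

A 2D trajectory tracked in the input video becomes a 3D trajectory by applying this per frame; the ray through the principal point, for instance, lands at (0, 0, 2) on the plane z = 2.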
In addition to offline authoring, we also present an approach for instant AR instruction authoring in a remote assistance use case. Spontaneous provisioning of remote assistance requires an easy, fast, and robust approach for capturing and sharing unprepared environments. In this thesis, we make a case for utilizing interactive light fields for remote assistance. We demonstrate the advantages of representing objects with light fields over conventional geometric reconstruction. Moreover, we introduce an interaction method for quickly annotating light fields in 3D space without requiring surface geometry to anchor the annotations. We present results from a user study demonstrating the effectiveness of our interaction techniques, and we report feedback on the usability of our overall system.
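Anchoring an annotation in 3D without surface geometry can be illustrated by triangulating two viewing rays cast from different viewpoints and placing the annotation at the midpoint of their closest points. This is a hypothetical sketch of the general idea, not the interaction method evaluated in the thesis.

```python
import numpy as np

def ray_midpoint(o1, d1, o2, d2):
    """Midpoint of the closest points between rays o1 + s*d1 and o2 + t*d2.

    Illustrative only: assumes the rays are not parallel (denom != 0).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o2 - o1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b               # zero only for parallel rays
    s = (c * (d1 @ w) - b * (d2 @ w)) / denom
    t = (b * (d1 @ w) - a * (d2 @ w)) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Two viewpoints whose rays converge on the annotated point (1, 2, 3)
anchor = ray_midpoint(np.array([0., 0., 0.]), np.array([ 1., 2., 3.]),
                      np.array([2., 0., 0.]), np.array([-1., 2., 3.]))
```

Because the anchor is computed from rays alone, no reconstructed surface is needed; with noisy input rays the midpoint degrades gracefully instead of failing.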
AR instruction systems require dedicated interaction methods to fully develop their potential. We therefore present novel interaction methods for AR on handheld devices as well as on head-mounted displays (HMDs). Handheld AR commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. In this thesis, we present a resource-efficient method for user perspective rendering that applies lightweight optical flow tracking and an estimation of the user's motion before head tracking is started.

For HMDs, we present TrackCap, a novel approach to 3D tracking of input devices, which turns a conventional smartphone into a precise 6DOF input device for an HMD user. The device can be conveniently operated both inside and outside the HMD's field of view, while providing additional 2D input and output capabilities. Evaluations show that TrackCap competes favorably against common input devices for mobile HMDs.
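The kind of lightweight frame-to-frame motion estimate used for user perspective rendering can be illustrated with a minimal stand-in: phase correlation recovers a global image translation between two frames. This assumes the dominant motion is a pure translation and is not the optical flow tracker implemented in the thesis.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate a global inter-frame translation (dx, dy) via phase correlation.

    Illustrative stand-in for lightweight optical flow tracking; assumes
    the dominant motion between frames is a pure image translation.
    """
    F1, F2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-9          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                        # wrap large shifts to negative
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dx), int(dy)

# Synthetic frames: the second is the first shifted 5 px right, 3 px down
rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, shift=(3, 5), axis=(0, 1))
```

With `np.roll` the shift is exact, so `estimate_shift(frame0, frame1)` recovers (5, 3); real camera frames would additionally need windowing and sub-pixel refinement.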