Multiple Senses Group
This recently formed group focuses on methods and technologies that employ more human senses than the traditional ones – seeing and hearing – in the interaction with computers.
Building on our main fields of expertise – virtual reality and visualization – VRVis has until now dealt mainly with the visual sense: as an output stream of data to the user in the form of interactive graphics and virtual and mixed environments, and as an input modality in the form of image and video processing. Many of the methods and technologies involved in successfully implementing a virtual environment – e.g., user tracking, real-time collision detection for interaction, and registration techniques – are applicable to other sensory channels such as haptics, sound, and even olfactory stimuli.
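To illustrate how a core VR technique carries over to other senses, the following is a minimal sketch (not VRVis code) of a real-time collision test between bounding spheres – the kind of proximity check that can trigger haptic or auditory feedback just as easily as a visual highlight. All names are illustrative.

```python
import numpy as np

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Test whether two bounding spheres overlap.

    Compares squared distance against the squared sum of radii,
    avoiding a square root so the test stays cheap enough to run
    every frame for many object pairs.
    """
    delta = np.asarray(center_a, dtype=float) - np.asarray(center_b, dtype=float)
    return float(delta @ delta) <= (radius_a + radius_b) ** 2

# Example: a tracked hand proxy touching a virtual object could
# trigger vibrotactile feedback instead of (or in addition to) a
# visual cue.
touching = spheres_collide([0.0, 0.0, 0.0], 0.1, [0.15, 0.0, 0.0], 0.1)
```

The same boolean result can drive any output modality; only the rendering of the event differs per sensory channel.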
Collaboration with experts on virtual acoustics enables us to integrate precise spatial 3D audio using person-specific HRTFs (head-related transfer functions). Our research includes 2D-to-3D conversion methods, scanning technology, 3D printing, and computer-aided machining. This allows us, for example, to make data accessible to blind and visually impaired people, with a special focus on art in a museum context.
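At its core, HRTF-based spatialization renders a sound source at a given direction by filtering a mono signal with the listener-specific left- and right-ear impulse responses (HRIRs, the time-domain form of the HRTF) measured for that direction. A minimal sketch of this step, assuming the HRIR pair has already been obtained (e.g., from a measurement or simulation database):

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Binaurally spatialize a mono signal for one source direction.

    Convolves the signal with the listener-specific HRIR pair for
    that direction; interaural time and level differences encoded in
    the HRIRs produce the perceived 3D location over headphones.
    Returns a (2, N) stereo array (left row, right row).
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy example: an impulse "click" and schematic HRIRs in which the
# right ear receives the sound two samples later (a crude interaural
# time difference for a source to the listener's left).
click = np.zeros(8)
click[0] = 1.0
out = spatialize(click, np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

Real systems interpolate between measured directions and run the convolution block-wise in the frequency domain for real-time performance; this sketch only shows the underlying filtering operation.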
Our group will cooperate closely with the Visual Analytics group, the Semantic Modelling and Acquisition group, and the Geospatial Visualization group. We will provide the Visual Analytics group with new and innovative interaction methods and output technologies. The Semantic Modelling and Acquisition group will support our development of acquisition methods for multiple senses. Together with the Geospatial Visualization group we will explore new environments for planetary sciences.
- 3D interaction methods
- Multi-sensory interfaces
- Mixed and augmented reality
- Real-time sensors and associated data processing
- Innovative output devices
- Assisted living applications
- 3D printing and computer-aided machining
Andreas Reichinger et al. “Computer-Aided Design of Tactile Models. Taxonomy and Case-Studies”. In: ICCHP 2012, Part II. Ed. by Klaus Miesenberger, Arthur Karshmer, Petr Penaz, and Wolfgang Zagler. Vol. 7383. LNCS. Heidelberg: Springer, 2012, pp. 497–504. isbn: 978-3-642-31534-3. doi: 10.1007/978-3-642-31534-3_73.
Moritz Neumüller, Andreas Reichinger, Florian Rist, and Christian Kern. “3D Printing for Cultural Heritage. Preservation, Accessibility, Research and Education”. In: 3D Research Challenges in Cultural Heritage: A Roadmap in Digital Heritage Preservation. Ed. by Marinos Ioannides and Ewald Quak. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014, pp. 119–134. isbn: 978-3-662-44630-0. doi: 10.1007/978-3-662-44630-0_9.
Andreas Reichinger, Anton Fuhrmann, Stefan Maierhofer, and Werner Purgathofer. “A Concept for Re-Usable Interactive Tactile Reliefs”. In: ICCHP 2016, Part II. Ed. by Klaus Miesenberger, C. Bühler, and Petr Penaz. Vol. 9759. LNCS. Heidelberg: Springer, 2016, pp. 108–115. doi: 10.1007/978-3-319-41267-2_15.
Andreas Reichinger, Stefan Maierhofer, Anton Fuhrmann, and Werner Purgathofer. “Gesture-Based Interactive Audio Guide on Tactile Reliefs”. In: Proceedings of the 18th International ACM SIGACCESS Conference on Computers & Accessibility. ASSETS ’16. New York, NY, USA: ACM, 2016, to appear.
Harald Ziegelwanger, Andreas Reichinger, and Piotr Majdak. “Calculation of listener-specific head-related transfer functions: Effect of mesh quality”. In: Proceedings of Meetings on Acoustics. Vol. 19. 1. Montréal: Acoustical Society of America, June 2013, p. 050017. doi: 10.1121/1.4799868.
Acoustic Research Institute, Akademie der Wissenschaften
Kunsthistorisches Museum Wien
Technisches Museum Wien
Access to museums for blind and visually impaired people through 3D technology
Deep Pictures: Creating Visual and Haptic Vector Images
Visual Analysis and Rendering: Fundamental research towards the combination and integration of the spatial and abstract domain
Virtual Acoustics – Localisation Model & Numeric Simulations
Virtual Training in Hand Fire Extinguisher Use
Development of a workflow for converting gallery paintings into tactile representations suitable for use in guided tours
A System for Synthesizing Video from Still Images