
The winner of the VRVis Visual Computing Award 2023

Gaia Pavoni (ISTI-CNR) receives the 2023 VRVis Visual Computing Award for her valuable contribution to marine ecology through her outstanding coral reef visualization.

About Gaia Pavoni

After a degree in mathematics and a few years of experience at a technology start-up, Gaia Pavoni joined the Visual Computing Lab of ISTI-CNR in November 2014. During the first years of her career, her research focused mainly on Cultural Heritage, developing digital tools and applications for its study, conservation, and dissemination. In 2016, she combined her PhD activities with her passions: an eternal love for the sea, diving, and attention to environmental issues. She decided to study computer vision and photogrammetry applications and AI-based tools for underwater ecological monitoring.

Since then, she has collaborated with a highly multidisciplinary group of researchers worldwide, gaining a deep understanding of underwater surveying issues. Through various ongoing research projects, especially TagLab, Gaia Pavoni is working to automate the interpretation of 2D and 3D data, a critical step toward gaining a comprehensive understanding of marine habitats and predicting their future trends.

Four screenshots of the TagLab software, a human-centric AI-based tool for processing and monitoring coral reef data. Human experts are incredibly accurate in image analysis but unable to handle the massive number of images collected daily on coral reefs. Machines are fast, but their accuracy in complex cognitive recognition tasks over such challenging scenes is still poor. TagLab follows a human-in-the-loop labelling approach, offering interactive AI tools and an internal learning pipeline. This pipeline enables the training of custom recognition models, the evaluation of training results, and the inference of predictions on new data. Finally, georeferenced automatic predictions can be easily explored and interactively edited, reaching an accuracy not achievable with standard machine learning methods alone. Digital tools like Gaia Pavoni's TagLab show how visual computing contributes to solving complex questions and challenges on the way to a more sustainable future.
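To make that loop concrete, here is a minimal, illustrative Python sketch of a human-in-the-loop labelling cycle: the model predicts, only uncertain tiles are routed to the expert, and the corrected labels feed back into training. All function names and thresholds are hypothetical stand-ins, not TagLab's actual code or API.

```python
# Minimal human-in-the-loop labelling sketch. All names and thresholds are
# hypothetical stand-ins; this is not TagLab code.
import numpy as np

rng = np.random.default_rng(0)

def model_predict(tile):
    """Stand-in for a trained segmentation model: per-pixel foreground
    probabilities for one image tile."""
    return rng.random(tile.shape)

def expert_review(probs):
    """Stand-in for the human step: the ecologist corrects uncertain
    predictions and returns a trusted binary mask."""
    return (probs > 0.5).astype(np.uint8)

def fine_tune(model_state, examples):
    """Stand-in for retraining the model on the newly corrected labels."""
    return model_state + len(examples)  # dummy update

tiles = [rng.random((64, 64)) for _ in range(10)]
model_state, labelled, sent_to_expert = 0, [], 0

for tile in tiles:
    probs = model_predict(tile)
    # Uncertainty is highest where probabilities sit near 0.5: confident
    # tiles are accepted automatically, ambiguous ones go to the expert.
    uncertainty = 1.0 - 2.0 * np.abs(probs - 0.5)
    if uncertainty.mean() > 0.4:
        mask = expert_review(probs)
        sent_to_expert += 1
    else:
        mask = (probs > 0.5).astype(np.uint8)
    labelled.append((tile, mask))

# Corrected labels flow back into training, closing the loop.
model_state = fine_tune(model_state, labelled)
print(f"labelled: {len(labelled)}, reviewed by expert: {sent_to_expert}")
```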

About the visual computing research work of Gaia Pavoni

The growing availability of low-cost cameras and data-driven autonomous robotics has made large-scale underwater imaging increasingly popular for monitoring coral reefs. Ecological assessments inferred from images have so far relied on direct observation by scientists. However, human interpretation of the collected data is time-consuming and creates a bottleneck in downstream analysis; as a result, each year only a negligible fraction of the collected images is subsequently analyzed by ecologists.

While machine learning-based algorithms can significantly reduce processing time, they still cannot match the level of accuracy achieved by experts in this complex task. TagLab is an Artificial Intelligence-based open-source annotation tool that, following a human-centric approach, accelerates the analysis of georeferenced photogrammetric outputs. Furthermore, since monitoring campaigns usually involve time-series data, TagLab integrates a set of semi-automated algorithms to track the evolution of individual corals over time, as sketched below.
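One way to picture that time-series step: because all surveys are georeferenced into a shared coordinate frame, detections from different years can be paired by spatial overlap. The sketch below does this with axis-aligned bounding boxes and a greedy intersection-over-union match; the coordinates, the box simplification, and the 0.3 threshold are illustrative assumptions, not TagLab's actual algorithm.

```python
# Hedged sketch of time-series matching on georeferenced annotations: corals
# mapped in two survey years are paired when their footprints overlap enough.
# All coordinates and the 0.3 IoU threshold are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax,
    ymax) in map coordinates (e.g., metres in a local frame)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Coral footprints from two georeferenced orthomosaics of the same reef plot.
year_1 = {"colony_a": (0.0, 0.0, 1.0, 1.0), "colony_b": (3.0, 3.0, 4.0, 4.2)}
year_2 = {"det_1": (0.1, 0.0, 1.2, 1.1), "det_2": (6.0, 6.0, 6.5, 6.4)}

# Greedy matching: since both maps share one coordinate frame, overlap in
# space is evidence that two detections are the same colony.
for name, box in year_1.items():
    best_name, best_box = max(year_2.items(), key=lambda kv: iou(box, kv[1]))
    if iou(box, best_box) >= 0.3:
        print(f"{name} -> {best_name}: colony persisted (grew/shrank in place)")
    else:
        print(f"{name} -> no match: colony died, moved, or was missed")
```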

By reducing the time required for ecological post-processing of coral reef images, TagLab enables researchers to process increasingly large volumes of data without increasing staff time, ultimately facilitating a greater ability to understand and predict future changes in coral reef ecosystems. As an open-source software solution, TagLab also mitigates technological disparities between labs and promotes shared data standards and protocols.

The jury of the VRVis Visual Computing Award honors Thomas Höllt (TU Delft) for his important research contribution in the field of visual analysis of single-cell data.

About Thomas Höllt

Thomas Höllt completed his Ph.D. at the King Abdullah University of Science and Technology, Saudi Arabia. After positions in Vienna, Salt Lake City, and Delft, he moved to Leiden University Medical Center as an Assistant Professor in 2017 before returning to TU Delft in 2020. He has authored over 50 peer-reviewed publications, including the paper that won the Dirk Bartz Prize for Visual Computing in Medicine in 2019, and is a member of the Eurographics Association.

Screenshot of Thomas Höllt's Cytosplore application, incorporating HSNE (Hierarchical Stochastic Neighbor Embedding) and CyteGuide for the effective exploration of large single-cell datasets.

About the visual analytics research work of Thomas Höllt

In recent years, systems biology has changed drastically due to the arrival of several high-throughput single-cell acquisition techniques, enabling detailed transcriptomic and proteomic profiling of large numbers of individual cells from blood and tissue samples. From large-scale cataloguing initiatives such as the Human Cell Atlas to targeted research on cell composition in autoimmune disease, cancer, and parasitic and viral infections such as COVID-19, single-cell analysis provides in-depth insight into the interplay of the cellular functionality of living organisms. However, the size and complexity of the acquired data make them challenging to interpret. While some effort has been put into the automatic classification of known cell types, for example through supervised machine learning, such techniques only allow the identification and quantification of data based on a-priori knowledge; they do not facilitate the formation of new hypotheses or the discovery of previously unknown phenomena. Interactive visual exploration, with a human in the loop, is necessary for discovery and hypothesis formulation, as the toy example below illustrates.
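As a toy illustration of that limitation (synthetic data and hypothetical marker values, not drawn from any study), the following sketch trains a nearest-centroid classifier on two known cell types and then feeds it a population it has never seen: every novel cell is forced into a known class, so the new phenotype stays invisible.

```python
# Tiny numpy illustration: a classifier trained on known cell types must
# assign every cell to one of those types, so a genuinely novel population
# is silently absorbed. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)

# Two "known" cell types in a 2-marker space, plus one unseen population.
t_cells = rng.normal([0.0, 0.0], 0.3, (100, 2))
b_cells = rng.normal([5.0, 0.0], 0.3, (100, 2))
novel   = rng.normal([2.5, 4.0], 0.3, (100, 2))  # never in the training set

centroids = np.stack([t_cells.mean(axis=0), b_cells.mean(axis=0)])
labels = np.array(["T cell", "B cell"])

def classify(cells):
    """Nearest-centroid classifier: no 'unknown' outcome exists."""
    d = np.linalg.norm(cells[:, None, :] - centroids[None, :, :], axis=2)
    return labels[d.argmin(axis=1)]

pred = classify(novel)
print(dict(zip(*np.unique(pred, return_counts=True))))
# Every novel cell lands in "T cell" or "B cell". In an interactive
# embedding, the same cells would appear as a separate, unexplained island.
```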

Thomas Höllt and his team designed, implemented, and deployed methods and complete, integrated visual analytics software tools for the interactive exploration of single-cell data. Methodological advances in dimensionality reduction, such as approximated t-SNE and HSNE, made it possible to push the boundaries of scalability for non-linear manifold learning approaches and thus made them feasible for interactive visual analytics systems targeting large-scale single-cell problems. Through his visual analytics systems Cytosplore and ImaCytE, Thomas Höllt brought these methodological advances to practitioners and researchers in single-cell analysis, where the presented methods are widely used and recognized by his collaborators and beyond.
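For a flavour of the scalability point, here is a small sketch using scikit-learn's Barnes-Hut approximated t-SNE (roughly O(N log N) instead of the O(N²) of exact t-SNE) on synthetic "cells". It is a generic stand-in, not the approximated t-SNE or HSNE implementations behind Cytosplore, and the population layout is invented for illustration.

```python
# Barnes-Hut approximated t-SNE on synthetic single-cell-style data,
# using scikit-learn's generic implementation.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)

# Three synthetic populations, 300 cells each, 20 markers per cell.
populations = [rng.normal(c, 0.5, (300, 20)) for c in (0.0, 3.0, -3.0)]
X = np.vstack(populations)

# method="barnes_hut" is the approximation that keeps t-SNE tractable as N
# grows; the exact method would compute all pairwise similarities.
emb = TSNE(n_components=2, method="barnes_hut", perplexity=30,
           init="pca", random_state=0).fit_transform(X)

print(emb.shape)  # (900, 2): coordinates an analyst can explore interactively
```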

The award ceremony takes place at the Visual Computing Trends symposium 2023.