SHARC

Laser scans, photogrammetric data (blue) and tachymetric measurement points (dark green) are registered and visualized in a common coordinate system.

Interactive editing and cleaning pose no problem, even in data sets with several hundred million points.

A lighting visualization.

The overall goal of the SHARC project is to develop tools and methods to handle, manage, manipulate and assess multiple survey and light-planning data sources, varying in type, time and environmental conditions such as lighting, in a common environment.

Since the amount of data that can potentially be used in commercially available GIS, BIM and light-planning systems (planning data, laser scans, photogrammetry, traditional surveying instruments, etc.) is continuously increasing, the complexity involved in processing and manipulating this data in a common context has already reached a level that can no longer be handled with traditional approaches. We therefore strive to develop a novel system for dealing with extremely large amounts of heterogeneous, distributed and evolving geodetic data and simulated light data in a holistic, dynamic environment. The following aspects receive special consideration:

  • Lighting conditions, from both sunlight and artificial light sources, play an important role during the acquisition process, as they have a major impact on the quality of the acquired data (e.g. artefacts such as glare, reflections and shadows) and on the perception of the scene. New LED systems require more sophisticated approaches to properly handle the influence of the spectral distribution of light and its interaction with physiological effects of human perception (a small numeric example follows this list).
  • We will use and, where necessary, enhance existing suggestion-based smart modeling techniques for the creation of 3D geometry so that they work with multimodal, distributed data, including the development of new rule-based, adaptive strategies to handle the dynamism and complexity of the novel system. A special focus will lie on exploiting the semantic information that is available through a typical GIS.
  • In large scenes, it is often challenging and tiresome for lighting designers to place light sources according to given standards or customer wishes. Furthermore, the 3D geometry of the objects in a scene is very expensive to model. GIS systems contain plenty of semantic information that can potentially be used to derive the placement of various objects in outdoor scenes (trees, lanterns, block geometry of buildings, …), including an initial setup of light sources (see the placement sketch after this list).
  • It is of utmost importance to continuously track and evaluate the level of accuracy that is achieved: by the geo-referencing information from the acquisition steps, during the registration of the individual data sets in a common coordinate system, during the creation of 3D models from the data sets, in the consideration of light through simulation, etc. (a simplified error-budget example follows this list). Furthermore, we will try to visualize physiological aspects of the lighting conditions in the reconstructed buildings and scenarios by finding a measure of how good the reconstruction is at a perceptual level.
  • The novel system is intended not only to provide static rules and strategies that help the user during the derivation of 3D geometry data, but should also learn from and adapt to specific behavior: we will evaluate whether an initial set of rules can be continuously improved based on the decisions made by the user, on metadata that is put into context with the geometry data, and by enforcing (or even automating) often-used suggestions (see the adaptation sketch after this list).
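
To make the first point above more concrete, the following minimal Python sketch shows one common way to relate the spectral distribution of an LED source to a photometric quantity: the spectral power distribution is weighted with the CIE photopic luminosity function V(λ), integrated over wavelength, and scaled by the standard 683 lm/W maximum luminous efficacy. The coarse V(λ) samples and the LED spectrum below are illustrative values only, not data from the project.

```python
import numpy as np

# Coarse, illustrative samples of the CIE photopic luminosity function V(lambda),
# tabulated every 50 nm from 400 to 700 nm (real applications use 1-5 nm tables).
wavelengths = np.array([400, 450, 500, 550, 600, 650, 700], dtype=float)  # nm
v_lambda    = np.array([0.0004, 0.038, 0.323, 0.995, 0.631, 0.107, 0.0041])

# Hypothetical spectral power distribution of a cool-white LED (W/nm) at the same wavelengths.
spd = np.array([0.001, 0.020, 0.008, 0.012, 0.010, 0.004, 0.001])

# Luminous flux in lumens: 683 * integral of SPD(lambda) * V(lambda) d(lambda),
# here approximated with the trapezoidal rule.
integrand = spd * v_lambda
dl = np.diff(wavelengths)
luminous_flux = 683.0 * np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dl)
print(f"approx. luminous flux: {luminous_flux:.1f} lm")
```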
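The point on deriving object placement from GIS semantics (third bullet) can be illustrated with a small geometric sketch: given a road centerline, here just a hypothetical list of 2D vertices that would normally come from a GIS layer, and a desired spacing, candidate lantern positions are interpolated along the polyline as an initial lighting setup. The spacing and coordinates are assumptions made for the example.

```python
import numpy as np

def place_along_polyline(vertices, spacing):
    """Return evenly spaced positions along a 2D polyline (e.g. a road centerline)."""
    vertices = np.asarray(vertices, dtype=float)
    seg_vectors = np.diff(vertices, axis=0)
    seg_lengths = np.linalg.norm(seg_vectors, axis=1)
    cumulative = np.concatenate([[0.0], np.cumsum(seg_lengths)])
    positions = []
    for d in np.arange(0.0, cumulative[-1] + 1e-9, spacing):
        i = np.searchsorted(cumulative, d, side="right") - 1
        i = min(i, len(seg_lengths) - 1)
        t = (d - cumulative[i]) / seg_lengths[i]
        positions.append(vertices[i] + t * seg_vectors[i])
    return np.array(positions)

# Hypothetical road centerline from a GIS layer, with 30 m lantern spacing.
road = [(0, 0), (100, 0), (180, 60)]
lanterns = place_along_polyline(road, spacing=30.0)
print(lanterns.round(1))   # candidate lantern positions as an initial lighting setup
```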
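For the accuracy-tracking point (fourth bullet), a very simplified sketch of a running error budget: if the individual error contributions along the processing chain are assumed to be independent, their standard deviations combine in quadrature. A real geodetic adjustment with correlated errors would need a full covariance treatment; the stage names and values below are made up for illustration.

```python
import math

# Hypothetical 1-sigma error contributions (in metres) for one data set,
# accumulated along the processing chain.
error_budget = {
    "geo-referencing (GNSS / total station)": 0.010,
    "scan registration":                      0.005,
    "3D model fitting":                       0.015,
}

# Under an independence assumption, standard deviations add in quadrature.
combined_sigma = math.sqrt(sum(s * s for s in error_budget.values()))

for stage, sigma in error_budget.items():
    print(f"{stage:40s} {sigma * 1000:5.1f} mm")
print(f"{'combined (quadrature)':40s} {combined_sigma * 1000:5.1f} mm")
```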
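Finally, the last bullet describes rules that adapt to user decisions. The sketch below illustrates one possible (hypothetical) mechanism: each suggestion rule carries a weight that is pushed toward 1 when the user accepts the suggestion and toward 0 when it is rejected, and suggestions whose weight exceeds a threshold can be applied automatically. This is only an illustration of the idea, not the project's actual strategy.

```python
from dataclasses import dataclass

@dataclass
class SuggestionRule:
    """A hypothetical modeling rule whose weight adapts to user decisions."""
    name: str
    weight: float = 0.5              # confidence that the suggestion will be accepted
    auto_apply_threshold: float = 0.9

    def record_decision(self, accepted: bool, learning_rate: float = 0.1) -> None:
        # Simple exponential update toward 1.0 (accepted) or 0.0 (rejected).
        target = 1.0 if accepted else 0.0
        self.weight += learning_rate * (target - self.weight)

    @property
    def auto_apply(self) -> bool:
        # Often-accepted suggestions can eventually be applied automatically.
        return self.weight >= self.auto_apply_threshold

rule = SuggestionRule("place lantern every 30 m along residential roads")
for accepted in [True, True, True, False, True, True, True, True]:
    rule.record_decision(accepted)
print(round(rule.weight, 2), rule.auto_apply)
```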
