Publication venue: i-SAIRAS 2020
Robotic space missions carry optical instruments for various mission-related and science tasks, such as 2D and 3D mapping, geologic characterization, atmospheric investigations, or spectroscopy for exobiology, the characterization of scientific context, and the identification of scientific targets of interest. The considerable variability in the appearance of such potential
scientific targets calls for well-adapted yet flexible techniques, one of them being Deep Learning (DL). Our “Mars-DL” (Planetary Scientific Target Detection via Deep Learning) approach focuses on training for visual DL by virtual placement of known targets in a true
context environment. The 3D context environment is taken from reconstructions based on true Mars rover imagery. Scientifically interesting objects, such as impact-characteristic shatter cones (SCs) from several terrestrial impact structures, and/or meteorites, are captured and 3D-reconstructed using photogrammetric techniques, yielding a 3D database (high-resolution
mesh and albedo map) of objects to be randomly placed in the realistic scenes. Rendered with a powerful image rendering tool, the assembled virtual scenes deliver thousands of training data sets, which serve as data augmentation for the subsequent Deep Learning stages.
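The core idea of generating labeled training data by placing known objects at random poses in a background scene can be illustrated with a minimal sketch. This is not the Mars-DL rendering pipeline itself (which composes 3D meshes into reconstructed terrain and renders them); it is a simplified 2D stand-in, with all function names (`compose_scene`, `make_training_set`) invented for illustration, showing how randomized placement automatically yields both an image and its ground-truth annotation:

```python
import numpy as np

def compose_scene(background, obj_patch, rng):
    """Place obj_patch at a random location in a copy of background.

    Returns the composed image and the ground-truth bounding box
    (x, y, w, h) -- the label comes for free because we chose the pose.
    """
    H, W = background.shape[:2]
    h, w = obj_patch.shape[:2]
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    scene = background.copy()
    scene[y:y + h, x:x + w] = obj_patch
    return scene, (x, y, w, h)

def make_training_set(background, obj_patch, n, seed=0):
    """Generate n (image, bounding-box) pairs with randomized placement."""
    rng = np.random.default_rng(seed)
    return [compose_scene(background, obj_patch, rng) for _ in range(n)]
```

In the actual approach, the same principle operates in 3D: each virtual scene assembly records where the target mesh was placed, so every rendered image arrives with exact annotations, and thousands of variations can be produced without manual labeling.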
So far, the simulation components have been assembled and tested. We report on the current status, first results of training and inference on the simulated data sets, and prospects of the approach.