R6-[SpaceGuide] - Details

Human and Robot Navigation in Structured Environments


The goal of this project is to investigate human and robot navigation in structured indoor environments designed by architects. In particular, we are interested in the interaction of environmental structure and cognitive processes. We focus on the challenge of spatial ambiguity, where a tight collaboration between robotics and human spatial cognition approaches promises new empirical and theoretical insights as well as technical applications. We will investigate how recurring structures and ambiguities of architectural environments influence human and robot navigation. It is a largely open question how humans handle a lack of architectural differentiation, i.e., the threat of confusing similar-looking locations in buildings. This directly corresponds to the challenge of undesired aliasing, or unknown data association, in robot navigation, especially in the simultaneous localization and mapping (SLAM) problem. We will pay special attention to the mechanisms that integrate different information sources and decision-making processes into a successful solution strategy. Combining task analysis with empirical validation studies, we will quantify spatial ambiguity for different types of buildings and navigation tasks.
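The aliasing problem can be made concrete with a toy sketch. Assuming fixed-angle range scans and a plain Euclidean distance (a hypothetical stand-in for the similarity measures the project aims to develop), two structurally identical corridor junctions are indistinguishable to the robot:

```python
import math

def scan_distance(scan_a, scan_b):
    """Euclidean distance between two fixed-angle range scans.
    A toy stand-in for a learned similarity measure."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(scan_a, scan_b)))

# Hypothetical 5-beam scans: two structurally identical corridor
# junctions yield identical readings, so the robot cannot tell the
# places apart -- the perceptual-aliasing / data-association problem.
corridor_1 = [2.0, 2.1, 5.0, 2.1, 2.0]
corridor_2 = [2.0, 2.1, 5.0, 2.1, 2.0]
lobby      = [8.0, 7.5, 9.0, 7.5, 8.0]

ALIASING_THRESHOLD = 0.5  # assumed tolerance for "indistinguishable"
print(scan_distance(corridor_1, corridor_2) < ALIASING_THRESHOLD)  # True: aliased
print(scan_distance(corridor_1, lobby) < ALIASING_THRESHOLD)       # False: distinct
```

A single-scan test like this is exactly where undifferentiated architecture defeats the robot; resolving it requires additional information sources, which motivates the sequence- and landmark-based approaches described below.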

Our established research into wayfinding factors and the investigation of spatial ambiguity will be translated into a comprehensive set of training materials and tools for architecture education. Architecture students will learn about the characteristics of human wayfinding to guide their design activity towards better orientation and navigability. A set of computational design tools - including a spatial ambiguity visualizer - will allow students to assess the navigation-related characteristics of their own designs and to master the underlying concepts in a hands-on manner.

Psychophysics and memory experiments on the visual discriminability of architectural scenes will provide training data for developing a similarity measure based on range scans, enabling comparable discrimination behavior in mobile robots. Complementing the wayfinding experiments with humans, we will develop a simulation environment to study wayfinding processes with mobile robots that use appropriate sensor models to interpret the readings they acquire in the same layouts used for the empirical studies. Based on this simulation environment, we will provide a tool for analyzing spatial ambiguity in building layouts, not only on the basis of single observations but also over sequences of observations along paths. Building on the capability to analyze ambiguities, we will then develop offline and online approaches for placing landmarks that maximize the navigation performance of the robots. To establish their viability as models of human cognition, we will compare the landmark placement strategies of human subjects with those obtained by the algorithms developed for the mobile robots.
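The offline landmark placement idea can be sketched minimally as a greedy procedure: find all pairs of places whose scans are confusable, then repeatedly mark the most-confused place until no aliased pairs remain. The Euclidean scan distance, the threshold, and the `unique_sign` perturbation are all assumptions standing in for a learned similarity measure and a real landmark model:

```python
import itertools
import math

def scan_distance(a, b):
    """Euclidean distance between two fixed-angle range scans (toy measure)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def aliased_pairs(scans, threshold):
    """All pairs of places whose scans are too similar to tell apart."""
    return [(i, j)
            for i, j in itertools.combinations(range(len(scans)), 2)
            if scan_distance(scans[i], scans[j]) < threshold]

def place_landmarks(scans, threshold, mark):
    """Greedy offline placement: repeatedly mark the place involved in
    the most aliased pairs until no two places are confusable.
    `mark` is a hypothetical operation that perturbs a scan the way a
    distinctive physical landmark would."""
    scans = [list(s) for s in scans]  # work on a copy
    placed = []
    while True:
        pairs = aliased_pairs(scans, threshold)
        if not pairs:
            return placed
        counts = {}
        for i, j in pairs:
            counts[i] = counts.get(i, 0) + 1
            counts[j] = counts.get(j, 0) + 1
        best = max(counts, key=counts.get)  # most-confused place
        scans[best] = mark(scans[best])
        placed.append(best)

# Three places; the first two are identical corridor junctions.
places = [[2.0, 2.0, 5.0], [2.0, 2.0, 5.0], [8.0, 8.0, 9.0]]

uid = itertools.count(1)
def unique_sign(scan):
    # Each landmark shortens the readings by a distinct amount,
    # as a visible sign mounted at that place would.
    return [r - 1.5 * next(uid) for r in scan]

print(place_landmarks(places, threshold=1.0, mark=unique_sign))
# One landmark at the first junction suffices to break the aliasing.
```

An online variant would run the same confusability check incrementally as observations arrive; comparing where humans choose to place signs against the output of such algorithms is the proposed validation step.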