Speaker: Professor Armin B. Cremers, University of Bonn, Germany
Speaker bio: Professor Armin B. Cremers received his doctoral degree in mathematics and his lectureship qualification in computer science from the University of Karlsruhe (now KIT). He has served on the computer science faculties of the University of Southern California, the University of Dortmund, and, since 1990, the University of Bonn, where he heads the Artificial Intelligence / Robotics / Intelligent Vision Systems research groups. In 2002 he became Founding Director of the Bonn-Aachen International Center for Information Technology (B-IT); he has been Emeritus since 2014. From Bonn he has contributed fundamentally to artificial intelligence and robotics, to the development of software engineering (particularly in civil engineering), and to information systems (particularly in the geosciences). His paper "The Interactive Museum Tour-Guide Robot" won the 2016 AAAI Classic Paper Award. From 2004 to 2008 he was Dean of the School of Mathematics and Natural Sciences, and from April 2009 to July 2014 he served as University Vice President for Planning and Finance.
Abstract: While impressive progress is being made in all areas of computer vision, humans remain far more effective than any machine at general, unconstrained interpretation of visual sensory input. Two fundamental, unconscious steps in human visual processing are Grouping and Saliency. Grouping describes the human ability to perceive patterns and structure in raw data and to form a scene-composition model, while Saliency quantifies how much certain scene parts stand out relative to their neighborhood and therefore attract attention.
This lecture will present our latest research on early vision processing inspired by this pair of biological mechanisms. While most current computational approaches model either Grouping or Saliency, we have developed an integrated system that combines both. Our method is designed to meet the strict real-time requirements that arise in mobile robotics. Nevertheless, this more holistic approach achieves state-of-the-art results in both subdisciplines. Furthermore, we have successfully deployed it on a mobile robot to assist grasping of arbitrary objects by reliably segmenting the object of interest in an image.
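To make the center-surround notion of saliency mentioned above concrete, the following is a minimal, illustrative sketch (not the speaker's actual method): each pixel's saliency is taken as its absolute difference from the mean of its local neighborhood, with the local mean computed efficiently via an integral image. All names and the window-based formulation here are assumptions for illustration only.

```python
import numpy as np

def local_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window at every pixel, via an integral image."""
    pad = np.pad(img, r + 1, mode='edge').astype(float)
    ii = pad.cumsum(axis=0).cumsum(axis=1)  # integral image of the padded input
    h, w = img.shape
    # Window sum from the standard four-corner integral-image identity.
    s = (ii[2*r+1:2*r+1+h, 2*r+1:2*r+1+w]
         - ii[:h, 2*r+1:2*r+1+w]
         - ii[2*r+1:2*r+1+h, :w]
         + ii[:h, :w])
    return s / (2*r + 1) ** 2

def saliency(img, r=2):
    """Center-surround saliency: how much a pixel differs from its surround mean."""
    return np.abs(img.astype(float) - local_mean(img, r))
```

A uniform image yields zero saliency everywhere, while an isolated bright pixel on a dark background receives the highest score, matching the intuition that saliency measures local contrast with the neighborhood. Real systems (including the integrated Grouping/Saliency approach described in the talk) operate at multiple scales and feature channels rather than a single grayscale window.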