Crossmodal Learning for Intelligent Robotics in conjunction with IEEE/RSJ IROS 2018

1st CALL FOR PAPERS for the international workshop:

* Crossmodal Learning for Intelligent Robotics * in conjunction with IEEE/RSJ IROS 2018

* Madrid, Spain – Friday 5 October 2018 *

* Website: http://www.informatik.uni-hamburg.de/wtm/WorkshopCLIR18/index.php *

I. Aim and Scope

The ability to efficiently process crossmodal information is a key feature of the human brain, underlying robust perceptual experience and adequate behavioural responses. Consequently, the processing and integration of multisensory information streams such as vision, audio, haptics, and proprioception play a crucial role in the development of autonomous agents and cognitive robots, enabling efficient interaction with the environment even under conditions of sensory uncertainty.

Multisensory representations have been shown to improve performance in research areas such as human-robot interaction and sensory-driven motor behaviour. The perception, integration, and segregation of multisensory cues improve a robot's capability to physically interact with objects and persons at higher levels of autonomy. However, multisensory inputs must be represented and integrated appropriately so that they yield a reliable perceptual experience that can trigger adequate behavioural responses. The interplay of multisensory representations can also be used to resolve stimulus-driven conflicts for executive control. Embodied agents can develop complex sensorimotor behaviour through interaction with a crossmodal environment, enabling the development and evaluation of scenarios that better reflect the challenges faced by robots operating in the real world.

This half-day workshop focuses on presenting and discussing new findings, theories, systems, and trends in crossmodal learning applied to neurocognitive robotics. The workshop will feature invited speakers with outstanding expertise in crossmodal learning.

II. Target Audience

This workshop is open to doctoral students and senior researchers working in computer science, cognitive science, psychology, neuroscience, and related areas with a focus on crossmodal learning.

III. Confirmed Speakers

1. * Yulia Sandamirskaya *
Institute of Neuroinformatics (INI), University of Zurich and ETH Zurich, Switzerland
2. * Angelo Cangelosi *
Plymouth University and University of Manchester, UK
3. * Stefan Wermter *
University of Hamburg, Germany

IV. Submission

1. Topics of interest:

– New methods and applications for crossmodal processing
(e.g., integrating vision, audio, haptics, proprioception)
– Machine learning and neural networks for multisensory robot perception
– Computational models of crossmodal attention and perception
– Bio-inspired approaches for crossmodal learning
– Crossmodal conflict resolution and executive control
– Sensorimotor learning for autonomous agents and robots
– Crossmodal learning for embodied and cognitive robots

2. For paper submission, use the following IEEE template:
<http://ras.papercept.net/conferences/support/support.php>

3. Submitted papers should be limited to *2 pages (extended abstract)* or *4 pages (short paper)*.

4. Send your PDF file to barros@informatik.uni-hamburg.de AND jirak@informatik.uni-hamburg.de

Selected contributions will be presented during the workshop as spotlight talks and in a poster session.

Contributors to the workshop will be invited to submit extended versions of their manuscripts to a special issue (to be arranged). Submissions will be peer-reviewed in line with the journal's practices.

V. Important Dates

* Paper submission deadline: August 15, 2018
* Notification of acceptance: September 5, 2018
* Camera-ready version: September 15, 2018
* Workshop: Friday 5 October 2018

VI. Organizers

* German I. Parisi * University of Hamburg, Germany
* Pablo Barros * University of Hamburg, Germany
* Doreen Jirak * University of Hamburg, Germany
* Jun Tani * Okinawa Institute of Science and Technology, Japan
* Yoonsuck Choe * Samsung Research & Texas A&M University, TX, USA
