Last days at Uni Hamburg!

Yesterday was my last day of work at #uniHamburg.
I was there for 6 years, counting 3 years of my Ph.D. There I met many people who changed the way I see the world, started a career I learned to love and care about, and met my life partner, to whom I am eternally grateful.

I am also very grateful to Prof. Stefan Wermter, head of the Knowledge Technology research group. He certainly took a gamble by accepting into his group an unknown, and rather average, Brazilian student who could barely speak English properly (not that much has changed :D).

Of course, one cannot do everything alone, and at Uni Hamburg I met an amazing network of collaborators on many different levels: from the students I supervised, to research colleagues, to visiting established scientists. I am very, very grateful to have met you all!

And I am very much looking forward to my next amazing adventure! Where in the world will this lead us next? 🤷‍♂️



Goodbye Crossmodal Learning Project – A5

In the last weeks, I took part in the defense of the first phase (~4 years of intense work!) and in the proposal for the second phase of the A5 subproject within the SFB Crossmodal Learning project (http://crossmodallearning.org).

Those were 4 great years collaborating with more than 80 of the best scientists from Hamburg and Beijing in computer science, psychology, and neuroscience. A5 was responsible for neurocomputational models inspired by cortico-collicular connections for crossmodal conflict resolution.

Together with our partners from the Chinese Academy of Sciences, A5 contributed more than 10% of the entire project's publications (15 subprojects, 300+ publications), organized conferences and workshops, and trained PhDs and postdocs, myself included :)

It was a very fruitful and educational experience, and it certainly changed how I see and act on the development and dissemination of science. I wish all the best for the possible second phase of the project, and I hope to keep collaborating with you all 🙂


Frontiers Research Topic

Are you working on multisensory processing involving humans and/or robots? What about contributing to our special Research Topic at Frontiers? Abstract deadline on the 28th of August, paper deadline on the 2nd of December! More info: https://lnkd.in/eMW8s_j


ICML Paper accepted & available

Extremely happy that our most recent paper “A Personalized Affective Memory Model for Improving Emotion Recognition” got accepted at ICML 2019! You can access it at this link: http://proceedings.mlr.press/v97/barros19a.html

Pablo

 


OMG-Empathy Final Results

Dear Teams,
I am very happy to announce that we have a winner 🙂
Before I congratulate the teams, I would like to say that I am very happy with the engagement of all the teams during the challenge. Preparing this challenge, from the design and data collection to the dissemination and organization, was a … challenge 😀
But we are very pleased with the outcome, and we also hope that the dataset and the evaluation protocol can contribute to the area of artificial empathy. I hope that we can collaborate and discuss with all of you in the future and that our paths cross again soon.
We were very impressed by all your efforts. You got our message that pure instantaneous perception would not work, and most of the solutions made use of some sort of temporal context. It was also amazing to see the use of different modalities and the solutions for multisensory synchronization.
By analyzing the results, we can see that Story 7 was the most challenging one, probably because the actor in Story 7 was quite different from the others. Maybe this could be a point to focus on in future work: how to deal with this problem.
It was a very close competition in both tracks. Here are the final results: https://www2.informatik.uni-hamburg.de/wtm/omgchallenges/omg_empathy2018_results2018.html
Not all the teams sent us a short description of their submission. If you still want a short description added to the table, please send it to me. Links to the papers will be added as soon as the reviewing process is done. If any information is wrong or missing, please let me know!
I want to congratulate the Alpha-City team for winning the 2018 OMG-Empathy Challenge on both tracks! Their solution combined different modalities and contextual processing to achieve a CCC of 0.17 on both tracks. Congratulations!
For the personalized track, the USTC-AC and the A*STAR AI teams both achieved a CCC of 0.14. The two teams used different solutions, both based on the synchronization of multisensory information, so both are awarded 2nd place. The USTC-AC team also obtained the 3rd best submission, with a CCC of 0.13; however, as they had already been awarded 2nd place, the Rosie team takes 3rd place, with a CCC of 0.08. They proposed a solution based on processing audio, images, and semantic information. Congratulations!!
For the generalized track, the same happened: the USTC-AC and the A*STAR AI teams were awarded a joint 2nd place. The EIHW team was awarded 3rd place; they provided a solution based on audio and image processing.
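As a side note for readers not familiar with the metric: the numbers above are concordance correlation coefficients (CCC) between a predicted trace and the listener's self-annotated trace. Below is a minimal Python sketch of how such a score can be computed; the traces in it are synthetic toy signals, not challenge data.

import numpy as np

def ccc(pred, gold):
    # Concordance correlation coefficient between two 1-D traces.
    pred, gold = np.asarray(pred, dtype=float), np.asarray(gold, dtype=float)
    cov = np.mean((pred - pred.mean()) * (gold - gold.mean()))
    return 2 * cov / (pred.var() + gold.var() + (pred.mean() - gold.mean()) ** 2)

# Toy example: a slowly varying "self-annotated" trace and a noisy prediction.
t = np.linspace(0, 10, 500)
gold = np.sin(t)
pred = np.sin(t) + 0.3 * np.random.randn(t.size)
print(round(ccc(pred, gold), 2))

Unlike plain Pearson correlation, the CCC also penalizes differences in mean and scale between prediction and annotation, which is part of why purely instantaneous, uncalibrated outputs tend to score poorly.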
In the next weeks, we will prepare and send the three best teams of each track a certificate stating their achievements.
And I think that’s it. Once again, thank you so much to all the teams for their efforts in participating in the challenge. It was super fun to organize it and to interact with you all.
I published a tweet with the link to the results, so if you want to like it or share it, be my guest: https://twitter.com/PBarros_br/status/1073605933056577538
Cheers,
Pablo, Angelica, and Nikhil

The OMG-Empathy Prediction Challenge

Dear all,

I am very happy to announce the opening of registration for our OMG-Empathy prediction challenge. For this challenge, we designed, collected, and annotated a novel corpus based on human-human interaction. This novel corpus builds on top of the experience we gathered while organizing the OMG-Emotion Recognition Challenge, making use of state-of-the-art frameworks for data collection and annotation.

The One-Minute Gradual Empathy datasets (OMG-Empathy) contain multi-modal recordings of different individuals discussing predefined topics. One of them, the actor, shares a story about themselves while the other, the listener, reacts to it emotionally. We annotated each interaction based on the listener’s own assessment of how they felt while the interaction was taking place.

For more information, please refer to our website: https://lnkd.in/dHx5mDs


Emotion Appraisal and Gesture Recognition Resources


Dear all,

We are very happy to announce the release of resource material related to my research on affective computing and gesture recognition. It contains datasets, source code for our proposed solutions, pre-trained models, and ready-to-run demos. Our proposed models are based on novel deep and self-organizing neural networks and deploy different mechanisms inspired by neuropsychological concepts. All our models are formally described in different high-impact peer-reviewed publications. We also provide a ready-to-run demo for visual emotion recognition built using our proposed models. These resources are accessible through our GitHub link: https://lnkd.in/deVT6eE

We hope that with these resources we can contribute to the areas of affective computing and gesture recognition and foster the development of innovative solutions.

 

Pablo


International Workshop on Affective and Assistive Robotics – Recife, Brazil (18.07.2018)

Dear all,

I am very happy to announce the First International Workshop on Affective and Assistive Robotics on the 18th of July 2018, organized in partnership with Prof. Bruno Fernandes (UPE) and with the strong support from Universidade de Pernambuco (UPE), Instituto de Inovação Tecnológica (IIT), CESAR and Parqtel.

This full-day workshop unites academia and industry by bringing together experts on the related topics from all over the world. We expect to foster discussions about the development and application of different affective and assistive robotic platforms in different scenarios.

We will host special invited speakers with different expertise in this field and an invited interactive contribution session where young researchers will detail their most recent work.

Please check our website for more information and a detailed view of the program: http://iwaar.ecomp.poli.br/

 


Crossmodal Learning for Intelligent Robotics in conjunction with IEEE/RSJ IROS 2018

 

1st CALL FOR PAPERS for the international workshop:

* Crossmodal Learning for Intelligent Robotics * in conjunction with IEEE/RSJ IROS 2018

* Madrid, Spain – Friday 5 October 2018 *

* Website: http://www.informatik.uni-hamburg.de/wtm/WorkshopCLIR18/index.php *

I. Aim and Scope

The ability to efficiently process crossmodal information is a key feature of the human brain that provides a robust perceptual experience and behavioural responses. Consequently, the processing and integration of multisensory information streams such as vision, audio, haptics and proprioception play a crucial role in the development of autonomous agents and cognitive robots, yielding an efficient interaction with the environment even under conditions of sensory uncertainty.

Multisensory representations have been shown to improve performance in the research areas of human-robot interaction and sensory-driven motor behaviour. The perception, integration, and segregation of multisensory cues improve the capability to physically interact with objects and persons with higher levels of autonomy. However, the multisensory input must be represented and integrated in an appropriate way so that it results in a reliable perceptual experience aimed at triggering adequate behavioural responses. The interplay of multisensory representations can be used to solve stimulus-driven conflicts for executive control. Embodied agents can develop complex sensorimotor behaviour through the interaction with a crossmodal environment, leading to the development and evaluation of scenarios that better reflect the challenges faced by operating robots in the real world.

This half-day workshop focuses on presenting and discussing new findings, theories, systems, and trends in crossmodal learning applied to neurocognitive robotics. The workshop will feature a list of invited speakers with outstanding expertise in crossmodal learning.
II. Target Audience

This workshop is open to doctoral students and senior researchers working in computer and cognitive science, psychology, neuroscience, and related areas with a focus on crossmodal learning.

III. Confirmed Speakers

1. * Yulia Sandamirskaya *
Institute of Neuroinformatics (INI), University and ETH Zurich
2. * Angelo Cangelosi *
Plymouth University and University of Manchester, UK
3. * Stefan Wermter *
Hamburg University, Germany

IV. Submission

1. Topics of interest:

– New methods and applications for crossmodal processing
(e.g., integrating vision, audio, haptics, proprioception)
– Machine learning and neural networks for multisensory robot perception
– Computational models of crossmodal attention and perception
– Bio-inspired approaches for crossmodal learning
– Crossmodal conflict resolution and executive control
– Sensorimotor learning for autonomous agents and robots
– Crossmodal learning for embodied and cognitive robots

2. For paper submission, use the following IEEE template:
<http://ras.papercept.net/conferences/support/support.php>

3. Submitted papers should be limited to *2 pages (extended abstract)* or *4 pages (short paper)*.

4. Send your pdf file to barros@informatik.uni-hamburg.de AND jirak@informatik.uni-hamburg.de

Selected contributions will be presented during the workshop as spotlight talks and in a poster session.

Contributors to the workshop will be invited to submit extended versions of the manuscripts to a special issue (to be arranged). Submissions will be peer reviewed consistent with the journal practices.

V. Important Dates

* Paper submission deadline: August 15, 2018
* Notification of acceptance: September 5, 2018
* Camera-ready version: September 15, 2018
* Workshop: Friday 5 October 2018

VI. Organizers

* German I. Parisi * Hamburg University, Germany
* Pablo Barros * Hamburg University, Germany
* Doreen Jirak * Hamburg University, Germany
* Jun Tani * Okinawa Institute of Science and Technology, Japan
* Yoonsuck Choe * Samsung Research & Texas A&M University, TX, USA


OMG-Emotion Recognition Challenge – Final Results

The final results of the 2018 OMG-Emotion Recognition Challenge are out: https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/

The leaderboard will be permanently stored on our website, and it will provide quick access to the results, the links to the formal descriptions, and the code repository for each solution.

This will help to disseminate knowledge generated by the challenge even further and will improve the reproducibility of your solutions.

The solutions used different modalities (ranging from unimodal audio and vision to multimodal audio, vision, and text), and thus provided us with a very complex evaluation scenario. We therefore decided to separate the results into one ranking for valence and one for arousal.

For arousal, the best results came from the GammaLab team. Their three submissions are our top 3 by arousal CCC, followed by the three submissions from the audEERING team and the two submissions from the HKUST-NISL2018 team.

For valence, the GammaLab team again takes first place (with their three submissions), followed by the two submissions from the ADSC team and the three submissions from the iBug team.

It is very interesting to note that the winning teams used a combination of unimodal and multimodal solutions.
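To make this concrete, here is a minimal Python sketch of what a late-fusion combination of unimodal predictions can look like. The predictions and weights below are toy values chosen only for illustration; this is not a description of any team's actual system.

import numpy as np

# Toy per-sample arousal predictions from three hypothetical unimodal models.
audio_pred  = np.array([0.20, 0.45, 0.60, 0.30])
vision_pred = np.array([0.10, 0.50, 0.55, 0.25])
text_pred   = np.array([0.30, 0.35, 0.40, 0.45])

# Late fusion: a weighted average of the unimodal outputs. In practice the
# weights would be tuned on a validation set (e.g., to maximize CCC).
weights = np.array([0.4, 0.4, 0.2])
fused = np.average(np.stack([audio_pred, vision_pred, text_pred]), axis=0, weights=weights)
print(fused)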

We will keep this leaderboard intact for our 2018 challenge and will create a general leaderboard later on, so the results of the challenge will remain on our website as they are.

Congratulations to you all!
