Emotion Appraisal and Gesture Recognition Resources


Dear all,

We are very happy to announce the release of resource material related to my research on affective computing and gesture recognition. It contains datasets, source code for our proposed solutions, pre-trained models, and ready-to-run demos. Our models are based on novel deep and self-organizing neural networks and employ different mechanisms inspired by neuropsychological concepts; all of them are formally described in peer-reviewed publications. We also provide a ready-to-run demo for visual emotion recognition built on top of these models. The resources are accessible through our GitHub link: https://lnkd.in/deVT6eE

We hope that with these resources we can contribute to the areas of affective computing and gesture recognition and foster the development of innovative solutions.

 

Pablo


International Workshop on Affective and Assistive Robotics – Recife, Brazil (18.07.2018)

Dear all,

I am very happy to announce the First International Workshop on Affective and Assistive Robotics on the 18th of July 2018, organized in partnership with Prof. Bruno Fernandes (UPE) and with strong support from Universidade de Pernambuco (UPE), Instituto de Inovação Tecnológica (IIT), CESAR, and Parqtel.

This full-day workshop unites academia and industry by bringing together experts on related topics from all over the world. We expect to foster discussions about the development and application of affective and assistive robotic platforms across different scenarios.

We will host invited speakers with diverse expertise in the field and an interactive contribution session where young researchers will present their most recent work.

Please check our website for more information and a detailed view of the program: http://iwaar.ecomp.poli.br/

 


Crossmodal Learning for Intelligent Robotics in conjunction with IEEE/RSJ IROS 2018

 

1st CALL FOR PAPERS for the international workshop:

* Crossmodal Learning for Intelligent Robotics * in conjunction with IEEE/RSJ IROS 2018

* Madrid, Spain – Friday 5 October 2018 *

* Website: http://www.informatik.uni-hamburg.de/wtm/WorkshopCLIR18/index.php *

I. Aim and Scope

The ability to efficiently process crossmodal information is a key feature of the human brain that provides a robust perceptual experience and behavioural responses. Consequently, the processing and integration of multisensory information streams such as vision, audio, haptics, and proprioception play a crucial role in the development of autonomous agents and cognitive robots, enabling efficient interaction with the environment even under conditions of sensory uncertainty.

Multisensory representations have been shown to improve performance in the research areas of human-robot interaction and sensory-driven motor behaviour. The perception, integration, and segregation of multisensory cues improve the capability to physically interact with objects and persons with higher levels of autonomy. However, multisensory input must be represented and integrated in an appropriate way so that it results in a reliable perceptual experience that triggers adequate behavioural responses. The interplay of multisensory representations can be used to solve stimulus-driven conflicts for executive control. Embodied agents can develop complex sensorimotor behaviour through interaction with a crossmodal environment, leading to the development and evaluation of scenarios that better reflect the challenges faced by robots operating in the real world.
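
To make the idea of integrating multisensory streams concrete, here is a
purely illustrative sketch (not tied to any specific workshop contribution)
that fuses placeholder vision and audio feature vectors by simple
concatenation before a shared read-out layer:

    import numpy as np

    # Placeholder embeddings standing in for the outputs of pretrained
    # vision and audio encoders; the dimensions are arbitrary.
    rng = np.random.default_rng(0)
    vision_features = rng.standard_normal(128)
    audio_features = rng.standard_normal(64)

    # Late fusion by concatenation: both modalities share one joint representation.
    fused = np.concatenate([vision_features, audio_features])

    # A single linear read-out stands in for any downstream decision layer.
    weights = rng.standard_normal((8, fused.size))
    response = weights @ fused
    print(response.shape)  # (8,) crossmodal response vector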

This half-day workshop focuses on presenting and discussing new findings, theories, systems, and trends in crossmodal learning applied to neurocognitive robotics. The workshop will feature invited speakers with outstanding expertise in crossmodal learning.

II. Target Audience

This workshop is open to doctoral students and senior researchers
working in computer and cognitive science, psychology, neuroscience
and related areas with a focus on crossmodal learning.

III. Confirmed Speakers

1. * Yulia Sandamirskaya *
Institute of Neuroinformatics (INI), University and ETH Zurich
2. * Angelo Cangelosi *
Plymouth University and University of Manchester, UK
3. * Stefan Wermter *
University of Hamburg, Germany

IV. Submission

1. Topics of interest:

– New methods and applications for crossmodal processing
(e.g., integrating vision, audio, haptics, proprioception)
– Machine learning and neural networks for multisensory robot perception
– Computational models of crossmodal attention and perception
– Bio-inspired approaches for crossmodal learning
– Crossmodal conflict resolution and executive control
– Sensorimotor learning for autonomous agents and robots
– Crossmodal learning for embodied and cognitive robots

2. For paper submission, use the following IEEE template:
<http://ras.papercept.net/conferences/support/support.php>

3. Submitted papers should be limited to *2 pages (extended abstract)* or *4 pages (short paper)*.

4. Send your pdf file to barros@informatik.uni-hamburg.de AND jirak@informatik.uni-hamburg.de

Selected contributions will be presented during the workshop as spotlight talks and in a poster session.

Contributors to the workshop will be invited to submit extended versions of their manuscripts to a special issue (to be arranged). Submissions will be peer-reviewed consistent with the journal's practices.

V. Important Dates

* Paper submission deadline: August 15, 2018
* Notification of acceptance: September 5, 2018
* Camera-ready version: September 15, 2018
* Workshop: Friday 5 October 2018

VI. Organizers

* German I. Parisi * University of Hamburg, Germany
* Pablo Barros * University of Hamburg, Germany
* Doreen Jirak * University of Hamburg, Germany
* Jun Tani * Okinawa Institute of Science and Technology, Japan
* Yoonsuck Choe * Samsung Research & Texas A&M University, TX, USA


OMG-Emotion Recognition Challenge – Final Results

The final results of the 2018 OMG-Emotion Recognition Challenge are out: https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/.

The leaderboard will be permanently stored on our website and will provide quick access to the results, the links to the formal descriptions, and the code repository for each solution.

This will help disseminate the knowledge generated by the challenge even further and will improve the reproducibility of the solutions.

The solutions used different modalities (ranging from unimodal audio
and vision to multimodal audio, vision, and text), and thus provided us
with a very complex evaluation scenario. We therefore decided to separate
the results into one ranking for arousal and one for valence.

For arousal, the best results came from the GammaLab team: their three
submissions achieved the top three arousal CCC scores, followed by the
three submissions from the audEERING team and the two submissions from
the HKUST-NISL2018 team.

For valence, the GammaLab team again ranks first (with their three
submissions), followed by the two submissions from the ADSC team and the
three submissions from the iBug team.

It is very interesting to note that the winning teams used a combination
of unimodal and multimodal solutions.

We will keep this leaderboard intact for the 2018 challenge and will
create a general leaderboard later on, so the results of the challenge
will remain available on our website.

Congratulations to you all!


The OMG-Emotion Recognition Challenge

 

CALL FOR PARTICIPATION

The One-Minute Gradual-Emotion Recognition (OMG-Emotion) Challenge,
held in partnership with WCCI/IJCNN 2018 in Rio de Janeiro, Brazil.

https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/

I. Aim and Scope

Our One-Minute-Gradual Emotion Dataset (OMG-Emotion Dataset) is composed
of 420 relatively long emotion videos with an average length of one
minute, collected from a variety of YouTube channels. The videos were
selected automatically based on specific search terms related to the
term “monologue”. Using monologue videos allowed for different
emotional behaviors to be presented in one context while changing
gradually over time. Videos were separated into clips based on
utterances, and each utterance was annotated by at least five
independent subjects using the Amazon Mechanical Turk tool. To maintain
the contextual information for each video, each annotator watched the
clips of a video in sequence and had to annotate each clip using an
arousal/valence scale and a categorical emotion based on Ekman's
universal emotions.

We release the dataset with the gold standard for arousal and valence as
well as the individual annotations from each reviewer, which can help the
development of different models. We will calculate the final Concordance
Correlation Coefficient (CCC) against the gold standard for each video.
We also distribute the transcripts of what was spoken in each of the
videos, as the contextual information is important to determine gradual
emotional change through the utterances. Participants are encouraged to
use crossmodal information in their models, as the videos were labeled
by humans without distinction of any modality. We will also make
available to the participating teams a set of scripts to help them
pre-process the dataset and evaluate their models during the training
phase.
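
For reference, a minimal sketch of how the Concordance Correlation
Coefficient between a gold-standard series and a set of predictions can
be computed (the numbers are made-up examples, not values from the
dataset):

    import numpy as np

    def ccc(gold, pred):
        # Concordance Correlation Coefficient between two sequences.
        gold = np.asarray(gold, dtype=float)
        pred = np.asarray(pred, dtype=float)
        gold_mean, pred_mean = gold.mean(), pred.mean()
        covariance = np.mean((gold - gold_mean) * (pred - pred_mean))
        return 2 * covariance / (gold.var() + pred.var() + (gold_mean - pred_mean) ** 2)

    # Made-up arousal values for four utterances of one video.
    gold_arousal = [0.10, 0.40, 0.35, 0.80]
    pred_arousal = [0.15, 0.38, 0.30, 0.70]
    print("Arousal CCC: %.3f" % ccc(gold_arousal, pred_arousal))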

We encourage the use of neural-computation models based on deep
learning, self-organization, and recurrent neural networks, among
others, as they represent the state of the art in such tasks.

II. How to Participate

To participate, please send us an email to
barros@informatik.uni-hamburg.de with the title “OMG-Emotion Recognition
Team Registration”. This e-mail must contain the following information:
– Team Name
– Team Members
– Affiliation

Each team can have a maximum of 5 participants. You will then receive
access to the dataset and all the important information about how to
train and evaluate your models.
For the final submission, each team will have to send us a .csv file
containing the final arousal/valence values for each of the utterances
in the test dataset, as sketched below. We also request a link to a
GitHub repository where your solution is stored, and a link to an arXiv
paper of 4-6 pages describing your model and results. The authors of the
best papers will be invited to submit their detailed research to a
journal yet to be specified. In addition, the best participating teams
will give an oral presentation about their solution during the
WCCI/IJCNN 2018 conference.
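
A minimal sketch of how such a submission file could be written. The
column names and utterance identifiers below are placeholders for
illustration only; the exact layout expected by the organizers'
evaluation scripts takes precedence:

    import csv

    # Hypothetical predictions: one arousal/valence pair per test utterance.
    # The field names and identifiers are illustrative placeholders, not the
    # official submission format.
    predictions = [
        {"video": "video_01", "utterance": "utterance_1.mp4", "arousal": 0.32, "valence": -0.10},
        {"video": "video_01", "utterance": "utterance_2.mp4", "arousal": 0.45, "valence": 0.05},
    ]

    with open("omg_submission.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["video", "utterance", "arousal", "valence"])
        writer.writeheader()
        writer.writerows(predictions)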

III. Important Dates

Publishing of training and validation data with annotations: March 14,
2018.
Publishing of the test data and opening of the online submission:
April 11, 2018.
Closing of the submission portal: April 13, 2018.
Announcement of the winner through the submission portal: April 18, 2018.

IV. Organization

Pablo Barros, University of Hamburg, Germany
Egor Lakomkin, University of Hamburg, Germany
Henrique Siqueira, University of Hamburg, Germany
Alexander Sutherland, University of Hamburg, Germany
Stefan Wermter, University of Hamburg, Germany


Workshop on Intelligent Assistive Computing at IEEE WCCI – July 8th, 2018


http://www.wac2018.ecomp.poli.br/

First call for papers

We kindly invite you to submit your contributions to the workshop
to be held in Rio de Janeiro, Brazil.

Assistive technologies aim to provide greater quality of life and independence in domestic environments by enhancing or changing the way people perform activities of daily living (ADLs), tailoring specific functionalities to the needs of the users. Significant advances have been made in intelligent adaptive technologies that adopt state-of-the-art learning systems applied to assistive and health-care-related domains. Prominent examples are fall detection systems, which detect domestic fall events through wearable physiological sensors or non-invasive vision-based approaches, and body gait assessment for physical rehabilitation and the detection of abnormal body motion patterns, e.g., linked to age-related cognitive decline. In addition to adequate sensor technology, such approaches require methods able to process rich streams of (often noisy) information in real time.

Assistive technology has been the focus of research in the past decades. However, it has flourished in recent years with the fast development of personal robots, smart homes, and embedded systems. The focus of this workshop is to gather neural network researchers, with both application and development focus, who work on or are interested in building and deploying such systems. Despite the high impact and application potential of assistive systems for society, there is still a significant gap between what is developed by researchers and the applicability of such solutions in real-world scenarios. This workshop will discuss how to alleviate this gap with the help of the latest neural network research, such as deep, self-organizing, generative, and recurrent neural models for adaptable lifelong learning applications. We aim to collect novel methods, computational models, and experimental strategies for intelligent assistive systems such as body motion and behavior assessment, rehabilitation and assisted living technologies, multisensory frameworks, navigation assistance, affective computing, and more accessible human-computer interaction.

The primary list of topics includes (but is not limited to):

– Machine learning and neural networks for assistive computing
– Behavioral studies on assistive computing
– Models of behavior processing and learning
– New theories and findings on assistive computing
– Human-machine, human-agent, and human-robot interaction focused on assistive computing
– Brain-machine interfaces for assistive computing
– Crossmodal models for assistive computing

Invited speakers
– Igor Farkas, Comenius University
– Giulio Sandini, Istituto Italiano di Tecnologia (IIT)
– Stefan Wermter, University of Hamburg

Call for contributions
Participants are required to submit a contribution as:

– Extended abstract (maximum 2 pages)
– Short paper (maximum 4 pages)

Selected contributions will be presented during the workshop as
spotlight talks and in a poster session.

Important dates
April 6, 2018 – Paper submission deadline
May 4, 2018 – Notification of acceptance
May 25, 2018 – Camera-ready version
July 8, 2018 – Workshop

Organizers
Pablo Barros, University of Hamburg
Francisco Cruz, Universidad Central de Chile
German I. Parisi, University of Hamburg
Bruno Fernandes, Universidade de Pernambuco

See more details in:
http://www.wac2018.ecomp.poli.br/


Opening of the page!

Hi there!

I created this page to serve as a hub for my work and some hobbies. Here you can find a bit more about my academic background, projects I work(ed) on, publications, and current and past student supervisions. Besides that, I will use this page to publicize events or information that I find interesting.

I will also use this space to host information about some of my hobbies, including video gaming, RC cars, and the development of applications for DIY and commercial robots.

I will try to keep this page updated as things happen.

Cheers,

Pablo
