D6.1: Dissemination and exploitation

This deliverable summarises the efforts covering T6.1–T6.3 in WP6. It documents dissemination activities, the various activities for stimulating the robot audition community (with special regard to linking the robotics community with the acoustic signal processing community), and exploitation activities (with regard to the general public, the robot industry as a whole, and ALD as an industrial partner). Along all three axes, the tangible impact of the respective activities over the entire project duration, as well as the expected future impact, is described.

This Deliverable is due in M36 and has been delivered to the EC on December 31, 2016.

Report: EARS_Report_on_D6-1_20161231_v1_HL

D5.4: Evaluation of humanoid robot demonstrators

Report on evaluation covering T5.4: the application scenario implemented and evaluated at the end of the project is described. Further refinements and tuning to optimise prototype performance, delivered by T2.6 and T3.3 in M31–M36, are documented. Evaluation methodologies are presented and results are discussed. Plans for future work beyond the project lifetime are presented.

This Deliverable is due in M36 and has been delivered to the EC on December 31, 2016.

Report: EARS_Report_D5-4_20170228_final_HL

D4.3: Human-robot interaction

This deliverable, covering the efforts of T4.1–T4.5, provides a human-robot interaction demonstration documented by a report, software, and a video. The interaction includes the following capabilities of the robot: an attentional system, behaviours for event recognition and localisation, and interaction initiated by the robot.

This Deliverable is due in M36 and has been delivered to the EC on December 31, 2016.

Report: EARS_Report_on_D4-3_20161231_v1_GS

The corresponding software (zip-archive) can be found here.

A demo video can be found here.

D3.2: Augmentation of robot audition by visual information

This deliverable reports on the audio-visual calibration process for multimodal data (including software), covering T1.5, and describes the methodology for the extraction of 3D descriptors based on visual cues (addressing T3.1, including software). Moreover, it summarises the results of T3.2 and T3.3 by describing the developed methods for audio-visual event localisation and classification (including software).

This Deliverable is due in M30 and has been submitted to the EC on December 31, 2016.

Report: EARS_Report_on_D3-2_20161231_v1_RH

D1.2: Microphone array design for humanoid robots

This report, covering T1.1–T1.4, describes the design of anthropomorphic and robomorphic microphone arrays for the second prototype, including methods for active sensing and sound field representation, facilitating the capture and representation of three-dimensional sound fields.

This Deliverable is due in M30 and has been submitted to the EC on June 30, 2016.

Report: EARS_Report_on_D12_20160630_final_NS

The corresponding software for the final prototype is provided via a repository (for EARS members), and the current version (June 30, 2016) can also be downloaded as a zip archive. The corresponding hardware (Benchmark II head) was presented at the Review Meeting on April 7, 2016 in Berlin.

D2.1: Microphone array signal processing for humanoid robots

This deliverable covers the entire set of acoustic signal processing algorithms developed for the given robot audition scenario and comprises software in addition to the report. More specifically, it describes the findings for:
• Acoustic source and environment mapping and tracking for real-world scenarios. Following a state-of-the-art review with analysis and the baseline algorithms, the advance beyond the state of the art in algorithms for acoustic source and environment mapping is presented, together with tracking methods for the robot scenario relevant in EARS and with special emphasis on the novel and time-varying array geometries.
• Spatial filtering (T2.4). Spatial filtering methods for microphone arrays in robots. The evaluation methodology as well as results are provided for prototype microphone arrays of the Nao head.
• Acoustic echo cancellation (T2.5). This covers acoustic echo cancellation for the special case of a moving robot with integrated and/or prototype microphone arrays.
• Multichannel noise reduction (T2.6). This presents the outcomes of the research on noise and interference suppression exploiting multichannel processing from the prototype microphone arrays for the Nao robot.
• Dereverberation (T2.7). The research on dereverberation is reported and the relevant software implementations are cross-referenced. This includes a state-of-the-art review, the evaluation methodology, and the results for baseline approaches set against the progress made beyond the state of the art in the newly developed algorithms.

For all areas, the relevant software for the developed algorithms is cross-referenced and evaluation results are reported.

This Deliverable is due in M30 and has been submitted to the EC on June 30, 2016.

Report: EARS_Report_on_D21_20160622_final_AM

The corresponding software for the final prototype is provided via a repository (for EARS members), and the current version (June 30, 2016) can also be downloaded as a zip archive.

D5.3: Microphone arrays and video-augmented robot audition for the humanoid robot NAO

This deliverable comprises a report on the physical integration of the microphone arrays (described in D1.3) into NAO: Aldebaran documents the mechanical and electronic integration of the microphone arrays into the head and body of NAO to optimise robot audition. D5.3 also includes the software, and a report on its structure and implementation, for the new video-augmented robot audition and interaction functionalities developed in EARS.

This Deliverable is due in M30 and has been submitted to the EC on June 30, 2016.

Report: EARS_Report_on_D53_20160630_PM

The corresponding software for the final prototype is provided via a repository (for EARS members), and the current version (June 30, 2016) can also be downloaded as a zip archive.

D4.2: Methodology and Software for a Computational Internal Model

This Deliverable on methodology and software for a computational internal model that accounts for self-induced action consequences results from T4.1. It provides software and a report on the internal model structure for predicting the consequences of actions and self-induced changes in the auditory signal. It is due in M24 and has been delivered to the EC on December 21, 2015.

Report: EARS_WP4_D4.2_Methodology_and_software_for_a_computational_internal_model_20151215_Final_GS_HL
Software: EARS_D4.2_Software_and_Documentation_2015_12_21