SONICOM is a 60-month project which started in January 2021. It was selected among proposals submitted for the FET Proactive Emerging Paradigms and Communities call (FETPROACT-EIC-07-2020), under subtopic A: Artificial Intelligence for extended social interaction.
With the help of Artificial Intelligence (AI) and data-driven technological paradigms, the SONICOM project will transform auditory social interaction and communication in Virtual and Augmented Reality (VR&AR). It focuses on immersive audio technologies, which could eliminate the differences between how we hear sound in physical and in remote communication. The SONICOM team presents the idea behind the project as follows:
“Picture yourself being able to dynamically change the position of the various participants within a virtual conversation, modifying also the acoustical characteristics of the simulated environment. Then extend this to an interaction where some participants are present in person in the same environment, and some are accessing it remotely; imagine ‘blending’ the real and virtual so that it is not possible, from an auditory point of view, to distinguish between the two.”
Lead investigator Dr Lorenzo Picinali, of Imperial College London’s Dyson School of Design Engineering, said: “Our current online interactions are very different to real-life scenarios: social cues and tone of voice can be lost, and all the sound comes from one direction. Our technology could help users have more real, more immersive experiences that convey the nuanced feelings and intentions of face-to-face conversations.”
In the first phase, the SONICOM team of researchers and creative tech experts from across Europe will design a new generation of immersive audio technologies and techniques to transform social interactions, specifically looking at customisation and personalisation of the audio rendering. The researchers will explore and analyse behavioural, physiological, kinematic, and psychophysical reactions of listeners within social interaction scenarios, in order to develop appropriate hardware and software proofs of concept.
SONICOM is part of the paradigm of emerging virtual technologies. Over the five-year project, the team aim to release a comprehensive ecosystem for auditory data closely linked with model implementations and immersive audio rendering components, reinforcing the idea of reproducible research, and promoting future development and innovation in the area of auditory-based social interaction. Dr Picinali said:
“Imagine a virtual meeting space where you see colleagues to your right, left, and across from you. We want to make this possible in audio form, using AI not only to improve and personalise sound, but also to monitor the reactions of the listeners and predict how this could influence the conversation.”
Alongside this work, the SONICOM consortium will be part of a cross-collaborative initiative with three other projects awarded under the same Horizon 2020 call, ‘Artificial Intelligence for extended social interaction.’ The aim is to identify synergies across their projects and refine a joint vision to maximise the impact of each project’s research in its respective emerging technological paradigm.
The SONICOM project brings together 10 experienced teams from six European countries (United Kingdom, France, Italy, Austria, Greece and Spain). The coordinating institution is Imperial College of Science, Technology and Medicine.
FET-Open and FET Proactive are now part of the Enhanced European Innovation Council (EIC) Pilot (specifically the Pathfinder), the new home for deep-tech research and innovation in Horizon 2020, the EU funding programme for research and innovation.