This workshop, organised by the FET SceneNet project, deals with the concept of combining multiple sources of audio-visual information and user-generated content, in the form of images and videos, to create an enhanced description of a scene that can be shared among users. In this workshop we aim to discuss several aspects of this topic: data acquisition, data synchronization, 3D/4D visual data reconstruction, computational issues, visualization aspects and, last but not least, the ethical, legal and social questions that arise from this concept.
This workshop will be a one-day event (30 September) that includes invited speakers, from both industry and academia, representing European projects funded under FP7 that deal with various aspects of this multidisciplinary topic. The organisers also plan to hold a panel and an open discussion session.
The following projects have confirmed their participation: