Objective ICT-2013.4.1c Natural spoken and multimodal interaction
Funding Schemes: STREP, CSA
Indicative budget distribution: STREP: EUR 21 million
Call: FP7-ICT-2013-10
Speech-enabled interfaces based upon multimodal verbal and non-verbal communication. Projects shall address autonomous, human-like social agents that can handle conversational speech; learn from interaction and react proactively to new communicative situations; and recognise and generate social cues. Systems should be able to cope with spontaneous dialogue and exhibit adequate communicative, affective and cognitive (e.g. question answering) capabilities in relation to the domain/task under consideration and to the needs and abilities of the user. Technologies should be designed to suit multiple delivery platforms, from virtual assistants (e.g. for customer service), through smartphones, to games.
The call invites:
(i) a few ambitious R&D projects investigating new approaches and research avenues well beyond the current state of the art; projects will be centred on cross-disciplinary approaches and partnerships, and will address multimedia content and multimodal interaction; they will encompass everyday language as found in, for example, consumer-generated content, cover multiple languages, and cater for written and/or spoken language as appropriate; technologies shall be adaptive, cope with massive volumes of content, and have a clear potential to support real-life processes;
(ii) one broad-based support action designed to establish a unifying roadmap in each of the domains under consideration, developing a compelling research and innovation agenda up to and beyond 2020, centred on close collaboration between research centres and commercial players (particularly SMEs), and based upon agreed reference architectures, common resources (standards, software, data), and shared development and evaluation facilities.