
University of Nottingham (UK) Seeks a Postdoctoral Researcher in Facial Expression Recognition

22 December 2014
Source: compiled by 知识人网

Post-Doctoral Researcher in Facial Expression Recognition for Virtual Humans

University of Nottingham - Computer Science

The University of Nottingham is looking for a first-rate postdoctoral researcher to conduct pure research in facial expression analysis and visual human behaviour understanding, allowing virtual humans to ‘see’ their dyadic partners. The research will be conducted within the recently awarded three-year EU ARIA-VALUSPA project. As the facial expression recognition researcher, your task will be to extend the state of the art in this field. We are particularly interested in novel adaptive expression recognition methodologies, improved inference of the meaning behind expressions, and the use of the temporal dynamics of expressions.
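For readers unfamiliar with the topic, the following is a minimal illustrative sketch (not part of the advert, and not the project's actual method) of what "using temporal dynamics" can mean in practice: classifying an expression from a short window of consecutive frames rather than from a single frame. The feature layout, window length, and label set are hypothetical assumptions, and the data here is synthetic.

```python
# Illustrative sketch only: classify facial expressions from short temporal
# windows of per-frame features, rather than from single static frames.
# Assumptions (hypothetical): per-frame features are already extracted
# (e.g. facial-landmark coordinates), and each window carries one label.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

WINDOW = 10       # frames per temporal window (hypothetical choice)
N_FEATS = 68 * 2  # e.g. 68 facial landmarks with (x, y) coordinates each

def windows_to_vectors(frame_feats, window=WINDOW):
    """Stack consecutive frames so the classifier sees temporal context."""
    n = len(frame_feats) - window + 1
    return np.stack([frame_feats[i:i + window].ravel() for i in range(n)])

# Synthetic stand-in data: 500 frames of features, one label per window.
rng = np.random.default_rng(0)
frames = rng.normal(size=(500, N_FEATS))
vectors = windows_to_vectors(frames)
labels = rng.integers(0, 6, size=len(vectors))  # e.g. 6 basic expressions

X_train, X_test, y_train, y_test = train_test_split(
    vectors, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy on synthetic data:", clf.score(X_test, y_test))
```

The point of the stacking step is that expression cues such as onset and offset speed only exist across frames; a per-frame classifier cannot see them, whereas a windowed one can.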

The ARIA-VALUSPA team at Nottingham will also include a Research Associate/Programmer who will work on the project to translate your research outcomes into actual modules for the framework. You will be working in the School of Computer Science’s Computer Vision Lab (http://www.cvl.cs.nott.ac.uk/). Within this lab, you will have the opportunity to collaborate with a number of face researchers: two lecturers (Michel Valstar and Yorgos Tzimiropoulos), four postdocs, and seven PhD students.

For your information, below is a short summary of the ARIA-VALUSPA project:

ARIA-VALUSPA will create a disruptive new framework that will allow easy creation of Affective Retrieval of Information Assistants (ARIA agents) capable of holding multi-modal social interactions in challenging and unexpected situations. The system will generate search queries and return the requested information by interacting with humans through virtual characters. These virtual humans will be able to sustain an interaction with a user for some time, and react appropriately to the user's verbal and non-verbal behaviour when presenting the requested information and refining search results.

The ARIA-VALUSPA project builds on the capacities of existing Virtual Humans developed by the consortium partners and/or made publicly available by other researchers, but will greatly enhance them. The assistants will be able to handle unexpected situations and environmental conditions, and the project will add self-adaptation, learning, European multilingual skills, and extended dialogue abilities to a multimodal dialogue system. The framework to be developed will be suitable for multiple platforms, including desktop PCs, laptops, tablets, and smartphones. The ARIAs will also be able to be displayed and operated in a web browser.