Funding

Self-funded

Project code

CCTS4531021

Department

School of Film, Media, and Creative Technologies

Start dates

October, February and April

Application deadline

Applications accepted all year round

Applications are invited for a self-funded, 3-year full-time or 6-year part-time PhD project.

The PhD will be based in the School of Creative Technologies and will be supervised by Professor Hui Yu and Dr Brett Stevens.

The work on this project will:

  • Develop methods for high-fidelity 3D face generation from images or videos. 
  • Focus on one or more of the following aspects: 3D geometric reconstruction, high-fidelity texture generation, facial detail generation, 3D animation and VR applications.
  • Demonstrate the effectiveness of the developed methods using the 3D facial animation system developed by the Visual Computing Group (VCG), and apply the developed technologies to mobile devices or VR headsets.

With the advancement of mobile devices (e.g. smartphones) and virtual reality (VR) headsets, performance-based facial animation has attracted increasing interest in recent years. Technologies for real-time 3D facial animation using a single camera have found applications in many areas, such as VR, augmented reality (AR), social media apps and online games. At the core of these technologies is the regeneration of realistic 3D human faces from images or videos. Earlier technologies used facial markers, optical flow, or special equipment (e.g. camera arrays) for 3D facial reconstruction and tracking in film and game production, which is not practical for consumer-level applications. Marker-less facial motion capture has been well researched in recent years, but reconstructing detailed dynamic 3D faces (dense 3D modelling) with high-fidelity texture from videos remains a challenging topic.

This project aims to develop methods and systems that can capture a high-fidelity 3D face in real time from videos without any special requirements such as calibration. This includes the reconstruction of high-quality 3D face geometry, texture and facial details. The outcome of the 3-year project will be a system that, given a video containing a human face, outputs a sequence of reconstructed high-fidelity 3D faces with facial details. The candidate can focus on any of the following aspects: 3D geometric reconstruction, high-fidelity texture generation, facial detail generation, 3D animation and VR applications.
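To make the target pipeline more concrete, the sketch below shows, in simplified form, one common approach to this kind of problem: regressing 3D morphable model (3DMM) coefficients for identity, expression, texture and pose from a single face crop using a convolutional encoder in PyTorch. The class name, parameter split and dimensions are illustrative assumptions only, not the design of the VCG system.

import torch
import torch.nn as nn
import torchvision.models as models

class FaceCoeffRegressor(nn.Module):
    """Illustrative sketch: predict 3DMM coefficients from one face crop."""
    def __init__(self, n_id=80, n_exp=64, n_tex=80, n_pose=6):
        super().__init__()
        # Assumed split into identity/expression/texture/pose coefficients;
        # the sizes here are placeholders, not values from the project.
        self.splits = [n_id, n_exp, n_tex, n_pose]
        self.backbone = models.resnet18(weights=None)  # image encoder
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, sum(self.splits))

    def forward(self, frame):
        # frame: (B, 3, 224, 224) RGB face crop taken from one video frame
        coeffs = self.backbone(frame)
        return torch.split(coeffs, self.splits, dim=1)

# Example: run the regressor on a dummy frame.
model = FaceCoeffRegressor()
identity, expression, texture, pose = model(torch.randn(1, 3, 224, 224))
print(identity.shape, expression.shape, texture.shape, pose.shape)

In a full system of this kind, the predicted coefficients would typically drive a 3DMM to produce a mesh and texture for each frame, with further stages adding fine facial detail, which is where the research challenges described above lie.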

The candidate will work with a team in the Visual Computing Group in the School of Creative Technologies. The team is very strong in this area and has published a series of outputs in leading journals (see Bibliography below), so the candidate will have an established system to build on. The VCG collaborates closely with industry, and the candidate will have the opportunity to undertake internships with industrial partners during the PhD. The candidate will also have access to state-of-the-art facilities in the multi-million-pound CCIXR centre.

The PhD candidate will work in the Visual Computing Group under the supervision of Professor Hui Yu in the School of Creative Technologies.

Fees and funding

Visit the research subject area page for fees and funding information for this project.

Funding availability: Self-funded PhD students only. 

PhD full-time and part-time courses are eligible for the UK Government Doctoral Loan (UK and EU students only – eligibility criteria apply).

Bench fees

Some PhD projects may include additional fees – known as bench fees – for equipment and other consumables, and these will be added to your standard tuition fee. Speak to the supervisory team during your interview about any additional fees you may have to pay. Please note, bench fees are not eligible for discounts and are non-refundable.

Entry requirements

You'll need a good first degree from an internationally recognised university (minimum upper second class or equivalent, depending on your chosen course) or a Master’s degree in computer animation/games, computer science, mathematics, engineering or a related area. In exceptional cases, we may consider equivalent professional experience and/or qualifications. English language proficiency is required at a minimum of IELTS band 6.5, with no component score below 6.0.

The ideal candidate will also have:

  • Knowledge of computer graphics, image processing or machine learning
  • Hands-on experience with Python or MATLAB; experience implementing deep learning approaches using TensorFlow or PyTorch is preferable
  • Ability to work in a collaborative environment
  • Good verbal and written communication skills


How to apply

We’d encourage you to contact Professor Hui Yu (hui.yu@port.ac.uk) to discuss your interest before you apply, quoting the project code.

When you are ready to apply, please follow the 'Apply now' link on the Digital and Creative Technologies PhD subject area page and select the link for the relevant intake. Make sure you submit a personal statement, proof of your degrees and grades, details of two referees, proof of your English language proficiency and an up-to-date CV. Our ‘How to Apply’ page offers further guidance on the PhD application process. 

When applying please quote project code: CCTS4531021