Funding: Self-funded

Project code: CCTS4480219

Department: School of Computing

Start dates: February and October

Application deadline: Applications accepted all year round

Applications are invited for a self-funded, 3-year full-time or 6-year part-time PhD project, to commence in October 2020 or February 2021.

The PhD will be based in the School of Computing and will be supervised by Dr Zhaojie Ju and Dr Chenguang Yang (Swansea University).

In human-robot interaction and collaboration, the robot is expected to detect, perceive and understand human motions in its environment so that it can interact, co-operate, imitate or learn in an intelligent manner.

Sensory information about both human motions and the environment is captured by various types of sensors, such as cameras, markers, accelerometers and tactile sensors. Research applications of human motion analysis in human-robot interaction and collaboration include programming by demonstration, imitation, tele-operation, activity or context recognition, and humanoid design. The extraction of meaningful information about the environment through perceptual systems also plays a key role in scene representation and recognition, further enabling the robot to interact with humans in a more natural way.

The aim of scene representation for HRI is to describe the way in which humans and robots tend to interact within a scene, and to generate a representation tied to geography that indicates which types of motion are likely to occur in which part of the scene. Such a representation enables a robot to respond efficiently to user commands that refer to spatial locations, object features or object labels, without re-performing a visual search each time.

We will investigate effective methods for scene representation using dynamic neural fields, including transient detectors and temporal variation models. The scene representation will be incorporated into the motion analysis framework to achieve a more effective and stable system.
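As a rough illustration only (not part of the project specification), the sketch below simulates a one-dimensional Amari-type dynamic neural field in Python: a transient stimulus at the location of a detected motion leaves a self-sustained bump of activity that can later be read out as a memory of where in the scene that motion occurred. All parameter values and names are illustrative assumptions.

import numpy as np

# Minimal 1-D Amari-type dynamic neural field (illustrative sketch only).
# tau * du/dt = -u + integral( w(x - x') * f(u(x')) dx' ) + S(x, t) + h
# All parameter values below are illustrative assumptions, not project settings.

N = 200                        # number of spatial samples across one scene dimension
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]
tau, h, dt = 10.0, -2.0, 0.1   # time constant, resting level, Euler step

def kernel(d, a_exc=2.0, s_exc=1.0, a_inh=1.0, s_inh=3.0):
    """Difference-of-Gaussians lateral interaction: local excitation, broader inhibition."""
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - a_inh * np.exp(-d**2 / (2 * s_inh**2))

W = kernel(x[:, None] - x[None, :])   # N x N lateral interaction matrix

def f(u):
    """Sigmoidal firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-u))

def step(u, stimulus):
    """One Euler step of the field dynamics driven by a sensory stimulus S(x, t)."""
    lateral = W @ f(u) * dx
    return u + dt / tau * (-u + lateral + stimulus + h)

# Transient stimulus: activity localised where a motion is currently detected.
u = h * np.ones(N)
S = 5.0 * np.exp(-(x - 2.0)**2 / 0.5)
for _ in range(500):
    u = step(u, S)

# A self-sustained bump of supra-threshold activity marks where the motion occurred,
# so the location can be recalled later without re-running a visual search.
print("peak location:", x[np.argmax(u)])

In a full system the field would be two-dimensional and driven by the perceptual pipeline, but the same bump dynamics provide the persistent, geography-tied representation described above.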


The objectives of the project are:

  • To develop a multimodal-sensing platform for human-robot interaction and collaboration, using various types of sensors (such as depth cameras, markers, accelerometers, tactile sensors, force sensors and bio-signal sensors) to capture both human motions and the operating environment.
  • To investigate a more robust and less noisy representation of human action features, including local and global features, that accounts for a variety of uncertainties, e.g. image quality, individual action habits and differing environments.
  • To investigate an advanced motion analysis framework, including hierarchical data fusion strategies and off-the-shelf probabilistic recognition algorithms, to synchronise and fuse the sensory information for real-time analysis and automatic recognition of human actions with satisfactory accuracy and reliable fusion results; priority is given to balancing the effectiveness and efficiency of the system (a brief illustrative sketch follows this list).
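
To make the fusion objective concrete, the sketch below shows one simple decision-level (late) fusion scheme in Python: each sensor stream is given its own class-conditional Gaussian model, and the weighted per-sensor log-likelihoods are summed before the action is recognised. The sensor names, feature dimensions and fusion weights are illustrative assumptions rather than the project's actual framework, which will investigate hierarchical fusion strategies.

import numpy as np

# Illustrative sketch of decision-level (late) fusion for multimodal action recognition.
# Sensor names, feature dimensions and fusion weights are assumptions for illustration only.

rng = np.random.default_rng(0)
ACTIONS = ["reach", "grasp", "handover"]
SENSORS = {"depth_camera": 12, "accelerometer": 6, "tactile": 4}   # feature dimensions
WEIGHTS = {"depth_camera": 0.5, "accelerometer": 0.3, "tactile": 0.2}

def fit_gaussian_models(features, labels):
    """Per-class diagonal Gaussian model for one sensor's feature stream."""
    models = {}
    for a in ACTIONS:
        x = features[labels == a]
        models[a] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return models

def log_likelihoods(models, x):
    """Log p(x | action) for a single synchronised feature vector."""
    out = {}
    for a, (mu, var) in models.items():
        out[a] = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return out

# Toy training data: each sensor observes every action with its own statistics.
labels = np.repeat(ACTIONS, 50)
offsets = {a: 2.0 * i for i, a in enumerate(ACTIONS)}
train = {name: np.vstack([rng.normal(offsets[a], 1.0, size=(50, dim)) for a in ACTIONS])
         for name, dim in SENSORS.items()}
models = {name: fit_gaussian_models(train[name], labels) for name in SENSORS}

def fuse_and_recognise(observation):
    """Weighted sum of per-sensor log-likelihoods; returns the most probable action."""
    score = {a: 0.0 for a in ACTIONS}
    for name, x in observation.items():
        ll = log_likelihoods(models[name], x)
        for a in ACTIONS:
            score[a] += WEIGHTS[name] * ll[a]
    return max(score, key=score.get)

# A synchronised multimodal observation that should look like a "grasp" (offset 2.0).
obs = {name: rng.normal(2.0, 1.0, size=dim) for name, dim in SENSORS.items()}
print("recognised action:", fuse_and_recognise(obs))

In practice the per-sensor models would be replaced by the probabilistic recognition algorithms mentioned above, and the fusion weights would be learned or adapted rather than fixed.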

Fees and funding

Visit the research subject area page for fees and funding information for this project.

Funding availability: Self-funded PhD students only. 

Full-time and part-time PhD courses are eligible for a government doctoral loan (UK and EU students only).

How to apply

We’d encourage you to contact Dr Zhaojie Ju (Zhaojie.ju@port.ac.uk) to discuss your interest before you apply, quoting the project code CCTS4480219.

When you are ready to apply, please follow the 'Apply now' link on the Computing PhD subject area page and select the link for the relevant intake. Make sure you submit a personal statement, proof of your degrees and grades, details of two referees, proof of your English language proficiency and an up-to-date CV. Our ‘How to Apply’ page offers further guidance on the PhD application process. 
