We are concerned with the analysis, processing, classification and generation of audio and visual signals in applications such as machine vision, computer graphics, avatars, and audio and speech processing.
We strive to model the relationship between speech audio and face-and-body gesture. We apply this model to drive the lip motion, facial expression, head motion (e.g. nodding) and upper-body motion of a graphics-generated digital character, or avatar, from users’ speech in real time. We also exploit the inverse of this model to estimate audio from lip motion, for applications in speech enhancement and speaker separation.
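As a rough illustration of the forward direction of this mapping only (not our actual model, which is learned from data), the sketch below turns each 40 ms frame of speech into log-spectral features and pushes them through a placeholder linear layer to produce blendshape-style animation weights. The frame size, feature dimension and the weights `W` and `b` are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: map per-frame speech features to avatar
# animation parameters with a placeholder linear model. A real
# speech-to-gesture system would use a learned sequence model.

SR = 16_000          # assumed sample rate (Hz)
FRAME = 640          # 40 ms frames for real-time streaming
N_FEATS = 64         # log-spectral features per frame
N_BLEND = 10         # e.g. jaw open, lip pucker, brow raise, head nod, ...

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(N_BLEND, N_FEATS))  # placeholder weights
b = np.zeros(N_BLEND)                                 # (would be trained)

def audio_frame_to_features(frame: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum of one audio frame (a stand-in for MFCCs)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    feats = np.log1p(spectrum)
    # Resample to a fixed feature length for the linear map.
    return np.interp(np.linspace(0, len(feats) - 1, N_FEATS),
                     np.arange(len(feats)), feats)

def features_to_blendshapes(feats: np.ndarray) -> np.ndarray:
    """Predict blendshape/pose weights in [0, 1] for one animation frame."""
    return 1.0 / (1.0 + np.exp(-(W @ feats + b)))      # squash to [0, 1]

# Streaming loop over a dummy signal (in practice: microphone input).
speech = rng.normal(size=SR)  # 1 s of noise as stand-in speech
for start in range(0, len(speech) - FRAME, FRAME):
    weights = features_to_blendshapes(
        audio_frame_to_features(speech[start:start + FRAME]))
    # `weights` would be sent to the avatar's animation rig here.
```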
Our research into automatic 3D modelling of urban environments and historic places has advanced with the aid of drones and reconstruction software that combines data from lidar, laser scanning and photogrammetry.
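Fusing scans from these different sensors typically requires rigidly aligning their point clouds. Purely as an illustration (our reconstruction pipeline is not shown here), the sketch below performs one ICP-style alignment iteration on synthetic stand-in data: nearest-neighbour matching with a k-d tree followed by a Kabsch/SVD solve for the best rigid transform.

```python
import numpy as np
from scipy.spatial import cKDTree

def align_once(source: np.ndarray, target: np.ndarray):
    """One ICP-style iteration: match nearest neighbours, then solve the
    best rigid transform (Kabsch/SVD) mapping `source` onto `target`."""
    # 1. Correspondences: each source point's nearest target point.
    _, idx = cKDTree(target).query(source)
    matched = target[idx]

    # 2. Best-fit rotation and translation between the matched sets.
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t, R, t

# Toy example: a photogrammetry cloud offset from a lidar cloud.
rng = np.random.default_rng(1)
lidar = rng.uniform(size=(500, 3))
photo = lidar + np.array([0.05, -0.02, 0.01])   # pure translation here
aligned, R, t = align_once(photo, lidar)
print("mean residual:", np.linalg.norm(aligned - lidar, axis=1).mean())
```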
Our interdisciplinary research into simulating the interactions of biomolecules during docking has led to haptic-assisted interactive docking software (www.haptimol.co.uk).
Our overarching goal in this area is to develop a software tool capable of visualising biomolecular structures, simulating the forces of interaction between them and, crucially, simulating the resulting conformational changes during molecular docking.
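As a much-simplified illustration of the kind of force evaluation this requires, the sketch below sums pairwise Lennard-Jones forces between placeholder ligand and receptor atoms and reduces them to a single net force that could be rendered on a haptic device. The parameters, coordinates and uniform atom types are assumptions for illustration, not HaptiMol's actual force model.

```python
import numpy as np

def lennard_jones_force(lig: np.ndarray, rec: np.ndarray,
                        epsilon: float = 0.2, sigma: float = 3.4,
                        cutoff: float = 12.0) -> np.ndarray:
    """Net Lennard-Jones force (arbitrary units) on the ligand atoms `lig`
    due to the receptor atoms `rec`; pairs beyond `cutoff` are ignored."""
    total = np.zeros(3)
    for p in lig:
        d = p - rec                       # vectors receptor atom -> ligand atom
        r = np.linalg.norm(d, axis=1)
        mask = r < cutoff
        r, d = r[mask], d[mask]
        sr6 = (sigma / r) ** 6
        # F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) * d / r^2
        mag = 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r ** 2
        total += (mag[:, None] * d).sum(axis=0)
    return total

# Toy coordinates (angstroms); real use would load PDB structures.
rng = np.random.default_rng(2)
receptor = rng.uniform(0.0, 20.0, size=(200, 3))
ligand = rng.uniform(8.0, 12.0, size=(20, 3))
force = lennard_jones_force(ligand, receptor)
print("force to render on the haptic device:", force)
```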
The BirthView virtual reality (VR) software aims to predict adverse childbirth outcomes on a per-patient basis. It models the complex biomechanical interactions between the fetus and the maternal pelvic tissues using explicit finite element (FE) and mechanical contact models.
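The explicit FE plus contact idea can be pictured as repeatedly advancing nodal positions with small explicit time steps while adding penalty forces wherever tissue penetrates a contacting surface. The mass-spring toy below sketches that loop for a 1D chain of nodes dropped onto a rigid plane; it is not BirthView's solver, and every constant is an illustrative placeholder.

```python
import numpy as np

# Toy explicit-dynamics sketch: a chain of nodes (a stand-in for an FE
# mesh of soft tissue) falling onto a rigid plane at y = 0, with
# penalty contact. All constants are illustrative placeholders.

N = 10                 # nodes
mass = 0.01            # kg per node (lumped mass)
k_spring = 200.0       # N/m between neighbouring nodes
rest = 0.05            # m rest length
k_contact = 5e3        # N/m penalty stiffness against the plane
g = np.array([0.0, -9.81])
dt = 1e-4              # explicit methods need small, stable time steps

pos = np.stack([np.zeros(N), 0.2 + rest * np.arange(N)], axis=1)  # (x, y)
vel = np.zeros_like(pos)

def internal_forces(p: np.ndarray) -> np.ndarray:
    """Linear spring forces between neighbouring nodes (a stand-in for the
    FE internal force term f_int(u))."""
    f = np.zeros_like(p)
    d = p[1:] - p[:-1]
    length = np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-9)
    fs = k_spring * (length - rest) * d / length   # tension along each edge
    f[:-1] += fs
    f[1:] -= fs
    return f

for _ in range(20_000):                            # 2 s of simulated time
    f = internal_forces(pos) + mass * g
    # Penalty contact: push any penetrating node back above the plane.
    pen = np.minimum(pos[:, 1], 0.0)
    f[:, 1] += -k_contact * pen
    acc = f / mass
    vel += dt * acc                                # symplectic Euler update
    pos += dt * vel                                # (explicit: no system solve)

print("final node heights (m):", np.round(pos[:, 1], 3))
```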
These technologies share many theoretical foundations, and we collaborate intensively with other laboratories, namely the Colour lab, Data Science and Statistics, and Machine Learning. We also collaborate closely with the Norfolk and Norwich University Hospital, the School of History at UEA, various industries and, internationally, with Epic Games.