Keynote Speakers

  • Christopher Nugent, Ulster University, "Development and evaluation of technologies to support ambient assisted living"
  • Paris Smaragdis, University of Illinois at Urbana-Champaign, "The CrowdMic: Making crowdsourced recordings at scale"
  • Maja Pantic, Imperial College London, "Automatic analysis of facial expressions"
  • Katy Noland, BBC R&D, "Hybrid Log-Gamma and high dynamic range television"
  • Samson Cheung, University of Kentucky, "Multimedia and autism"
  • Tobi Delbruck, Institute for Neuroinformatics, ETH Zurich, "Silicon retina technology"

Christopher Nugent, Ulster University

"Development and evaluation of technologies to support ambient assisted living"


Prof Chris Nugent is the Director of the Computer Science Research Institute and holds the position of Professor of Biomedical Engineering. He is based within the School of Computing and Mathematics, Faculty of Computing and Engineering at Ulster University. He received a Bachelor of Engineering in Electronic Systems and a DPhil in Biomedical Engineering, both from Ulster University. Chris joined Ulster University as a Research Fellow in 1999 and was appointed as Lecturer in Computer Science in 2000. Following this he held the positions of Senior Lecturer and Reader within the Faculty of Computing and Engineering before his appointment as Professor of Biomedical Engineering in 2008. In 2016 he was awarded the Senior Distinguished Research Fellowship from Ulster University. His research within biomedical engineering addresses the development and evaluation of technologies to support ambient assisted living. Specifically, this has involved research in mobile-based reminding solutions, activity recognition and behaviour modelling, and more recently technology adoption modelling. He has published extensively in these areas, with papers spanning the theoretical, clinical and biomedical engineering domains. He has been a grant holder of research projects funded by national, European and international funding bodies. He is the Group Leader of the Smart Environments Research Group and co-Principal Investigator of the Connected Health Innovation Centre at Ulster University.

Paris Smaragdis, University of Illinois at Urbana-Champaign

"The CrowdMic: Making crowdsourced recordings at scale"


Prof Paris Smaragdis is an associate professor in the Computer Science and Electrical and Computer Engineering departments of the University of Illinois at Urbana-Champaign, as well as a senior research scientist at Adobe Research. He completed his master's, PhD, and postdoctoral studies at MIT, performing research on computational audition. In 2006 he was selected by MIT's Technology Review as one of the year's top young technology innovators for his work on machine listening; in 2015 he was elevated to IEEE Fellow for contributions to audio source separation and audio processing; and for 2016-2017 he is an IEEE Signal Processing Society Distinguished Lecturer. He has authored more than 100 papers on various aspects of audio signal processing, holds more than 40 patents worldwide, and his research has been productized by multiple companies.

Maja Pantic, Imperial College London

"Automatic analysis of facial expressions"


Prof Maja Pantic is a Professor of Affective and Behavioural Computing and leader of the i·BUG group at Imperial College London, UK, working on machine analysis of human non-verbal behaviour and its applications to human-computer, human-robot, and computer-mediated human-human interaction. Prof. Pantic has published more than 250 technical papers in the areas of machine analysis of facial expressions, machine analysis of human body gestures, audiovisual analysis of emotions and social signals, and human-centred machine interfaces. Her work has received more than 15,000 citations, and she has served as keynote speaker, chair and co-chair, and as an organising/programme committee member at numerous conferences in her areas of expertise.

Katy Noland, BBC R&D

"Hybrid Log-Gamma and high dynamic range television"


Dr Katy Noland is a Senior Research and Development Engineer in the Broadcast and Connected Systems section of BBC R&D, specialising in how the human visual system perceives digital video, with applications in video format specification and conversion. As part of the team working on ultra-high-definition television, she has investigated the limits of human motion perception for high frame rate (HFR) television, and has been heavily involved in development work on the Hybrid Log-Gamma system for high dynamic range (HDR) television. She graduated from the Tonmeister course in Music and Sound Recording at the University of Surrey in 2003. She went on to receive an MSc in Digital Signal Processing from Queen Mary University of London, and joined the Centre for Digital Music there to study for a PhD in automatic analysis of tonal harmony, which she received in 2009. In 2006 she also became a teaching fellow in the Department of Electronic Engineering at Queen Mary, teaching audio and video signal processing. After six months as a visiting researcher at a leading consumer electronics manufacturer, Katy joined BBC Research and Development in 2011.

Samson Cheung, University of Kentucky

"Multimedia and autism"


Sen-Ching “Samson” Cheung is a Professor of Electrical and Computer Engineering and the Director of the Multimedia Information Laboratory (Mialab) at the University of Kentucky (UKY), Lexington, KY, USA. He is currently the endowed Blazie Family Professor of Engineering. Before joining UKY in 2004, he was a postdoctoral researcher with the Scientific Data Mining Group at Lawrence Livermore National Laboratory. He received his Ph.D. degree from the University of California, Berkeley in 2002. He won the R&D 100 Award in 2006, and the Best Poster Award at the British Machine Vision Conference and the Ralph E. Powe Junior Faculty Enhancement Award in 2005. He is a senior member of both the IEEE and the ACM. He has had the good fortune of working with a team of talented students and collaborators at Mialab in a number of areas of multimedia, including video surveillance, privacy protection, encrypted-domain signal processing, 3D data processing, virtual and augmented reality, and computational multimedia for autism therapy. More details about current and past research projects can be found on the Mialab website.

Tobi Delbruck, Institute for Neuroinformatics, ETH Zurich

"Silicon retina technology"


Tobi Delbruck (IEEE M’99–SM’06–F’13) received a Ph.D. degree from Caltech in 1993. He has been a professor of physics and electrical engineering at the Institute of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland, since 1998. His group, which he coordinates together with Dr. Shih-Chii Liu, focuses on neuromorphic event-based sensors and sensory processing. He has co-organized the Telluride Neuromorphic Cognition Engineering summer workshop and the live demonstration sessions at ISCAS and NIPS. Delbruck is past Chair of the IEEE CAS Sensory Systems Technical Committee. He worked on electronic imaging at Arithmos, Synaptics, National Semiconductor, and Foveon, and has founded three spin-off companies, including a community-oriented organization that has distributed R&D prototype neuromorphic sensors to more than a hundred organizations around the world. He has received 9 IEEE awards.

Contact Us

By telephone
During office hours
(Monday-Friday 08:30-17:00)
+44 (0)1234 400 400

Outside office hours
(Campus Watch)
+44 (0)1582 74 39 89


By post
University of Bedfordshire
University Square