Dr Kate Farrahi of the University of Southampton introduces wearable tech for epidemiology, followed by the 2021 talk by Cecilia Mascolo, Professor of Mobile Systems at the University of Cambridge.
In 2015, Bill Gates warned us, in a now very famous TED talk, that the greatest risk of global catastrophe is an epidemic, how unprepared we actually are for such a catastrophe, and how mobile phones and wearable technology could help us in such an event.
In her introductory talk, Dr Kate Farrahi describes a research project she worked on before Gates' warning, and long before our current pandemic, which demonstrates the power of wearable technology to help us during epidemics.
The main event - Turing Talk 2021
Professor Cecilia Mascolo’s talk - Sounding out wearable and audio data for health diagnostics - focuses on research related to making wearable sensing data collection and data processing more efficient and effective, including the innovative use of audio data for health diagnostics.
In her presentation, she examines whether wearables can be used to predict our future fitness, how data privacy can be maintained and how we can literally listen to our bodies for answers about our health.
Current barriers
Wearable technology, although rapidly improving, is not quite where it needs to be. That’s partly because human behaviours are difficult to sense - for four main reasons:
- Continuous input is needed to get a proper sense of something but can be difficult to ensure.
- User input and attention are precious resources, so if a behaviour requires manual input, the resulting data may not be precise.
- Sensors and devices can be expensive, so they aren’t easily affordable or scalable.
- The sensing can be invasive, which may make it uncomfortable or even dangerous for people to undergo.
And so, in all these cases, there is research to be done to find better devices, or more efficient ways of gathering information.
Tech for better health
Fitness is a good example of a bodily measure that is useful to have; knowing someone is fit is also a good indicator for predicting their future health.
The way someone’s fitness is currently assessed is often either cumbersome and involved (think of the VO2 max test!) or relies on a cruder proxy such as a questionnaire asking how many times you’ve exercised this week, for how long, what kind of exercise you did and how intensively you worked out.
However, wearable sensing is now being used as a method for health monitoring and diagnostics, using information such as speed, GPS location and heart rate while exercising to estimate a person’s VO2 max.
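As a very rough illustration of the idea (not Professor Mascolo’s actual model), the sketch below fits a simple linear model mapping features a wearable could record during a workout - speed, heart rate and resting heart rate - to a lab-measured VO2 max. All numbers and feature choices are made up for illustration:

```python
import numpy as np

# Hypothetical training data: one row per workout session.
# Columns: mean running speed (km/h), mean heart rate (bpm), resting heart rate (bpm).
X = np.array([
    [9.5, 155, 62],
    [11.0, 162, 58],
    [8.0, 170, 71],
    [12.5, 150, 52],
    [10.0, 165, 65],
])
# Lab-measured VO2 max (ml/kg/min) for the same people - made-up values.
y = np.array([42.0, 51.0, 35.0, 58.0, 44.0])

# Fit a linear model with a bias term via least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict VO2 max for a new session recorded by a wearable (speed, HR, resting HR, bias).
new_session = np.array([10.5, 158, 60, 1.0])
print(f"Estimated VO2 max: {new_session @ coeffs:.1f} ml/kg/min")
```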
It’s not all in the wrist
A problem Mascolo mentions from her research is that current wearables often under-predict VO2 max. One reason for this could be that the wrist is actually one of the worst places to accurately obtain a heart rate - yet many smartwatches and trackers use photoplethysmography (PPG - those green lights) to detect your pulse from the wrist.
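To make the idea concrete, here is a minimal sketch of how a pulse rate can be read from a PPG waveform by counting peaks. The signal below is synthetic; a real wrist recording is far noisier (motion artefacts, poor skin contact), which is part of why wrist readings can be inaccurate:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 50                       # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of signal

# Synthetic PPG-like waveform at ~72 bpm (1.2 Hz) plus noise;
# a real wrist signal would also contain motion artefacts.
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

# Count pulse peaks, enforcing a refractory period of ~0.4 s between beats.
peaks, _ = find_peaks(ppg, distance=0.4 * fs, prominence=0.5)
bpm = len(peaks) / (t[-1] - t[0]) * 60
print(f"Estimated heart rate: {bpm:.0f} bpm")
```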
However, the wrist is not the only place from which we can get this kind of physiological information. In fact, an interesting area of research is exploring what the next generation of devices, such as earbuds, can offer as sensing modalities for our health.
The importance of audio
Audio input is an important factor to consider, Mascolo explains, because microphones are embedded in almost all the devices we carry with us - and microphones are quite cheap, so behaviour monitoring techniques harnessing this technology could be very scalable. Another benefit is that whereas a doctor or other professional cannot be expected to monitor someone all day, these devices can record continuously.
Hearing voices
Surprisingly, the human voice can be quite indicative not only of emotions but also of disease. Mascolo references an MIT Technology Review article, ‘Voice Analysis Tech Could Diagnose Disease’, which indicates that voice features could be analysed for patterns, such as signs of post-traumatic stress disorder, psychiatric disorders and heart disease. It suggests that our voice is affected by the hardening of the cardiovascular system and that, therefore, physiological changes can be detected via sound.
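To give a flavour of what “analysing voice features” can mean in practice, here is a hedged sketch using librosa to summarise recordings as MFCCs (a common timbre descriptor) and scikit-learn to fit a simple classifier. The file names and labels are placeholders for illustration; this is not the specific analysis described in the article:

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def voice_features(path):
    """Summarise a recording as the mean of its MFCCs."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Placeholder file lists and labels (0 = control, 1 = condition of interest).
paths = ["control_01.wav", "control_02.wav", "patient_01.wav", "patient_02.wav"]
labels = np.array([0, 0, 1, 1])

X = np.vstack([voice_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Classify a new recording.
print(clf.predict(voice_features("new_recording.wav").reshape(1, -1)))
```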
The same concept has been applied to potential diagnosis over calls to emergency services and now even from home assistants like Google Home or Amazon Alexa - imagine the possibilities.
Listen to your body
It's not just about the voice, however - microphones can be put on all parts of our body. Listening to the heart or lungs (auscultation) is a very old technique, but it is quite difficult to train in. In fact, the skill is now often substituted by more complex machinery such as ultrasound or echocardiography.
But what if the ear doing the listening wasn't a human ear but a machine? It would be much easier and much more scalable. The problem is, there is no large dataset of sounds with which to train machine learning algorithms to do any better - the skill is currently learnt by listening to real patients. Mascolo has secured a European Research Council grant to collect large-scale data and build robust models for the screening, classification and progression of disease.
However, machine learning alone won’t solve diagnostic issues; what is needed is an approach that integrates clinical expertise with the advantages of automated techniques.
The sound of COVID
As we know, COVID very often has respiratory symptoms. It is already possible to do some automatic sound analysis to understand diseases such as COPD, asthma or pneumonia; so, along with colleagues at Papworth Hospital, Cambridge, Mascolo had the idea of conducting a large-scale data collection for COVID via an app.
The app collects demographics, symptoms and medical history, but it also allows users to input their breathing sounds, their coughs and recordings of themselves reading a sentence, which are then analysed.
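To give a feel for what one crowd-sourced submission might contain, here is a hypothetical Python record - the field names and structure are assumptions for illustration, not the actual schema of the team’s app:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Submission:
    """One hypothetical crowd-sourced record (illustrative schema only)."""
    user_id: str
    age_band: str                     # e.g. "30-39", kept coarse for privacy
    symptoms: List[str]               # e.g. ["dry cough", "fever"]
    medical_history: List[str]        # e.g. ["asthma"]
    covid_test_result: Optional[str]  # "positive", "negative" or None if untested
    breathing_wav: str                # paths to the three audio recordings
    cough_wav: str
    reading_wav: str

sample = Submission(
    user_id="anon-0001",
    age_band="30-39",
    symptoms=["dry cough"],
    medical_history=[],
    covid_test_result=None,
    breathing_wav="breathing.wav",
    cough_wav="cough.wav",
    reading_wav="reading.wav",
)
```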
Having an app that, in theory, could use sounds to diagnose COVID, or to track its progression, would allow for a very scalable solution. As it’s an app, there is no in-person contact, it's not invasive, it's affordable and the testing side of things is simply a machine learning algorithm.
So, the premise is quite good. Can the team do it? Well, that's what they’re trying to understand. Over 30,000 participants from many countries have so far contributed data.
Watch the full recording to learn what Professor Mascolo’s team have found.