Two prominent universities are trying to develop apps that listen to users’ voices and coughs to predict whether they are ill with the coronavirus.
But the two projects are taking different approaches to privacy.
Cambridge University's effort seeks to keep volunteers anonymous, but the team says this is currently limiting its work.
Meanwhile, a team at Carnegie Mellon University says it is important that users register, but its service has had to go offline temporarily.
The two plans are independent of one another.
Both depend on machine learning, a form of artificial intelligence in which computers analyse large amounts of data to find patterns that can be used to solve problems.
In this case, the goal is to be able to distinguish Covid-19 from other illnesses, including the flu.
Both teams acknowledge that the resulting software would not replace the need for other medical tests.
Cambridge University launched its Covid-19 Sounds project on Tuesday.
Members of the public are being asked to breathe and cough into a computer’s microphone, and to provide details of their age, approximate location, gender and whether they have recently tested positive for the coronavirus.
They are then asked to read the following sentence three times: “I hope my data can help to manage the virus pandemic.”
“The goal is to collect enough data to check whether, from these sounds, we can recognise people who have Covid-19 and possibly even the stage of the disease,” explains Prof Cecilia Mascolo.
“If we make this work, we could potentially help services like the UK’s 111 NHS helpline.”
In its first day, about 1,200 people submitted recordings, 22 of whom said they had recently tested positive.
The team hopes to have a product ready in as little as two months’ time.
“The evaluation won’t take too long, but it all depends on the quality of the data we gather,” Prof Mascolo adds.
At present, the project is limited to collecting samples through a website, rather than a smartphone app.
This is partly because Google and Apple are restricting who can publish coronavirus-related apps to their stores, and this effort has yet to qualify.
“An app would be better because it could go back to the volunteers on subsequent days and ask them to make recordings again,” says Prof Pietro Cicuta, another team member.
This is not possible to do through the website, he adds, without compromising users’ anonymity.
Carnegie Mellon’s tool briefly went live on 30 March. Users were asked to cough, recite the alphabet and record vowel sounds, as well as provide information about themselves.
At the end of the process, the tool gave an indication of how likely they were to have Covid-19.
But the researchers realised a rethink was required.
“It doesn’t matter how many disclaimers you put up there – how clearly you tell people that this has not been medically certified – some people will treat the machine as the word of God,” explains Dr Rita Singh.
“If a system tells a person who has Covid-19 that they don’t have it, it could kill that person.
“And if it tells a healthy person they have it, and they go off to be tested, they may use up valuable resources that are already scarce.
“So, we have little room for error either way, and are thinking about how to present the results so that these risks are eliminated.”
She still wants to bring the data-gathering feature of the service back online before the end of this week.
The plan is to let users register without having to give their names. But unlike Cambridge’s effort, volunteers will need to set up an account linked to their email address.
Dr Singh says this is necessary to provide users with updated feedback as the tool becomes more accurate – for instance, if someone moves into a high-risk group.
“The other thing is that we take the right to be forgotten seriously,” she adds.
“So they should have the ability to come back to us years down the line, push a button and say, ‘I want every sample of my voice deleted.’”
While both projects are optimistic about their prospects, another expert in AI-based sound recognition has concerns.
“With London and the Midlands suffering the worst Covid-19 outbreaks in the UK, regional variations in the way people sound mean some regions could have a disproportionate influence on the AI model unless carefully controlled for in the data,” remarked Chris Mitchell, chief executive of Audio Analytic.
“The other challenge is purely technical.
“Capturing detailed respiratory sounds for expert analysis is difficult without specialist microphones, and both experiments require participants to submit audio using a smartphone [or PC].”