TAMPA, Fla. - Your voice carries a lot of information, and voice doctors at the University of South Florida are working on a way to use changes in the voice to detect early signs of diseases and disorders.
The National Institutes of Health awarded USF millions of dollars this month to develop a database of patient voices to train artificial intelligence for medicine. USF Health's Dr. Yael Bensoussan is leading the research project, called "Voice as a Biomarker of Health."
The project is a collaboration with Weill Cornell Medicine at Cornell University and 10 other institutions across the U.S. and Canada.
"So when we think about the voice, we think about speech and the way we speak with the way we say sentences, how fast we speak with what volume," said Dr. Bensoussan, a laryngologist at USF Health and director of the USF Health Voice Center. "We know that it's been linked to a lot of health issues and health disease, and we know it can bring a lot of information. I think what we didn't have until now is that technology."
The research and development are part of the NIH's Bridge2AI initiative. The project will last four years, with $3.8 million from the NIH funding the first year. Subsequent funding is contingent on yearly amounts approved by Congress and could bring the overall award to $14 million.
Bensoussan said they are gathering between 20,000 and 30,000 patient voices to help train AI to spot early signs of diseases or disorders. This includes cancers, respiratory problems, pediatric speech delays, autism, depression and bipolar disorder.
"We know that a lot of neurological disorder can change the way we speak. For example, when people have had a stroke or when people have Alzheimer's, the content of what they say is different. When people have Parkinson's, for example, the way they talk slows down is a lot lower," said Bensoussan.
They are also collecting voices of people who have neurological and neurodegenerative disorders.
"It's also important to get normal patients or patients who don't have that disease for the machine learning to understand the difference between the voices," she said.
The idea is to develop an app that doctors anywhere could use with their patients.
"Sometimes for clinicians that do not do what we do, it's hard to distinguish if the patient has a cancer from the voice box or just has laryngitis, for example. And it's really important, because we don't treat these two patients the same," said Bensoussan.
A patient would speak into their phone or use another device to record their voice, then the app would check the sounds against the database, Bensoussan said.
"At some point the app can say, well, you're not doing well. You know, your breathing doesn't sound good. You should go to the doctor," she added.
It will take about four years to develop, and doctors said they will need every day of that time to track the health of thousands of voices.
"We know that that really helps in terms of patient outcome with early screening and early diagnosis," said Bensoussan.
The project plans to collect patient voices across different genders, races and dialects to help train the AI on diverse speech and sounds. The USF-led project is one of four funded under the NIH's AI initiative.