Some awesome things are happening in healthcare right now, largely driven by the increasing involvement of artificial intelligence in health-oriented development and the rising adoption and integration of voice technology.
The industry is undergoing massive changes, and tech firms have spotted the potential. Beyond my love for audio and voice, I’m fascinated by this because, as you’ll probably agree, it’s closely tied to the greater good.
According to research, the healthcare sector is by far the most popular category for vertical voice-based applications, commanding almost half of the market share.
It’s safe to say that digital health companies are sprouting left and right, in no small part thanks to voice technology. The very nature of voice assistance offers greater efficiency to healthcare because speech is our most natural form of communication.
As such, it has the capacity to impact all stakeholders in this space, from patients on one side to traditional pharma on the other and everyone in between whose goal is better health outcomes.
So, how is this game-changing technology manifesting itself?
The many facets of voice healthcare
Since I’m feeling puntastic, I’ll say that voice technology is making a lot of noise in digital healthcare.
As expected, voice recognition software is playing a key role here, starting with:
Voice-based detection and diagnosis
When it comes to detecting, preventing, and even treating various common and uncommon diseases, natural language processing (NLP) methods are at the forefront.
These extract information from unstructured data sets to augment structured medical data, placing a personalized experience at the core of voice-enabled assistance.
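To make the idea concrete, here’s a minimal, purely illustrative sketch of pulling structured facts out of a free-text note. The lexicons and the sample note are made up, and real systems use trained clinical NLP models rather than keyword lookups:

```python
# Toy lexicons - illustrative only; production systems use trained
# clinical named-entity-recognition models, not keyword lists.
SYMPTOMS = {"cough", "fever", "fatigue", "shortness of breath"}
MEDICATIONS = {"ibuprofen", "metformin", "lisinopril"}

def extract_entities(note: str) -> dict:
    """Pull symptom and medication mentions out of a free-text clinical note."""
    text = note.lower()
    return {
        "symptoms": sorted(s for s in SYMPTOMS if s in text),
        "medications": sorted(m for m in MEDICATIONS if m in text),
    }

note = "Patient reports a persistent cough and mild fever; currently taking ibuprofen."
print(extract_entities(note))
# {'symptoms': ['cough', 'fever'], 'medications': ['ibuprofen']}
```

The extracted fields can then sit alongside structured medical records, which is the “augmenting structured data” step described above.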
When it comes to advanced use cases, predicting and diagnosing conditions based on vocal input is expanding rapidly. Applications range from mental health improvement (which I cover in more detail below) to detecting heart disease.
Based on speech patterns, tone, inflection, pitch, and other elements of a patient’s voice, voice AI can signal an underlying illness, from simpler conditions like the common cold to severe neurological and degenerative diseases.
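For a taste of what “listening” to pitch means in code, here’s a toy fundamental-frequency estimator using plain autocorrelation - a classic textbook method, not any vendor’s actual pipeline - run on a synthetic tone standing in for a voiced speech frame:

```python
import math

def estimate_pitch(samples, sample_rate, f_min=50, f_max=500):
    """Estimate fundamental frequency (Hz) via simple autocorrelation.

    The lag with the strongest self-similarity corresponds to the
    pitch period of a voiced sound.
    """
    n = len(samples)
    lag_min = int(sample_rate / f_max)
    lag_max = int(sample_rate / f_min)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1)):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# Synthetic 200 Hz tone standing in for a 0.1 s voiced speech frame
rate = 8000
frame = [math.sin(2 * math.pi * 200 * t / rate) for t in range(800)]
print(round(estimate_pitch(frame, rate)))  # 200
```

Real diagnostic systems track how features like this drift over time, since the change in a patient’s voice is often more informative than any single measurement.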
Through verbal cues, AI can detect subtle problems with mood or cognitive abilities and flag abnormal physical or emotional conditions. This is especially important for early-onset detection and estimating disease progression, particularly for serious illnesses, where such signals serve as vital parameters for delivering timely and effective treatment.
Case in point: audEERING, an audio AI application company that helps identify patients with Parkinson’s with an accuracy of 92% thanks to its audio intelligence technology for speech analysis, along with other biometric data. The company’s AI systems can also recognize changes in vocal characteristics such as intonation, intensity, and tempo, to detect COVID-19 with up to 82% accuracy.
Productivity boosting and burnout prevention
Voice recognition is slowly becoming a major driver of productivity and reduced workloads for medical professionals.
Documentation, notation, and quite a bit of paperwork - all of this is a reality for medical professionals, taking away from the time they could spend more effectively. You know - with patients.
Then, there’s the matter of human error, which creates various risks, and of the mounting workload, most notably in the form of burnout.
So, AI-powered digital assistants are being increasingly used to eliminate administrative and cognitive burdens as much as possible.
Case in point: Suki, an AI assistant for doctors that helps lift the burden of medical documentation. In a test run with the American Academy of Family Physicians Innovation Labs, physicians who adopted it saw a 72% reduction in their median documentation time per note.
This resulted in a calculated time savings of 3.3 hours per week per clinician. Equally important, participants reported improved satisfaction with their workload and overall with their practice.
It’s clear that conversational AI will reduce physicians’ administrative demands and create greater patient engagement. In doing so, we’ll see the current trend broaden: AI expanding beyond its prevalent backend role to the forefront of the clinician and consumer experience.
Better mental health outcomes
Voice tech also has a place in mental health. Some apps are using AI-driven voice technology to analyze the vocal patterns of users in a method known as sentiment analysis. Apps like Wysa aim to walk users through conversation prompts and offer suggestions to reduce stress through AI conversation.
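The core of lexicon-based sentiment analysis can be sketched in a few lines. The word lists and the scoring rule below are illustrative only; apps like Wysa rely on trained models over transcribed speech (often combined with acoustic cues), not a hand-written lexicon:

```python
# Tiny lexicon-based sentiment scorer - illustrative only.
# These word lists are made up for the example.
POSITIVE = {"calm", "hopeful", "better", "good", "relaxed"}
NEGATIVE = {"anxious", "worried", "tired", "sad", "hopeless"}

def sentiment_score(transcript: str) -> float:
    """Return a score in [-1, 1]; negative values suggest distress."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return (pos - neg) / total if total else 0.0

print(round(sentiment_score("I feel anxious and tired, but hopeful."), 2))  # -0.33
```

A score like this would only be one input among many; a real app combines it with conversation history and, crucially, human escalation paths.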
Others can help predict suicide risks or monitor patients diagnosed with severe mental illness through regular check-ins. These check-ins are driven by question responses, and AI analyzes patient responses to offer personalized analysis. Initial tests show that these response-oriented AI monitoring options show similar outcomes to physician-led monitoring.
A smaller but important barrier to the quality of patient care is verbal or auditory challenges.
Patients with communication impairments involving speech and language often report a loss of autonomy in their health-related decision-making, are at greater risk of medical errors, and are generally less satisfied with healthcare than patients without communication disorders.
There is also the fact that medical students and other healthcare providers are often unprepared to meet the communication needs in these situations, despite their best intentions.
To minimize the stress when interacting with healthcare providers, some voice tech vendors are leveraging the technology to recognize nonstandard speech.
Case in point: Voiceitt, an accessibility app for people with speech disabilities or impairments. It translates atypical speech to facilitate communication using your voice with people and smart assistants. It learns how you say a phrase, so it's ready to use in everyday conversations and routines.
At the other end of the conversation is VocaliD, which uses a combination of text and voice samples to let users “speak” by typing.
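The phrase-matching idea behind a tool like Voiceitt can be illustrated with simple fuzzy string matching on a transcription. The trained phrases below are hypothetical, and real products learn per-user acoustic models rather than comparing text:

```python
import difflib

# Hypothetical phrases the user has trained the system on
TRAINED_PHRASES = ["turn on the lights", "call my nurse", "i am thirsty"]

def recognize(utterance: str, cutoff: float = 0.5):
    """Map a (possibly atypical) transcription to the closest trained phrase.

    Returns None when nothing is similar enough.
    """
    matches = difflib.get_close_matches(
        utterance.lower(), TRAINED_PHRASES, n=1, cutoff=cutoff
    )
    return matches[0] if matches else None

print(recognize("turm on da lights"))  # turn on the lights
```

The matched canonical phrase can then be forwarded to a smart assistant or read aloud, which is the gist of translating atypical speech into something other systems understand.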
The focus of voice AI in a healthcare environment will be facilitating real-time data to improve patient care and make it more personal. I’m envisioning a streamlined medical experience with the added convenience of online check-ups and faster responses. Let’s face it: nobody likes making multiple trips to the doctor.
Bottom line: the entire industry is figuring out how these new tools can be used with patients, in research, clinical trials, and more. While being HIPAA compliant is something that will take some time considering current data standards and privacy compliance, there is quite a lot of interest in voice tech to improve patient engagement.
There is no shortage of players in the market
Just run a quick Google search on the number of startups tackling this segment. I already mentioned a few startups, but there are so many more doing impressive work.
BeyondVerbal focuses on the analysis side, extracting various acoustic features from a speaker's voice to provide insights on personal health conditions and wellbeing.
In a similar fashion, AI companies such as Hyfe and ResApp Health are working in a fairly new digital diagnostic field called “acoustic epidemiology”. The goal is to leverage the omnipresence of smartphones and help physicians and clinicians diagnose patients who cough into their devices’ microphones.
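A crude version of the underlying signal processing - flagging loud acoustic bursts as candidate cough events - fits in a short function. The frame size and threshold here are illustrative and have nothing to do with Hyfe’s or ResApp’s actual algorithms:

```python
# Toy acoustic event detector: flags frames whose short-time energy far
# exceeds the background level - the rough idea behind cough counting.
# Frame size and threshold are illustrative, not any vendor's values.

def detect_bursts(samples, frame_size=160, threshold_ratio=5.0):
    """Return indices of frames that look like loud bursts (candidate coughs)."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    energies = [sum(x * x for x in f) / len(f) for f in frames if f]
    background = sorted(energies)[len(energies) // 2]  # median as noise floor
    return [i for i, e in enumerate(energies)
            if e > threshold_ratio * max(background, 1e-12)]

# Quiet signal with one loud burst in frame 2
signal = [0.01] * 320 + [0.8] * 160 + [0.01] * 320
print(detect_bursts(signal))  # [2]
```

Detection is only step one, of course; classifying a burst as a cough (rather than a door slam) is where the trained models come in.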
Then, there are startups such as the aforementioned Suki and HealthTap’s Doctor A.I., a personal AI-powered, voice-enabled physician that quickly routes users to doctor-recommended insights and care.
Of course, Big Tech has a stake too, adjusting its consumer-facing approach to voice tech.
The most obvious example is Amazon, which keeps deploying Alexa in healthcare organizations in a fairly non-clinical way: seniors in living communities, as well as hospital-bound patients, stay connected, informed, and entertained. As a result, care delivery is improved through routine, non-medical support.
One of the more improbable names to hear in the context of digital health is Oracle, which is poised to become a major player in what the company calls "the largest and most important vertical market in the world."
It recently acquired Cerner, a supplier of health information technology services, devices, and hardware, for a gigantic $28.3 billion, making it the biggest-ever acquisition for the database giant. The plan is to expand clinical voice assistants to more physicians, where Oracle's hands-free Voice Digital Assistant will serve as the primary interface to Cerner's clinical systems.
There is so much more that can be said about the integration of voice technology into healthcare.
Suffice it to say that voice technology will continue to drive demand in digital health as it employs more accurate real-time data to engage with patients in a personalized way.
Historically speaking, proper use of technology dramatically affects best practices, and voice tech is exceptionally well-positioned to disrupt the entire health industry for decades to come.
That will place digital healthcare, accessible from smart devices, at the top of the trend list, as people will want a streamlined, hassle-free medical experience: quick responses and fewer in-person visits for routine check-ups.
Voice tech also shows a lot of promise in reducing barriers to patient engagement, easing access to more sensitive or subjective information. Some companies claim that using voice assistants for online visits and remote monitoring reduces readmissions by lowering infection risk.
I’d argue there’s even an emotional side to consider here, as patients just might be more open to communicating with a machine rather than being vulnerable in front of a doctor. This both makes and doesn’t make sense but then again – we humans are strange beings.
Still, the maturity of the tools, especially in a clinical context, is a long way off from the promise of the technology.
The biggest obstacle to overcome is a need for near-absolute accuracy in healthcare, which presents tremendous challenges with accents, dialects, and clinical language that varies by specialty.
However, what we’re seeing now is super promising, and the technology is becoming more human by the day. Exciting times are ahead, and it will be interesting to see all the ways voice tech develops and affects healthcare.