The Problem: Significant Patient Populations Don’t Speak English as a First Language
The US has a highly diverse population, with residents from countries around the world. When a person who does not speak English presents for healthcare services, the inability to converse with care providers can become a safety and care quality issue. This is referred to as limited English proficiency (LEP). LEP is an independent driver of health disparities and negatively impacts other social determinants of health. Language interpreters may not be readily available to some care providers in their markets. If an LEP patient does not have a family member who can interpret, diagnosis and delivery of care can be a challenge.
In many cases, providers hire interpreters to assist with patient communications. Specifically, the use of professional interpreters is likely to decrease communication errors, increase patient comprehension, equalize health care utilization, improve clinical outcomes, and increase satisfaction for patients with LEP. These interpreter services add a cost factor to the healthcare service. The cost of interpreter services can be considerable, ranging from $45–$150/hour for in-person interpreters to $1.25–$3.00/minute for telephone interpreters and $1.95–$3.49/minute for remote video interpreting. These services may be reimbursed or covered by a patient’s Medicaid or other federally funded medical insurance.
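To put those rate ranges in perspective, a quick back-of-the-envelope comparison is useful. The sketch below uses the rates cited above; the 30-minute encounter length is an illustrative assumption, not a figure from the text:

```python
# Back-of-the-envelope interpreter cost comparison using the rate ranges
# cited above. The 30-minute encounter length is an illustrative assumption.

ENCOUNTER_MINUTES = 30  # assumed typical visit length (illustrative)

# (low rate, high rate) per the figures in the text
rates = {
    "in_person_per_hour": (45.00, 150.00),
    "telephone_per_minute": (1.25, 3.00),
    "video_per_minute": (1.95, 3.49),
}

def encounter_cost(modality):
    """Return (low, high) dollar cost for one encounter of ENCOUNTER_MINUTES."""
    low, high = rates[modality]
    if modality.endswith("per_hour"):
        factor = ENCOUNTER_MINUTES / 60.0  # convert hourly rate to the visit length
    else:
        factor = ENCOUNTER_MINUTES  # per-minute rates scale directly
    return (round(low * factor, 2), round(high * factor, 2))

for modality in rates:
    lo, hi = encounter_cost(modality)
    print(f"{modality}: ${lo:.2f} - ${hi:.2f}")
```

At these assumed visit lengths, a single in-person encounter runs roughly $22.50–$75.00, telephone $37.50–$90.00, and video $58.50–$104.70, which is why per-encounter interpreter costs add up quickly for high-volume providers.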
New translator services from Microsoft, Amazon, and Google may improve language interpretation and its integration with EHRs, which can help reduce interpreter costs and improve provider access to these services.
The Solution: Big Tech Driving Higher Levels of Interpretative Language Services
Microsoft’s translator service library supports more than 100 languages to assist with building and supporting language interpretation in consumer applications. Azure Cognitive Services also provides AI models for optical character recognition, so any written text can be processed and translated into any of the supported translator service languages. Google reached the 100-language milestone first, back in 2016, while Amazon currently supports 71 languages.
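As a concrete illustration of how these services surface to application builders, Microsoft’s Translator Text API (v3.0) exposes its language library through a simple REST call. The sketch below only assembles the request; actually sending it requires a real subscription key and region, which are placeholders here:

```python
import json

# Sketch of an Azure Translator Text API v3.0 request. This only constructs
# the request pieces; the key and region below are placeholders, and no
# network call is made.
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_translate_request(text, to_lang, from_lang=None):
    """Assemble URL, query params, headers, and JSON body for one translation."""
    params = {"api-version": "3.0", "to": to_lang}
    if from_lang:  # omit "from" to let the service auto-detect the source language
        params["from"] = from_lang
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-key>",        # placeholder
        "Ocp-Apim-Subscription-Region": "<your-region>",  # placeholder
        "Content-Type": "application/json",
    }
    body = json.dumps([{"text": text}])  # the API accepts a list of text items
    return ENDPOINT, params, headers, body

url, params, headers, body = build_translate_request(
    "Where does it hurt?", to_lang="es"
)
# An application would then send it, e.g.:
#   requests.post(url, params=params, headers=headers, data=body)
print(params, body)
```

The same request shape works for any supported target language, which is what makes embedding translation into a mobile app or EHR workflow a relatively thin integration layer.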
Google recently released Translatotron 2, a new version of its model that re-creates a speaker’s voice in a different language and will likely inform future versions of Google Translate, its real-time transcription, and the instant translation feature for Google Assistant on Android. Translatotron listens to someone speaking in one language, translates the speech into a second language, then outputs the translated speech as if the original speaker were fluent in that language.
Amazon Alexa is adding new languages to its multilingual mode, which was designed to make Alexa easier to use in bilingual homes where people speak more than one language. Alexa essentially holds two language models at the ready and picks which one to use based on the language it identifies. Setting up Alexa’s multilingual mode is also relatively easy. As more post–acute care providers implement Alexa services, multilingual capabilities will further enhance the value of this solution.
The Justification: Integration of Language Translators into Healthcare Applications Improves Care Services
The ability to integrate language translators into mobile and enterprise healthcare applications will reduce the need for human translators, thereby improving translator availability and the ability to match the needed language for optimal communication. Mobile applications supporting patient experience services will benefit from higher patient satisfaction scores in linguistically diverse markets. Providers who can easily access translator services in the EHR while examining patients and creating the associated patient documentation and instructions will experience less frustration in these situations. Patient intake functions will also benefit from improved communication and process efficiency, eliminating rework caused by inaccurate interpretation of patient speech during initial interactions.
The Players: Big Tech Will Own the Market
Microsoft, Google, and Amazon are leading speech recognition vendors that will continue to advance language translation services.
The US has a rich multicultural patient population drawn from most of the countries in the world. The ability to effectively engage these populations with healthcare services in the language they are most comfortable with will improve care access, patient safety, quality of care, and care equity in the US. While language translation services are available through in-person and telephone interpreters, they may not be readily available or affordable to many community hospitals, critical access hospitals, or federally qualified health centers.
The inability of patient and provider to communicate easily drives high levels of frustration for both parties and may lead to significant misunderstandings that compromise care interventions, medication orders, patient instructions and education, and follow-up care processes, all of which affect patient satisfaction and expected outcomes.
Integrating language translator services into healthcare and patient engagement applications will optimize care interactions for LEP patients, resulting in improved interactions between providers and the communities they serve.
Photo Credit: olenaari, Adobe Stock