November 29, 2024

The next time you open wide and say, “aah,” your doctor may be able to detect much more than inflamed tonsils. She may recognize laryngeal cancer.

But it won’t happen with the doctor peering down your throat with a flashlight.  

Instead, you’ll be stretching that universally recognized sound – “aah” – for 2 to 3 seconds into an app on your phone. Artificial intelligence takes over immediately, with Intel-based hardware and software comparing the qualities of your voice against the voice samples the model was trained on and running the results through further algorithms.
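The article doesn’t spell out the underlying pipeline, but a common approach to analyzing a short sustained vowel is to extract acoustic features that summarize voice quality and hand them to a trained classifier. The sketch below illustrates that general idea; the file name, sample rate and feature choices are assumptions, not details of the actual project.

```python
# Hypothetical sketch: turning a 2-3 second "aah" recording into acoustic
# features a trained classifier could score. File name, sample rate and
# feature choices are illustrative, not details of the actual project.
import numpy as np
import librosa  # widely used open source audio-analysis library

# Load the sustained-vowel recording (assumed to be a mono WAV file)
y, sr = librosa.load("aah_sample.wav", sr=16000, mono=True)

# Mel-frequency cepstral coefficients are a standard summary of voice quality
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Collapse frame-by-frame features into one fixed-length vector per recording
feature_vector = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(feature_vector.shape)  # (26,) - ready to hand to a trained model
```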

The results – exceeding 80% accuracy – are returned seconds after you’ve recorded your voice.

This marvel of medical technology has been developed by hospitals in Taiwan as a promising proof of concept for how AI technology can save lives.

Over the past year, Intel has partnered with Far EasTone Telecom (FET) – a leading network provider in Taiwan – and Taiwan-based hospitals to build more accurate AI models to support the app and power diagnostics research.

By analyzing voice data from a network of participating hospitals – including Far Eastern Memorial Hospital (FEMH) and Taichung Veterans General Hospital, both in Taiwan, and Vanderbilt University Medical Center in Tennessee – researchers collaborate on and contribute to a common laryngeal cancer diagnostic AI model.

All data is federated and, most importantly, anonymized to preserve patient privacy.

“Intel’s Open Federated Learning allows all healthcare providers participating in larynx cancer detection modeling to contribute to training the model while helping to protect patients' private data,” says Grace Wang, Intel VP and general manager of Intel Taiwan.
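Neither the article nor the quote shows how the federated training is wired up, but the core idea – each hospital trains on its own recordings, and only model parameters, never raw patient data, leave the site – can be sketched in a few lines. The example below is a generic federated-averaging loop on synthetic data, not OpenFL code; the sites, features and logistic-regression model are placeholders.

```python
# Generic federated-averaging sketch (not the OpenFL API): each site refines
# a shared weight vector on its own local data; only the weights are sent to
# the aggregator. All data here is synthetic and the model is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
# Placeholder "sites": 100 synthetic 26-dim voice-feature vectors plus labels
sites = {name: (rng.normal(size=(100, 26)), rng.integers(0, 2, size=100))
         for name in ("site_a", "site_b", "site_c")}

def local_update(w, X, y, lr=0.1):
    """One pass of logistic-regression gradient descent on local data only."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return w - lr * X.T @ (p - y) / len(y)

w_global = np.zeros(26)
for _ in range(10):                       # federation rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in sites.values()]
    w_global = np.mean(local_ws, axis=0)  # aggregator averages the site models
```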

“With the addition of more partners and more data, the team will continue to optimize the model, and the outcome will benefit more and more people,” Wang says, explaining how JelloX Biotech, an Intel ecosystem partner, contributed to the partnership.

Early Detection Saves Lives and Preserves Voices

For years, larynx expert Dr. Chi-Te Wang, M.D., Ph.D., director of the AI Center at Far Eastern Memorial Hospital, struggled to detect early-stage laryngeal cancer.

Dr. Wang says there was a huge incentive to diagnose early. When laryngeal (throat) cancer is caught early, treatment with surgery or radiation can reach success rates of about 90% and, crucially, preserve a patient’s voice.

Delays in treatment often result in permanent vocal cord damage and, potentially, the loss of one’s ability to speak naturally.

Interactions with numerous patients led Dr. Wang to a what-if idea: Could voice-based analysis enable early cancer detection? But that idea soon hit a brick wall – there was simply too much data to process.

A Two-Decade-Old Problem

That’s when Intel stepped in to help.

“We learnt about Dr. Wang’s project from our friends at FET, and we realized he was struggling with training an accurate AI model given the vast amounts of data collected across numerous hospitals,” explains George Tai, director of healthcare, Regional Center of Excellence (CoE), Sales, Marketing and Communications Group at Intel. “We leveraged our broad partner network to bring in tech experts who could help, and we assisted with building the infrastructure from the ground up.”

That AI model is built on 4th Gen Intel® Xeon® Scalable processors (Sapphire Rapids), FPGA accelerators and two open source software tools: Intel’s own OpenVINO™ toolkit and OpenFL, Intel’s open source federated learning framework.
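The article names the pieces but not how they fit together at inference time. As a rough illustration, the snippet below loads a converted model with OpenVINO’s Python runtime and runs it on a feature vector on the CPU; the model file name, input shape and output interpretation are assumptions, not details from the project.

```python
# Rough sketch of serving a trained voice classifier with OpenVINO's Python
# runtime on a Xeon CPU. The model file name, input shape and output meaning
# are assumptions; only the runtime calls reflect the OpenVINO API itself.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("voice_classifier.xml")          # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")  # run on the Xeon CPU

features = np.zeros((1, 26), dtype=np.float32)           # features from one "aah" sample
probs = compiled([features])[compiled.output(0)]         # forward pass
print("screening score:", float(probs[0, -1]))           # interpretation is hypothetical
```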

“Previous efforts to detect abnormal voice samples using machine learning began 20 years ago, but the technology was never good enough for clinical scenarios,” says Dr. Wang, explaining how his experience in the field led him to hypothesize that abnormal voice quality would be one indication of early-stage laryngeal cancer.

“Thanks to Intel’s IRTI [Intel RISE Technology Initiative] project, we developed an AI model based on a universal sound: ‘aah.’ The best advantage of this model is its potential application around the world, no matter what languages we speak.”

Extending Laryngeal Cancer Detection Thanks to 5G

Tai adds that the ability to detect early-stage laryngeal cancer extends far beyond packed cities in Taiwan.

“Thanks to low-latency 5G mobile networks, we’re able to extend this service to rural areas in Taiwan that are far from major hospitals. All a patient has to do is to say ‘aah’ into a phone and the inferenced model will return an immediate result,” Tai says, adding that a positive result will direct the user to a nearby hospital for physical checkups and, possibly, a biopsy to confirm the diagnosis.
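The service interface itself is not described in the article. Purely as an illustration of the flow Tai describes – record, upload over the mobile network, receive an immediate result and, if needed, a referral – here is a hypothetical client-side sketch; the URL and response fields are invented.

```python
# Hypothetical client-side flow: upload the "aah" recording and act on the
# returned screening result. The endpoint URL and JSON fields are invented
# for illustration; the real service interface is not described in the article.
import requests

with open("aah_sample.wav", "rb") as f:
    resp = requests.post(
        "https://screening.example.com/api/voice-check",  # placeholder URL
        files={"audio": ("aah_sample.wav", f, "audio/wav")},
        timeout=10,
    )
resp.raise_for_status()
if resp.json().get("abnormal"):                           # placeholder field
    print("Screening flags the sample: visit a nearby hospital for a checkup.")
else:
    print("No abnormality detected in this screening.")
```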

For its next act, the joint team intends to scale this win broadly across Asia and into other markets. Discussions are underway with interested parties in the U.S. and Vietnam, Tai says.  

“In 2020, as soon as 5G service was launched, Far EasTone introduced 5G telemedicine. Today, we provide services in 12 municipalities and 35 townships across Taiwan, serving over 30,000 people,” explains Eric Chen, telecommunications director, Healthcare Products, Far EasTone Telecom. “We are thrilled to work with FEMH and Intel on developing the larynx cancer detection application, and we value the cross-vertical partnerships formed. We will continue to leverage our expertise in 5G connectivity, information and communication technology, and integration to make society better.”