Have you ever played a virtual reality game? If you have, then you know it’s a truly immersive experience. Not only are you “inside” the game visually, it’s also interactive: The non-player (or computer-controlled) characters react to your behavior, as do the storylines (in some games, anyway).
Indeed, in gaming, artificial intelligence (AI) creates a responsive, adaptive and interactive video game experience for the player. But in ophthalmology, we’re hearing a lot about AI as well.
According to Prof. Jeff Henderer, chairman of ophthalmology at Temple University (Philadelphia, Pennsylvania, USA), image-based specialties are the most likely to benefit from AI-assisted image interpretation: “Radiology is probably the leader, but ophthalmology — for certain retinal image applications — is coming along,” he said.
To learn more about the considerations, challenges and benefits of using AI in ophthalmic practice, we spoke with Prof. Majda Hadziahmetovic, MD, assistant professor in the Department of Ophthalmology at Duke University (Durham, North Carolina, USA). She shared that diabetic retinopathy (DR) is one condition that could benefit from AI-enabled screening and monitoring.
Using AI to Create Access
According to the International Diabetes Federation, 463 million adults had diabetes in 2019, a figure expected to reach 700 million by 2045. This growing population, combined with a shortage of physicians to treat them, could create a perfect storm of debilitating health conditions, including blindness.
In the United States, Prof. Hadziahmetovic said, diabetic retinopathy is a leading cause of blindness that affects millions of people: “In addition to the devastating consequences on those with the disease, the care of DR creates a significant economic burden to the U.S. healthcare system. Fortunately, early detection and treatment have been shown to significantly improve clinical [outcomes] and cost-effectiveness.”
Part of the reason for the burden is the need for annual screening — and it’s estimated that only 40-50% of patients are adherent to the recommended screening intervals, said Prof. Hadziahmetovic.
“As a result, much emphasis has been placed on developing effective and accessible ways of screening,” she continued. “Screening for DR has clear guidelines and is becoming increasingly mandated in the U.S. Regardless, DR screening via traditional methods (e.g., dilated fundus exam by eye care specialists) is heavily dependent on resource availability in each community, and thus frequently, is not very successful.”
She explained that although teleophthalmology can be very effective and has attempted to address this unmet need, barriers to widespread implementation still exist (e.g., cost and lack of universal standards). A potential solution, she suggested, would be to perform retinal imaging for DR screening at the patient’s primary care provider (PCP) or endocrinologist follow-up clinics.
“Retina screening in this setting has multiple advantages, including accessibility, patient capture and cost-effectiveness,” said Prof. Hadziahmetovic. To achieve the best possible care and improve visual outcomes, she said standardized methods for retinal imaging and interpretation must be developed. This includes assessing the cost, visual outcomes, patients’ satisfaction, and testing novel imaging modalities in these settings.
“Last but not least, we should strengthen multidisciplinary collaboration focused on the development of deep-learning methods for high-accuracy, fast and inexpensive image interpretation,” said Prof. Hadziahmetovic. Diabetic patients stand to benefit most, but the approach can also be applied to glaucoma screening and, in general, to screening for any other retinal pathology (e.g., age-related macular degeneration; AMD).
Remote Imaging and Teleophthalmology
Prof. Hadziahmetovic was a co-author of a prospective, nonrandomized study that explored the feasibility of remote ophthalmic imaging to identify referable retinal abnormalities and assessed the effectiveness of color fundus photography (CFP) versus optical coherence tomography (OCT) cameras (MaestroCare, Topcon). The study included 633 patients with diabetes at Duke Primary Care.1
“Besides the previously reported better image interpretability of OCT relative to fundus photography on the undilated pupil, using both imaging modalities independently could potentially lead to higher false-positive rates. Combining OCT and fundus photography was associated with improvements in operational outcomes,” she said.
“On the other hand, we have demonstrated that color fundus photography was significantly more supportive in identifying DR relative to OCT (P < 0.001). In contrast, OCT was substantially more supportive for retina-referable incidental findings (e.g., AMD, ERM; P < 0.001).”
Results showed that remote images from OCT were significantly more interpretable than CFP (98% vs. 83%, respectively; P < 0.001). Among patients with DR, OCT and CFP were helpful in 58% and 87% of cases, respectively (P < 0.001).
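Comparisons like the 98% vs. 83% interpretability rates above are typically evaluated with a two-proportion z-test. The sketch below illustrates the calculation using hypothetical image counts (the per-modality denominators are assumptions for illustration, not the study’s raw data):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 588/600 OCT images interpretable (98%)
# vs. 498/600 CFP images interpretable (83%)
z, p = two_proportion_z(588, 600, 498, 600)
print(f"z = {z:.2f}, p = {p:.2g}")  # a difference this large gives p well below 0.001
```

With proportions this far apart at these sample sizes, the test easily clears the P < 0.001 threshold reported in the study.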
“Albeit requiring deeper insight, these findings are important to guide the choice of imaging modality depending on the targeted retinal pathology,” said Prof. Hadziahmetovic.
For the last two years, Dr. Alan Wagner, retina surgeon at Wagner Macula and Retina Center (Virginia Beach, Virginia, USA), has been working closely with Eyenuk (Los Angeles, California, USA) on its EyeArt platform, the first FDA-cleared AI technology that autonomously detects both mild and vision-threatening DR.
“Their product [EyeArt] on diabetic eye disease is extremely good … there’s no question that it’s better than humans, or as good,” he said. “We found it to be exceptional. And the idea that they can do the reading for us on the fly … it’s fast, it’s real time. That’s very helpful when you’re out in the public, and with different kinds of people coming through.”
A study published in the British Journal of Ophthalmology2 showed that EyeArt’s “algorithm demonstrated safe levels of sensitivity for high-risk retinopathy in a real-world screening service, with specificity that could halve the workload for human graders.”
In the study, 30,405 retinal images were both manually graded and assessed by EyeArt (v2.1). The authors found that the sensitivity (95% CI) of EyeArt was 95.7% (94.8% to 96.5%) for referable retinopathy (human-graded ungradable, referable maculopathy, or moderate-to-severe non-proliferative or proliferative retinopathy). Further, EyeArt agreed with the human grade of no retinopathy (specificity) in 68% (67% to 69%) of cases, with a specificity of 54.0% (53.4% to 54.5%) when combined with non-referable retinopathy.2
These results led investigators to conclude that “AI machine learning and deep learning algorithms such as this can provide clinically equivalent, rapid detection of retinopathy, particularly in settings where a trained workforce is unavailable or where large-scale and rapid results are needed.”
Prof. Alicja Rudnicka, one of the study’s authors, said that with this technology, “people with diabetes may get their results quicker from screening visits, with potential savings to the NHS as fewer images would need to be graded by humans.”
Using AI to grade these images will not only improve DR screening in countries like England, it will also assist in less developed countries or areas with fewer resources. “It’s probably quite important for less developed countries with limited healthcare facilities,” said Prof. Rudnicka. “The possibility of a digital interface to analyze retinal images may be the only option.”
AI in Practice
Prof. Jeff Henderer has been using EyeArt in practice for the past year, and in a research study prior to that. “I’d say it [AI] has replaced our screening practices. We used to take photos and have our optometrists read the photos and the turnaround time was a couple days at best. Now the AI interprets the photos at the point of care, so the patient leaves the camera with an answer,” said Prof. Henderer.
He hopes that these improvements will lead to better follow-up rates: “First, the patient knows if they need a follow-up exam. Second, the primary care doctor knows if they need a follow-up exam, which makes for happier primary care doctors; it also allows the primary care doctor to participate in scheduling the patient for the eye exam if needed, rather than asking the patient to do it.”
Prof. Henderer explained that one thing has remained the same — and that’s having optometrists review the images (at least) every three years. This helps ensure that other eye diseases like AMD and glaucoma aren’t missed.
Screening for eye disease is a three-legged stool, added Prof. Henderer. “One leg is making sure you can actually do the screening (e.g., the location, camera, software, photographer training, the ability to bill and get paid for the image).”
Then, he said, the second leg is getting the patient to the camera. “If the location is in a primary care doctor’s office, this requires buy-in from the PCP to ensure a workflow is established to enable photos. Whether that is while patients are having a regular doctor visit or a separate appointment type, without the PCP’s help, the program will fail.”
The third leg is what happens after the patient is screened: “You have to make sure the PCP has a place to send the patient and that the receiving ophthalmologist accepts the insurance and understands why the patient was referred,” said Prof. Henderer.
He also said it’s important to track down patients who missed their follow-up to ensure they’re able to get care. “Getting patients to the camera and closing the loop between screening and follow-up care doesn’t attract a lot of attention, but it’s critical to ensuring the screening program works. In fact, without all three legs symmetrically in place, the screening program will fail.”
Prof. Henderer added: “Now the issue is the cost of the cameras and how you will handle those patients who fail the screening. It’s not helpful, and probably not ethical, to screen for disease if you have no plan to care for the patients after the screening.”
Impacts beyond Screening
AI has other potential uses outside of screening, said Dr. Alan Wagner. It can be used to predict patient behavior as well. Before the patient walks in the door, it can help predict whether they will show up (for example, will the weather affect their attendance?). He said this helps the practice maximize the experience for the patient and enhance compliance.
“If you know what it takes to make sure they show up, that’s the first step,” he said. “Second, if we can understand how to get all their information together (i.e., pathology, medications, comorbidities, etc.) that helps too.”
AI can also help with simple things, like determining the best pathway through the practice. All of this ensures a far more streamlined and orchestrated event for the patient, said Dr. Wagner. “The patient is getting more efficient care as opposed to getting unnecessary tests.”
With the use of AI, it’s possible to deliver the highest quality and lowest cost care, continued Dr. Wagner. “You’re going to be able to predict what you need, where you need to be … to predict when is the right time to give someone an injection — 3 weeks, 5 weeks or 8 weeks?
“There’s also at-home monitoring of subretinal fluid, and whether fluid should or shouldn’t be there, treating to dry, all that data all is crunchable,” he said. “This will reduce the demands on the population, doctor and resources. It should be able to significantly reduce the spend that we have on drugs, too.”
In the End, Patients Win
At the end of the day, Dr. Wagner said the patients will benefit most from AI: “If we’re really, truly patient-focused, if we’re really community-focused … the concept of rebuilding the world anew every day, then it’s the exact right thing we need. I can tell you, ‘embrace it, don’t fight it.’ Let’s make the world better, let’s help people, let’s do the right thing,” he said.
Prof. Jeff Henderer agreed and shared that patients will benefit most from using AI to screen for DR. “They will be able to get a rapid test result in a setting other than an eye doctor’s office.
“No one wants to see AI replace a human, but we don’t have enough humans to manage all the patients. Using automated image analysis to help us ensure that we are only seeing those patients who need to be seen will be a great help. There is no bigger waste of time and money than the annual diabetic eye exam when the patient is normal,” he continued.
“Figuring out how to better target those who need to be screened and using AI-image interpretation to make sure the normal patients aren’t coming in for eye exams are two keys to helping a limited number of providers manage a growing population of diabetics.”
After patients, the use of AI will be beneficial to primary care doctors: “They will be able to trigger quality incentive payments which can be worth a great deal of money,” said Prof. Henderer. “The third beneficiary will be ophthalmologists. We will no longer be seeing diabetics without disease and will be able to target our resources more efficiently toward those who need care.”
1. Lee T, Amason J, Del Risco A, Kim J-B, Cousins SW, Hadziahmetovic M. Incidence of Referable Retinal Disease in Diabetic Patients at a Primary Care Practice. J Vitreoretin Dis. 2021:24741264211044223.
2. Heydon P, Egan C, Bolter L, et al. Prospective evaluation of an artificial intelligence-enabled algorithm for automated diabetic retinopathy screening of 30,000 patients. Br J Ophthalmol. 2021;105(5):723-728.