AI in Ophthalmology: Maximizing Potential while Ensuring Data Safety

Artificial intelligence (AI) technologies hold immense promise for revolutionizing the management of eye diseases. However, as these technologies become increasingly integrated into clinical practice, it is essential to prioritize patient safety and privacy.

On the third day of the 39th Congress of the Asia-Pacific Academy of Ophthalmology (APAO 2024), experts delved into the potential of AI in ocular imaging, clinical research, clinical diagnosis and disease epidemiology, as well as strategies for safeguarding data privacy.

First to present was Dr. Srinivas Sadda (USA), who asserted that AI, or deep learning, is poised to significantly impact the field of ophthalmology, with major applications in disease screening, diagnosis, treatment and management, outcome prediction, education and surgery.

He noted that the availability of large training datasets and computer processing power to utilize them is fueling AI evolution. Nevertheless, various challenges exist, including big data sources, security and privacy, ethical concerns, and evolving regulatory and reporting guidelines. 

Federated machine learning 

Subsequently, Dr. John Campbell (USA) discussed federated machine learning (FL), presenting a case study on retinopathy of prematurity (ROP). He and his colleagues simulated the diagnostic accuracy of an FL approach using an existing multi-center centralized dataset containing both clinical labels and a centralized reference standard diagnosis (RSD). The implications for classification, clinical diagnosis and disease epidemiology were thoroughly investigated.

In terms of clinical diagnosis, their findings revealed that FL can produce a more generalizable disease label that can be used to standardize clinical diagnosis across multiple sites. This capability holds potential for identifying instances of misdiagnosis and discrepancies in the assessment of medical severity.

In terms of disease epidemiology, FL facilitates objective measurement of disease severity across various sites and may be useful for determining geographic and temporal variation in disease epidemiology.

“FL using local labels was as effective as creating a large centralized data set. It is most beneficial for smaller institutions. It may be useful to help standardize disease labels such as codes and billing, and it may be useful to track disease epidemiology regionally and over time,” Dr. Campbell concluded.
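
For readers interested in how this works under the hood, federated learning trains a shared model by exchanging only model weights between sites; raw images and labels never leave each institution. The following is a minimal sketch of the federated averaging (FedAvg) idea in PyTorch; the model, synthetic site data and hyperparameters are illustrative placeholders and do not reflect the ROP study's actual pipeline.

```python
# Minimal federated averaging (FedAvg) sketch. Illustrative only: the model,
# synthetic site data and hyperparameters are placeholders, not the ROP study's setup.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def local_update(global_model, site_loader, epochs=1, lr=0.01):
    """Train a copy of the shared model on one site's locally labeled data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for features, labels in site_loader:  # raw data and labels never leave the site
            optimizer.zero_grad()
            loss = loss_fn(model(features).squeeze(1), labels.float())
            loss.backward()
            optimizer.step()
    return model.state_dict()

def federated_average(global_model, site_loaders, rounds=5):
    """Aggregate per-site weights into one shared model without pooling raw data."""
    for _ in range(rounds):
        site_states = [local_update(global_model, loader) for loader in site_loaders]
        avg_state = {
            key: torch.stack([state[key].float() for state in site_states]).mean(dim=0)
            for key in site_states[0]
        }
        global_model.load_state_dict(avg_state)
    return global_model

# Synthetic stand-ins for two sites' locally labeled exams (10 features, binary label).
torch.manual_seed(0)
site_loaders = [
    DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=16)
    for _ in range(2)
]
shared_model = federated_average(nn.Linear(10, 1), site_loaders)
```

In practice, site contributions are typically weighted by local sample size, and considerable engineering goes into secure communication, scheduling and convergence on top of this core loop.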

AI in clinical trials

Meanwhile, Dr. Stela Vujosevic (Italy) noted that AI can assist with patient screening for ophthalmic clinical trials by analyzing clinical features and medical history. 

She emphasized that AI can analyze and precisely compare treatment outcomes, thereby leading to personalized and more effective treatments for retinal diseases. 

Additionally, AI can assist in clinical trial design by analyzing data, identifying study endpoints, patient populations and treatment protocols. Moreover, AI’s capability to analyze retinal images enables monitoring of retinal diseases, tracking disease progression and predicting the likelihood of serious illness or response to therapy. 

Nevertheless, Dr. Vujosevic cautioned about the associated risks, including data quality and bias, lack of transparency of algorithms, ethical considerations and limited generalizability of populations. “This is why it is crucial to have robust data and regulatory guidelines, and a close collaboration between AI experts and ophthalmologists,” she stressed. 

Ensuring data safety

While AI continues to prove indispensable, it is not without weaknesses. One of the main concerns is data safety.

Speaking on data privacy in the context of large language models (LLMs), such as ChatGPT, Dr. Ting Fang Tan (Singapore) noted that LLMs have taken the world by storm with their ability to leverage neural networks to learn complex associations from vast amounts of data and unstructured text.

However, the use of LLMs involves several pitfalls, including data leakage and the possibility of extracting training data from LLMs through adversarial attacks such as data extraction and model inversion. Types of data at risk include personally identifiable information, health conditions and intellectual property, she noted.

In her presentation, Dr. Tan went on to offer strategies for safeguarding data privacy. “The first is data augmentation, where you detect sensitive information and either anonymize it by replacing it with an entity label, censor it, or pseudonymize it. Microsoft Presidio and RoBERTa De-ID are tools that can be utilized to do this,” she suggested.
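
As an illustration of that first strategy, the snippet below is a minimal sketch of PII detection and label-based replacement using Microsoft Presidio, assuming the presidio-analyzer and presidio-anonymizer packages (plus a spaCy English model) are installed; the sample clinical note is invented for this example.

```python
# Minimal sketch: detect sensitive entities in free text and replace them with
# entity labels using Microsoft Presidio. The note below is invented.
# Requires: pip install presidio-analyzer presidio-anonymizer
#           python -m spacy download en_core_web_lg
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

note = "Patient Jane Tan, born 04/07/1958, was reviewed for diabetic macular edema in Singapore."

# Step 1: detect personally identifiable information (names, dates, locations, ...).
analyzer = AnalyzerEngine()
findings = analyzer.analyze(text=note, language="en")

# Step 2: replace each detected span; Presidio's default substitutes the entity
# label, e.g. "Jane Tan" becomes "<PERSON>". Other operators support censoring
# or pseudonymization instead.
anonymizer = AnonymizerEngine()
result = anonymizer.anonymize(text=note, analyzer_results=findings)
print(result.text)
```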

Other strategies include running LLMs locally (e.g., Private GPT and Jan AI, which keep data within local networks); adding noise during training or fine-tuning to make it less feasible to extract training data (differential privacy, sketched below); using AI tools to monitor prompts, filtering and flagging toxic ones; and carrying out regular audits.
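
The differential privacy approach is commonly realized as DP-SGD: per-sample gradients are clipped and Gaussian noise is added before each weight update, which bounds how much any single patient record can influence, and therefore be recovered from, the model. The sketch below uses the open-source Opacus library on a toy model and synthetic data; the noise multiplier, clipping bound and architecture are illustrative assumptions, not recommendations.

```python
# Simplified sketch of differentially private fine-tuning (DP-SGD) with Opacus.
# Model, synthetic data and privacy hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Synthetic stand-in for de-identified training features and binary labels.
features = torch.randn(256, 32)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(features, labels), batch_size=32)

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# Wrap model, optimizer and loader so each step clips per-sample gradients
# and adds calibrated Gaussian noise.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # noise scale relative to the clipping bound
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Report the privacy budget spent so far for a chosen delta.
print("epsilon:", privacy_engine.get_epsilon(delta=1e-5))
```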

Indeed, AI offers limitless opportunities for improved efficiency, accuracy and accessibility in the delivery of eye care. By adhering to the principles shared during this session, we can ensure the safe and responsible use of AI in ophthalmology, ultimately leading to better patient outcomes and continued advancement of the field, while upholding the highest standards of integrity and accountability.

Editor’s Note: The 39th Congress of the Asia-Pacific Academy of Ophthalmology (APAO 2024) is being held in Bali, Indonesia, from February 22-25. Reporting for this story took place during the event. 
