From machine learning (ML) to large language models (LLMs), the applications of AI in clinical trials are numerous. Most notably, these tools can help streamline the data collected during a trial's execution. In the long term, the technology can reduce work burdens and improve trial efficiency by optimising budgets and timelines.
The uptake of AI is already underway, as GlobalData's Digital Transformation and Emerging Technologies in the Healthcare Industry Report 2024 found. The report surveyed over 100 pharma professionals on their attitudes to tech adoption: 58% of respondents said they were planning to invest in AI over the next two years, up from 14% in 2021 and 8% in 2022.
While tech companies make big promises about AI's capabilities, concerns remain over its ethical use. Depending on the use case, ML programmes can consume huge quantities of patient data, which makes clinical trials prime targets for hackers seeking to steal information. Sponsors and partners using AI must ensure that data privacy is protected if they want to maintain patient trust and avoid legal challenges.
Patient trust in AI needs improvement
AI has already seen many successful use cases across clinical trials, from automating time-consuming medical coding to spotting trends within data sets that humans might miss. Natural language processing (NLP) is another branch of AI with useful applications in clinical trials. NLP enables computers to understand, interpret, and generate realistic language, which makes it well suited to powering chatbots that assist decentralised clinical trials (DCTs) and to streamlining the translation and simplification of vital patient documents such as eConsent forms and clinical outcome assessments (COAs).
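To illustrate the translation use case, here is a minimal sketch in Python. It assumes the open-source Hugging Face transformers library and a public Helsinki-NLP English-to-Spanish model; it is not RWS's pipeline, and in practice output like this would still be reviewed by human linguists.

```python
# Minimal machine-translation sketch. The model choice and sample
# sentence are illustrative assumptions, not RWS's actual workflow.
from transformers import pipeline

# Load a pretrained English-to-Spanish translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

consent_line = "You may withdraw from this study at any time without penalty."

# The pipeline returns a list of dicts with a 'translation_text' key.
result = translator(consent_line)
print(result[0]["translation_text"])
```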
However, all these applications may involve the processing of private patient data and other confidential trial information. GlobalData's AI in Clinical Practice survey assessed the sentiments of over 550 patients across different therapy areas and age groups around the world. Of those who described themselves as familiar with AI, 48% said that data privacy and security issues would be a concern if they participated in a trial using AI technology.
The healthcare industry is a frequent target of data breaches on account of the high volumes of sensitive patient data it processes. The HIPAA Journal found that more than 133 million patient records were breached in 2023, a record high. Incorporating AI into trial processes may make clinical trials more efficient, but it also opens another front for hackers to attack.
Regulations such as the European Union's GDPR also mean that trials face costly fines for failing to protect patient data sufficiently. With studies already running on tight budgets, a penalty can mean delays or even cancellation. Central to GDPR and similar legislation elsewhere, including the UK Data Protection Act, is the protection of Confidential Business Information (CBI) and Personally Identifiable Information (PII). To prevent compromising attacks and the penalties that follow, sponsors must be sure that they and their partners are doing their utmost to deploy AI securely and responsibly.
Implementing an AI strategy
Sponsors can mitigate the risks associated with data breaches by implementing a strong AI strategy. It is especially important to align this strategy across stakeholders so that all involved parties operate to the same high standards. Joining a pharma-focused think tank can also help sponsors shape and understand AI use more directly.
Notably, sponsors need to know how the data fed into AI models is used. Free public AI tools such as ChatGPT may retain inputted data in exchange for the free service, which undermines company IP and patient data privacy. However, many secure models on the market do not mine data and implement stringent cybersecurity measures that can match sponsors' standards. Features such as a solid information security management system (ISMS) and multi-factor authentication (MFA) help mitigate the risks inherent in cloud operations.
Techniques such as data anonymisation or injecting statistical noise into data (the mechanism behind differential privacy) can improve the security of information processed by AI models. Moreover, sponsors can hold themselves accountable by implementing thorough auditing processes and maintaining detailed records of data collection.
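To make these techniques concrete, the sketch below combines both in Python. The record fields, salt, and noise scale are hypothetical, chosen only to show the mechanics rather than to serve as a compliance-ready implementation.

```python
# Sketch of two privacy techniques: pseudonymisation via salted
# hashing, and Laplace noise injection as used in differential
# privacy. All names and values here are illustrative assumptions.
import hashlib

import numpy as np

SALT = "trial-specific-secret"  # assumption: a per-study secret value


def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:12]


def add_laplace_noise(value: float, scale: float = 1.0) -> float:
    """Perturb a numeric value with Laplace-distributed noise."""
    return value + np.random.laplace(loc=0.0, scale=scale)


record = {"patient_id": "PT-00123", "systolic_bp": 128.0}  # hypothetical
safe_record = {
    "patient_ref": pseudonymise(record["patient_id"]),
    "systolic_bp": round(add_laplace_noise(record["systolic_bp"], 2.0), 1),
}
print(safe_record)  # identifier is hashed; the measurement is perturbed
```

The noise scale illustrates the core trade-off: larger values give stronger privacy protection but reduce the statistical utility of the data.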
Cybersecure translation services
RWS is a leading partner for many clinical trials, providing world-class translation and localisation services that ensure key documents, including eConsent and eCOA forms, are comprehensible to all patients involved. With patient dropout rates running high, RWS empowers patients to understand their responsibilities during a trial.
With over 20 years of experience in ML and AI, including more than 45 AI patents, the company combines the efficiency of machine translation with the expertise of human linguists. It can ensure that even the most complex clinical trial materials receive translations that are accurate and easy to understand.
The RWS offering includes in-house AI translation platforms such as Evolve, which can help users achieve efficiency gains of up to 65%.
Efficient, effective, and secure, RWS’s AI tools can help make the difference for patients in clinical trials. To learn more, download the free whitepaper below.