Unraveling confusing AI regulations, their impact on life sciences
Leaders from the Pistoia Alliance revealed that many life sciences professionals do not understand global AI regulations, which may be hindering research.
Only 9% of life sciences professionals believe they know and understand AI regulations well, according to recently published survey results from the Pistoia Alliance, a non-profit that engages major pharmaceutical companies in pre-competitive collaboration. The ever-changing regulatory landscape for artificial intelligence (AI) has prompted confusion and hesitancy among biopharmaceutical companies and researchers.
Christian Baber, PhD, Chief Portfolio Officer of the Pistoia Alliance, and Vladimir Makarov, PhD, Project Manager of the Pistoia Alliance AI and ML Center of Excellence, discussed the survey and ongoing concerns about AI regulation across the life sciences industry with LifeSciencesIntelligence.
Pistoia Life Sciences Leader Surveys
The Lab of the Future Report, which included broader written responses and follow-up meetings, highlighted data from the Alliance's first survey, emphasizing the importance of AI and machine learning (ML) as technology investments for life sciences companies. The survey gathered data from more than 200 respondents worldwide, including lab professionals and research and development (R&D) experts from Europe, the Americas, and Asia–Pacific (APAC).
Among all the digital transformations mentioned in the survey, AI and ML emerged as one of the most significant priorities for the industry, with approximately 60% of respondents highlighting it as the most critical technology investment for their company in the next two years. The data also revealed that 54% of labs were already implementing AI and ML in some way.
Understanding the critical and ongoing importance of AI and ML in life sciences, the Alliance conducted its second survey to assess industry knowledge of AI regulations. The survey revealed that only 9% of respondents knew the regulations well, while the remainder had some knowledge (56%) or no knowledge at all (35%).
“I'm sure [the percentage that knows the regulation well] is much higher than the population in general because, to some extent, we are self-selecting here with scientists interested in the topic. The fact that still such a low percentage of them are confident that they understand it is very worrying,” Baber noted.
Beyond identifying a lack of knowledge surrounding AI regulations, the survey highlighted multiple reasons biopharmaceutical companies may have difficulty keeping up with AI regulations.
The most common response was the complexity and ambiguity of AI regulations (37%), followed by variations across regions (23%), insufficient collaboration between industry and regulatory bodies (20%), rapid pace of change (11%), and technological limitations (9%).
Comparing US, EU Regulations
There is a lot of complexity and ambiguity around AI regulations. According to Makarov, AI regulations in the United States are based on an Executive Order issued by President Biden in October 2023.
The order “will require new guidelines on AI to be signed into US law within 90-270 days,” he explained.
“The proposed regulations center on creating AI that is safe, secure, and trustworthy. Developers of the most powerful AI systems will have to share safety test results with the US government,” he noted.
Baber maintained that US regulations surrounding AI are quite vague. The guiding document covers a multitude of industries and, while it offers some insight into how AI should be used in the biopharmaceutical industry, it is not specific.
“The regulations also call out companies needing to protect against the risk of using AI to engineer dangerous biological materials. In relation to pharma-specific legislation, the development of US regulations is ongoing, but the government has already published guidance related to AI software as a medical device,” Makarov added.
He noted that since US AI legislation is still evolving, it is not fully clear what the challenges and gaps are; however, he anticipates that they will appear in time.
Conversely, regulations in the EU are more specific.
“The EU’s AI Act is more specific and will likely set a standard that other nations will follow, in what is known as the ‘Brussels effect’ seen with other regulations such as GDPR. The EU regulations are based on a risk system where unacceptable risk use cases are banned; high risk must complete conformity assessments, and lower risk use cases must adopt a code of conduct,” explained Makarov.
He highlighted two primary concerns with the EU regulations: exemptions and risk categories.
First, the EU regulations include an exemption for AI use in research, but Makarov noted that pharmaceutical members of the Pistoia Alliance are unclear about which use cases fall under that exemption. There is no clear line indicating when research becomes too clinical for a pharma company to remain exempt.
“For example, is using AI for synthetic monitoring in early-stage trials too close to the human at the end of the pipeline to be considered R&D? And what about digital twin technology?” he asked. “Since the EU AI Act is universal for all industries, pharma will need to work with regulators to ensure they are interpreting the rules — which may, in the end, differ on a country-by-country basis depending on the exact legislation enacted nationally — correctly.”
Second are the risk categories outlined in the EU guidance. For example, Makarov noted that all medical devices are categorized as high-risk, requiring additional conformity assessments. However, some of these devices have been around for a long time, and implementing AI assessments may cause supply chain delays.
“Many of these issues are likely to be transitionary with minimal long-term impact since anything likely to have an impact on patients is already fully tested and documented, so once processes are defined and implemented, they will become just another regulatory submission in the process of developing a medicinal product,” he said.
Implications of AI Confusion
The implications of AI regulation and the resulting confusion are vast. In some cases, they may even impact research. According to the survey, 21% of respondents believe that existing AI regulations block their research.
“Quality organizations within pharma companies are rightly very conservative,” explained Baber, highlighting how regulations may be hindering pharmaceutical research. “They want to ensure there is no risk of a contaminated product getting out there. They're very conservative in those areas, and actually, we've found in some spaces it's the legal and compliance organizations that slow things down, whereas the regulatory agencies are very keen to move forward with this.”
Baber noted that regulatory organizations are not as worried about legal repercussions because they are backed by government lawyers. While they still have to follow the law, like any organization, the general assumption is that there is no malicious intention behind the actions of these governing authorities, so they are less likely to face the repercussions that quality and compliance organizations within a pharma company are up against.
Addressing AI Confusion
To discuss how to combat AI confusion, Makarov first highlighted the areas respondents believe regulatory bodies should focus on when drafting new regulations. The most pressing concern was patient data privacy and security, cited by 29% of participants. Similarly, ethical guidelines and bias were a priority for 28% of life sciences professionals.
“This isn’t surprising, considering the industry’s goal is to become more patient-centric and ensure the best outcomes for all patients,” said Makarov. “Although the technology may be changing quickly, the fundamental concepts of patient privacy and ethical guidelines are not, and governments and regulators will need to take care to ensure that they include those key, constant factors when drafting legislation flexible enough to handle as yet unimagined technologies.”
Other priorities included compliance and quality control, intellectual property rights, and international regulatory harmonization.
“Improved collaboration between governments, regulators, and pharma companies will be the cornerstone of understanding and implementing new AI laws,” Makarov emphasized. “To overcome confusion and get research moving again, companies, governments, and regulators must come together in a shared space.”
“We, Pistoia Alliance, collaborate with the regulatory agencies, the FDA with EMA — [in addition to the] NIH and WHO — [through] ongoing private–public partnerships,” added Baber. “To some extent, we can act as a conglomeration of the pharma industry to communicate with them.”
While the organization does not lobby for policy or related changes, Baber noted they are uniquely positioned to facilitate communication between pharmaceutical stakeholders.
“We are focused very much on being practical and delivering practical solutions. There is a role for us there. This is the classic role for pre-competitive collaboration in that analysis of restrictions, the building up of quality requirements, compliance, registration documents, all of that sort of stuff is something every company is going to have to do, and there is no competitive advantage there for them doing it,” Baber continued.
As AI continues to evolve and advance, so will the regulatory landscape. Through comprehensive communication and pre-competitive collaboration, life sciences leaders and regulatory bodies can work together to ensure a widespread understanding of the regulations that guide ongoing R&D.