
Employing ML to boost pharmacokinetics in drug development

Machine learning revolutionizes pharmacokinetics by accurately predicting drug behavior and optimizing dosages, streamlining drug development and regulatory approval.

As biopharma companies strive for more efficient drug development processes, machine learning has emerged as a powerful tool for enhancing pharmacokinetics -- the study of how drugs move through the body. At the 2024 AAPS PharmSci 360 conference in Salt Lake City, Dr. Conor L. Evans, an associate professor at Harvard Medical School and a leader at the Wellman Center for Photomedicine, presented insights on how ML models can predict drug concentrations, evaluate bioequivalence and inform regulatory decision-making, ultimately making drug development more efficient.

Evans described himself as "a practitioner of machine learning," not a developer of ML models, and made clear that his objective was to convey ML's practical applications.

"Machine learning is exceptionally well-suited to detect patterns, particularly when scaled to hundreds or thousands of patients with multivariate data," he explained.

Through sophisticated models and data processing, ML enables companies to extract insights from complex data sets in ways that traditional analysis cannot, creating opportunities for precise pharmacokinetic predictions that support better therapeutic outcomes.

Classifications and predictions in pharmacokinetics

Evans categorized ML applications in pharmacokinetics into two primary types:

  1. Classification.
  2. Regression.

He explained that classification produces categorical outputs, such as determining whether a disease is present, while regression generates continuous outcomes, like dosage recommendations.

"These tasks can be categorization, prediction or object recognition," he noted, emphasizing that each method offers unique advantages in specific contexts.

For example, Evans shared that ML classification could distinguish among atopic dermatitis, psoriasis and vitiligo, while regression could determine the optimal dose of ruxolitinib for a skin condition based on patient variables. This ability to tailor model output -- whether categorical or continuous -- is central to pharmacokinetics, where precise drug concentration predictions inform crucial dosage and therapeutic decisions.
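The two task types can be sketched with scikit-learn on synthetic data. The feature names, values and targets below are illustrative stand-ins, not data from Evans' work:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic patient covariates: [age, body mass (kg), baseline severity score]
X = rng.normal(loc=[45, 75, 5], scale=[12, 10, 2], size=(200, 3))

# Classification: categorical output (e.g., one of three skin conditions)
y_class = rng.integers(0, 3, size=200)              # labels 0, 1, 2
clf = RandomForestClassifier(random_state=0).fit(X, y_class)
print(clf.predict(X[:1]))                           # -> one category label

# Regression: continuous output (e.g., a dose recommendation in mg)
y_dose = 0.2 * X[:, 1] + rng.normal(0, 1, 200)      # toy dose ~ body mass
reg = RandomForestRegressor(random_state=0).fit(X, y_dose)
print(reg.predict(X[:1]))                           # -> one continuous value
```

The only structural difference is the estimator: the classifier maps inputs to a discrete label, the regressor to a real number.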

Model selection

One of the central themes Evans emphasized was that ML is only as good as its data and chosen models. He presented a comparative study on predicting plasma concentrations of rifampin, a common antibiotic, using multiple ML models. Four methods were evaluated: gradient boosting machines, XGBoost, random forests, and the least absolute shrinkage and selection operator (LASSO).

"Four different methods were used, revealing that XGBoost was highly accurate, with 95% accuracy in predicting rifampin levels in the blood at a given time compared to observed levels," Evans shared.

The research exemplifies the precision ML models can achieve. However, he was quick to note that models must be tested on data outside of their training sets to avoid "cheating," or memorizing the training data, which would otherwise inflate apparent accuracy.

"Teaching the model on specific data requires always testing it on data the model has not seen," he cautioned, underscoring a core principle in machine learning training.
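This train/test discipline can be sketched with a held-out test set on synthetic data (a stand-in for, not a reproduction of, the rifampin study):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                        # synthetic covariates
y = X @ np.array([1.5, -2.0, 0.5, 0.0]) + rng.normal(0, 0.3, 500)

# Hold out data the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

model = GradientBoostingRegressor(random_state=1).fit(X_train, y_train)

# Report performance only on the held-out set; training-set scores
# reward memorization and overstate real accuracy.
print(f"train R^2: {r2_score(y_train, model.predict(X_train)):.3f}")
print(f"test  R^2: {r2_score(y_test, model.predict(X_test)):.3f}")
```

The gap between the two scores is itself diagnostic: a large drop from train to test indicates the memorization Evans warns against.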

Artificial neural networks

Artificial neural networks (ANNs) offer an advanced solution for pharmacokinetic tasks that benefit from pattern recognition. These models are built from layers -- an input layer, one or more hidden layers and an output layer.

Evans explained that "models like this are actually not that hard to construct and code" with frameworks like TensorFlow and Keras. Despite their structural simplicity, ANNs can solve complex tasks in pharmacokinetics, such as predicting how a drug will disperse within cellular structures over time.
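To show how little machinery the layered structure involves, here is a pure-NumPy sketch of a single forward pass -- a Keras model would express the same idea with a couple of `Dense` layers. The layer sizes and input values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    # Nonlinearity applied at the hidden layer
    return np.maximum(0.0, x)

# Layer sizes: 3 inputs (e.g., dose, time, body mass) -> 8 hidden -> 1 output
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)    # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # hidden -> output weights

def forward(x):
    """One forward pass: each layer is a weighted sum plus a nonlinearity."""
    hidden = relu(x @ W1 + b1)                   # hidden-layer activations
    return hidden @ W2 + b2                      # continuous output

x = np.array([[100.0, 2.0, 70.0]])              # one hypothetical input row
print(forward(x).shape)                          # (1, 1): a single prediction
```

Training then amounts to adjusting `W1`, `b1`, `W2` and `b2` so the outputs match observed data, which frameworks like TensorFlow and Keras automate.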

Evans further noted that while ANNs have the ability to "think" in layers, the computations they perform in hidden layers are often invisible to users.

"It's not fully understood what's happening in those layers. The weights can be inspected, but it remains challenging to grasp how the model is, for lack of a better word, 'thinking,'" Evans explained.

For this reason, he turned to the SHapley Additive exPlanations (SHAP) tool. "SHAP gives us the ability of peeking into the black box and saying for each of these inputs what is important and why," he explained, describing how SHAP analysis can reveal the influence of each input on model outputs, thereby supporting more transparent and interpretable models.
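The idea behind SHAP -- crediting each input with its average marginal contribution to the prediction -- can be shown by computing exact Shapley values by brute force for a tiny model. The SHAP library approximates this efficiently for real models; the toy linear "model" below is purely illustrative:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution for one prediction, by brute-force enumeration.

    predict  : callable taking a full feature vector
    x        : the instance being explained
    baseline : reference values standing in for 'absent' features
    """
    n = len(x)

    def value(subset):
        # Features in the subset take the instance's values; the rest, baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Weight of this coalition in the Shapley average
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# Toy linear model: each feature's attribution is exactly recoverable
predict = lambda z: 2.0 * z[0] + 1.0 * z[1] - 3.0 * z[2]
phi = shapley_values(predict, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # for a linear model: [2.0, 2.0, -9.0]
```

The attributions sum to the difference between the prediction and the baseline prediction, which is exactly the "peek into the black box" Evans describes: each input's share of the output is made explicit.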

Expanding pattern recognition with convolutional neural networks

When dealing with imaging data in pharmacokinetics, Evans favored convolutional neural networks (CNNs), which excel in pattern recognition for data-rich applications like segmentation, where ML distinguishes different parts of an image based on characteristics like texture and shape.

"CNNs are incredibly well suited for tasks that we call segmentation, such as finding objects in data such as images," Evans stated.

His work demonstrates how CNNs are ideal for identifying and tracking drug permeation through biological tissues, offering unprecedented resolution for these processes.
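The core CNN operation, convolution, can be sketched in plain NumPy: a small kernel slides over an image and responds to local patterns, which is what makes segmentation possible. The "tissue image" below is a synthetic stand-in, not real imaging data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (the core CNN operation) in plain NumPy."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "tissue image": a bright square (drug-rich region) on a dark background
image = np.zeros((16, 16))
image[5:11, 5:11] = 1.0

# An edge-detecting kernel: zero on uniform regions, nonzero where texture changes
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

edges = conv2d(image, kernel)
mask = np.abs(edges) > 0.5       # a crude segmentation mask of the boundary
print(mask.sum())                 # number of boundary pixels found
```

A trained CNN learns many such kernels from data rather than using hand-picked ones, stacking them in layers to recognize progressively richer structures.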

Evans provided an example involving coherent Raman scattering, an imaging technique using laser frequencies to tune into molecular vibrations. This enables researchers to see specific drug molecules' distribution within tissues.

"This allows us to see where these drugs have that given molecular vibration," Evans shared.

Through this technique, he could visualize how ruxolitinib permeates skin tissue by tracking its movement over time. This data is invaluable for pharmacokinetics as it provides insights into how quickly a drug moves into target areas, informing dosages and administration timing.

Enhancing efficiency with autoencoders and U-Nets

Evans praised autoencoders and U-Nets for their efficiency, particularly in cases with limited training data. U-Net models pass information across a U-shaped encoder-decoder network via skip connections, allowing the model to retain detail across spatial scales and improve accuracy with fewer data inputs.

"This allows the machine learning model to get away with considerably smaller training sets with much higher accuracy," Evans explained.

In pharmacokinetics, these models assist in creating image segmentation masks that identify specific areas in tissues, enhancing researchers' ability to measure drug concentrations over time.

His team's work with U-Nets involved a time-intensive training process: manually annotating images to identify lipid-rich and lipid-poor areas for model training. Once trained, the U-Net produced accurate segmentation masks on its own, allowing for efficient analysis of drug permeation in skin samples. This image-driven segmentation is pivotal in pharmacokinetics, as it enables researchers to measure drug concentration at different tissue levels with remarkable precision.
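The U-shaped information flow can be sketched structurally in NumPy: an encoder downsamples, a decoder upsamples, and a skip connection carries full-resolution detail past the bottleneck. A real U-Net adds learned convolutions at every step; this is only the skeleton:

```python
import numpy as np

def down(x):
    """2x2 average pooling: the encoder half-step of the 'U'."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsampling: the decoder half-step of the 'U'."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.default_rng(0).random((8, 8))   # toy single-channel image

skip = x                  # skip connection saved at full resolution
coarse = down(x)          # encoder: coarse, context-rich representation
decoded = up(coarse)      # decoder: back to the original resolution

# The skip connection lets fine detail bypass the bottleneck: the next
# layer sees both the coarse context and the full-resolution features.
combined = np.stack([decoded, skip])          # 2 "channels" for the next layer
print(combined.shape)                          # (2, 8, 8)
```

It is this pairing of coarse context with preserved detail that lets U-Nets produce accurate segmentation masks from comparatively small training sets.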

Partnership with the FDA on bioequivalence testing

Evans' collaboration with the FDA illustrates a high-value application of ML in regulatory contexts. By accurately segmenting and quantifying drug concentrations in specific regions within the skin, Evans' team calculated pharmacokinetic parameters such as maximum concentration, time to maximum concentration and area under the curve. These calculations are integral for bioequivalence testing -- a requirement for generic drug approvals.
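Given a concentration-time profile, these three parameters are straightforward to compute. The profile below is hypothetical, not data from Evans' study:

```python
import numpy as np

# Hypothetical concentration-time profile
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # time (hours)
c = np.array([0.0, 1.8, 3.2, 2.6, 1.4, 0.5, 0.2])    # concentration (e.g., ug/mL)

cmax = c.max()              # maximum concentration
tmax = t[c.argmax()]        # time to maximum concentration

# Area under the curve by the trapezoidal rule
auc = np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2)

print(f"Cmax = {cmax}, Tmax = {tmax} h, AUC = {auc:.2f}")
```

In Evans' imaging-based workflow, the segmentation masks supply region-specific concentration curves, and the same parameters are then computed per tissue region.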

Evans' team compared a reference drug against itself in one experiment, demonstrating bioequivalence with statistical confidence. However, when the team tested a polyethylene glycol formulation, it failed to meet bioequivalence standards.

"The experiments showed that comparing the reference drug against itself demonstrated bioequivalence," Evans noted.

His findings underscore how ML-based bioequivalence testing can streamline the regulatory pathway, ensuring safe and effective generics reach the market faster.
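A common form of bioequivalence testing computes a 90% confidence interval on the geometric mean ratio of test to reference, which must fall within 80% to 125%. It can be sketched on simulated data; the numbers below are made up and do not reproduce Evans' results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical paired log-AUC values for test and reference formulations
n = 24
log_ref = rng.normal(np.log(100.0), 0.15, n)
log_test = log_ref + rng.normal(0.0, 0.10, n)    # test tracks reference

d = log_test - log_ref                           # within-subject log differences
mean, sem = d.mean(), d.std(ddof=1) / np.sqrt(n)
tcrit = stats.t.ppf(0.95, df=n - 1)              # 90% CI <-> two one-sided tests

# Back-transform the CI to the ratio scale
lo, hi = np.exp(mean - tcrit * sem), np.exp(mean + tcrit * sem)
bioequivalent = (lo >= 0.80) and (hi <= 1.25)    # standard 80%-125% limits
print(f"90% CI for GMR: ({lo:.3f}, {hi:.3f}) -> bioequivalent: {bioequivalent}")
```

A reference product compared against itself, as in Evans' control experiment, should pass this test; a formulation with altered permeation, like the polyethylene glycol variant, can fail it.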

Navigating ML challenges in drug development

Evans emphasized that ML models require meticulous design and careful data selection, noting, "Garbage in equals garbage out."

He cautioned that models trained on low-quality or irrelevant data may produce inaccurate results, a reminder that quality data is paramount for robust ML model development. Generalization, or the model's ability to accurately handle diverse data sets, remains a core challenge, especially in healthcare where patient data varies widely.

However, Evans' work points to a promising future where biopharma can leverage ML to minimize the guesswork in drug development. His emphasis on pattern recognition, transparency and testing reflects a thoughtful approach prioritizing model integrity over mere efficiency.

"These kinds of approaches are really important," he concluded, expressing optimism about ML's role in refining pharmacokinetics and drug development. As biopharma companies expand their use of ML, its potential to improve therapeutic efficacy, patient safety, and access to generics could reshape the industry landscape.

Alivia Kaylor is a scientist and the senior site editor of Pharma Life Sciences.
