Exploring a Framework for Pediatric Data Use in Health AI Research

The ACCEPT-AI framework provides recommendations for the safe inclusion of pediatric health data in AI and ML research.

The number of artificial intelligence (AI) and machine learning (ML)-enabled medical devices authorized by the United States Food and Drug Administration (FDA) has risen in recent years as healthcare organizations have become increasingly interested in how these tools can contribute to population health efforts.

Most of these medical devices are authorized for use in adult populations, and some are cleared for use in pediatric populations. However, stakeholders have expressed concern that there is no guidance on the ethical use of pediatric data in AI/ML research.

To remedy this, a research team from Stanford University developed ACCEPT-AI, a framework that incorporates ethical principles related to age, communication, consent and assent, equity, protection of data, and technology, and provides recommendations for the safe inclusion of pediatric data in AI/ML research.

Vijaytha Muralidharan, MBChB, MRCP, a clinical AI researcher in the Department of Dermatology at Stanford University and lead author of ACCEPT-AI, recently spoke with HealthITAnalytics about how to protect children’s interests in health AI research, alongside how researchers and regulators can leverage the framework.

THE CHALLENGES OF PEDIATRIC DATA USE

Muralidharan —  who designed the framework alongside Alyssa Burgart, MD, Roxana Daneshjou, MD, PhD, and Sherri Rose, PhD — noted that one of the major challenges of using pediatric data in AI research is potential algorithmic bias.

She explained that many image-based dermatology and radiology research databases, for example, mix children's data with that of adults, making it difficult to distinguish pediatric records from adult ones. An AI algorithm trained on such mixed data can produce erroneous results.

“The first part of this framework addresses that crucial aspect of age labeling,” Muralidharan noted. “[Researchers] need to know the age range of the population we're plugging into the training dataset. We also need to be clear on where we're testing the algorithms and whether the algorithm is for adults, kids, or both. If it is for both, it should be trained on both because we need to represent the training and the testing [cohorts].”

She further indicated that age labeling is crucial because "pediatric" covers a heterogeneous age range, spanning individuals from birth to 18 years of age. The age ranges that fall into the "pediatric" category also vary by country, with some countries setting the cutoff at 16 years of age compared to the United States' cutoff of 18.

When researchers are training an algorithm, it is imperative to know whether it is aimed at neonates, children, or adolescents. Each group can carry different disease burdens or presentations, which are reflected in the data, Muralidharan explained.
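
To make the age-labeling point concrete, here is a minimal sketch of how a curation pipeline might record and audit age labels before training. It is illustrative only: the bucket names, cutoffs, and the age_years metadata field are assumptions, not anything prescribed by ACCEPT-AI.

```python
# A minimal, illustrative audit of age labels in a mixed dataset.
# Bucket boundaries vary by country and protocol; these are assumptions.
AGE_BUCKETS = [
    (0.0, "neonate (0-28 days)"),
    (28 / 365, "infant (28 days-1 year)"),
    (1.0, "child (1-12 years)"),
    (12.0, "adolescent (12-18 years)"),
    (18.0, "adult (18+ years)"),
]

def age_bucket(age_years: float) -> str:
    """Map an age in years to the last bucket whose lower bound it meets."""
    label = AGE_BUCKETS[0][1]
    for lower_bound, name in AGE_BUCKETS:
        if age_years >= lower_bound:
            label = name
    return label

def audit_age_labels(records: list) -> dict:
    """Count records per age bucket, flagging records with no age label."""
    counts = {}
    for record in records:
        age = record.get("age_years")  # hypothetical metadata field
        key = age_bucket(age) if age is not None else "UNLABELED"
        counts[key] = counts.get(key, 0) + 1
    return counts

# Example: a mixed dataset in which one record is missing its age label.
dataset = [{"age_years": 0.05}, {"age_years": 9}, {"age_years": 35}, {}]
print(audit_age_labels(dataset))
# {'neonate (0-28 days)': 1, 'child (1-12 years)': 1,
#  'adult (18+ years)': 1, 'UNLABELED': 1}
```

An audit like this would surface both unlabeled records and empty pediatric subgroups before an algorithm is ever trained on the data.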

She also noted that pediatric populations are underrepresented in health AI research, which presents unique ethical challenges.

“Involving children [in AI research] is harder,” she stated. “The processes for consent, and in some countries, assent, are more complex and not always standardized. Even within the US, there are a number of laws that differ on consent procedures between different parts of the country. So, having that in mind when we're trying to represent children in AI is important, and asking the question of not only how we can include them, but include them safely is crucial.”

Muralidharan underscored that many medical devices are approved by the FDA for use in both adults and children, but few of these disclose whether the device was explicitly trained on pediatric data.

She noted that this is an issue, as a key aspect of algorithm deployment involves knowing the specificities of a tool’s training and testing cohorts before generalizing it for use across a whole population.

However, she indicated that some researchers have argued that mixing children’s and adults’ data may be useful in some cases, and much of the work assessing pediatric data use in AI research is still in its early stages.

“We're very much at the infancy of knowing the implications, [but] some people may believe that mixing the data is useful,” Muralidharan explained. “It’s very early to say either way, but everybody handling pediatric data needs to know what is going in and out. They also need to be very clear on why they're using pediatric data, its purpose, and its potential effects on children.”

She noted that following clear guidelines on age labeling and ensuring that the training and testing data mirror the target population is key to ensuring transparency and safety when handling pediatric data, which is where the ACCEPT-AI framework comes in.
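
As a concrete illustration of training and testing data mirroring the target population, the sketch below uses scikit-learn's stratified splitting so that the pediatric/adult mix is preserved in both cohorts. The arrays and group labels are toy assumptions, not part of the framework.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy feature matrix and labels; age_group marks each record's cohort.
X = np.arange(20).reshape(10, 2)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
age_group = np.array(["pediatric"] * 4 + ["adult"] * 6)

# Stratifying on the age label keeps the pediatric/adult proportions
# the same in both the training and testing sets.
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, age_group, test_size=0.3, stratify=age_group, random_state=0
)
print(list(g_test))  # pediatric/adult ratio mirrors the full dataset
```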

THE ACCEPT-AI FRAMEWORK

ACCEPT-AI addresses some of the hurdles associated with pediatric data use by guiding researchers to consider age, communication, consent and assent, equity, protection of data, and technological factors in their studies. The framework can be used independently or in conjunction with existing AI/ML guidelines, depending on the study’s parameters.

“It's [the ACCEPT-AI team’s] firm belief that all of those sections need to be addressed in any AI study that involves children,” Muralidharan emphasized, noting that the framework is designed for both clinical researchers and tech companies that may be developing pediatric algorithms.

The framework’s age component prioritizes transparency in the age labeling of pediatric data, while the communication portion underscores the importance of communication as a general principle in clinical studies.

Specifically, ACCEPT-AI focuses heavily on involving children as much as possible in developing safe healthcare AI. Muralidharan noted that assuming pediatric populations lack knowledge and competence around technological advancements like AI can be detrimental to research efforts, and some studies suggest that engaging young people could be a boon to AI development.

“Many children are very capable these days,” she said. “I'm astounded by the amount of technological knowledge they harbor, and it's important to try to include them in as many efforts as possible.”

The framework recommends age-appropriate communication with pediatric study participants regarding the purpose and nature of the research and future data use.

When providing parents and guardians with information about how the data will be used, ACCEPT-AI recommends tailoring communications based on educational best practices to ensure all stakeholders share the same understanding.

The framework also indicates that community-level communication efforts should focus on improving digital literacy for pediatric participants and their guardians.

In terms of consent and assent, Muralidharan noted that any AI study involving children must have clear documentation of consent.

When researchers are working with de-identified pediatric data, consent is typically not required from participants or their guardians, but in cases where the data may not be fully de-identified, consent is required.

In either case, Muralidharan explained that it should be clear to what extent the data are de-identified, whether or not consent is required, and how that consent is obtained and documented.

This can present a challenge for researchers, as the laws around consent for the use of pediatric data can vary significantly depending on where the study is conducted. However, documenting discussions of the risks, benefits, and alternatives inherent in a given study — including explicit conversations around whether study subjects can have their data removed from a dataset in the future — can alleviate some of the concerns that prevent pediatric participation in AI research.

ACCEPT-AI’s equity component emphasizes equality, diversity, and inclusion principles in study design.

“For children, pediatric data is underrepresented broadly [in AI studies], but even within pediatric data, there are numerous biases, as with adult data,” explained Muralidharan. “Are we representing children from minority populations? Are we representing children with developmental delays and rare diseases? Because that's important when we're designing studies for AI.”

The data protection aspect of the framework rests on the assertion that pediatric data should only be used if those data and the technology developed as a result address a clear need for pediatric populations.

To this end, ACCEPT-AI states that research teams must be transparent about the needs, benefits, and risks of pediatric data use within a given study. Similar to the other aspects of the framework, the data protection portion focuses heavily on ensuring clear communication, documentation, and adherence to applicable laws.

However, Muralidharan indicated that data protection legislation specific to minors is limited. The framework itself cites the European Union’s General Data Protection Regulation (GDPR) and guidance from the US Department of Health and Human Services (HHS) on applying the “Safe Harbor” method to de-identify data in accordance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule, but both are broad.
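
For a rough sense of what Safe Harbor-style de-identification involves in practice, the sketch below drops direct identifiers and generalizes dates to the year for a hypothetical pediatric record. The field names are assumptions, and a real implementation would need to cover all eighteen HIPAA identifier categories.

```python
from datetime import date

# Hypothetical direct identifiers to remove; HIPAA Safe Harbor lists 18
# categories (names, addresses, contact details, record numbers, etc.).
DIRECT_IDENTIFIERS = {"name", "mrn", "address", "phone", "email", "guardian_name"}

def deidentify(record: dict) -> dict:
    """Return a copy of a record with direct identifiers dropped and
    dates generalized to the year, per the Safe Harbor approach."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Safe Harbor permits retaining only the year of patient-related dates.
    for field in ("birth_date", "encounter_date"):
        if isinstance(clean.get(field), date):
            clean[field] = clean[field].year
    return clean

record = {
    "name": "Jane Doe",                   # removed
    "mrn": "123456",                      # removed
    "birth_date": date(2015, 4, 2),       # generalized to 2015
    "encounter_date": date(2023, 8, 9),   # generalized to 2023
    "diagnosis": "atopic dermatitis",     # retained clinical content
}
print(deidentify(record))
# {'birth_date': 2015, 'encounter_date': 2023, 'diagnosis': 'atopic dermatitis'}
```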

The framework notes that creating legislation tailored to pediatric data use may be beneficial but highlights that existing legal processes need to be optimized for transparency to promote data security without sacrificing patient privacy.

The technological component of ACCEPT-AI helps illustrate how researchers should proactively assess for age-related bias during the dataset curation, training, and testing phases of algorithm development.
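
One way to operationalize such a check, sketched below under assumed group labels and an illustrative gap threshold, is to stratify evaluation metrics by age group so that a model that performs well overall but poorly on pediatric cases is caught before deployment.

```python
def accuracy_by_age_group(y_true, y_pred, age_groups):
    """Compute accuracy separately for each age group in the test set."""
    results = {}
    for group in set(age_groups):
        idx = [i for i, g in enumerate(age_groups) if g == group]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        results[group] = correct / len(idx)
    return results

# Toy test set: overall accuracy is 75%, but pediatric cases lag badly.
y_true     = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred     = [1, 0, 1, 0, 1, 1, 0, 1]
age_groups = ["adult", "adult", "adult", "pediatric",
              "pediatric", "adult", "adult", "pediatric"]

per_group = accuracy_by_age_group(y_true, y_pred, age_groups)
print(per_group)  # {'adult': 1.0, 'pediatric': 0.33...}

# Flag large subgroup gaps (the 0.1 threshold is illustrative only).
if max(per_group.values()) - min(per_group.values()) > 0.1:
    print("Warning: age-related performance gap detected")
```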

Muralidharan noted that the technological considerations laid out in the framework are mainly concerned with justifying the need to use pediatric data in AI studies, identifying and specifying potential physiological and psychological harms to children that could occur from their participation in the research, and emphasizing transparency in testing and training processes by clearly documenting the reasons for using specific data types in the research.

She underscored that the principles set out in ACCEPT-AI are fundamental for anyone handling pediatric data, and she and her team are using what they learned from developing the framework to craft guidance on incorporating data from other vulnerable populations into healthcare AI research.

USING DATA FROM OTHER UNDERREPRESENTED POPULATIONS

“I realized very early on that a lot of these issues that apply to children also apply to other vulnerable groups, and there is limited guidance to date on how to apply these broad principles to vulnerable populations,” Muralidharan explained.

Certain groups, like the elderly, require researchers to consider unique ethical concerns, such as consent, capacity, and cognition, before leveraging the population’s data in AI research, she noted. The heterogeneity of elderly populations, along with multimorbidity within these groups and the role of the caregiver, is also a crucial consideration for researchers.

Like children and the elderly, rare disease populations are often underrepresented in AI research and pose specific ethical questions for research teams.

“Rare disease groups are a hugely underrepresented population in all research,” she said. “The funding for rare disease research is very poor. There are several hundred thousand rare diseases that are quite common, if put together, that are neglected from research efforts.”

Often, rare disease research is driven by patients, communities, and charities, placing a significant intellectual burden on those impacted directly by rare diseases rather than on the healthcare community.

A lack of available data can also undermine rare disease research, especially for small disease groups. However, Muralidharan emphasized that these groups must be represented in AI research, leading her team to investigate how creating guidance similar to ACCEPT-AI can help.

“That's the key here, drilling into these unique ethical considerations. All of these policies need to be developed from that fundamental pillar of beneficence, non-maleficence, autonomy, and justice,” she stated.

However, she also noted that this work, including ACCEPT-AI, is still in its early stages and somewhat limited.

“A limitation of this particular framework so far is that it is based on our opinion at the moment. It's not gone through a huge consensus process, which does reduce the methodological rigor potentially. But that's the next stage we're working on,” she indicated.

Moving forward, Muralidharan and her colleagues will seek that consensus and additional funding for their work, while connecting with regulatory bodies to develop policies that support safe data use in AI research for vulnerable and underrepresented populations.
