
FDA's AI drug-approval tool reportedly generates phony clinical studies

Insiders allege that the FDA's AI fabricates study results and cannot access essential records. Agency leadership maintains that the tool is improving and that its use is voluntary.

FDA's AI-driven drug approval system has raised concerns over reports that it fabricates clinical trials.


In a bid to revolutionize drug and medical device approvals, the US Food and Drug Administration (FDA) unveiled Elsa, an artificial intelligence (AI) tool, in June 2025. Operating on Anthropic's Claude model within a secure government cloud, Elsa is designed to help FDA staff handle large volumes of information: generating study summaries, drafting code, comparing labels, and aiding inspection planning [1][3].

However, early reports point to mixed performance and raise concerns about Elsa's accuracy. While the tool has reportedly saved time on administrative tasks and sped up clinical protocol reviews, insiders say it can "confidently hallucinate" studies that do not exist, producing fabricated or inaccurate information [5]. Such hallucinations caution against relying on Elsa for critical regulatory decisions without human oversight.
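
One concrete form that oversight could take is routine verification of the citations an AI drafts. As a minimal illustration only, and not anything the FDA has described, a reviewer could check whether trial registry identifiers mentioned in an AI-generated summary actually resolve in the public ClinicalTrials.gov registry, which exposes a v2 REST API; the identifier list and escalation logic here are assumptions for the sketch.

```python
import requests

# The public ClinicalTrials.gov v2 API returns HTTP 404 for unknown study IDs,
# which makes it a simple existence check for cited trials.
CTGOV_STUDY_URL = "https://clinicaltrials.gov/api/v2/studies/{nct_id}"

def study_exists(nct_id: str) -> bool:
    """Return True if ClinicalTrials.gov recognizes the given NCT identifier."""
    resp = requests.get(CTGOV_STUDY_URL.format(nct_id=nct_id), timeout=10)
    return resp.status_code == 200

# Hypothetical example: NCT IDs extracted from an AI-drafted review summary.
cited_ids = ["NCT04280705"]
unverified = [nct for nct in cited_ids if not study_exists(nct)]
if unverified:
    # A hallucinated study would surface here for human review.
    print("Escalate: citations not found in registry:", unverified)
```

A check like this cannot judge whether a real study was summarized accurately, but it would flag outright fabricated trials before they reach a reviewer's desk.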

The potential implications of using Elsa for drug and medical device approvals are significant. On one hand, Elsa could accelerate review timelines, speeding the entry of new drugs and devices to market [3]. It could also improve efficiency by handling administrative and summary tasks, freeing experts to focus on scientific and clinical evaluation [1][3].

On the other hand, AI-generated summaries may oversimplify, exaggerate, or fabricate data, which could compromise the integrity of regulatory reviews and approval decisions [3][5]. Human oversight therefore remains essential to preserve regulatory rigor and patient safety [1][5].

In light of these concerns, the FDA is taking steps to improve Elsa's reliability. The agency is updating the tool so that users can upload documents to their own libraries [2], giving it access to the relevant source material and reducing the likelihood of inaccuracies.
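
To make concrete why uploaded documents can reduce hallucination, below is a minimal sketch of document-grounded prompting against Anthropic's Claude API, the model family Elsa reportedly runs on. The file name, model version, and prompt wording are illustrative assumptions, not details of the FDA's system.

```python
import anthropic

# Illustrative sketch: constrain the model to answer from a user-supplied
# document, the general pattern behind "upload your own library" features.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical uploaded document; any reviewer-provided text would do.
with open("protocol_review.txt", encoding="utf-8") as f:
    document_text = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model; not FDA's config
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Answer using ONLY the document below. If the document does not "
            "contain the answer, say so instead of guessing.\n\n"
            f"<document>\n{document_text}\n</document>\n\n"
            "Question: What is the primary endpoint of this study protocol?"
        ),
    }],
)
print(message.content[0].text)
```

Grounding answers in supplied text narrows, but does not eliminate, the model's room to invent sources, which is why the human-review step above still matters.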

In conclusion, Elsa shows promise as a tool to enhance FDA work efficiency in regulatory review, but it currently exhibits accuracy limitations, including generating false study data. Cautious implementation with continued human supervision is necessary to avoid compromising drug and device approval quality and safety [1][3][5]. The coming months and further evaluations will be critical in determining Elsa’s ultimate reliability and impact on FDA regulatory processes.

The federal government's AI efforts began in earnest in 2018, when the Pentagon started evaluating the technology's potential for national security and health care [4]. Health and Human Services Secretary Robert F. Kennedy Jr. has declared that the AI revolution has arrived [6]. Without federal regulations, however, the future of AI use in the US remains uncertain.

  1. The integration of AI tools such as Elsa into medicine raises broader questions about how AI should be adopted in other sectors, including politics, science, and public health.
  2. As the technology advances, the use of AI like Elsa to approve medical devices and drugs may shape how AI is deployed, and trusted, in other fields.
  3. Concerns about the accuracy of Elsa extend beyond medicine and underscore the need for careful oversight and regulation of AI wherever it is used.
