
FDA's AI tool for medical device reviews struggles with basic tasks

A struggling AI tool from the Food and Drug Administration, meant to speed up medical device approvals, is having difficulty with basic tasks, according to two individuals with inside knowledge.


In a bid to expedite medical device reviews, the FDA has developed an AI tool called CDRH-GPT. However, the tool, still in beta testing, is struggling with basic tasks, according to insiders. It is buggy, unable to connect to the FDA's internal systems, and has trouble uploading documents and accepting user-submitted questions. It is also not connected to the internet, which limits its ability to access new content such as recently published studies or paywalled material.

The tool is intended to assist staffers at the Center for Devices and Radiological Health, which is responsible for the safety of devices implanted in the body, including pacemakers and insulin pumps. The division was hit by mass layoffs at the Department of Health and Human Services earlier this year, and the agency has been striving to streamline its processes so it can continue issuing approval decisions on time.

Experts are concerned that the FDA's push toward AI might outpace what the technology is actually ready for. Commissioner Dr. Marty Makary, who took office on April 1, has been championing AI across the FDA's divisions, but questions remain about how it could affect the safety and effectiveness of drugs and medical devices.

The work of reviewers involves sifting through extensive data from animal studies and clinical trials, which can take several months or over a year. An AI tool could feasibly shorten this process, but there's a worry that the FDA may be moving too quickly toward AI without ensuring it's ready for its complex regulatory work.

Arthur Caplan, the head of the medical ethics division at NYU Langone Medical Center, expressed concern that the technology may not be intelligent enough to ensure accurate reviews, since people's lives depend on it. Caplan believes that AI still needs human supplementation and is not ready to "probe the applicant or challenge or interact" effectively.

While using AI for specific reviewer and scientist tasks seems reasonable in concept, some FDA staff feel the rollout is being rushed and the technology is not yet ready for widespread use. Staff have worked hard to get another AI tool, called Elsa, up and running, but it still cannot handle core functions and requires further development.

Concerns also persist within the FDA about potential conflicts of interest and about AI being used to replace staff. To maintain the agency's integrity and reputation, it is crucial to have protocols in place preventing government officials from holding financial ties to companies that could benefit from AI adoption.

Extra Insights:

  • The FDA aims to optimize performance and streamline scientific reviews across all centers by June 30, 2025[2][4]. However, there are concerns about the effectiveness and safety of AI tools in making regulatory decisions[3][5].
  • The successful pilot using generative AI for scientific reviews significantly reduced the time spent on repetitive tasks, leading to the aggressive rollout plan across all FDA centers[2].
  • Despite the promise of AI, there are significant operational challenges to overcome, including ensuring secure and effective integration into existing workflows[3][5].

The troubled CDRH-GPT tool, intended to streamline medical device reviews at the FDA, is currently grappling with basic tasks: it is disconnected from internal systems, has document-upload issues, and cannot accept user-submitted questions. Meanwhile, experts question whether the FDA's rush to integrate AI might outpace the technology's readiness for intricate regulatory work. In light of these challenges, AI solutions in health and regulation should be developed and implemented with the utmost care for their impact on safety, effectiveness, and human interaction.
