TGA charts AI future for medical devices with landmark review
The review, part of a broader $39.9 million federal initiative on Safe and Responsible AI, finds Australia’s legislative framework largely fit for purpose, but in need of sharper edges. While the TGA’s technology-agnostic, risk-based approach gives flexibility to regulate emerging tools, the report identifies critical areas where clarification and reform are needed to ensure safety, transparency, and alignment with global best practice.
Key challenges include defining who bears responsibility for AI outputs, especially when systems replace services traditionally performed by humans. Current law centres on “manufacturers” and “sponsors”, but stakeholders argue these roles do not always map neatly onto AI lifecycles involving developers, deployers, and overseas platforms. The report also calls for updates to the definition of “supply” to capture online and virtual access, where traditional regulatory controls may not apply.
Some of the most pressing safety concerns relate to AI that adapts after deployment. Unlike static systems, adaptive AI can evolve over time, making it harder to know when significant changes require reassessment. The TGA will prioritise guidance on continuous change control, use of datasets with unknown provenance, and performance monitoring—particularly for tools like digital scribes, which may cross into regulated territory if they influence diagnosis or treatment.
Transparency is another theme. Stakeholders want clearer information about whether a device uses AI, what datasets it was trained on, and how updates might affect performance. The upcoming Unique Device Identification (UDI) system offers one route to improve visibility, but the report suggests reviewing advertising rules to ensure users get the information they need, both before and after deployment.
Internationally, the TGA remains committed to harmonisation with comparable regulators, noting that over 85% of devices supplied in Australia rely on overseas certification. The agency is active in global forums developing standards for AI in medical devices and is watching closely as jurisdictions like the EU and US refine their approaches.
The next steps will involve targeted consultations through 2025 and 2026 on refining definitions, enhancing compliance, and producing technical guidance. With AI already influencing patient care, the review makes clear that regulatory agility will be as important as regulatory strength—ensuring innovation can proceed without compromising safety.

Renae Beardmore
Managing Director, Evohealth