What Australia's new AI regulations mean for your business

The Australian Federal Government has unveiled two pivotal documents aimed at fostering safe and responsible artificial intelligence (AI) practices across the country: the Proposal for Mandatory Guardrails for AI in High-Risk Settings and the Voluntary AI Safety Standards. The initiative reflects a broader governmental strategy to ensure that AI development and deployment align with high standards of safety and ethical responsibility.
The mandatory guardrails proposal
The Australian Government's Proposal outlines a framework for regulating AI, particularly in high-risk settings. It proposes a definition of "high-risk AI" and ten specific guardrails intended to mitigate the associated risks. The Proposal is open for public consultation until 4 October 2024.
Defining high-risk AI
The Proposal categorises high-risk AI into two broad areas:
1. High-risk uses: AI systems whose applications can directly affect individuals' rights, health or safety, or have significant legal or economic effects.
2. Advanced general-purpose AI (GPAI) models: AI models with versatile applications that may present unforeseen risks because of their broad capabilities.
The guidelines emphasise a principles-based approach, focusing on risks such as bias, privacy breaches, and misinformation. This aligns with international practices seen in the European Union and Canada, reflecting a global trend towards more stringent AI oversight.
Proposed guardrails
The Proposal specifies ten mandatory guardrails for developing and deploying high-risk AI systems.
These guardrails aim to integrate seamlessly with existing legal frameworks and adapt to evolving AI technologies. They also highlight the need for ongoing stakeholder engagement throughout the AI lifecycle.
Voluntary AI safety standards
Alongside the Proposal, the Government introduced the Voluntary AI Safety Standards. These standards provide practical guidance for organisations seeking to innovate responsibly with AI and are intended to complement future mandatory regulations.
Key features of the Voluntary Standards
Foundation for safe AI use: The standards help organisations build a foundation for safe AI practices and prepare for potential mandatory requirements.
Procurement guidance: They offer advice on how to collaborate with AI suppliers to ensure alignment with safety and ethical standards.
Ongoing obligations: Compliance is not a one-off exercise; the standards require continuous adherence to maintain safety and fairness.
International alignment: The standards align with international benchmarks such as ISO/IEC 42001:2023 and the US NIST AI Risk Management Framework 1.0.
Future updates: The standards will evolve over time to reflect updates in mandatory regulations and emerging best practices.
What this means for businesses
The introduction of these documents signals a significant shift in how AI will be regulated in Australia. While the mandatory guardrails are still under consultation, businesses are encouraged to adopt the Voluntary Standards now. Doing so will not only prepare them for future regulatory changes but also enhance their ability to manage AI responsibly.
Organisations should view these standards as an opportunity to build consumer trust and gain a competitive edge by demonstrating their commitment to ethical AI practices. Even if the mandatory guardrails do not come into effect as proposed, the Voluntary Standards reflect global best practices that align with international norms for AI development and deployment.
As AI technology continues to advance, the Australian Government’s proactive approach aims to balance innovation with safety and ethical considerations. By implementing these guardrails and standards, Australia positions itself as a leader in responsible AI governance, setting a benchmark for other nations to follow.