Artificial intelligence is evolving at breakneck speed. While AI businesses pledge their commitment to the ethical design, development and deployment of artificial intelligence through internal controls and industry-developed non-binding standards, nations across the world are grappling with how best to ensure AI’s risks are appropriately managed and its opportunities maximised in a safe and responsible manner.
There is much to unpack in regulating for the safe and responsible use of AI: how we define AI, what we mean by “safe and responsible”, who in the AI value chain we aim to influence through regulation, and how to identify the tangible (and ideally quantifiable) risks and challenges this emerging technology presents. It is also a balancing act: in tandem, governments must support businesses and individuals to leverage the technology and benefit from AI’s incredible potential. This article unpacks a few key concepts and suggests how government portfolios grappling with the regulation of this cross-sectoral technology can collaborate and align their approaches to ensure effective regulation.
Featured in this article
- Declan Norrie, Special Advisor, Team Lead – Regulatory Development and Reform
- Kyle Wood, Senior Advisor

Published 26 July 2024
The impetus behind calls for regulation of AI
As ASIC Chair Joe Longo highlighted in a speech on AI regulation earlier this year, the development and deployment of AI in Australia is hardly a lawless “Wild West”.1 To varying extents, AI developers and deployers are subject to Australia’s existing suite of (generally) technology-neutral laws and associated regulatory frameworks.
Despite this, evidence indicates that a majority of Australians have low trust in AI, and either are unsure whether, or disagree that, existing protections are sufficient to ensure safety against AI-related harms.2 They are not alone in their concerns: the Bletchley Declaration, signed by Australia amongst a group of 28 countries and the EU on 1 November 2023, welcomed “recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed”.
With trust already low, AI-related safety incidents risk hampering the sector’s development and impairing our ability to reap the significant public and private benefits of this emerging technology. Effective regulation is critical to mitigating the risk of individual and social harms, ultimately providing the public and businesses with certainty and confidence. Longo’s closing point on existing regulation remains salient: “is this enough?”
How governments are responding
The Australian Government has committed to investigating options for a risk-based approach to regulating for safe and responsible AI. As confirmed in the 2024 Budget, this will include consultation on potential mandatory, risk-based guardrails applying generally to AI systems, and consideration of options to strengthen and clarify existing laws that already regulate (or should regulate) AI in particular domains.
At the time of writing, approaches to AI regulation vary widely among similar developed nations, despite agreement that alignment will be crucial. Figure 1 provides a simplified snapshot of how these approaches compare, both in how mandatory the key regulatory instruments are and in the breadth of their application.
The complex nature of AI regulation
The characteristics of AI technologies pose specific challenges to designing and implementing effective regulation, and must be closely considered in any regulatory approach.
- Defining AI: Any bespoke regulatory approach faces the challenge of how to define AI to ensure sufficient legal certainty about what it applies to, while remaining flexible enough to account for paradigmatic changes in AI’s nature and capabilities.
- Setting the requirements for “safe and responsible” AI: Agencies must determine what “safe and responsible” means in their particular context, and what obligations and associated regulatory tools are required to achieve it.
- Identifying and quantifying critical risks: Quantifying tangible risks and challenges is critical to operating a risk-based regulatory system, which can direct limited resources towards monitoring, investigating and enforcing against non-compliance most effectively.
- Addressing the complex AI value chain: Regulation must be targeted to be efficient and effective: it needs to influence the right actors at the right time to minimise burden and maximise outcomes. The complexity of the AI value chain, which may span a range of organisations across multiple jurisdictions, makes this challenging.
Initial actions for policymakers and regulators
All areas of government will need a baseline understanding of AI issues to ensure effective coordination of an approach to safe and responsible AI. As a starting point, public sector personnel at all levels can engage meaningfully with safe and responsible AI in their domain by taking the following actions:
- Read up: Develop a baseline understanding of AI’s applications and its technical and ethical challenges. Acknowledging the complexity of the field and the rapidity of change, utilise accessible resources, including those published by DISR, the National AI Centre and academic institutions. Engage with experts and stay informed about emerging trends, both in your domain and more broadly.
- Build capability: Invest in AI literacy. Recruit and train policymakers, regulators and legal professionals at all levels to understand AI and navigate AI-related issues effectively. Review your policy, regulatory and legislative tools to identify any gaps, challenges or risks in mitigating AI-related harms.
- Link up: Collaborate with critical stakeholders, including other government agencies, industry, academia and civil society. Share insights, concerns and positions so that critical risks are visible across agencies and don’t fall through the cracks. Engage with both central agencies and line agencies to address key pain points, especially areas of intersection and duplication.
- Horizon scan: Anticipate future AI developments. Consider the impact of quantum computing, autonomous systems and AI-driven decision-making on key activities and stakeholders in your domain. While you might not be able to predict everything in a fast-moving and complex field of technology, practising preparation will give you the tools to adapt more quickly to change.
Proximity’s multi-disciplinary experts are experienced in the challenges of designing, developing and reviewing complex and innovative regulatory frameworks. From assurance reviews to seconded lawyers, we can help ensure that you’re equipped to seize the opportunities and manage the risks of AI in your domain of regulatory and policy expertise.