AI rules that protect people and enable innovation

Artificial intelligence is already influencing how we work, learn and make decisions.
AI holds enormous potential to improve lives – from advancing healthcare to helping protect natural ecosystems – but only if it’s developed and used responsibly.
Right now, artificial intelligence is moving faster than the rules needed to protect people and our environment from harm.
Without appropriate guardrails, AI risks widening inequality, weakening privacy and damaging our environment.
Australia has an opportunity to set clear, balanced domestic rules that keep Australians safe from harm, enable innovation and help shape international policy.
SafeAI brings together researchers, policymakers and civil society to help design practical, evidence-based policy to ensure AI benefits people.
Open letter
AI has the potential to improve lives, from boosting productivity to transforming healthcare, education and environmental management.
But the technology is advancing faster than our laws and institutions can adapt.
Without effective safeguards, AI could amplify bias, erode privacy and displace human accountability.
Australia can lead by setting clear, balanced domestic rules that keep people safe, support innovation and help shape international policy.
Some protections are non-negotiable, including children’s safety. As AI becomes more embedded in everyday life, we cannot afford to wait for harm before we act.
Creating safe AI will take international coordination, aligned standards and shared best practice. It also requires rules that are practical, enforceable and guided by evidence and human rights.
Governments can act now by embedding AI safeguards into existing legislation and regulatory frameworks, addressing risks as they emerge and ensuring accountability from the outset.
We call for:
• Balanced, enforceable rules that protect people and our environment, while enabling innovation that benefits humanity
• Immediate action to close existing regulatory gaps and address known and emerging harms
• Transparency and accountability to ensure organisations that design and deploy AI systems can be held responsible for harms
This is a collective call to guide progress, not slow it. With the right rules in place, Australians can benefit from AI with confidence and trust.
SafeAI
SafeAI is convened by Minderoo Foundation, bringing together leaders committed to clear rules and strong safeguards for artificial intelligence that protect people and our environment, while enabling innovation that benefits humanity.
We believe building a safe AI future requires collaboration across government, academia, industry and civil society.
From understanding risks to advancing practical policy solutions, we are committed to strengthening the evidence, policy and advocacy needed to ensure AI benefits people.
Research and insights
What Australians think about AI
Minderoo Foundation commissioned research to understand Australians’ views on artificial intelligence and regulation.
The research found that Australians see the potential benefits of AI but believe those benefits depend on clear, balanced and transparent rules. Without them, public trust will erode and support for innovation will weaken.
Key findings
• 61% of Australians prefer a balanced but firm approach to AI regulation – one that protects people while allowing innovation. Only 4% favour minimal rules.
• When a balanced option isn’t offered, support for strict regulation rises to 64%, even at the expense of innovation or productivity.
• 42% believe government should lead in managing AI risks, well ahead of tech companies (25%) or international bodies (10%).
• Trust in government (35%) to develop fair, safe and transparent rules is notably higher than trust in tech companies (22%) or unions (12%).
• 68% say they would be more likely to trust AI if clear laws were in place.
• Only 39% trust technology companies to develop AI responsibly.
The research is clear: well-defined, balanced rules and strong safeguards are essential to earning public trust and unlocking AI’s full potential.
Resources
How do people feel about AI?
Joint research by the Alan Turing Institute and the Ada Lovelace Institute finds the UK public remains cautious about AI. People support beneficial uses but want stronger safeguards, clearer accountability and meaningful public involvement in decisions about how AI is developed and used.
How Californians feel about AI
Research by TechEquity shows Californians are concerned about AI’s impacts on jobs, privacy and fairness. While recognising potential benefits, the public strongly supports clear rules, worker protections and accountability measures to ensure AI is developed and used in the public interest.
Great (public) expectations
Research by the Ada Lovelace Institute shows strong public support in the UK for fair, safe and accountable AI. People prioritise fairness and social impact over speed or profit, distrust self-regulation by tech companies and support independent regulators with real enforcement powers.