National AI Plan Doorstop with Minister Ayres and Assistant Minister Charlton

02 December 2025

SENATOR TIM AYRES, MINISTER FOR INDUSTRY AND INNOVATION AND MINISTER FOR SCIENCE: Well, g'day, I'm really pleased to be here at WSTI, Western Sydney Tech Innovators, with Andrew Charlton, the Assistant Minister.

Western Sydney Tech Innovators is an extraordinary thing that just shows how important it is that we spread the benefits of artificial intelligence and technological adoption, not just in our central business districts and in our tech sector but broadly across our suburbs and regions. 1,700 Western Sydney innovators already collaborating, working together on projects, sharing experiences, supported by their local member, Andrew Charlton, the Member for Parramatta, who is doing incredible work. And it's just given me so much optimism and enthusiasm this morning for this enormous task of making sure that Australia's strategy with artificial intelligence includes everybody in Australia.

Today we'll be launching Australia's National Artificial Intelligence Plan. It is directed towards three national interest priorities: firstly, making sure that Australia captures the opportunity – the economic opportunity, the productivity opportunity – and making sure that we're making AI technology here as well as taking the best of artificial intelligence technology into Australia to do important national interest things.

It's also, secondly, about making sure that we share the benefits amongst small workplaces and big workplaces, across all of our suburbs and our regions across Australia.

And finally, it's about making sure that we keep Australians safe. That's why we announced the AI Safety Institute, a capability at the heart of government that's about testing new models, analysing threats and challenges, working across government with our regulators, our security and intelligence agencies, our financial regulators as well to make sure that we keep Australians safe and evolve our approach to this technology in the national interest.

I want to hand over to Andrew Charlton, who will have more to say about this important day for Australia, making sure that we've got a clear framework for the investment community, for Australian regulators and for Australians themselves as we build confidence in the adoption of this important new technology. Andrew.

 

DR ANDREW CHARLTON MP, ASSISTANT MINISTER FOR SCIENCE, TECHNOLOGY AND THE DIGITAL ECONOMY: Thanks very much, Tim. Well, it's terrific to have Tim here in Parramatta at the Western Sydney Tech Innovators. WSTI is a grassroots, community-based organisation with more than 1,700 people who come together to learn about AI and adopt it in their daily lives and businesses.

And it's fitting that we're here, because the plan that the Government is releasing today is all about people, making sure that the benefits of artificial intelligence accrue to every single Australian.

If we get this right, AI has the potential to increase the prosperity and potential of the Australian people. And this plan lays out the Government's objectives, to make sure we capture the economic opportunity for Australia so that Australia is a maker as well as a taker of AI.  

Secondly, to make sure that the benefits of AI are spread broadly, and all Australians feel those benefits in their daily lives, whether it be improved productivity in their work, or better citizens' services or more skills and opportunities.

And the third principle today is about making sure that we keep Australians safe, and that's about having a robust capability at the centre of government, the AI Safety Institute, which can identify risks on the horizon, make sure that those risks are effectively mitigated by the agencies and regulators right across every part of the Australian Government.

This is a plan which should give us confidence for the future, confidence that we can turn AI to Australia's advantage and make sure that it benefits all the people of this country. Thank you.

 

AYRES: Okay, happy to take a few questions, and then I'm keen to get back in there. If you get a chance to spend a bit of time talking to them about the work that they do, it really is quite exciting. Shoot.

 

JOURNALIST: Just on mitigating risk, just to start off, has there been any meaningful progress on protecting particularly Australian creatives against AI, and is there anything in the plan that fundamentally protects their rights as workers?

 

AYRES: Well, the first thing to say about this is that copyright law, for example, is an important element of protecting creatives – we've been very clear that we won't be weakening our approach to copyright law. The Attorney‑General and other Ministers have been engaged with the creative community in a really thorough‑going way to make sure that the approach we take isn't just about protecting their copyright interests but is also about finding new ways of making sure that they benefit from the technology.

So that process will continue to evolve. This plan isn't the last word on that. But Australian creatives, Australian writers, Australian journalists should be confident that the Albanese Government will in no way weaken our approach to copyright law; we're for the creative community, we know how much it contributes to Australia's growth and Australia's culture.

 

JOURNALIST: And we were talking about guardrails a few months ago. With the guardrails effectively removed, do you think the current legal framework that we have is capable of adapting to a technology that everyone sort of concedes is moving at this incredible exponential speed?

 

AYRES: Well, I don't want to play with words too much, but the National AI Centre released a series of guardrails, not mandatory guardrails, but guardrails for business adoption that will see significant improvement in the way that businesses understand their responsibilities in terms of artificial intelligence.

We introduced the AI Safety Institute last week, funded to begin early next year, and it will monitor the adoption of guardrails; it will monitor the behaviour of firms, including tech firms, analyse, as Andrew says, those threats on the horizon, and make sure that Australia's government capability and our regulatory framework are fit‑for‑purpose, not just in 2025, but every year from now on.

 

JOURNALIST: Now, David Shoebridge is quite concerned that AI algorithms are effectively driving the rise of extremism in Australia. Do you agree with that, and if so, why are these platforms effectively getting a free run under the new plan?

 

AYRES: That's why we have introduced the Artificial Intelligence Safety Institute, a capability at the heart of government to work with our intelligence agencies, our policing agencies, to test new models, to monitor what's going on in the social media landscape, to work with the eSafety Commissioner to make sure that our approach is fit‑for‑purpose and relevant every day.

This Government has demonstrated we're absolutely up to the task of cracking down hard where there are harms in the digital landscape. The eSafety Commissioner and the Government cracked down hard on deepfake pornographic images. We've cracked down hard in other areas of social media. We're making sure that we're protecting our kids from social media harms, and we'll be watching very closely the interaction of artificial intelligence with social media and other digital platforms, because of all of its implications: for our kids, for our society, for our families and for our national security.

 

JOURNALIST: What would the threshold point be for an intervention? Because obviously part of the big reason for removing those frameworks, or those guardrails, was to give the industry itself a little bit more leverage and leeway. So what would the threshold point be for the government to intervene on an AI algorithm and say that it's effectively inspiring or driving extremism?

 

AYRES: Well, it's the responsibility of the AI Safety Institute and the portfolio agencies and the intelligence community and our security agencies to monitor those developments, and we will not hesitate to crack down hard when it's required.

The motivation for the approach that we've taken is in no small part the fact that Australian law applies now to all of these technologies, and we want a very clear line of sight on the responsibilities that agencies now have to deliver; we want accountability, and we want absolute clarity, and that's what this framework delivers.

 

JOURNALIST: Will it be an unknown entity until we reach a point where that intervention happens, or do you already have an idea of which existing examples in some of those algorithms may constitute a breach or require an intervention?

 

AYRES: I'm very confident that the AI Safety Institute will do its work and will have the expertise and capability at the heart of government to work across government.

This is a capability question, this is a responsiveness question, and this well‑funded safety institute is going to support every part of government to deliver on their work.

 

ENDS.