Australia's AI Roadmap is a Gift to Extractive AI Corporations: We should be mad as hell.
Hey all, I’ve looked better and felt better! But after getting out of hospital I just wanted to show that I am on the mend, and will be back writing in the coming days.
Hospital stays are not fun; seven days involving rigors and being on a drip was awful. But I’m back, and getting healthy again.
Also, I have a qualification that I’ve been studying towards (to allow me to teach tech in Australia), which meant doing practical exams that also took up some of my very little energy. But now that’s done for 2025, I have time to write again!
But you are not here for my health! What on earth are Minister Ayres and the Aussie government doing with AI now?
The Australian AI Roadmap
* Reuters - “ramp up adoption, rely on existing laws”
* The Guardian - no guardrails needed
* ABC News on the roadmap - see images below:
At the present time, generative AI of the sort that can rip off books is a violation of the very concept of intellectual property; it’s a digital slave ship for desperate data-labelling workers (many in the developing world); and overall it’s a mechanism for wealth transfer. It’s an extractive process that rips off regular folks and transfers wealth to tech oligarchs, huge investment companies and corporations.
It’s absolutely outrageous that the Australian government is stepping back from its responsibility to protect our local Aussie intellectual property from extractive AI companies. It’s unbelievable that public servants like Minister Ayres (whose name I mispronounced as Akers in the video, sorry) are so starry-eyed and delusional that they think “AI adoption” will somehow benefit Australians. It’s a massive breach of trust.
When AI can be Good
I am pro technology, and pro computing solutions that help businesses and people. But it is not being a “luddite” or a “hater” to point out that huge, over-heated startups running on investor money and stealing IP are doing it because they want to “win”: to become the “Google Search” of AI, so that no-one goes anywhere else. They want a “moat”, as they say: a monopoly. This idea is both really evil and really stupid.
The bad:
* Big AI vendors trying to corner the market
* Billion dollar corporations profiting from stolen IP
* AI corporations addicting users, and corrupting government to get their way
The good:
* Regular companies using AI tech in house as part of a compute solution
* Software engineers helping Aussie industry deal with Big Data via AI
* Using AI (not just LLM/AI from a vendor) as a component of real solutions
What I see as the biggest problem here is companies in Australia hearing that the Government is supporting “AI” and then rushing out broken, failed “solutions” that are just wrappers around US-based big AI vendors’ products.
* Agents do not work. They are garbage.
* Software professionals are the only way to get compute solutions that work.
Stop talking about “adopting AI” and start talking about building solutions, using the right tech for the job, whatever that needs to be. Use AI for what it’s good at, for example indexing big data, and search. No-one has “cornered the market” on AI; no-one should, and I doubt anyone ever will. There’s no “AI race” to win. It’s nonsensical. Just look at DeepSeek.
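As a sketch of what “AI as a component” can look like in practice, here is a minimal semantic-search example. It assumes the open-source sentence-transformers library, and the documents and query are invented placeholders, not anything from a real system:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Example corpus: stand-in documents for a local "big data" index.
docs = [
    "Quarterly safety inspection report for the Brisbane depot",
    "Invoice for October fleet vehicle maintenance and servicing",
    "Minutes of the board meeting on procurement policy",
]

# A small open embedding model that runs entirely on local hardware;
# no data leaves the building and no big AI vendor is involved.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vecs = model.encode(["vehicle servicing costs"], normalize_embeddings=True)
scores = doc_vecs @ query_vecs.T  # cosine similarity, since vectors are unit length
print(docs[int(np.argmax(scores))])  # -> the maintenance invoice, matched by meaning
```

The point is that the model is one replaceable component inside a solution a software professional owns end to end, not the product itself.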
So many decision makers have been deluded into thinking “AI” is some kind of glowing blue plastic person - a self-deploying solution - that will walk through the door of their company and start typing on their keyboards:
So many lies from the AI industry; and I’m afraid the Minister here is swallowing them hook, line and sinker.
AI Safety Institute
One bright possibility is the creation of the “AI Safety Institute”, which could be an NHTSA for AI. If there’s an accident and people are harmed, you have a technically competent and responsive body, independent of industry, that can step in to analyse what happened, what the harms are, and recommend any legal action needed.
I am currently working on a series of articles about mechanisms for regulating AI vendor companies, and an NHTSA for AI is one of the possible mechanisms that has promise. Stay tuned for that.
The National Highway Traffic Safety Administration works in the USA when companies make unsafe cars. And the same model (a qualified body of technically competent responders and investigators) can work here and elsewhere to monitor unsafe AI.
The AISI seems to have a reasonable charter at present, but my concern is that it could suffer the same fate as the Climate Commission in Australia, which was set up by the Labor government to monitor harms from polluting industries: it was destroyed for political gain by an incoming government looking to cosy up to industry. The public outrage when this happened, by the way, was so huge that campaigns to keep it open resulted in the staff who left the commission starting the crowd-funded Climate Council to continue that work.
It would be so easy for the AISI either to become a channel for AI industry propaganda into the ears of politicians, effectively becoming an industry group itself; or, if it’s actually effective in protecting people, to have a subsequent government turn it into a political point-scoring exercise.
Theft of IP by AI has to be Legislated
As I mentioned in my video, legislation had to be added to the century-old Crimes Act to specifically include electric power as a thing able to be stolen. This happens often with technology, and we’ve been down this road before. It’s not radical to legislate to clarify what can be stolen, even when “the law already exists”:
The idea that “existing laws cover this” is garbage, because what happens is small creators get into court and face billion-dollar AI companies hiring an army of “AI experts” to say they did not steal the creator’s intellectual property.
Of course they stole it. They took our books, our images, our artworks and our online articles against the explicit terms under which we retain moral rights and copyright. Billion-dollar US-based companies encoded that into their AI models and stored it in data centres to be served up for their own profit.
But decisions are going against creators because there is no legal clarity around what constitutes illegal copying; even though judges are making positive findings, it’s taking years for any justice. Mind you, in this linked case I agree with the judge that the correct argument to make for harms and damages against AI companies is that they are flooding the market with competing AI-slop products directly generated from the IP they stole.
The deceptive practices of AI companies trying to hide their crimes go to intent, in the same way cigarette companies hid what they knew. But for copyright law you are stuck proving that your IP was taken and that you were harmed.
Decisions sometimes go against authors and creators, and sometimes for them. It’s a mess, and law is necessary to fix this so it’s clear that:
* AI encoding is not transformative, it is storage and it is illegal copying
* Taking IP and encoding it is substantive & not some 0.001% as claimed.
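To make the “encoding is storage” point concrete, here is a toy illustration of my own construction (nothing to do with any vendor’s actual code): even a tiny statistical language model “trained” on a single sentence stores that sentence in its parameters and serves it straight back when generating:

```python
import random
from collections import defaultdict

# Toy illustration of "encoding is storage": a word-level trigram model
# "trained" on one sentence. The training text ends up stored in the
# model's parameters (here, a lookup table) and generation serves it back.
text = ("Generative models encode their training data and will serve it "
        "back when prompted the right way").split()

model = defaultdict(list)
for a, b, c in zip(text, text[1:], text[2:]):
    model[(a, b)].append(c)  # record which word follows each word pair

out = [text[0], text[1]]  # prompt the model with its own opening words
while tuple(out[-2:]) in model and len(out) < 100:
    out.append(random.choice(model[tuple(out[-2:])]))

print(" ".join(out))  # prints the training sentence back word for word
```

Real LLMs store vastly more text, far more diffusely, but encoding training data into parameters and serving it back is the same mechanism in kind; see my previous post, linked just below, for how this plays out at LLM scale.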
See my previous post on how this works:
Vulnerable Folks and AI
AI vendors are never going to do the right thing for at-risk people unless regulation forces them to. OpenAI says millions of users are discussing suicide, but its response, instead of hard interlocks to deal with this, is a “Wellness Council” and controls that push the problem back onto parents.
Note that in my upcoming writing on AI regulation I want to cover the problem of “AI Alignment”, the stupid approach of trying to get LLMs to behave via “system prompts” and fine-tuning. We need symbolic/procedural computing solutions that permit no statistical hacks or failures: a “regex” or behaviour tree, external to the LLM, that can catch these sorts of conversations and route the person to real help.
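To make that concrete, here is a minimal sketch of such an external interlock. The pattern list and the llm_reply() stub are my illustrative placeholders (the Lifeline and Beyond Blue numbers are real); a production system would need a clinically informed rule set:

```python
import re

# A hard interlock sitting outside the LLM: deterministic, procedural code.
# If a message matches a crisis pattern, the model is never called and the
# person is routed to real help. Patterns here are illustrative only, not
# a complete or clinically validated screening list.
CRISIS_PATTERNS = [
    re.compile(r"\bsuicid(e|al)\b", re.IGNORECASE),
    re.compile(r"\b(kill|hurt|harm)(ing)?\s+myself\b", re.IGNORECASE),
    re.compile(r"\bwant\s+to\s+die\b", re.IGNORECASE),
]

HELP_MESSAGE = (
    "It sounds like you are going through something serious. Please talk "
    "to a real person: in Australia, call Lifeline on 13 11 14 or Beyond "
    "Blue on 1300 22 4636."
)

def llm_reply(message: str) -> str:
    return "..."  # stand-in for whatever LLM the product actually uses

def respond(message: str) -> str:
    # Plain procedural logic: no system prompts, no fine-tuning, nothing
    # statistical that can drift or be talked around.
    if any(p.search(message) for p in CRISIS_PATTERNS):
        return HELP_MESSAGE
    return llm_reply(message)

print(respond("lately I just want to die"))  # -> routed to the help message
```

Because the check runs outside the model, it cannot be prompt-injected away; it fails closed instead of statistically.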
If you or someone you know is vulnerable, please get real help:
* Get mental health emergency help in Australia: Beyond Blue
* Queensland Government list of mental health services
* US based mental health help
Reportage on AI
Further reading/listening I recommend:
* Carl Brown, Internet of Bugs
* Ed Zitron, Where’s Your Ed At
* Dr Luiza Jarovsky, AI, Tech & Privacy
* Prof Gary Marcus, Marcus on AI
* Matt Bevan, If You’re Listening / ABC
Thanks and Conclusion
Thanks for reading, and for listening. Please do not sit back and write this off as “passionate” or “ranting”, as though dismissing it as “AI hate” explains away what I am saying.
I have over 20 years’ experience as a Software Engineer, for some of the biggest names in tech and also in the startup space. What is happening in the AI space right now is a 5-alarm fire, and speaking up loudly is the only way we are going to get any action on it.
Please help me in this.