The Australian AI Adoption Tracker, published quarterly by the Department of Industry, Science and Resources, put small-business AI use at 41% in its June 2025 report. That was a five-percentage-point jump in a single quarter. Every piece of policy commentary since has led with that number.
The AI Lab Australia 2026 State of AI Adoption report, released in February, put another number next to it. Of the small businesses using AI, only 5% are, in the report’s phrase, “fully enabled to realise potential benefits,” and 46% are measuring no business impact at all. Among non-adopters, the single most-cited reason is “don’t know where to start,” at roughly a third.
If you stack those numbers, the story is not adoption. The story is the distance between trying and operationalising, and whether anyone is closing it.
What “using AI” actually covers
The headline 41% figure is enormously loose. It captures a business with a single employee using ChatGPT once a week to draft a cold email. It also captures a business with an end-to-end document-intelligence pipeline routing invoices to its accounting system. Both return “yes” to the adoption question. The two are nowhere near comparable in economic impact.
The distribution underneath that 41% is, according to every operator I spoke to who had a real answer, extremely skewed. Most usage is ad hoc, human-in-the-loop, and attached to a single role (usually marketing or admin). A small minority of users have integrated AI into a process, with measured outputs and a person accountable for them.
That minority is, roughly, the 5%.
Why measurement is the missing piece
In the AI Lab Australia sample, 46% of AI-using businesses said they did not measure the impact of their AI usage at all. A further 35% measured only in a “general” sense (the phrase used in the report is “we feel it is saving us time”). Only the remaining 19% had a specific before-and-after metric attached to any AI deployment.
That matters because the gap between trying and operationalising is mostly a measurement gap. A small business that trials a tool for three months without a baseline cannot answer the question the tool was meant to answer. It cannot tell whether to expand its use of the tool, contract it, or pay for the next tier. It remains in trial mode indefinitely, because trial mode is the only mode it has the data for.
The 5% who have moved past trial mode, in every case I looked at, had done a specific thing: they had picked one process, instrumented it with a before-and-after metric, and only scaled usage after the metric had moved.
A worked example
One of the operators I spent time with was running a twelve-person professional-services firm in Adelaide. She had, in mid-2024, rolled out a general-purpose AI assistant across her team with no specific use case or metric. For ten months the usage was enthusiastic, patchy, and uncorrelated with any outcome she could point to.
In March 2025 she did something different. She picked one process (first-draft client correspondence) and measured, for four weeks, the time her team spent on it: 11.2 hours per week in aggregate. She then rolled out a specific template workflow using the same assistant, measured again, and got 4.1 hours per week, a reduction of roughly 63%. The tool had not changed. The process and the measurement had.
“I’d been paying for AI for a year and not known whether it was doing anything,” she told me. “When I finally put a number next to it, the answer changed how we used every other tool we had.”
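The arithmetic is worth spelling out, because it is the whole method. A minimal sketch, with the weekly samples invented for illustration (only the two averages, 11.2 and 4.1 hours per week, come from her figures):

```python
# Before/after comparison for one process. The weekly samples are
# hypothetical; only the averages (11.2 and 4.1 h/week) are reported above.
baseline_weeks = [11.8, 10.9, 11.4, 10.7]  # four weeks before the template workflow
after_weeks = [4.5, 3.9, 4.2, 3.8]         # four weeks after

baseline = sum(baseline_weeks) / len(baseline_weeks)  # 11.2 h/week
after = sum(after_weeks) / len(after_weeks)           # 4.1 h/week

saved = baseline - after      # 7.1 hours returned to the team each week
reduction = saved / baseline  # ~63% reduction against baseline

print(f"saved {saved:.1f} h/week ({reduction:.0%} of baseline)")
```

At that rate, the saving is on the order of 350 hours over a working year, which is the kind of number that changes a renewal decision.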
The adoption ceiling is a methodology problem
The $45 billion GDP contribution projected by Deloitte and Amazon in their November 2025 AI Edge report is real, but it is conditional. It assumes that the distribution of usage (which is currently shaped like a pyramid) flattens, and that the 41% who are trying become the 41% who are measuring.
The policy scaffolding to help that happen exists. The Senate Select Committee on Adopting AI reported in late 2024; the federal government has responded; the department publishes the quarterly tracker that surfaces these numbers in the first place. The policy settings are not the binding constraint.
The binding constraint is inside the business. It is:
- Picking a single process, not a “platform.”
- Baselining that process with a quantitative metric before any AI is introduced.
- Holding the AI deployment to the metric for long enough to read a trend.
- Moving on to the next process only after the first one has settled.
That is not a sexy playbook. It is a reporting discipline. The small businesses that will get to the 5% are the ones that treat AI like any other operational tool, not the ones that treat it like a transformative theology.
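For the measurement step in particular, the discipline reduces to a rule you could write down. A minimal sketch, assuming a four-week window and a 15% threshold (both numbers are mine, not any operator’s; a business would set its own):

```python
from statistics import mean

def ready_to_scale(baseline_samples, live_samples,
                   min_weeks=4, min_reduction=0.15):
    """Scale to the next process only when the deployment has run long
    enough to read a trend AND the metric has actually moved.
    The window and threshold are illustrative assumptions."""
    if len(live_samples) < min_weeks:  # trend not readable yet
        return False
    reduction = 1 - mean(live_samples) / mean(baseline_samples)
    return reduction >= min_reduction

# Hypothetical weekly hours on one process, before and after rollout:
baseline = [10.5, 11.0, 10.8, 11.2]
live = [9.9, 9.4, 8.8, 8.5]
print(ready_to_scale(baseline, live))  # True: ~16% reduction over four weeks
```

The code is trivial by design. The point is that the decision to scale is made by the metric, not by enthusiasm.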
For the remaining third
For the third of small businesses who are not using AI and cite “don’t know where to start,” the honest operator’s advice is not to start at all until a specific, measurable problem presents itself. “Using AI” is not a goal. A twelve-percent reduction in the time the team spends on client correspondence is a goal. Start with the second, and it will lead you to the first, provided the measurement is in place.
The businesses that will close the gap between 41% and 5% will not close it by adopting more AI. They will close it by adopting more rigour about what they already have.