Alex is Sprintlaw’s co-founder and principal lawyer. Alex previously worked at a top-tier firm as a lawyer specialising in technology and media contracts, and founded a digital agency which he sold in 2015.
- Why the business is often the first party exposed
- What “legally responsible” actually means
- When an AI mistake becomes a consumer law problem
- When an AI mistake becomes a privacy problem
- When an AI mistake becomes a fairness or discrimination issue
- The software provider may still matter - but usually behind the scenes
- What should a business do if something has already gone wrong?
- What should a business put in place before anything goes wrong?
- Where Sprintlaw can help
- AI may change the process, but not the need for legal oversight
AI is now part of everyday business. Small businesses are using it to answer customer enquiries, automate admin, draft communications, help with internal workflows and speed up decision-making. Used carefully, it can save time and reduce cost.
But when an AI system gets something wrong, the legal risk does not disappear into the software.
A chatbot might tell a customer the wrong thing about a refund. An automated billing process might generate the wrong invoice. A staff member might paste personal information into a public AI tool without understanding where that information goes. An AI-assisted screening process might produce an unfair outcome for a job applicant or customer. In each of those situations, the legal question is not just whether the technology failed. The real question is what legal obligation may have been breached, and who is exposed when that happens.
In many cases, that will be the business using the tool. Australian regulators are increasingly approaching AI this way: not as a separate legal actor, but as part of the business’s systems, governance and conduct. The OAIC has said the Privacy Act applies to all uses of AI involving personal information, while the ACCC continues to frame misleading statements and consumer harm through existing consumer law principles.
Why the business is often the first party exposed
The simplest reason is also the most important one: the customer usually deals with the business, not the AI provider.
So if your business chooses to use AI in customer service, marketing, onboarding, billing, recruitment or internal operations, the law will often focus on the role your business played in deploying and relying on that system. That does not mean the business is automatically liable every time an AI output is wrong. But it does mean the business is often the first party exposed where the output affects a customer’s rights, personal information, or access to a service.
That distinction matters. “The software made the mistake” may explain what happened, but it does not usually end the legal analysis. The next question is whether the business made a misleading representation, mishandled personal information, made an unfair decision, or failed to put proper oversight around a tool it chose to use. ASIC’s Report 798 makes a similar point in the financial services context, warning of a governance gap where organisations adopt AI faster than their risk controls and oversight arrangements are updated.
What “legally responsible” actually means
One reason this topic gets confusing is that “legal responsibility” can mean several different things.
Sometimes it means a customer complaint, refund demand or dispute. Sometimes it means regulatory risk, such as scrutiny under consumer law or privacy law. Sometimes it means the business has to assess whether a privacy incident has triggered further obligations. And sometimes it means a separate contractual fight with the software provider over who bears the loss internally.
Not every AI mistake creates the same kind of liability. A clumsy or inaccurate response might be a service issue with no real legal consequence. But the position changes when the output causes financial loss, affects legal rights, involves personal information, or contributes to an unfair or discriminatory outcome.
That is why AI liability should not be treated as one single rule. It is more accurate to say that the business is often the first party exposed, and that whether it is actually liable depends on the facts, the harm caused, and the legal framework engaged.
When an AI mistake becomes a consumer law problem
A good example is customer-facing information.
If an AI chatbot tells a customer they are not entitled to a refund, when the Australian Consumer Law says they may be, the problem is not just that the bot hallucinated. The legal issue may be that the business communicated something false or misleading about the customer’s rights. The ACCC’s guidance on false or misleading claims makes clear that businesses can face issues where consumers are given incorrect or misleading information about products, services or their entitlements. The ACCC also explains that consumer guarantees apply when consumers buy products or services, including rights around remedies when those guarantees are not met.
That same logic can apply if AI-generated content misstates pricing, service inclusions, cancellation rights, delivery timeframes, subscription terms, or the features of a product. Once AI is speaking on behalf of the business, the question becomes whether the business has made a representation it can stand behind.
When an AI mistake becomes a privacy problem
Privacy is another major pressure point, especially where staff use off-the-shelf generative AI tools casually.
The OAIC’s guidance states that the Privacy Act applies to all uses of AI involving personal information, and that guidance is intended to help organisations comply with their privacy obligations when using commercially available AI products. That matters because businesses sometimes treat AI as a harmless productivity tool when, legally, it may be part of how personal information is collected, used, disclosed or generated.
So if staff paste customer information into a public AI tool, or if an AI system generates or infers personal information that the business then uses, the issue is not merely “bad practice.” It may raise compliance questions under the Privacy Act. And if an AI-related incident involves unauthorised access to, disclosure of, or loss of personal information, the business may need to assess whether the Notifiable Data Breaches scheme is engaged. OAIC explains that covered entities must assess suspected eligible data breaches and notify eligible breaches under the scheme.
That is one of the key practical points small businesses often miss: privacy risk is not limited to whether a tool is “secure.” It also includes how staff are using it, what data is being entered into it, and whether the business understands what happens to that data afterwards.
When an AI mistake becomes a fairness or discrimination issue
Some AI risks are less obvious until you look at the decision being made.
If AI is used to screen applicants, rank customers, triage complaints, detect fraud, or assess eligibility for a service, the legal concern may be less about accuracy in the abstract and more about fairness. The Australian Human Rights Commission has warned that AI can replicate existing biases and entrench discriminatory outcomes if it is not designed and deployed carefully.
For a small business, that means the risk is not only in what the AI says, but also in what the AI does. The more a tool influences a real-world decision about a person, the more important human review, transparency and proper governance become.
The software provider may still matter - but usually behind the scenes
None of this means the vendor is off the hook. It just means the vendor’s role is usually a second question, not the first one.
From the customer’s point of view, the relevant relationship is often with your business. From your point of view, the next question is whether your contract with the AI provider gives you any recourse. That is where liability caps, warranties, indemnities, data-use clauses, service levels and exclusions start to matter.
Many standard software and AI terms are drafted to protect the provider, not the customer. They often limit what the provider promises, cap liability heavily, and push responsibility for reviewing outputs back onto the business using the tool. The ACCC notes that contracts are what allocate rights and responsibilities between parties, which is why supplier terms need proper review rather than a quick click-through.
So there are often two different liability conversations happening at once. One is external, involving the customer or regulator. The other is internal, involving the business and the vendor. Confusing those two conversations is one of the easiest ways to misunderstand AI risk.
What should a business do if something has already gone wrong?
If an AI-related problem has already happened, the first step is to work out what kind of legal issue it actually is.
Was the customer misled about their rights or the service being provided? Was personal information disclosed or entered into a system in a way that creates privacy risk? Did the AI output affect a hiring, service-access or compliance decision? Once that is clear, the response becomes more practical.
The business may need to correct the statement, review complaint handling, check whether customer remedies are available, assess whether a privacy incident must be investigated, or examine the supplier contract to see whether there is any recourse against the vendor. Where personal information is involved, the business may also need to consider whether a notifiable data breach assessment is required. OAIC’s guidance on the NDB scheme specifically covers assessing suspected breaches and notifying eligible ones.
That kind of incident response is where legal advice can be especially useful, because the right next step depends on which legal framework has actually been triggered.
What should a business put in place before anything goes wrong?
The more useful legal work usually happens before the incident.
If your business is using AI in customer service, internal operations or staff workflows, it is worth checking whether your legal documents and processes reflect that reality. For many businesses, they do not. The privacy policy was written before anyone started using generative AI. The website terms assume all customer interactions are handled manually. The supplier agreement was accepted online without any real review. Staff are experimenting with AI informally, but no one has said what can or cannot be entered into those tools.
That is where practical legal protection comes from. Not from a vague promise to “use AI responsibly,” but from aligning the business’s contracts, privacy documents and internal rules with how the business actually operates.
In practice, that can mean reviewing customer-facing terms, checking supplier contracts with AI vendors, updating privacy policies and collection notices, preparing an internal AI use policy for staff, and identifying where human review needs to be built into customer-facing or higher-risk workflows. Those are ordinary legal and risk-management tasks, but AI makes them more urgent.
Where Sprintlaw can help
For many small business owners, this is the part that matters most in practice.
For a small business, the challenge is usually not a theoretical question about robot liability. It is much more practical than that. The business wants to know whether its current documents and systems are fit for the way it is now using technology.
That is where legal help can make a real difference. Sprintlaw can help review customer terms and website terms, assess supplier contracts with AI providers, update privacy documentation, prepare internal AI use policies, and advise on what safeguards should exist where AI is used in customer-facing or higher-risk processes. If something has already gone wrong, legal advice can also help the business work out what obligations may have been triggered and what the next steps should be.
That is often the most commercially useful way to think about AI law for small businesses: not as a whole new legal universe, but as a signal that your existing contracts, policies and compliance settings may need updating.
AI may change the process, but not the need for legal oversight
The biggest misconception about AI in business is that using a sophisticated tool shifts legal responsibility somewhere else. Often, the opposite is true. The more a business relies on AI in communication, data handling and decision-making, the more important it becomes to understand the legal consequences of that reliance.
That does not mean a business will be liable every time an AI output is imperfect. But it does mean that when AI creates a misleading statement, a privacy issue, an unfair outcome or a contractual problem, the business using the tool is often still the first place the law will look.
The practical takeaway is simple: if your business is using AI, the legal question is not just whether the tool works. It is whether your contracts, privacy settings, staff rules and oversight processes are ready for what happens when it does not.

If you would like a consultation on the legal side of using AI in your business, you can reach us at 1800 730 617 or team@sprintlaw.com.au for a free, no-obligations chat.