Alex is Sprintlaw’s co-founder and principal lawyer. Alex previously worked at a top-tier firm as a lawyer specialising in technology and media contracts, and founded a digital agency which he sold in 2015.
- What the EU AI Act Is Trying to Achieve
- What Counts as an AI System
- Does the Act Apply to Australian Businesses?
- Understanding the Risk Levels
- Unacceptable Risk - Prohibited Uses
- High Risk - Heavily Regulated AI
- Limited Risk - Transparency Duties
- Minimal Risk - Everyday AI
- Providers vs Deployers - Understanding Your Role
- General-Purpose AI (GPAI)
- EU AI Act Timeline - Key Dates for SMEs
- Practical Steps for Australian SMEs
- When to Seek Legal Support
- Final Thoughts
Artificial intelligence now sits quietly inside many tools Australian businesses use every day. It helps personalise marketing, automate admin, assess job applications, analyse risks, verify identities and triage customer support. Much of this technology isn’t labelled as AI, which makes it easy for a business to overlook how deeply AI is already embedded in its operations.
As AI becomes more influential, governments are stepping in. The most significant development so far is the EU Artificial Intelligence Act (EU AI Act), the world’s first comprehensive legal framework for AI. Even though it is a European law, it affects businesses everywhere when their AI systems or outputs are used inside the EU.
For Australian SMEs that work with EU customers, have EU users, or provide tools that could influence decisions overseas, the Act raises important questions. This guide explains the EU AI Act in plain English, clarifies when it applies, and outlines sensible steps to support compliance without overwhelming your business.
What the EU AI Act Is Trying to Achieve
The Act aims to ensure AI is used safely and responsibly. It sets baseline expectations for human oversight, data quality, transparency and accountability. To make the rules workable, the Act does not treat AI as one single category. Instead, it adopts a risk-based structure where obligations increase depending on the level of risk.
For Australian businesses, this approach matters. Most tools fall into low-risk categories, but certain activities, especially in HR, finance, education, health and essential services, can trigger more stringent duties.
What Counts as an AI System
Under the EU AI Act, an AI system is any machine-based system that infers from input data how to generate outputs such as predictions, recommendations, decisions or content. These outputs must be capable of influencing a physical or virtual environment.
This includes systems built using machine-learning techniques, logic- or knowledge-based approaches, and statistical or optimisation methods. Importantly, not every simple rule-based workflow or basic automation counts as AI. However, many tools businesses think of as “automation” do fall under the definition because they infer patterns or make predictions, even if that is not how they are marketed.
For an Australian SME, AI may already be present in tools such as hiring filters, personalised marketing engines, identity verification tools, fraud or risk scoring engines, document classifiers or customer support chatbots. Mapping these systems early helps clarify later compliance questions.
Does the Act Apply to Australian Businesses?
Yes, if your AI system or its outputs are used in the EU. The law focuses on where the AI system is marketed or used, not where the business is physically located. An Australian company may fall into scope even without having a European office.
For example:
- An HR consultancy in Sydney uses an AI tool to shortlist applicants for a company in Germany. The shortlisting affects decisions in the EU, so the Act may apply.
- An Australian creative agency generates AI-assisted advertising content for EU audiences. Transparency obligations may apply.
- A fintech in Brisbane provides AI-based risk scoring that feeds into affordability decisions made by an EU lender. This may fall into a high-risk category depending on how the system is used.
A website that is merely accessible in Europe is not automatically covered. The key question is whether your AI system or its outputs play a role in EU situations. A short legal scoping review can clarify this.
Understanding the Risk Levels
The Act contains four risk levels. Understanding which one applies to your tools will shape what obligations you face.
| Risk Level | Meaning | What It Requires |
|---|---|---|
| Unacceptable Risk | AI uses that pose an unacceptable risk to people’s rights, safety or democratic values. These systems are banned entirely. | Cannot be sold, provided or used in the EU. Includes practices such as social scoring and certain forms of biometric surveillance. |
| High Risk | AI used in sensitive areas where decisions can significantly affect individuals (such as hiring, credit, education or essential services). | Strict obligations: documented risk management, high-quality data, human oversight, technical documentation, monitoring and (in many cases) conformity assessments. |
| Limited Risk | AI where the main concern is that users may not realise AI is involved. | Transparency duties. Users must be told when they are interacting with AI or receiving AI-generated or AI-altered content. |
| Minimal Risk | Everyday AI tools with very low potential for harm (such as spam filters and simple recommendations). | No additional risk-specific duties. Businesses still need basic responsible-use practices and staff AI literacy. |
Unacceptable Risk - Prohibited Uses
These are AI practices that the EU bans outright because of their potential for serious harm. Examples include untargeted scraping of facial images to build recognition databases, systems that manipulate vulnerable groups, emotion recognition used in schools or workplaces, and social scoring of individuals.
Australian SMEs rarely operate directly in these areas, but risk can emerge indirectly through third-party tools. Reviewing vendor features is an effective safeguard.
High Risk - Heavily Regulated AI
This is the category most relevant to Australian SMEs offering services to the EU.
A system is high-risk when it falls within specific categories listed in the Act. These include employment-related AI systems, education and assessment tools, systems affecting access to essential private services (such as credit decisions), certain health and safety uses, and other categories listed in Annex III. Not every HR or credit tool is automatically high-risk, but many familiar use cases fall into this group.
High-risk systems require strong controls around data quality, risk management, human oversight, documentation, transparency and ongoing monitoring.
One important clarification: the most detailed high-risk obligations apply primarily to providers (the party that develops or substantially modifies the AI system). Deployers (businesses that use AI built by someone else) face a narrower set of obligations focused on oversight, data suitability, transparency and correct system use. This distinction matters for Australian SMEs, because many will be deployers rather than providers.
Limited Risk - Transparency Duties
Some AI systems aren’t harmful but could mislead people if they don’t realise AI is involved. This category covers tools such as chatbots, AI-generated content that could appear authentic, and systems that synthetically alter media. The key duty here is transparency.
For example:
- An Australian retailer using an AI chatbot for EU customers must disclose that customers are interacting with AI.
- A marketing agency using AI-generated images or video for EU clients may need to label synthetic content when it could reasonably be mistaken for genuine content.
Minimal Risk - Everyday AI
Minimal-risk systems include spam filters, writing assistants, search and recommendation engines and everyday productivity tools. No specific EU AI Act obligations apply, although documenting their use is still advisable.
Providers vs Deployers - Understanding Your Role
Most Australian SMEs will be considered deployers, meaning they use AI systems created by others. Deployers must follow provider instructions, ensure that data they supply is accurate and lawful, supervise outputs where required, and provide any necessary transparency notices.
A business becomes a provider when it develops an AI system, fine-tunes a model, significantly modifies an existing system, or integrates AI into its own product and places that product on the EU market. The legal threshold is whether the business makes a substantial modification to the system, changing the intended purpose or affecting compliance.
The EU has also established a central regulator known as the AI Office, which coordinates enforcement and oversees general-purpose AI models across Europe. This helps create consistent expectations for companies outside Europe.
General-Purpose AI (GPAI)
Large multi-use models such as GPT, Claude or Gemini are treated as general-purpose AI under the Act. The strongest obligations fall on the developers of these models, not the everyday business users who rely on them.
However, an Australian business may have obligations if it fine-tunes a GPAI model, integrates it into a high-risk tool, or provides downstream access to EU users in ways that change the risk profile. A legal check before scaling or commercialising a GPAI-enabled product is wise.
EU AI Act Timeline - Key Dates for SMEs
- August 2024: The AI Act entered into force.
- February 2025: Prohibitions on unacceptable-risk AI practices and certain AI literacy requirements apply.
- August 2025: Obligations for general-purpose AI models commence.
- August 2026 to August 2027: Most high-risk system obligations come into effect, depending on the category.
This phased rollout gives Australian SMEs time to build compliance.
Practical Steps for Australian SMEs
Even without heavy checklists, a few core actions will make compliance significantly easier.
First, work out whether the EU AI Act applies to you. This depends on where your users are, what your tool does and whether any decisions influenced by your AI occur inside the EU.
Second, map the AI you already use. Outline what tools you rely on, what data they use and how they influence decisions. This small exercise becomes the foundation of any compliance work.
Third, conduct simple risk assessments. Ask how each system affects people, what could go wrong and what human review is required. Short assessments for each system can reveal issues early.
Fourth, strengthen your data governance. High-risk systems must rely on accurate, lawful and representative data. Reviewing where your data comes from, how it is stored and whether cross-border transfers are lawful will help avoid future issues.
Finally, update contracts and internal processes. EU clients increasingly ask for AI-related assurances. Clear responsibilities around data, transparency and oversight protect your business and reduce the risk of taking on obligations unintentionally.
When to Seek Legal Support
Legal support is most helpful when you are unsure whether the Act applies, you operate in a sensitive area such as hiring or credit decisions, you modify or fine-tune AI systems, or EU clients ask for compliance documentation. Sprintlaw can help Australian SMEs map obligations, update contracts and set up practical processes that fit the scale of the business.
Final Thoughts
The EU AI Act is a significant step in global AI regulation, and it will shape expectations for responsible AI use for years to come. For most Australian SMEs, the first step is simply understanding whether the Act applies. The good news is that many AI systems fall into low-risk categories, and even high-risk obligations become manageable with advance planning.
By identifying the AI you use, improving data quality, strengthening oversight and being transparent with EU users, you can operate confidently in a changing regulatory environment. Where questions arise, targeted legal support can ensure your approach remains compliant, efficient and commercially practical.
If you would like a consultation on legal compliance with the EU AI Act, you can reach us at 1800 730 617 or team@sprintlaw.com.au for a free, no-obligations chat.