Alex is Sprintlaw’s co-founder and principal lawyer. Alex previously worked at a top-tier firm as a lawyer specialising in technology and media contracts, and founded a digital agency which he sold in 2015.
If you run a startup or small business online, chances are you’re doing more than just “having a website”. You might be hosting user comments, running a community, offering direct messages, letting customers upload content, or operating an online marketplace.
That’s exactly where the Online Safety Act 2021 can become relevant. While the law is often discussed in the context of big social media platforms, it can also affect smaller digital businesses - particularly where your product includes user-generated content, messaging, or community features.
Below, we’ll walk you through what the Online Safety Act 2021 is designed to do, the types of businesses it can apply to, and the practical steps you can take to reduce risk while keeping your platform (and your team) safe.
What Is The Online Safety Act 2021 (And Why It Matters For Businesses)?
The Online Safety Act 2021 is a Commonwealth law designed to improve online safety in Australia. It gives the eSafety Commissioner a range of powers to deal with harmful online content and, in some cases, require certain online services to take steps to prevent and respond to harm.
From a small business perspective, the key idea is this: if your business provides an online service where users can interact, share content, or message each other, you should understand whether your service falls within any of the Act’s specific regulatory schemes - because the main legal duties are often tied to those schemes (and are commonly triggered by complaints, notices, or removal processes).
Even if you’re not a large social platform, regulators generally look at what your product does (functionality) and how it’s used (risk profile), not just how big you are. That said, the Act does not automatically mean every SME has an ongoing duty to proactively monitor all content - many obligations in practice are notice-and-action based (for example, responding appropriately when content is reported or when a formal notice is received).
What The Law Is Trying To Prevent
The Online Safety Act 2021 covers several categories of harmful content and online harms, including:
- image-based abuse (including non-consensual sharing of intimate images)
- cyber-abuse (for example, serious harassment and intimidation)
- online content involving children (including very serious categories such as child sexual abuse material)
- certain harmful online content that can be subject to removal notices
For startups and SMEs, this can translate into a very practical question: if a user posts or sends something harmful through your service, what do you do next - and are you required to take action under a particular part of the Act?
Does This Replace Other Laws You Already Have To Follow?
No. The Online Safety Act 2021 sits alongside other legal obligations you may already be dealing with, such as privacy and consumer law.
For example, if you collect personal information through your platform, you’ll still need to think about whether you need a Privacy Policy and how you handle personal data more broadly. If you sell products or services online, your customer-facing terms also still matter.
Does The Online Safety Act 2021 Apply To Your Startup Or SME?
The Online Safety Act 2021 is broad, and it doesn’t only capture “social media companies”. However, it doesn’t automatically apply to every online business in the same way - obligations can depend on whether your business is operating a type of regulated “service” under the Act and which scheme is engaged (for example, certain removal notice regimes, cyber-abuse schemes, or image-based abuse frameworks).
You should pay close attention if your business offers any of the following:
- User-generated content (comments, posts, profiles, reviews, listings, uploads)
- Messaging or chat features (DMs, group chats, in-app messaging)
- Community functionality (forums, groups, creator communities, member spaces)
- Platforms for third-party sellers or providers (marketplaces, directories, gig platforms)
- Apps or online services used by (or accessible to) children
Even if your platform is primarily B2B, you may still be exposed if (for example) users can post content publicly, interact with others, or communicate within the system.
Common SME Scenarios Where The Law Can Become Relevant
Here are a few common examples we see in growing businesses:
- Marketplace startups: users create listings and upload images; disputes can escalate into harassment; scammers may post harmful content.
- SaaS platforms: you add a “community” tab, customer forum, or job board and suddenly you’re hosting content you didn’t write.
- Membership businesses: paid groups or “creator communities” can still become channels for bullying, threatening content, or image-based abuse.
- Education and coaching businesses: student communities can raise particular safety concerns, especially if under-18s are involved.
If any of these feel familiar, it’s worth taking online safety compliance seriously early - it’s much easier to build the right systems now than to retrofit them after an incident.
Key Risks And Obligations Under The Online Safety Act 2021 For SMEs
At a practical level, the Online Safety Act 2021 increases the likelihood that your business will need to:
- respond quickly to complaints about harmful content
- remove or restrict access to certain content (particularly where a valid complaint or notice is made under a relevant scheme)
- co-operate with formal processes (including notices) in more serious scenarios
- have clear rules and reporting pathways for users
For many SMEs, the “obligation” isn’t that you must monitor everything at all times. Instead, the real risk tends to arise when your service design makes harm more likely (for example, public posting, anonymous messaging, or easy image sharing) and you don’t have a reasonable way to receive, triage, and respond to reports - or you don’t respond appropriately if the eSafety Commissioner issues a notice under the Act.
1) User-Generated Content: You’re Not Just Building Features - You’re Hosting Risk
User-generated content is great for growth, SEO, and engagement. It can also introduce risk, including:
- defamation claims (separate to the Online Safety Act 2021)
- harassment or threats between users
- non-consensual sharing of images
- content involving children
That’s why clear platform rules are important. Many businesses use a mix of:
- Community Guidelines (what users can and can’t do)
- Acceptable Use Policy (technical and behavioural rules, often for SaaS)
- reporting and moderation processes (so users can flag issues and you can act quickly)
2) Complaints Handling: Speed And Process Matter
Online safety regulation is heavily focused on outcomes, and for many businesses the pressure point is what happens after a report is made (or a formal notice is received). If harmful material is on your platform, the key questions often become:
- How can someone report it?
- How quickly will you review it?
- What action will you take?
- How will you communicate your decision?
- How do you stop repeat behaviour?
If your business doesn’t have a clear process, your team may respond inconsistently under pressure (especially if the issue goes public, involves vulnerable users, or escalates quickly).
3) Services Used By Children: Extra Care Is Needed
If your platform is likely to be used by under-18s - even if that’s not your primary target market - you should treat this as a higher-risk area.
From a legal and reputational standpoint, child safety issues can escalate fast. It’s important to think through:
- age-gating and onboarding controls (where appropriate)
- stricter moderation for youth-facing areas
- how reports are triaged when they involve under-18s
- staff training and escalation pathways
4) Serious Content Categories: “We Didn’t Know” Usually Isn’t A Good Plan
Some categories of harmful content are so serious that you should have a plan in place before an incident occurs. This includes image-based abuse and child safety issues.
In practice, that means having:
- a clear internal incident response workflow
- a way to preserve evidence (where needed and lawful)
- clear rules for removing, blocking, or limiting access to content
- escalation steps (including when to get legal advice)
Many businesses align this with broader privacy and security planning too, including a Data Breach Response Plan where personal information may be involved.
Practical Compliance Steps: What You Can Do Now (Even If You’re Small)
If you’re building fast, it’s tempting to treat online safety as “something to deal with later”. But later is often when you’re juggling growth, fundraising, hiring, and customer issues - not when you want to be designing processes from scratch.
Here’s a practical, startup-friendly checklist you can work through.
1) Map Your Product’s Risk Areas
Start by listing every feature where users can interact or upload content. For each feature, ask:
- Can users post publicly or message privately?
- Can users upload images or videos?
- Can users be anonymous or use pseudonyms?
- Can users contact each other off-platform (links, contact details)?
- Is the feature likely to be used by minors?
This doesn’t need to be overly complex. Even a one-page internal document can help your team stay aligned.
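If your team would rather keep this risk map in a structured format than a document, a lightweight sketch like the one below can capture the same questions. This is only an illustration, not anything required by the Act - the feature names, flags, and the FeatureRisk structure are all hypothetical examples you would adapt to your own product.

```python
# A minimal sketch of a product risk map. All feature names and flags are
# illustrative assumptions - replace them with your own features.
from dataclasses import dataclass


@dataclass
class FeatureRisk:
    name: str                           # e.g. "public comments", "in-app DMs"
    public_posting: bool = False        # can users post publicly?
    media_uploads: bool = False         # can users upload images or videos?
    anonymous_users: bool = False       # can users be anonymous or pseudonymous?
    off_platform_contact: bool = False  # links, emails, phone numbers shared?
    likely_minor_users: bool = False    # is the feature likely used by under-18s?

    def risk_flags(self) -> list[str]:
        """Return the names of every risk flag that is switched on."""
        return [k for k, v in vars(self).items() if v is True]


# Example: a one-page risk map expressed as a simple list of features
risk_map = [
    FeatureRisk("marketplace listings", public_posting=True, media_uploads=True),
    FeatureRisk("direct messages", anonymous_users=True, off_platform_contact=True),
]

for feature in risk_map:
    print(feature.name, "->", feature.risk_flags() or "low risk")
```

Even in this simple form, the output gives you a quick view of which features carry the most exposure, which is usually enough to decide where your moderation effort should go first.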
2) Put Clear Rules In Writing (And Make Them Easy To Find)
It’s much easier to enforce standards when you’ve actually set them out clearly. Depending on your business, you might need:
- Website terms setting out rules for site use, user accounts, and your rights to remove content (your Website Terms and Conditions can be a key home for this)
- Community rules for behaviour, harassment, hate content, and image sharing
- Platform enforcement processes (warnings, suspensions, bans, and appeal options)
These documents don’t just help you “look professional” - they can help you act quickly, consistently, and defensibly when something goes wrong.
3) Build A Simple Reporting And Moderation Workflow
You don’t need a large trust-and-safety team to start. But you do need a workable process, such as:
- Report intake: a clear “Report” button and/or dedicated email.
- Triage: a quick way to sort urgent reports (threats, image-based abuse, child safety) from lower-risk reports (spam, mild rudeness).
- Decision: remove, restrict, warn, suspend, or ban.
- Record-keeping: log what happened, what you did, and why.
- User communication: short templates so responses are consistent.
As you scale, you can refine this into more formal playbooks. The key is to start with something your team can actually follow.
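To make the intake and triage steps concrete, here's a minimal sketch of how a small team might sort incoming reports. The categories, queue names, and response-time targets are illustrative assumptions only - they aren't drawn from the Act, and you'd set your own timeframes and actions.

```python
# A minimal sketch of report intake and triage. Categories, queues, actions
# and review targets are illustrative assumptions, not legal requirements.
from dataclasses import dataclass
from datetime import datetime, timezone

URGENT_CATEGORIES = {"threats", "image_based_abuse", "child_safety"}


@dataclass
class Report:
    reporter: str
    content_id: str
    category: str            # e.g. "spam", "harassment", "image_based_abuse"
    received_at: datetime


def triage(report: Report) -> dict:
    """Sort a report into an urgent or routine queue and suggest next steps."""
    urgent = report.category in URGENT_CATEGORIES
    return {
        "content_id": report.content_id,
        "queue": "urgent" if urgent else "routine",
        # Illustrative internal targets only - choose your own timeframes.
        "review_within_hours": 2 if urgent else 48,
        "possible_actions": ["remove", "restrict", "warn", "suspend", "ban"],
    }


# Example usage
report = Report("user123", "post-456", "image_based_abuse",
                datetime.now(timezone.utc))
print(triage(report))
```

The point of a sketch like this isn't the code itself - it's that urgent categories are defined in advance, so whoever picks up the report doesn't have to decide under pressure what counts as urgent.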
4) Review Your Privacy Settings And Data Handling
Online safety incidents often overlap with privacy issues - for example, where personal information is disclosed, doxxing occurs, or intimate images are shared.
As a baseline, check that your privacy approach matches what you say you do publicly, including your Privacy Policy. If you’re collecting sensitive information or handling higher-risk communities, it’s worth getting advice early so your policies and practices align.
5) Train Your Team (Even If It’s Only Three People)
In a small business, moderation and complaints often end up in a founder’s inbox, a support channel, or with a junior team member who’s doing their best under pressure.
Simple training can include:
- what to escalate immediately (threats, child safety issues, image-based abuse)
- what not to do (for example, making promises you can’t keep, or engaging in arguments)
- how to keep communications neutral and consistent
- how to preserve internal records
This also protects your team’s wellbeing - handling distressing content without a process can be extremely stressful.
What Legal Documents Can Help Your Business Manage Online Safety Risks?
Most online safety issues become harder when there’s uncertainty about your rights, your users’ responsibilities, and what happens if rules are broken.
Well-drafted legal documents help you set expectations early, reduce disputes, and act quickly when you need to remove content or restrict users.
Depending on your platform, these are common documents to consider:
- Website Terms and Conditions: set out the rules for using your website or platform, account controls, and your rights to remove content or restrict users.
- Community Guidelines: clear behavioural standards for user posts, harassment, hate content, and image-sharing.
- Acceptable Use Policy: particularly useful for SaaS and platforms with workplace use, describing prohibited activity and misuse.
- Privacy Policy: explains how you handle personal information, including what you collect, why, and who you share it with (and should align with your actual practices).
- Data Breach Response Plan: an internal playbook for when something goes wrong involving personal information or security.
Not every business needs every document from day one, but if your growth plan involves community features, user uploads, or messaging, these tend to become “must-haves” sooner than you think.
What Should You Do If There’s Harmful Content Or A Serious Complaint?
When something serious happens on your platform, it’s normal to feel like you need to act immediately - and you do. But acting quickly doesn’t mean acting randomly.
Here’s a structured approach many startups and SMEs use to stay calm and respond effectively.
1) Prioritise Safety First
If there’s an immediate risk to someone’s safety (for example, threats of violence, stalking behaviour, or severe harassment), your priority should be to:
- restrict or remove access to the content (where appropriate)
- limit the offender’s access (temporary lock or suspension, if needed)
- preserve relevant logs or evidence (as appropriate)
Having clear internal escalation steps helps you avoid delays when time matters.
2) Document Your Decision-Making
Even if you’re small, record-keeping matters. A basic incident log should capture:
- what was reported (and by whom)
- the date/time you received the report
- the steps you took
- why you made that decision
- any communications sent to users
This can help if a complaint escalates, if users dispute your actions, or if you later need to show you acted responsibly.
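If you'd rather keep this log in your own systems than in a spreadsheet, a simple append-only record works well. The sketch below is one possible approach under stated assumptions - the field names mirror the list above, and the storage choice (JSON lines appended to a file) is purely illustrative.

```python
# A minimal sketch of an incident log entry. Field names follow the checklist
# above; the file-based storage is an assumption - use whatever your team has.
import json
from datetime import datetime, timezone


def log_incident(path: str, *, reported_by: str, description: str,
                 steps_taken: list[str], reasoning: str,
                 user_communications: list[str]) -> None:
    """Append one incident record so decisions can be evidenced later."""
    entry = {
        "received_at": datetime.now(timezone.utc).isoformat(),
        "reported_by": reported_by,
        "description": description,
        "steps_taken": steps_taken,
        "reasoning": reasoning,
        "user_communications": user_communications,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example usage
log_incident(
    "incident_log.jsonl",
    reported_by="user123",
    description="Threatening comment on listing 456",
    steps_taken=["comment removed", "account temporarily suspended"],
    reasoning="Breach of community guidelines - threats of violence",
    user_communications=["removal notice sent to poster",
                         "outcome email sent to reporter"],
)
```

An append-only log like this is deliberately boring: each entry is timestamped, nothing is edited after the fact, and you can export it easily if you ever need to show how and why you acted.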
3) Communicate Clearly (And Carefully)
It’s usually best to keep messages short, factual, and non-inflammatory. Avoid legal conclusions or emotional language.
Where possible, refer back to your written rules (for example, your platform terms or community standards), rather than debating the behaviour itself.
4) Know When To Escalate For Legal Advice
Some incidents can create wider legal exposure beyond online safety law, including defamation, privacy complaints, employment issues (if staff are involved), or law enforcement issues.
If you’re unsure, it’s a good time to get advice - particularly if:
- there’s a risk of serious harm
- the issue involves minors
- intimate images are involved
- you’ve received a formal complaint or notice
- the matter is attracting public attention
Getting guidance early can help you respond confidently without making things worse.
Key Takeaways
- The Online Safety Act 2021 isn’t only about big platforms - if your startup or SME hosts user-generated content, messaging, or communities, it can be relevant to your risk profile (and the specific legal duties will often depend on whether you’re a regulated service under a particular scheme).
- Online safety compliance is largely about having clear, workable processes: reporting pathways, fast triage, consistent moderation, and good record-keeping - especially so you can respond properly to complaints or notices.
- Clear platform rules in writing (like terms, community standards, and acceptable use rules) help you act quickly and reduce disputes when harmful content appears.
- Serious incidents often overlap with privacy and security, so your public policies and internal response plans should align with how your platform actually operates.
- Building an online safety approach early is usually faster, cheaper, and less stressful than trying to fix things after a major complaint or incident.
Note: This article is general information only and is not legal advice. Online safety obligations can vary depending on your platform features, user base, and whether a particular regulatory scheme applies. If you’re dealing with serious content (including child safety matters or image-based abuse), you may need urgent specialist support.
If you’d like help reviewing your platform’s online safety risks or putting the right documents and processes in place, you can reach us at 1800 730 617 or team@sprintlaw.com.au for a free, no-obligations chat.