Tools like ChatGPT can help your team draft emails, summarise documents and brainstorm ideas in minutes. That speed is great for small businesses.
But there’s a big question we hear from business owners every week: is ChatGPT confidential?
If your team is pasting customer details, contracts or internal strategies into an AI tool, you’re potentially creating legal and commercial risk. The good news is you can get the benefits of AI while protecting your business - if you put the right guardrails in place.
In this guide, we break down how confidentiality works with AI tools, the key risks for Australian businesses, the laws that apply, and a practical checklist to use ChatGPT safely at work.
Why Confidentiality Matters When Your Team Uses ChatGPT
Confidentiality isn’t just about being cautious. It underpins trust with your customers, partners and staff. If sensitive information leaks (even accidentally), the consequences can include reputational damage, breach of contract, and regulatory issues.
For small businesses, the stakes are often higher because you likely hold a lot of “all-in-one” information - sales pipeline notes, customer support messages, supplier pricing and IP - in a few systems. If any of that content leaves your control, it can be hard to pull it back.
That’s why it’s important to treat AI tools like any other third-party service provider: understand what data they collect, how it’s used, where it’s stored, and who can access it. Then set clear rules for your team so nothing sensitive is shared without proper safeguards.
Is ChatGPT Confidential By Default? The Short Answer
No tool should be assumed confidential by default. Whether an AI tool is confidential depends on the specific product, plan, settings and provider terms you’ve agreed to.
Generally, consumer or free versions of AI tools may use inputs to improve the service (via model training, analytics or moderation). Business or enterprise plans may offer stronger controls, including settings to limit data retention or training use, enterprise-grade security, and contractual commitments about data handling.
In practice, you should take a “privacy by design” approach: assume prompts and outputs could be visible to the provider or its subprocessors unless your contract and settings say otherwise. If you need confidentiality, you’ll want appropriate vendor terms, technical controls and internal policies to match.
What Risks Should Small Businesses Consider?
Before your team pastes anything into ChatGPT (or any AI tool), map the main risks and decide how you’ll reduce them.
1) Personal Information And Privacy
If prompts contain personal information (names, emails, phone numbers, health information or employee details), you may trigger obligations under the Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs). You should only collect, use and disclose personal information where you have a lawful basis, and you must handle it securely and transparently.
If your website or app collects personal information, you’ll also need a clear Privacy Policy that explains what you collect and how you use it, including any use of AI processing where relevant.
2) Client Confidentiality And Contractual Promises
Many B2B and professional services agreements include confidentiality clauses. Sharing client materials with an external tool might count as disclosure unless your contract permits it or you have the client’s consent.
If you plan to use AI to support client work, consider updating your client terms to reflect this, and use a Non-Disclosure Agreement when exchanging sensitive information with third parties.
3) Intellectual Property (IP) Leakage
Internal strategies, code snippets, product designs and pricing models are all valuable IP. If you paste them into an AI prompt, that content may be processed outside your environment. Even if the provider promises not to train on your data, there’s still a risk of accidental disclosure through logs, support tickets, or misconfigured access.
Adopt a “no confidential IP in prompts” rule unless you have enterprise-grade controls and a contract that covers confidentiality, security and IP ownership.
4) Inaccurate Or Misleading Outputs
AI tools can be wrong, outdated or overconfident. If you publish or act on AI-generated content without proper review, you risk breaching the Australian Consumer Law (misleading or deceptive conduct), making inaccurate claims, or sharing non-compliant advice.
Always require human review for client-facing or legally sensitive outputs.
5) Data Location, Security And Retention
Where data is stored and how long it is kept matters. Depending on your sector, you may have requirements around offshore storage, encryption standards or retention limits. Make sure your vendor discloses data locations, security certifications and deletion options - and that these align with your obligations and risk appetite.
How To Use ChatGPT Safely In Your Business (Step-By-Step)
Here’s a practical framework to get the benefits of AI without compromising confidentiality.
1) Set A Clear AI Use Policy
Start with simple, written rules: what your team can and can’t paste into AI tools, approved use cases, and required checks before sharing any customer or client content. A practical way to do this is to implement a tailored Generative AI Use Policy so everyone is on the same page.
2) Classify Your Data
Create three buckets your team can understand at a glance:
- Public or marketing-friendly (safe to use as examples);
- Internal but non-sensitive (drafts, templates without real names);
- Confidential or personal (client files, pricing, identifiable data) - do not share without specific approval and controls.
3) Choose The Right Plan And Settings
If you'll be using AI with business information, consider a business or enterprise plan that provides privacy controls, admin dashboards, SSO, audit logs, and options to disable training and retention. Configure settings centrally so staff can't change them ad hoc.
4) Redact Or Anonymise Inputs
Require staff to strip out names, contact details, dollar figures and client identifiers before prompting. If you need an accurate analysis of sensitive documents, use an internal tool or a vendor under a signed Data Processing Agreement with strict confidentiality and security obligations.
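If you want to make the redaction step easier to apply consistently, a lightweight pre-prompt script can help. Here's a minimal sketch in Python - the patterns and placeholder labels are illustrative only, not a vetted redaction tool. Simple regexes catch structured identifiers like emails, phone numbers and dollar figures, but they won't catch names or free-text details, so a human check (or a proper de-identification tool) is still needed before anything sensitive goes into a prompt.

```python
import re

# Illustrative patterns only - tailor these to the identifiers your business actually handles.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[PHONE]": re.compile(r"(?:\+61|0)(?:[ -]?\d){9}"),    # rough Australian phone number format
    "[AMOUNT]": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),  # dollar figures
}

def redact(text: str) -> str:
    """Swap obvious identifiers for placeholder tokens before the text is used in a prompt."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

example = "Client Jane Doe (jane@example.com, 0412 345 678) accepted the $14,500.00 quote."
print(redact(example))
# Prints: Client Jane Doe ([EMAIL], [PHONE]) accepted the [AMOUNT] quote.
# Note the name is untouched - regex alone won't catch names, so human review still matters.
```

In practice, you'd adapt the patterns to whatever identifiers your business actually deals with (ABNs, client codes, addresses) and build the step into your team's normal workflow, rather than relying on staff to remember it each time.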
5) Bake In Human Review
AI can speed up drafting, but people must verify accuracy and context. Establish a quick review checklist for anything client-facing or compliance-related: facts checked, claims substantiated, tone matched to your brand, and no confidential details included by accident.
6) Update Your Contracts And Customer Disclosures
If you'll be using AI to deliver services, make sure your customer contracts allow it, reflect how you handle data, and set boundaries on liability. Your public-facing disclosures should align with your Privacy Policy so customers understand how their data may be processed.
7) Strengthen Security And Access Controls
Apply the same discipline you use for any SaaS platform: single sign-on, role-based access, prompt hygiene training, and an Information Security Policy covering appropriate use, passwords, devices and incident response.
8) Prepare For Incidents
Even with good controls, mistakes can happen. Have a tested Data Breach Response Plan so your team knows how to escalate, contain and notify if information is shared improperly, including when the Notifiable Data Breaches scheme might apply.
What Laws Apply In Australia?
Your legal responsibilities don’t change just because you’re using a new tool. The following laws commonly apply when Australian businesses use AI.
Privacy Act 1988 (Cth) And The APPs
If you handle personal information, you must comply with the Australian Privacy Principles. Key points include only collecting what you need, using it for the purpose collected (or a related one the person would reasonably expect), storing it securely, and being transparent about third-party disclosures and overseas transfers.
A practical way to embed this compliance is to maintain an up-to-date Privacy Policy, keep records of your data flows, and ensure any AI vendor that processes personal information is covered by a suitable Data Processing Agreement.
Notifiable Data Breaches Scheme
If a data breach involving personal information is likely to result in serious harm, you may need to notify affected individuals and the OAIC. Having a documented Data Breach Response Plan helps you assess and act quickly.
Australian Consumer Law (ACL)
AI-generated marketing claims still need to be accurate and verifiable. Misleading or deceptive conduct can attract penalties. Build human review and sign-off into your workflows, and keep substantiation for any claims the AI suggests.
Confidentiality And IP
Equitable duties of confidence and contractual confidentiality clauses can apply to information you hold for clients, suppliers and partners. If you disclose that information to an AI provider without permission or adequate safeguards, you could be in breach. Use robust confidentiality clauses in your customer and supplier contracts, and deploy an internal Generative AI Use Policy so staff know the boundaries.
Employment And Workplace Policies
Set clear expectations for staff about responsible AI use, acceptable prompts, and review standards. Align this with your broader Information Security Policy and training.
What Legal Documents Should You Put In Place?
You don’t need an army of documents to use AI safely, but a few targeted tools will make a big difference.
- Generative AI Use Policy: Sets out approved tools, do/don’t rules for prompts, review requirements and escalation pathways. A tailored Generative AI Use Policy helps your team move fast without guesswork.
- Privacy Policy: Explains how you collect and use personal information, including any AI-related processing that’s relevant to customers or users. Keep your Privacy Policy consistent with your actual practices.
- Data Processing Agreement (DPA): Contract terms with AI or analytics vendors that process personal information on your behalf (confidentiality, purpose limitation, security, subprocessor controls, deletion and audits). See Data Processing Agreement.
- Non-Disclosure Agreement (NDA): When you collaborate with agencies, contractors or consultants who might use AI in delivery, an NDA helps protect your confidential information.
- Information Security Policy: Covers password management, access control, device use, incident response and vendor risk management. A practical Information Security Policy supports day-to-day decisions.
- Data Breach Response Plan: A step-by-step playbook so your team can identify, contain and assess incidents quickly, including when to notify. A tested Data Breach Response Plan saves time when it counts.
Depending on your model, you may also review your client terms, supplier agreements and marketing approvals to make sure they reflect AI-assisted delivery and accuracy checks. If you need one-off advice on how these pieces fit together for your business, our team provides practical privacy advice aligned to the way you actually operate.
Practical Do’s And Don’ts For Using ChatGPT At Work
- Do use AI for generic tasks: brainstorming headings, summarising non-sensitive text, first-draft templates, tone adjustments.
- Don’t paste confidential client information, identifiable personal data or unique pricing unless you have enterprise controls and an appropriate contract in place.
- Do anonymise and redact: remove names, numbers, locations and proprietary details before prompting.
- Don’t publish AI output without human review, especially claims that could be misleading or regulated.
- Do train your team: short, practical training on safe prompts and red flags goes a long way.
- Don’t forget vendor management: review provider terms, configure privacy settings, and keep an internal register of tools in use.
Key Takeaways
- ChatGPT isn’t confidential by default - confidentiality depends on your plan, settings and contract with the provider.
- Treat AI tools like any other vendor: check data use, storage, retention and access, and put contractual controls in place.
- Map the main risks (privacy, confidentiality, IP and accuracy) and set simple rules your team can follow day to day.
- Use core documents such as a Generative AI Use Policy, Privacy Policy, DPA, NDA, Information Security Policy and a Data Breach Response Plan to protect your business.
- Australian laws still apply: the Privacy Act and APPs, Notifiable Data Breaches, and the Australian Consumer Law all remain relevant when AI is involved.
- With the right guardrails, small businesses can enjoy AI’s benefits while safeguarding client trust and compliance.
If you’d like a consultation on using ChatGPT and AI tools safely in your business, you can reach us at 1800 730 617 or team@sprintlaw.com.au for a free, no-obligation chat.