How to Audit Your AI Security Before Someone Else Does

A self-assessment business professionals can run today — no IT team required

While volunteering with NPower recently, I spent time with a cohort of students who want to become cybersecurity professionals. I especially want to give a quick shout-out to Aidan Gross, Tara Mason, and Paola Blonde Ndjeudja Tchouake, whose curiosity about the field helped shape the direction of this piece.

In those conversations, one thing became obvious: people do not just want to know that AI matters in cybersecurity. They want to know how to use it to build real skill.

That changed the assignment for this issue. Instead of using AI to produce a generic security checklist, I wanted to show a workflow that teaches you how to do something useful. The workflow in this lesson starts with a real report, moves into Claude for guided reasoning, connects to GitHub so the model can inspect actual files, and then ends with the step many people skip: analyzing the output and reverse-engineering how the model reached its conclusions.

That got me thinking about how we approach personal security in the age of AI. You've spent the last two years learning how to use AI tools more effectively. Now there's a second skill worth learning: how to use them without handing your business data to someone you've never met.

This is not a theoretical risk. In February 2025, a cybercriminal offered 20 million OpenAI account credentials for sale on dark web markets, harvested by infostealer malware from users' compromised devices. The attackers did not breach OpenAI's systems. They stole login credentials and then walked straight into complete chat histories — everything typed into those accounts. Every internal strategy question. Every personnel situation. Every client detail shared while asking for help drafting an email.

Most business professionals have not audited their AI usage. They've adopted tools fast, shared context liberally to get better answers, and trusted that "enterprise" in a product name means their data is safe. Often it doesn't.

This lesson teaches you how to run a personal AI security audit in under 30 minutes. No IT department required.

LISTEN TO THE AI ENTERPRISE ON THE ROGUE AGENTS PODCAST

This is my latest project. We do have audio summaries for each newsletter, but they're simple text-to-speech and not ideal for listening. So we created this podcast to deliver a weekly recap of the newsletters. It's still a work in progress: right now you get a pretty good recap of the previous week's issues, and over time it will get better. That's the plan.

This week’s episode breaks down Anthropic’s tightly restricted Project Glasswing, Meta’s fast-moving Muse Spark push, and the growing pressure chip tariffs are putting on everyone outside the hyperscaler tier. It closes with a fast set of quick hits on NotebookLM in Gemini, Gemma 4, Coefficient Bio, and Utah’s move on AI prescriptions.

AI LESSON

How to Audit Your AI Security Before Someone Else Does

A self-assessment business professionals can run today — no IT team required

You don't need a security team to take this seriously. You need 30 minutes, a browser, and a willingness to be honest about what you've been pasting into chat boxes.

Here's how to do it.

Step 1: Run a Personal AI Tool Inventory

Before you can secure your AI usage, you need to know what you're using.

Open a fresh document and list every AI tool you've touched in the past 90 days — at work, at home, on your phone. Include the obvious ones (ChatGPT, Claude, Copilot) but also the less obvious: AI writing assistants built into email clients, AI features inside Google Docs, Notion AI, AI summarizers in your browser, AI-powered scheduling tools, and any AI features baked into software you already use.

For each tool, note three things: whether you're using a free or paid account, whether it's a personal account or an employer-provided account, and whether you've ever typed anything into it that you wouldn't want made public.

That last question is the diagnostic. Most people who answer honestly find at least one tool where the answer is yes.

Time: 10 minutes

What you're looking for: Any tool where you've shared sensitive work content without knowing how that data is stored or used
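
If you'd rather keep the inventory somewhere more durable than a scratch document, here's a minimal Python sketch of the same exercise. Every tool name and answer in it is a hypothetical example, not a recommendation; swap in whatever your own 90-day review turns up.

```python
import csv

# A minimal sketch of the Step 1 inventory as structured data.
# Every tool name and answer below is a hypothetical example;
# replace them with what your own 90-day review turns up.
inventory = [
    # (tool, tier, account owner, ever shared sensitive content?)
    ("ChatGPT", "free", "personal", True),
    ("Claude", "paid", "personal", False),
    ("Copilot", "paid", "employer", False),
    ("Notion AI", "free", "employer", True),
]

with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["tool", "tier", "owner", "shared_sensitive"])
    writer.writerows(inventory)

# The diagnostic from this step: any tool where the last answer
# is True belongs at the top of your Step 2 settings review.
flagged = [tool for tool, _, _, sensitive in inventory if sensitive]
print("Tools needing a settings review:", flagged)
```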

Step 2: Review the Data Retention Settings on Your Three Most-Used Tools

Free AI tools generally use your conversations to improve their models unless you explicitly opt out. Enterprise accounts and paid tiers often offer different defaults — but "different" doesn't always mean "off." Defaults matter because most people never change them.

For ChatGPT, go to Settings → Data Controls. Confirm whether "Improve the model for everyone" is toggled on or off for your account type. Note that free consumer accounts and paid individual accounts have different defaults, and team or enterprise accounts may have organization-level controls.

For Claude, go to Privacy Settings and review whether your conversations are used for training. Anthropic's policy distinguishes between API access and consumer product usage; on the consumer side, your conversations may be used for training unless you explicitly opt out.

For Microsoft Copilot, check whether you're accessing it through a personal Microsoft account or a work account connected to a Microsoft 365 tenant. Employer-managed accounts have different data handling governed by your organization's agreement, not Microsoft's consumer terms.

Time: 10 minutes

What you're looking for: Any tool where the default setting allows your content to be used for model training, and you haven't made a deliberate choice about that
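
One habit that pays off later: write down what you found. Below is a minimal sketch that saves a dated snapshot of your settings review as JSON, so the quarterly re-check suggested at the end of this lesson has something concrete to compare against. The tools and setting names here are hypothetical examples, not any vendor's actual defaults.

```python
import json
from datetime import date

# A minimal sketch for recording what you found in Step 2.
# The tools and values below are hypothetical examples, not a
# statement of any vendor's actual defaults.
snapshot = {
    "reviewed_on": date.today().isoformat(),
    "tools": {
        "ChatGPT": {"training_opt_out": True, "memory_enabled": False},
        "Claude": {"training_opt_out": True},
        "Copilot": {"account": "work", "governed_by": "M365 tenant"},
    },
}

# One dated file per review; keep the old ones so you can see
# exactly which defaults changed between reviews.
filename = f"ai_settings_{snapshot['reviewed_on']}.json"
with open(filename, "w") as f:
    json.dump(snapshot, f, indent=2)

print(f"Saved snapshot to {filename}")
```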

Step 3: Apply the Substitution Test to Your Prompt Habits

This is the most useful diagnostic and it takes about five minutes once you know what to do.

Think of the last five substantive prompts you sent to any AI tool. For each one, substitute the names, numbers, and specific details with placeholders and ask yourself: if I posted this prompt publicly with those placeholders, would a reader know I'd shared something sensitive?

Common high-risk prompt patterns include:

Client or customer information:

"Summarize this email thread with [client] about their Q2 budget challenges and draft a response."

The AI doesn't need the client name or budget figure to help you write. Substitute them before you paste.

Internal personnel situations:

"Help me draft a performance improvement plan for an employee who..."

Even without a name, specific behavioral details can be identifiable. Keep it general.

Financial data: Revenue numbers, headcount, pricing models, and customer contract values can all be useful to a competitor. Consider whether the AI actually needs that level of specificity to help you.

Legal or compliance details: Anything that would be covered by attorney-client privilege or that involves regulatory exposure deserves extra caution.

Time: 5 minutes

What you're looking for: Patterns in your own usage where you're sharing more context than the AI actually needs to help you
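
If you want to make the substitution test a habit rather than a one-off exercise, you can automate a rough first pass. Here's a sketch in Python that scrubs a few obvious kinds of specifics, like dollar amounts, email addresses, and capitalized name pairs, before a prompt ever leaves your machine. The patterns are illustrative and will miss plenty; treat the output as a starting point for your own review, not as real redaction.

```python
import re

# A rough first pass at the substitution test: swap obvious
# specifics for placeholders before pasting a prompt anywhere.
# These patterns are illustrative and will miss plenty.
PATTERNS = [
    (re.compile(r"\$[\d,]+(?:\.\d+)?[KMBkmb]?"), "[AMOUNT]"),
    (re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"), "[EMAIL]"),
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[NAME]"),
]

def substitute(prompt: str) -> str:
    """Replace names, numbers, and contact details with placeholders."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = ("Summarize this email thread with Jordan Alvarez about "
       "their $250K Q2 budget shortfall and draft a response to "
       "j.alvarez@example.com.")
print(substitute(raw))
# Summarize this email thread with [NAME] about their [AMOUNT]
# Q2 budget shortfall and draft a response to [EMAIL].
```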

How to Access Privacy-Protective Settings Across Major Tools

Most of the major AI platforms have added more granular privacy controls in the past 12 months. The challenge is they're not always easy to find.

ChatGPT (OpenAI): Settings → Data Controls → Model Training toggle. Also review "Memory" settings — ChatGPT can retain context about you across conversations, which is useful but means more information is stored.

Claude (Anthropic): Privacy settings are accessible from your account settings. Anthropic's enterprise API product and its consumer product have different data handling policies.

Microsoft Copilot: The settings depend entirely on which version you're using. Copilot integrated into Microsoft 365 through an enterprise license is governed by your organization's contract. The free consumer version follows Microsoft's consumer privacy terms.

Google Gemini: My Account → Data & Privacy → Gemini Apps Activity. Turning this off prevents conversations from being saved and used for model improvement.

For any tool where you're unsure, the safest default is to assume your conversations may be stored and used unless you've taken an explicit action to change that.

What This Audit Can't Do

This audit addresses your own behavior. It doesn't address your organization's policies, your vendors' security posture, or the AI tools your colleagues may be using in ways that expose shared data.

It also won't protect you against credential theft — which is how most of those 20 million ChatGPT accounts were compromised. Using a strong, unique password and enabling multi-factor authentication on every AI tool you use is a prerequisite, not a supplement.

If you work in a regulated industry — healthcare, financial services, legal, or government contracting — this self-audit is a starting point, not a complete solution. Those environments often have compliance obligations that require formal policies, vendor agreements, and documented controls that go well beyond what any individual employee can manage alone.

Finally, this audit reflects how these tools work today. AI tool privacy policies and default settings change. The settings you review today may be different in six months.

Getting Started Today

Block 30 minutes on your calendar before the end of this week. Run steps one through three. You'll likely find that your AI usage is more exposed than you realized in one or two specific areas, and more controlled than you thought in others.

The point isn't to stop using AI tools. The real question in 2026 is no longer "Should we use AI?" but "Do we actually know what happens to the data we put into AI tools?" Most businesses don't. That's what this audit is for.

Once you've run it, the next step is to establish a repeating habit — a quarterly review of your AI tool list and settings — so the audit stays current as the tools evolve.
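
If you saved dated snapshots during Step 2, that quarterly review can be as simple as a diff. Here's a minimal sketch that compares two snapshot files and flags new tools and changed settings; the filenames are hypothetical examples.

```python
import json

# A minimal sketch of the quarterly review: compare this quarter's
# settings snapshot (see Step 2) against last quarter's and flag
# anything that changed. Filenames are hypothetical examples.
with open("ai_settings_2026-01-05.json") as f:
    previous = json.load(f)
with open("ai_settings_2026-04-06.json") as f:
    current = json.load(f)

for tool, settings in current["tools"].items():
    old = previous["tools"].get(tool)
    if old is None:
        print(f"New tool since last review: {tool}")
    elif old != settings:
        print(f"Settings changed for {tool}: {old} -> {settings}")

for tool in previous["tools"]:
    if tool not in current["tools"]:
        print(f"Dropped from inventory: {tool}")
```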

AI EXTRA CREDIT

Keep learning with these upcoming free virtual events from the All Things AI community.

April 22nd | Live at The American Underground | Building Your Startup in the Age of AI — In this session, Mark Hinkle is joining forces with The American Underground as part of Raleigh Durham Startup Week to share what he's learned the hard way about where AI actually delivers for early-stage companies. From capital strategy to agent-powered execution, this session is for founders who want to move faster and build smarter.

May 6th | LinkedIn Live | Why Jensen Huang's Betting on Confidential Computing in the AI Factory — In this session, Mark Hinkle sits down with Aaron Fulkerson, CEO of Opaque Systems — the leading Confidential AI platform born from UC Berkeley's RISELab and backed by Intel, Accenture, and many others — for a conversation that will fundamentally change how you think about enterprise AI.

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
