This week in San Francisco, I watched some of the brightest minds in AI from Google, AMD, Microsoft, IBM, Anthropic, and Berkeley wrestle with a problem that's keeping executives awake at night:

How do you secure systems that can think faster than humans can react?

That reality came into full focus at this year's Confidential Computing Summit hosted by Opaque with the Confidential Computing Consortium.

Researchers from academia issued a similar warning: in 2025, risk is no longer just about breached endpoints—it's about autonomous systems acting before a human can even respond.

Aaron Fulkerson, my co-host on AI Confidential, said it best:

"AI is now human actions at machine speed."

We've automated intelligence, yet most organizations still rely on legacy models for trust, identity, and permission.

The disconnect is staggering. Enterprises are embedding AI agents everywhere, often faster than they're building guardrails. The result? Tools that can outthink attackers—but also be tricked into leaking credentials, downloading sensitive data, or granting unauthorized access.

While generative AI unlocks unprecedented capabilities, it's simultaneously rewriting the entire threat landscape. The data backs this up, and the stakes couldn't be higher.

The conversation made one thing clear: AI is simultaneously the best and worst thing to happen to cybersecurity. But that’s not something we should fear, just something we should be prepared for. Let me walk you through it.

FROM THE ARTIFICIALLY INTELLIGENT ENTERPRISE NETWORK

🎯 The AI Marketing Advantage - Altman’s Warning on AI’s Trajectory

📚 AIOS - This is an evolving project. I started with a 14-day free AI email course to get smart on AI. But the next evolution will be a ChatGPT Super-user Course and a course on How to Build AI Agents.

AI DEEP DIVE

Confidential AI

Security at machine speed

The integration of artificial intelligence (AI) into enterprise operations has significantly transformed the cybersecurity landscape. While AI offers enhanced capabilities for threat detection and response, it also introduces new vulnerabilities and challenges.

In a conversation for AI Confidential (coming soon), Anthropic CISO Jason Clinton pointed out that frontier model providers need to take a proactive approach to model security. Instead of waiting to see if Claude Opus 4 becomes dangerous, they're locking it down now, based on early warning signs alone. The concern? The AI might help with chemical, biological, radiological, or nuclear threats.

So Anthropic built safeguards. These include systems that automatically block dangerous responses, top-tier security for the AI's code, and monitoring to catch any data leaks. It's a "secure first, ask questions later" approach that should be the default for the industry.

But this is only one of many aspects we need to consider as AI permeates our businesses.

Proliferation of AI Tools and Associated Risks

The adoption of generative AI tools has surged, with organizations deploying an average of 66 such tools within their environments. However, this rapid integration often outpaces the development of corresponding security measures. A significant concern is the potential for malicious actors to embed harmful prompts in seemingly benign communications, exploiting AI agents to perform unintended actions.

Further highlighting the risks, a survey by Dimensional Research and SailPoint found that 80% of companies had experienced unintended actions by AI agents, including unauthorized access (39%), sharing inappropriate data (33%), and downloading sensitive content (32%). Alarmingly, 23% reported instances where AI agents were deceived into revealing credentials. Despite these threats, only 44% of organizations have formal governance policies in place for AI agents.

AI-Driven Cyber Threats Escalate

Cybercriminals are increasingly leveraging AI to enhance the sophistication and scale of their attacks. Automated scanning activities have risen to 36,000 scans per second, marking a 16.7% year-over-year increase. This escalation has contributed to a 42% surge in credential-based targeted attacks.

In the financial sector, AI-fueled fraud has led to over £1 billion in losses, with 3.3 million incidents reported—a 12% increase from the previous year. Fraudsters are utilizing generative AI, deepfakes, and voice cloning to execute convincing scams, often outpacing the defensive capabilities of financial institutions.

Data Exposure and Governance Challenges

A comprehensive analysis of 1,000 organizations revealed that 99% had sensitive data accessible to AI tools, posing significant risks of data breaches. Additionally, 98% had unverified applications, including shadow AI, within their environments. These findings underscore the urgent need for robust data governance and access controls in the age of AI.

Many of the industry's smartest technical leaders think the answer could come from privacy-preserving techniques like confidential computing.

What Is Confidential Computing?

Confidential computing refers to the use of hardware-based secure enclaves to protect data during processing. Unlike traditional security protocols, which protect data at rest or in transit, confidential computing ensures that even when data is being computed upon—such as during model inference—it remains inaccessible to the infrastructure operator, cloud provider, or external threat actors.
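
To make that concrete, here is a minimal sketch of the client-side pattern in Python. The enclave client, its methods, and the attestation evidence fields are assumptions for illustration rather than any specific vendor's API; the point is simply that plaintext never leaves your boundary until the enclave proves what it's running.

```python
# Hypothetical attestation flow. Names, methods, and the evidence format are
# illustrative only; real flows use hardware reports (e.g., AMD SEV-SNP or
# Intel TDX) validated against a vendor certificate chain.

TRUSTED_MEASUREMENTS = {
    "sha384:EXPECTED-ENCLAVE-IMAGE-MEASUREMENT",  # placeholder value
}

def verify_attestation(evidence: dict) -> bool:
    """Accept the enclave only if its measured workload matches what we trust.
    A production verifier would also validate the report signature and cert chain."""
    return evidence.get("measurement") in TRUSTED_MEASUREMENTS

def send_sensitive_prompt(enclave_client, prompt: str):
    # enclave_client and its methods are assumed for this sketch.
    evidence = enclave_client.get_attestation_evidence()
    if not verify_attestation(evidence):
        raise RuntimeError("Enclave failed attestation; refusing to release data")
    # Only after attestation passes does plaintext leave our boundary,
    # typically over a channel cryptographically bound to the attestation.
    return enclave_client.infer(prompt)
```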

At the Confidential Computing Summit, what stood out was how quickly this is becoming table stakes. It's no longer a specialty option for top-secret workloads—it's now seen as a baseline for enterprise AI operations. Whether you’re processing regulated customer data, proprietary IP, or autonomous agent outputs, the ability to isolate execution environments is crucial.

The business case is equally strong. Confidential computing enables:

  • Secure collaboration on sensitive models between partners and vendors

  • Deployment of AI agents on shared infrastructure with zero data leakage

  • Enforced runtime governance that meets regulatory and compliance requirements

In essence, it turns trust from something you have to architect into your system into something you can build on from the first layer of your stack.

The Confidential Computing Consortium (CCC) brings together hardware vendors, cloud providers, and software developers to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards.

CCC is a project community at the Linux Foundation dedicated to defining and accelerating the adoption of confidential computing. Members include Google, NVIDIA, Meta, Anthropic, AMD, ARM, Intel, and end-user organizations like TikTok. It will likely drive some of the most important standards in data privacy.

From Confidential Computing to Confidential AI

Confidential AI (a term coined by Fulkerson and echoed on slides from many other vendors) is the operational layer that emerges when these principles are applied end-to-end across the AI lifecycle.

Whereas confidential computing protects data at the infrastructure level, confidential AI integrates security, auditability, and policy enforcement directly into the way agents behave, learn, and act. It addresses new realities:

  • Agents making decisions independent of direct supervision

  • Models retraining or fine-tuning on live enterprise data

  • AI pipelines connecting dozens of components across insecure boundaries

In a Confidential AI system:

  • Identity is not just a login—it’s a chain of trust for every model, API, and agent.

  • Permissions are enforced at the prompt, not just at the firewall.

  • Logs, alerts, and interventions are designed with runtime unpredictability in mind.

Confidential AI is not a product category. It's a design philosophy that assumes AI will operate autonomously and demands security that matches its speed, scale, and sophistication.
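
Here's a small sketch of what "permissions enforced at the prompt" can look like in practice. The scope and tool names are assumptions for illustration: each agent carries an explicit scope, and every tool call is checked against it, deny-by-default, before anything executes.

```python
from dataclasses import dataclass, field

# Illustrative only: scope and tool names are assumptions, not a standard.

@dataclass
class AgentContext:
    agent_id: str
    scopes: set = field(default_factory=set)  # e.g., {"crm:read", "email:send"}

def authorize_tool_call(ctx: AgentContext, tool: str, required_scope: str) -> None:
    """Deny-by-default check applied at the prompt/tool boundary,
    not just at the network perimeter."""
    if required_scope not in ctx.scopes:
        raise PermissionError(
            f"Agent {ctx.agent_id} lacks scope '{required_scope}' for tool '{tool}'"
        )

# Example: a read-only support agent tries to export customer data.
ctx = AgentContext(agent_id="support-bot-7", scopes={"crm:read"})
try:
    authorize_tool_call(ctx, tool="crm.export_all", required_scope="crm:export")
except PermissionError as denied:
    print(denied)  # the call is blocked before the agent ever touches the tool
```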

I was also fortunate enough to lead a panel on agentic security, where I got to pick the minds of Jaren Hansen, co-founder of Keycard, who is working to solve AI agent identity management; Daniel Chalef, founder of the Y Combinator company Zep AI; and Srinivas Mantripragada, former CTO of IBM Cloud and founder of Maaya AI. Here's what I took away from our conversation.

1. Agent Identity Is the New Perimeter

As agentic systems proliferate, the question becomes not just who is accessing data—but which agent? Identity isn't just about human users anymore. Enterprises must implement cryptographic agent IDs, permission scopes, and zero-trust behaviors at the process level. I think the work that the folks at Keycard are doing to solve this is very important.
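
As a rough sketch of what a cryptographic agent ID might look like (illustrative, not Keycard's actual protocol), you can issue each agent a short-lived, signed identity assertion and have every downstream service verify it before honoring a request. The example uses Ed25519 signatures from the Python cryptography package; the claim format is assumed.

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric import ed25519

# Issue a signed identity assertion for an agent (claim format is illustrative).
issuer_key = ed25519.Ed25519PrivateKey.generate()

claims = {
    "agent_id": "invoice-processor-01",
    "owner": "finance-team",
    "scopes": ["erp:read"],
    "issued_at": int(time.time()),
    "expires_at": int(time.time()) + 3600,  # short-lived by design
}
payload = json.dumps(claims, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Any service receiving a request from this agent verifies the assertion
# against the issuer's public key before honoring the call.
issuer_public_key = issuer_key.public_key()
issuer_public_key.verify(signature, payload)  # raises InvalidSignature if tampered
print("verified agent identity:", claims["agent_id"])
```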

2. Guardrails Must Be Embedded in Pipelines

Security can’t be a wraparound—it must live in the CI/CD pipeline. This means scanning prompts, validating outputs, enforcing policy mid-flight, and alerting on unapproved actions. Build your AI lifecycle with policy-as-code.
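
Here's a hedged sketch of policy-as-code at the pipeline level: a pre-merge check that scans prompt templates for injected instructions and embedded secrets, and fails the build when it finds any. The patterns and the prompts/ directory layout are assumptions; real deployments typically pair this with runtime output validation as well.

```python
import pathlib
import re
import sys

# Illustrative deny-list; a real policy set would be versioned alongside the code.
POLICY_PATTERNS = [
    (re.compile(r"(?i)ignore (all )?previous instructions"), "possible prompt injection"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "hard-coded credential"),
    (re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"), "embedded private key"),
]

def scan_file(path: pathlib.Path) -> list[str]:
    """Return one finding per policy pattern that appears in the file."""
    text = path.read_text(errors="ignore")
    return [f"{path}: {reason}" for pattern, reason in POLICY_PATTERNS if pattern.search(text)]

if __name__ == "__main__":
    # Assumed layout: prompt templates live under prompts/ in the repository.
    findings = [f for p in pathlib.Path("prompts").rglob("*.txt") for f in scan_file(p)]
    for finding in findings:
        print("POLICY VIOLATION:", finding)
    sys.exit(1 if findings else 0)  # a non-zero exit fails the CI stage
```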

3. Trusted Data Chains Are Mandatory

What’s powering your models? If you can’t trace it, you can’t trust it. Confidential AI demands supply chain-level provenance. From retrieval-augmented generation (RAG) to synthetic corpora, trust must be auditable.
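
One simple way to make that provenance auditable, sketched here with assumed field names: fingerprint every document as it enters the retrieval index and keep an append-only manifest, so any answer can be traced back to exactly the bytes that produced it.

```python
import hashlib
import json
import pathlib
import time

def fingerprint(path: pathlib.Path) -> dict:
    """Record what went into the retrieval index, and from where."""
    return {
        "source": str(path),
        "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
        "ingested_at": int(time.time()),
        "pipeline_version": "rag-ingest-1.3",  # assumed label for illustration
    }

def build_manifest(doc_dir: str, manifest_path: str = "provenance_manifest.jsonl") -> None:
    """Append one provenance record per ingested document to an append-only manifest."""
    with open(manifest_path, "a") as out:
        for path in pathlib.Path(doc_dir).rglob("*.pdf"):
            out.write(json.dumps(fingerprint(path)) + "\n")

# At query time, retrieved chunks carry their source hash, so an auditor can
# confirm the cited document is byte-identical to what was originally ingested.
```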

4. Agent Management = Risk Management

Every autonomous agent is a potential escalation path. Do you know what they’re doing at 3 a.m.? Enterprises must institute runtime oversight, enforce kill switches, and maintain logs that meet both compliance and operational needs.
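
A minimal sketch of that runtime oversight, with assumed mechanisms: every agent action passes through a gate that writes an audit record and checks a kill switch that operators can flip without redeploying the agent.

```python
import json
import logging
import os
import time

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

KILL_SWITCH_FILE = "agents_disabled.flag"  # assumed mechanism: file present = halt

def kill_switch_engaged() -> bool:
    return os.path.exists(KILL_SWITCH_FILE)

def guarded_action(agent_id: str, action: str, execute):
    """Wrap every autonomous action in an audit record and a kill-switch check."""
    record = {"agent": agent_id, "action": action, "ts": time.time()}
    if kill_switch_engaged():
        record["result"] = "blocked:kill_switch"
        logging.info(json.dumps(record))
        raise RuntimeError("Kill switch engaged; agent actions are suspended")
    result = execute()
    record["result"] = "ok"
    logging.info(json.dumps(record))
    return result
```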

5. Speed Without Oversight = Threat

The biggest risk isn't the model. It's the velocity. Systems making real-time decisions—from routing payments to auto-coding workflows—need delay-tolerant alerting and pre-action review for sensitive ops. Machine speed requires machine-grade safety.
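
Here's a sketch of pre-action review under assumed risk scores and thresholds: low-risk actions execute immediately at machine speed, while anything above the threshold is queued for human approval instead of firing automatically.

```python
import queue
import uuid

# Assumed risk scores; a real system would derive these from policy, not a constant table.
RISK_SCORES = {"send_email": 1, "update_record": 2, "route_payment": 9, "delete_data": 10}
REVIEW_THRESHOLD = 5

pending_review: "queue.Queue[dict]" = queue.Queue()

def dispatch(agent_id: str, action: str, payload: dict, execute):
    """Execute low-risk actions immediately; hold sensitive ones for human approval."""
    risk = RISK_SCORES.get(action, REVIEW_THRESHOLD)  # unknown actions default to review
    if risk >= REVIEW_THRESHOLD:
        ticket = {"id": str(uuid.uuid4()), "agent": agent_id,
                  "action": action, "payload": payload}
        pending_review.put(ticket)  # held until a human approver releases it
        return {"status": "pending_review", "ticket": ticket["id"]}
    return {"status": "executed", "result": execute()}  # the fast path stays fast
```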

Don’t Let Safety Take a Vacation

The transformation toward Confidential AI represents more than a technological evolution—it's a strategic imperative for organizations that want to harness AI's transformative potential without sacrificing security, compliance, or operational control. The companies that successfully implement these principles won't just be more secure; they'll be more competitive, more trusted by their customers, and better positioned to take advantage of emerging AI capabilities.

The question facing organizations today isn't whether to implement these security measures, but how quickly they can adapt their infrastructure, policies, and operational practices to support the secure deployment of AI at enterprise scale. The window for proactive security implementation is narrowing as AI adoption accelerates and threat actors become more sophisticated in their attacks on AI systems.

To succeed, organizations must move beyond reactive security measures toward predictive, adaptive frameworks that can evolve alongside AI technology. It demands investment in new technical capabilities, updated governance frameworks, and security expertise specifically focused on AI-related risks and opportunities.

The future belongs to organizations that can operate AI systems with confidence, transparency, and appropriate oversight. The foundation for that future must be built today, with security principles that match the speed and sophistication of the AI systems they're designed to protect.

AI TOOLBOX
  • Crew AI - An open-source platform for building cooperative AI agents.

  • Galileo - An observability platform for insights into what’s going on in your AI infrastructure.

  • Granite - IBM's open-source, enterprise-ready AI models, built to deliver strong performance with fewer resources and lower cost.

  • LangChain - Tools for every step of the agent development lifecycle, built to unlock powerful AI in production.

  • Maaya AI - A native AI security platform for contextual automation and intelligence.

  • Opaque Confidential Agents for RAG - Secures every agent action inside hardware-backed Trusted Execution Environments (TEEs), keeping data encrypted even in use.

  • Zep AI - Powers AI agents with agent memory built from user interactions and business data.

PRODUCTIVITY PROMPT

Prompt of the Week: Personal AI Security Audit

As AI tools become embedded in everyday workflows, they introduce new vectors for data leakage, misuse, and compliance violations—especially on unmanaged or developer-class machines. This week’s prompt is designed to help security professionals, IT auditors, and AI risk managers evaluate the exposure and operational integrity of AI-enabled desktop environments.

Use it to:

  • Audit local and cloud-based AI tools

  • Identify prompt injection, agent misuse, and data risks

  • Check for compliance with internal AI usage policies

Run it through GPT-4, Claude, or your internal audit bot. Fast, focused, and security-first.

How to Use This Prompt

Paste the full text below into ChatGPT, Claude, or another AI assistant. It will guide you through a step-by-step security check of your desktop. You'll get simple explanations, risk flags, and easy fixes—no technical skills required.

## `🧠 AI Desktop Security Checkup`

`I want to audit my own desktop to make sure AI tools I use aren't putting my data or device at risk. I'm not an IT professional, so explain everything simply and clearly.`

`Please walk me through each of the following checks and tell me:`

- `What it means`
- `Why it matters`
- `What to look for`
- `What to do if there's a problem`

### `1. What AI Tools Are Installed?`

- `Help me identify apps like ChatGPT, Copilot, Replit, or local AI tools`
- `Let me know if any run in the background or launch at startup`
- `Tell me how to remove or restrict anything suspicious`

### `2. Are AI Tools Accessing My Files?`

- `Can any AI tool see or send files from my Desktop, Documents, or Downloads?`
- `How do I check what permissions these tools have?`
- `Show me how to limit access if needed`

### `3. Is My Data Being Sent to the Internet?`

- `Are any tools sending prompts or files to external servers?`
- `How can I tell if my info is being logged or stored online?`
- `What should I do to stay in control?`

### `4. Is Anything Saving What I Type?`

- `Do any AI apps keep a history of my chats or prompts?`
- `How do I clear or disable that history?`
- `Should I be worried about what I typed before?`

---

### `5. Are AI Tools Acting Automatically?`

- `Are there AI agents that run tasks or commands on their own?`
- `How do I check and turn off automatic behaviors I don’t understand?`

---

### `6. How Can I Stay Safe?`

- `Give me simple tips to keep my AI tools secure`
- `Recommend settings or habits that protect my data`
- `Tell me what’s safe to share—and what’s not`

---

### `7. Bonus: What to Watch Out For`

- `What are the signs an AI tool might be risky?`
- `Are there any apps or behaviors I should avoid?`

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
