The Trust Gap in Enterprise AI

Why your AI stack is bleeding data — and what confidential AI does about it

EXECUTIVE SUMMARY

Enterprise AI adoption has outpaced enterprise AI security by a significant margin. Organizations are running sensitive workflows on AI systems that were not designed for the threat environment they now face. Five developments make this a strategic priority, not a future concern:

  • AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities, prompting major tech companies to form cross-industry defensive alliances like Anthropic's Project Glasswing.

  • By 2027, more than 40% of AI-related data breaches will be caused by the improper use of generative AI across borders, driven by a lack of consistent global AI standards, according to Gartner predictions.

  • Sensitive enterprise data is flowing into AI tools at scale, with 39.7% of all AI interactions involving sensitive data, while security teams lack meaningful visibility into this fragmented usage, as reported by Cyberhaven.

  • The MITRE ATLAS framework—specifically designed to catalog threats against AI systems—now documents 167 techniques, 16 tactics, and 57 real-world case studies of adversarial attacks against AI.

  • At GTC 2026, NVIDIA CEO Jensen Huang placed confidential computing at the center of the company's enterprise AI strategy, asserting that cryptographic privacy guarantees are now a mandatory prerequisite for deploying autonomous AI agents inside corporate networks.

The organizations managing this risk most effectively are those moving toward confidential AI—a technical approach that provides cryptographic proof of data protection at every step of the AI workflow.

BUILD A WEBSITE WITH AI — NO CODING NEEDED

What if you could turn your idea into a live website in about an hour — without writing a single line of code?

Join us Monday, April 20 at 12 PM for Vibe Coding a Website (For Non-Technical Users) with Mark Hinkle. Learn how to turn your idea into a live, professional website in about an hour using AI — no coding required. Perfect for entrepreneurs, marketers, freelancers, and curious professionals.

The revelation came not as a gradual shift, but as a stark demonstration of capability. When Anthropic recently unveiled Claude Mythos Preview, it wasn't just another iterative model update. It was a system that had autonomously found thousands of high-severity, zero-day vulnerabilities in every major operating system and web browser. It uncovered a 27-year-old flaw in OpenBSD, long considered one of the most secure operating systems in the world. It found a 16-year-old vulnerability in FFmpeg that automated testing tools had run past five million times without detecting.

This capability is so profound—and potentially so dangerous—that Anthropic withheld the model from general release. Instead, they convened Project Glasswing, a cross-industry consortium including AWS, Apple, Microsoft, Google, and JPMorgan Chase, committing $100 million in usage credits to deploy Mythos defensively before adversaries can harness similar capabilities offensively.

The message for enterprise leaders is clear: the window between a vulnerability being discovered and being exploited has collapsed. What once took months of human effort now happens in minutes with AI. If attackers can leverage models of this caliber to exploit zero-day vulnerabilities at machine speed and scale, defenders must fundamentally rethink how they secure their AI stacks and the sensitive data flowing through them.

UPCOMING LEARNING OPPORTUNITIES

Keep learning with these upcoming free virtual events from the All Things AI community.

April 22nd | Live at The American Underground | Building Your Startup in the Age of AI — In this session, Mark Hinkle is joining forces with The American Underground as part of Raleigh Durham Startup Week to share what he's learned the hard way about where AI actually delivers for early-stage companies. From capital strategy to agent-powered execution, this session is for founders who want to move faster and build smarter.

May 6th | LinkedIn Live | Why Jensen Huang's Betting on Confidential Computing in the AI Factory — In this session, Mark Hinkle sits down with Aaron Fulkerson, CEO of Opaque Systems — the leading Confidential AI platform born from UC Berkeley's RISELab and backed by Intel, Accenture, and many others — for a conversation that will fundamentally change how you think about enterprise AI.

AI DEEPDIVE

I was sitting across from the Chief Information Security Officer of a global top-ten pharmaceutical company last month when he pulled up a dashboard that made the entire room go quiet. The dashboard wasn't tracking external attacks or firewall pings. It was tracking the volume of internal data flowing into unmanaged generative AI applications.

Over the previous thirty days, his engineering teams had pasted nearly half a million lines of proprietary source code into public LLMs. His clinical research teams had summarized hundreds of patient trial protocols using tools that explicitly reserved the right to train on user inputs. The CISO leaned back and stated the problem plainly: "We spent five years building a zero-trust architecture to keep attackers out, and five months watching our own employees hand-deliver our most sensitive IP to third-party models because the tools are just too useful to ignore."

This pharmaceutical company is not an outlier. It represents the exact plateau where most enterprises currently find themselves. They have successfully run the initial AI playbook: deploy models on public data, automate low-risk workflows, and prove the technology works. But when it comes time to move the truly valuable, proprietary data into production AI—the clinical data, the financial positions, the unreleased product roadmaps—they hit a wall. They cannot verify that the data will remain protected, so the security teams block the deployment.

"Later" has officially arrived. The strategy of waiting to figure out the sensitive data problem is no longer viable. That trust gap is widening rapidly as AI becomes more autonomous and the threat landscape, as demonstrated by models like Anthropic's Mythos Preview, becomes exponentially more sophisticated. The organizations that break through this plateau are the ones that solve the data trust problem—not by avoiding sensitive data in AI, but by running AI on sensitive data with verifiable, cryptographic guarantees that it stays protected. This is the structural shift that confidential AI makes possible.

What Is Confidential AI?

Confidential AI is an approach to running AI workloads that provides cryptographic proof—not promises—that data, model weights, and agent actions remain private and policy-compliant throughout every step of the workflow.

It combines three technical capabilities that traditional cloud security does not provide:

Confidential computing runs AI workloads inside hardware-enforced secure enclaves, where data stays encrypted even during processing. The cloud provider, the infrastructure operator, and other tenants cannot access the data while the computation is running.

Verifiable runtime governance produces cryptographic proofs that data was handled according to defined policies—who accessed it, what the model did with it, and what the output was. This is the audit trail that regulators are increasingly demanding and that enterprise legal teams need to approve production deployment.
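To make the idea of a verifiable audit trail concrete, here is a minimal, hypothetical sketch of a hash-chained governance log in Python. Real confidential AI platforms sign these entries with enclave-held keys and vendor certificates; this toy version only shows why chaining each entry to the hash of the previous one makes after-the-fact tampering detectable:

```python
import hashlib
import json

def append_event(log, event):
    """Append an event to a hash-chained audit log.

    Each entry commits to the previous entry's hash, so altering any
    past entry breaks every hash that follows it.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return log

def verify_chain(log):
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(body.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_event(log, {"actor": "analyst-7", "action": "query", "policy": "PII-masked"})
append_event(log, {"actor": "agent-rag", "action": "retrieve", "docs": 3})
assert verify_chain(log)

log[0]["event"]["actor"] = "someone-else"  # simulate tampering
assert not verify_chain(log)
```

The design point is that verification requires no trust in whoever stored the log: anyone holding the chain can recompute it.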

Confidential RAG (Retrieval-Augmented Generation) extends these protections to the retrieval layer, where most enterprise AI applications are most exposed. When an AI agent retrieves from a knowledge base to answer a question, it accesses potentially sensitive information across many documents. Confidential RAG ensures that retrieval occurs without exposing the underlying data to the inference infrastructure.

How It Works in Business Contexts

The risk of data exposure escalates through four distinct stages in the enterprise AI lifecycle. Understanding this progression is critical for deploying confidential AI effectively.

Stage 1: Shadow Usage and Fragmented Access

The problem begins with invisible usage. Cyberhaven reports that 39.7% of all AI interactions involve sensitive data, and a significant portion occurs on personal accounts—32.3% of ChatGPT usage and 58.2% of Claude usage bypass corporate SSO and logging. Employees are feeding core business data into models without security oversight.

Stage 2: Inference Memory Exposure

When organizations move from shadow usage to sanctioned enterprise deployments, new, more subtle risks emerge. Inference memory exposure occurs when AI models retain information from one user session that inadvertently influences responses in another. In a multi-tenant cloud environment, this creates a severe cross-contamination risk that is almost entirely invisible to conventional security monitoring. The access logs will show normal, authorized behavior, but the underlying information has moved between contexts it was never intended to enter.

For example, if a financial analyst queries a model about an impending, unannounced merger, the model's weights or short-term context window may temporarily adjust. If another user in a different department subsequently asks a related question, the model might synthesize an answer that implicitly reveals the merger details. Traditional data loss prevention (DLP) tools cannot catch this because the data isn't being explicitly exfiltrated; it is being probabilistically leaked through the model's generative process.
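The failure mode can be shown with a deliberately naive sketch. In this toy Python model (hypothetical, not any vendor's architecture), a single context cache is shared across sessions, so one tenant's input surfaces in another tenant's output. Note that nothing here looks like exfiltration to a DLP tool; both calls are normal, authorized requests:

```python
class NaiveSharedModel:
    """Toy model with ONE context cache shared across all sessions.

    This is the architectural flaw behind inference memory exposure:
    every response conditions on everything any user has ever sent.
    """

    def __init__(self):
        self._context = []  # shared across tenants -- the bug

    def chat(self, user: str, message: str) -> str:
        self._context.append((user, message))
        # A real model generates text probabilistically; this stand-in
        # just concatenates its context to make the leak visible.
        return " | ".join(msg for _, msg in self._context)

model = NaiveSharedModel()
model.chat("analyst", "Draft terms for the Acme merger")
reply = model.chat("intern", "Anything newsworthy today?")

# Cross-session leakage: the intern's reply contains the analyst's input.
assert "Acme merger" in reply
```

Proper deployments isolate context per tenant; confidential computing goes further by making that isolation hardware-enforced and attestable rather than a software promise.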

Stage 3: Attestation Gaps and Metadata Leakage

As AI deployments scale across the enterprise, the gap between a cloud vendor's privacy promise and verifiable mathematical proof becomes a massive liability. Most enterprise AI today relies entirely on vendor attestations—legal contracts stating that the vendor will not look at your data or train on your inputs. However, in regulated industries, a promise is not a control. Furthermore, even when the actual content of a query is encrypted in transit and at rest, the metadata surrounding it remains exposed during processing. The metadata—who queried the model, at what time, the size of the payload, and which internal databases were accessed for retrieval—can often reveal sensitive business strategies to the infrastructure provider. Confidential computing addresses this by replacing paper attestations with cryptographic proofs, ensuring that the data remains encrypted even during active processing, thereby closing both the attestation gap and the metadata vulnerability.
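The difference between a paper attestation and a cryptographic one can be sketched in a few lines of Python. This hypothetical example models the core of remote attestation: the enclave reports a measurement (a hash of the code it is running), signed so the verifier can check both authenticity and membership in an allowlist. Real attestation uses vendor certificate chains (e.g., Intel or AMD signing keys), not the shared HMAC key used here for brevity:

```python
import hashlib
import hmac

# Allowlisted enclave measurements: hashes of workload builds that
# security has approved (values here are hypothetical).
TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"approved-inference-build-v1").hexdigest(),
}

# Stand-in for the hardware vendor's signing key; real quotes are
# signed with asymmetric keys rooted in a vendor certificate chain.
VENDOR_KEY = b"demo-vendor-key"

def sign_quote(measurement: str) -> dict:
    """Simulate the hardware producing a signed attestation quote."""
    sig = hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": sig}

def verify_quote(quote: dict) -> bool:
    """Accept the enclave only if the quote is authentic AND the code
    it attests to is on the allowlist."""
    expected = hmac.new(VENDOR_KEY, quote["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, quote["signature"])
            and quote["measurement"] in TRUSTED_MEASUREMENTS)

good = sign_quote(hashlib.sha256(b"approved-inference-build-v1").hexdigest())
rogue = sign_quote(hashlib.sha256(b"tampered-build").hexdigest())
assert verify_quote(good)
assert not verify_quote(rogue)
```

A vendor contract says "trust us"; a verified quote says "here is proof of exactly what code is touching your data."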

Stage 4: Agentic Autonomy and Systemic Vulnerability

The final stage is where the capabilities demonstrated by Project Glasswing become highly relevant. As AI agents gain persistent context windows, local memory, and direct access to file systems, they introduce structural risk. If a model like Mythos Preview can find a 27-year-old zero-day in OpenBSD, adversaries will inevitably use similar capabilities to exploit the expanding attack surface of autonomous enterprise AI agents.

This reality was explicitly acknowledged by NVIDIA CEO Jensen Huang during his GTC 2026 keynote. While announcing NemoClaw—NVIDIA's enterprise-grade agentic AI framework—Huang emphasized that the open-source agent ecosystem is "genuinely dangerous for corporate networks" without hardware-level security. He positioned confidential computing not as an optional feature, but as the foundational requirement for the agentic era, stating that "ensuring that even carriers cannot view user data and models" is what makes deploying frontier models in production viable. When the company building the hardware infrastructure for the entire AI industry declares that autonomous agents require cryptographic isolation to be safe, the market must listen.

How to Implement Confidential AI

Moving from traditional AI security to a confidential AI posture is a phased process, and most organizations can begin it without replacing existing infrastructure.

Phase 1: Inventory and Assess

Catalog every AI workload running in your organization against two dimensions: sensitivity of the data it touches, and adequacy of the current controls. Use the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) framework to map the specific tactics and techniques attackers might use against your systems. Identify your highest-priority target: typically the AI workload that either touches the most sensitive data or would cause the most damage if compromised.

  • Map the Data Flows: Document exactly where data originates, where it is processed, and where the outputs are stored for every sanctioned AI application.

  • Apply the ATLAS Framework: Run a tabletop exercise using the MITRE ATLAS framework to identify which of the 167 known adversarial techniques your current architecture is most vulnerable to.

  • Identify the High-Value Target: Select one specific, high-value workload that is currently blocked from production due to data privacy concerns. This will become your pilot.
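A first-pass inventory does not need specialized tooling. The sketch below uses hypothetical workload names and 1-to-5 scales of my own choosing; it ranks workloads by data sensitivity weighted against control adequacy to surface the pilot candidate Phase 1 calls for:

```python
# Hypothetical inventory: score each AI workload on sensitivity of the
# data it touches (1-5) and adequacy of current controls (1-5), both
# judged by the review team during the assessment.
workloads = [
    {"name": "public-docs-chatbot",       "sensitivity": 1, "controls": 4},
    {"name": "clinical-trial-summarizer", "sensitivity": 5, "controls": 1},
    {"name": "finance-rag-analyst",       "sensitivity": 4, "controls": 2},
]

def risk_score(w: dict) -> int:
    # High sensitivity combined with weak controls = top priority.
    return w["sensitivity"] * (5 - w["controls"])

ranked = sorted(workloads, key=risk_score, reverse=True)
for w in ranked:
    print(f"{w['name']:28} risk={risk_score(w)}")
# The top entry is the natural Phase 2 pilot candidate.
```

The exact scales matter less than forcing every workload through the same two questions; the ranking makes the pilot choice a documented decision rather than a hunch.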

Phase 2: Pilot Confidential Controls on a High-Value Workload

Begin by applying confidential computing principles to a specific, high-value workload. Most organizations start with a RAG-based application—a knowledge retrieval system or an internal analyst tool—where the retrieval layer is the primary exposure risk. The goal of this pilot is to produce evidence: audit logs, cryptographic proofs, and compliance documentation that can be reviewed by legal and security teams.

  • Isolate the Retrieval Layer: Implement confidential RAG to ensure that the vector database and the retrieval mechanism operate within a secure enclave.

  • Generate Cryptographic Proofs: Configure the environment to produce verifiable runtime governance logs that cryptographically prove the data was not exposed during processing.

  • Conduct a Joint Review: Bring the security, legal, and compliance teams together to review the cryptographic proofs and establish a new baseline for what constitutes acceptable evidence of data protection.

Phase 3: Extend and Govern

Once the pilot demonstrates that confidential AI controls work and can be audited, extend the approach to additional workloads. Establish a governance model that includes ATLAS-informed threat modeling as a standard part of AI initiative approval. Evaluate solutions from emerging confidential AI providers—such as Opaque Systems, which offers platforms for building and deploying AI agents with embedded privacy and compliance checks—as part of your vendor landscape review.

  • Standardize the Architecture: Create a repeatable reference architecture for deploying confidential AI workloads across different business units.

  • Update Procurement Requirements: Mandate that all future enterprise AI vendor evaluations include an assessment of their support for confidential computing and verifiable runtime governance.

  • Monitor the Threat Landscape: Assign a dedicated team to continuously monitor updates to the MITRE ATLAS framework and adjust internal controls as new adversarial techniques emerge.

Key Success Factors: Cross-functional alignment is critical. AI security decisions belong in the same room as AI investment decisions. Security teams must work closely with legal and compliance to ensure the cryptographic proofs generated meet regulatory requirements. Leaders must also acknowledge the current friction involved in adopting confidential AI: confidential computing environments often carry a cost premium over standard cloud compute instances, and hardware-enforced secure enclaves are not yet universally available across all cloud regions. Organizations should plan for a phased rollout that accounts for these constraints rather than expecting an overnight transition.

Common Missteps

Treating AI security as an extension of cloud security. Traditional cloud security tools do not understand AI-specific threat vectors. Prompt injection, model poisoning, and inference memory exposure require dedicated detection and mitigation that general-purpose security platforms are not designed to provide.

Relying on vendor attestations as a substitute for verifiable proof. When a vendor says your data is protected, that is a contractual representation. Confidential computing provides cryptographic proof that the claim is true. These are not equivalent—especially in regulated industries where organizations must demonstrate compliance, not just assert it.

Underestimating the speed of AI-driven exploitation. As Anthropic's Project Glasswing illustrates, the time between vulnerability discovery and exploitation is shrinking to near zero. Organizations that rely on manual patching and legacy vulnerability management will be outpaced by AI-augmented adversaries.

Waiting for regulation to drive action. DORA is already live for EU financial services, and the EU AI Act's enforcement provisions are ramping up. Organizations that build confidential AI capabilities now will have a compliance advantage when mandates arrive, rather than a rushed remediation project.

The Regulatory Convergence: Why Wait-and-See Is a Failing Strategy

Beyond the immediate technical threats demonstrated by models like Mythos Preview, enterprise leaders must contend with a rapidly shifting regulatory landscape. The era of unregulated, experimental AI in the enterprise is closing. Global regulators are increasingly sophisticated in their understanding of how AI models process data, and they are demanding technical controls that match that sophistication.

The European Union's Artificial Intelligence Act (EU AI Act) is the most prominent example, but it is not an isolated piece of legislation. It represents a global convergence toward strict liability for AI-driven data exposure. Under the EU AI Act, organizations deploying high-risk AI systems must implement robust data governance, ensure transparency, and maintain detailed technical documentation. Crucially, the burden of proof rests entirely on the deploying organization. When regulators audit an enterprise AI system, vendor attestations and marketing promises will not suffice. Regulators will demand verifiable proof that data was protected during processing.

Similarly, the Digital Operational Resilience Act (DORA), which applies to the EU financial services sector and their ICT providers, mandates stringent risk management frameworks for all critical technology, including AI. Financial institutions must demonstrate that their AI systems can withstand severe operational disruptions and cyberattacks without compromising data integrity or confidentiality.

In the United States, while comprehensive federal AI legislation is still pending, sector-specific regulators are aggressively stepping into the void. The Securities and Exchange Commission (SEC) has proposed rules requiring broker-dealers and investment advisers to eliminate conflicts of interest associated with the use of predictive data analytics and AI. The Department of Health and Human Services (HHS) is scrutinizing how AI is used in clinical decision support systems, emphasizing the need to protect Protected Health Information (PHI) under HIPAA regulations even as it flows through complex AI models.

This regulatory convergence creates a massive compliance burden for organizations that have deployed AI using traditional cloud security models. Traditional DLP and identity access management (IAM) tools were designed to protect data at rest and data in transit. They were not designed to protect data in use—specifically, data being actively processed inside the memory of a GPU during an AI inference task. Confidential AI bridges this exact regulatory gap. By leveraging confidential computing, organizations can prove to regulators that data remained encrypted and isolated even while the AI model was actively generating a response. The verifiable runtime governance logs provide the exact audit trail that regulators demand, demonstrating mathematically who accessed the system, what policies were enforced, and that no data was exposed to the infrastructure provider or unauthorized tenants.

The Role of Open Source and the AI Security Ecosystem

The push toward confidential AI is not happening in a vacuum. It is being accelerated by a robust ecosystem of open source projects, academic research, and industry consortiums. The Confidential Computing Consortium (CCC), a project community at the Linux Foundation, has been instrumental in standardizing the hardware and software interfaces required to build secure enclaves. Major silicon vendors, including Intel, AMD, and NVIDIA, are continuously enhancing their hardware to support larger, more complex AI workloads within these protected environments.

Furthermore, the open source AI community is actively developing tools and frameworks to make confidential AI more accessible to enterprise developers. Projects focused on secure multi-party computation (SMPC), homomorphic encryption, and federated learning are providing alternative methods for training and deploying AI models on sensitive data without exposing the underlying information. While some of these technologies are still maturing and may introduce computational overhead, they represent a fundamental shift in how the industry approaches data privacy in the AI era.

Enterprise leaders must actively engage with this ecosystem. By participating in industry consortiums, collaborating with peers to share threat intelligence (such as the tactics cataloged in MITRE ATLAS), and mandating confidential computing support in vendor evaluations, organizations can build more resilient, adaptable AI architectures. The formation of Project Glasswing is a testament to the power of collective defense. In the age of AI, no single organization can secure the ecosystem alone.

LISTEN TO THE AI ENTERPRISE ON THE ROGUE AGENTS PODCAST

This is my latest project. While we do have audio summaries for each newsletter, they are simple text-to-speech and not ideal for listening. So we created this podcast to provide a weekly summary of the newsletters. It's still a work in progress: right now you get a pretty good recap of the previous week's newsletters, and over time it will only get better. That's the plan.

What happens when two AI agents break down the week's biggest AI news? You get Rogue Agents. Vera and Neuro deliver the stories that matter in enterprise AI — the deals, the tools, the breakthroughs, and the stuff everyone's getting wrong — in 15-20 minutes every week.

Business Value

ROI Considerations:

  • Accelerates Deployment Timelines: By removing the subjective debate over data privacy during legal and security reviews, organizations can cut the time-to-production for sensitive AI workloads by up to 50%. Cryptographic proof replaces lengthy risk exception processes.

  • Reduces Breach and Compliance Costs: With Gartner predicting that 40% of AI-related data breaches will stem from cross-border GenAI misuse by 2027, the financial risk of non-compliance is massive. Confidential AI provides the technical controls necessary to satisfy stringent regulations like the EU AI Act and DORA, potentially saving millions in regulatory fines.

  • Lowers Long-Term Remediation Expenses: Proactively securing the AI architecture with hardware-enforced enclaves is significantly more cost-effective than attempting to bolt on security controls post-deployment or conducting expensive incident response after an inference memory exposure event.

Competitive Implications:

The case for confidential AI is not primarily defensive; it is offensive. When legal and security teams can verify that data is protected, they stop being blockers. That unlocks use cases that were previously off-limits: AI on patient records, on financial positions, on personnel data, on proprietary research. Organizations that can confidently deploy AI against their most sensitive, proprietary data will build a widening competitive moat against peers still stuck in the sandbox.

What This Means for Your Planning

The formation of Project Glasswing is a watershed moment for enterprise technology. When the world's most sophisticated AI labs and largest cloud providers band together to defensively deploy an AI model because its vulnerability-finding capabilities are too dangerous for general release, the paradigm has shifted. Human-speed cybersecurity is no longer sufficient to protect enterprise assets.

For your next planning cycle, this means AI security can no longer be treated as an afterthought or a compliance checklist. You must assume that adversaries will soon possess the same AI-driven capabilities to find and exploit vulnerabilities in your systems at machine speed. Your defensive architecture must evolve to meet this reality.

The transition to confidential AI—where data protection is cryptographically proven rather than merely promised—is the necessary strategic response. It is the only way to confidently deploy AI against your most valuable, sensitive data while mitigating the exponential risks of the new threat landscape.

Are your current AI deployments protected by verifiable cryptographic proofs, or are you relying on vendor promises and the hope that adversaries haven't yet found your vulnerabilities?

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
