If you use AI daily, you’ve felt this frustration: every new conversation starts from zero. You explain your role. You restate your preferences. You paste the same formatting instructions. You remind the AI — again — that you prefer bullet points over paragraphs, or that your company uses AP style, or that you need responses under 200 words.

It’s the equivalent of introducing yourself to a coworker every Monday morning.

That problem now has real solutions. Claude, Manus, and ChatGPT each offer ways to make your instructions persistent — so the AI remembers how you work without you repeating a word. The approaches differ significantly, and the best option depends on your needs: simple global preferences, project-specific context, or modular expertise that activates only when relevant.

IN PARTNERSHIP WITH STACKLOK

Is the Model Context Protocol on your radar? Has it become a point of contention between developers keen to use MCP servers and security teams concerned about the lack of guardrails?

Stacklok is working with leaders across industries to bring the Model Context Protocol into production on a secure, scalable platform. Curate a registry of trusted MCP servers. Control auth via an MCP gateway.

Learn more at stacklok.com or join us at an upcoming MCP roadshow stop in San Diego, Austin, Atlanta, Boston, New York, or Chicago.

AI LESSON

Build AI Skills That Remember How You Work

The practical guide to persistent prompts — teach your AI once and stop repeating yourself

Every major AI platform now supports persistent instructions — context that carries across conversations so you don’t start from scratch each time. But the implementations range from simple text fields to modular file-based systems that load expertise on demand.

The Three Levels of Persistent AI Instructions

Think of persistent instructions as a spectrum. At one end, global preferences apply to every conversation. In the middle, project-specific context shapes a body of work. At the advanced end, modular Skills activate only when relevant — like specialists on call.

  • ChatGPT Custom Instructions — Global preferences (Level 1)

  • Claude/ChatGPT Projects — Workspace-specific context (Level 2)

  • Claude Skills/Manus Skills — Modular, on-demand expertise (Level 3)

Most professionals benefit from setting up at least the first two.

ChatGPT Custom Instructions: The Starting Point

The Custom Instructions in ChatGPT’s Growing Settings Panel (starting to get pretty ugly)

What it does: Two persistent text fields that shape every ChatGPT conversation — one for background about you, one for response formatting. Available on all plans at no cost.

Where to find it: Settings → Personalization → Custom Instructions (Web, Desktop, iOS, Android)

Step 1: Open ChatGPT and click your profile icon in the bottom-left corner

Step 2: Select “Customize ChatGPT” (or Settings → Personalization on mobile)

Step 3: Fill in “What would you like ChatGPT to know about you?” with your role, industry, and key context

Step 4: Fill in “How would you like ChatGPT to respond?” with formatting preferences, tone, and output structure

Example — “About You” field:

I’m a marketing director at a B2B SaaS company (150 employees). 
I manage a team of 6 and report to the CMO. Our primary channels 
are LinkedIn, email, and webinars. I use HubSpot for CRM and 
marketing automation. Budget decisions require ROI projections.

Example — “Response” field:

Write in a direct, professional tone. Lead with the recommendation, 
then supporting evidence. Use short paragraphs. Include specific 
numbers when possible. Flag assumptions clearly. Keep responses 
under 300 words unless I ask for more detail.

Time: 10 minutes to set up, immediate effect on all conversations

Limitations: 1,500 characters per field — roughly 250 words each. Applied globally (no per-project customization). Not modular — you can’t swap instruction sets based on what you’re working on.

Claude Projects: Workspace-Level Context

The Claude Project We Use to Help Author the AIE, with Humans in the Loop

What it does: Creates a persistent workspace with custom instructions, a document library, and conversation history. Everything you add informs every conversation within that workspace. Available on Pro ($20/mo), Team (from $25/user/mo), and Enterprise plans.

Where to find it: Claude.ai sidebar → “Projects”

Step 1: Click “Projects” in the left sidebar and select “Create a project”

Step 2: Name your project and write custom instructions — this persistent context applies to every conversation in the project

Step 3: Upload relevant documents (reports, style guides, data files) to the project’s knowledge base

Step 4: Start a conversation within the project — Claude automatically has access to your instructions and documents

Example Project — “Q1 Marketing Campaigns”:

Project Instructions:
You are helping me plan Q1 2026 marketing campaigns. Target 
audience: mid-market IT directors. Messaging emphasizes security, 
compliance, and time-to-value. Budget: $85K across three programs. 
Reference the uploaded competitive analysis and brand guidelines.

Time: 15 minutes for initial setup, then add documents as needed

Limitations: Documents count against a 200,000-token context window. Custom instructions are always loaded — you can’t conditionally activate different instructions within the same project. When a project approaches its context limit, Claude switches to retrieval (RAG), searching your documents for relevant passages rather than loading everything at once.

Claude Skills: Modular Expertise on Demand

What it does: File-based instruction packages that Claude discovers automatically and loads only when relevant. Unlike Projects (which load everything every time), Skills activate selectively — specialists on call rather than in the room. Available on Claude.ai (Pro/Team), Claude Code, and API.

How it works: Each Skill is a folder containing a SKILL.md file plus optional resources. Claude scans just the skill name and description at conversation start — roughly 100 tokens per skill. When your request matches a skill’s description, Claude loads the full instructions on demand.

Anthropic calls this “progressive disclosure”:

  • Metadata (always loaded): Skill name and description. ~100 tokens per skill — dozens of Skills barely touch your context window.

  • Instructions (loaded on demand): Full SKILL.md content, up to 5,000 tokens. Loaded only when relevant to your current request.

  • Resources (loaded as needed): Templates, data files, and scripts. Loaded only when the skill’s instructions call for them.
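To make the three layers concrete, here is a minimal Python sketch of the progressive-disclosure idea — only skill metadata sits in context all the time, and full instructions load when a request looks relevant. The skill names, descriptions, and the naive keyword match are all illustrative; this is not Anthropic’s implementation.

```python
# Sketch of progressive disclosure: metadata is always loaded,
# full SKILL.md instructions load on demand. All names are illustrative.

skills = {
    "brand-voice-checker": "Reviews content for brand voice compliance",
    "report-formatter": "Formats weekly status reports for leadership",
}

def metadata_context(skills):
    # Always-loaded layer: just names and descriptions (~100 tokens each)
    return [f"{name}: {desc}" for name, desc in skills.items()]

def load_skill(name):
    # On-demand layer: in reality, Claude reads the full SKILL.md here
    return f"(full instructions from {name}/SKILL.md)"

def handle(request, skills):
    # A naive keyword match stands in for the model's relevance judgment
    for name, desc in skills.items():
        if any(word in request.lower() for word in desc.lower().split()):
            return load_skill(name)
    return None  # no skill activated; answer with base context only
```

The point of the sketch: with dozens of skills, only the short `metadata_context` lines occupy your context window until a request actually matches one.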

Example SKILL.md — Brand Voice Checker:

---
name: brand-voice-checker
description: Reviews content for brand voice compliance, 
  checking tone, terminology, and style guide adherence
arguments:
  - name: content
    description: The content to review
    required: true
---

# Brand Voice Checker

Review the provided content against these brand standards:

## Tone
- Professional but approachable
- Confident without being aggressive
- Technical accuracy over marketing hype

## Terminology
- Use "platform" not "solution"
- Use "customers" not "users"  
- Use "AI-assisted" not "AI-powered"

Provide specific line-by-line feedback with suggested rewrites.

Don’t panic if you aren’t a programmer: ask Claude or ChatGPT to help you write the skill file, and you’ll get what you need.

Time: 20-30 minutes to create your first skill. Minutes to reuse it forever.

Limitations: Skills are instruction-based, not executable code — they guide Claude’s approach but don’t run programs. SKILL.md files should stay under 5,000 tokens for optimal performance.

Manus Skills: Same Format, With Execution Power

What it does: Manus adopted the same Agent Skills open standard as Claude (same SKILL.md format) but adds execution. Where Claude Skills guide how to approach a task, Manus Skills run browser automation, execute code, and manage files, all inside a sandboxed virtual machine. Manus announced the feature on January 27, 2026.

Step 1: Describe a complex workflow to Manus and let it complete the task

Step 2: If it succeeds, use “Build a Skill with Manus” to auto-generate a SKILL.md from the successful run

Step 3: The skill saves to your Skill Library for reuse

Step 4: Invoke with slash commands (e.g., /competitor-analysis) or let Manus select automatically

Key differentiator: You don’t write SKILL.md files manually. Complete a task well once, capture it as a skill, and reuse it — the fastest path from “I did this once” to “my AI does this automatically.”

Limitations: Newer platform with a maturing ecosystem. The execution sandbox means skills can do more, but they also demand more trust in what you’re automating.

What Persistent Instructions Can’t Do (Yet)

Cross-platform portability is limited. Claude and Manus Skills share the Agent Skills open standard, so a SKILL.md works on both. However, ChatGPT Custom Instructions are platform-locked and have no export format.

Memory and Skills are separate systems. Claude’s memory (what it learns about you over time) and Skills (what you explicitly teach it) don’t automatically inform each other.

Skill discovery depends on good descriptions. Claude loads skills based on name and description text. Vague descriptions mean the skill won’t activate when needed.

Getting Started Today

  1. Right now (10 minutes): Set up ChatGPT Custom Instructions with your role, industry, and formatting preferences. Every conversation immediately benefits.

  2. This week (30 minutes): Create a Claude or ChatGPT Project for your most active workstream. Add custom instructions and upload 2-3 key reference documents.

  3. Next week (1 hour): Build your first Claude Skill for a task you repeat weekly — a review checklist, a report template, a communication style guide.

  4. When you’re ready: Explore Manus for workflows requiring execution — browser automation, data processing, multi-step file operations. Let Manus generate skills from successful runs rather than writing them from scratch.

The days of pasting the same instructions into every AI conversation are over. Set up persistent context now, and your AI gets better at helping you with every session.

ALL THINGS AI LUNCH AND LEARN SCHEDULE

As a cofounder of All Things AI, I host many live events in Raleigh, NC. But even locals are strapped for time, so we also run plenty of virtual events to keep things convenient and accessible no matter where you are.

Agentic Systems: How AI Actually Gets Work Done
Tuesday, February 10 · 12:00 PM EST · Online

The Missing Link: Adding Your Data to Your App
Tuesday, February 24 · 12:00 PM EST · Online

Missed a session? Catch up on recent recordings.

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
