In my work life, I spend a lot of time looking over other people’s shoulders, helping them get better results from AI.

One theme I noticed is that they tend to use ChatGPT, Gemini, Claude, and other applications the same way they used Google.

They type a prompt or a search query and wait for the results.

If they don’t like the result, they start from scratch with a new prompt rather than refining it. Bad queries produce bad results, albeit probably better ones than they’d get from a traditional search engine.

What I want people to take away from this edition of the AIE is how to craft better questions to get better results, and to treat AI chatbot queries like conversations, not one-off Q&A sessions.

These new reasoning models are becoming more powerful, and a few tips and tricks can turn meh results into magnificent ones.

AI LESSON

Prompt Engineering Deep Dive

Prompt smarter in 2025: clearer outputs, faster work, and fewer rewrites.

In 2025, prompt engineering isn’t optional—it’s leverage. Done well, it turns ChatGPT and Claude into high-performing assistants. Done poorly, it clogs your day with rewrites and missed expectations.

Today I want to share some simple steps to improve the outputs from your AI conversations, along with some newer techniques that have been shown to produce better results.

How to Structure Better Prompts

This is a brain dump of things I have seen that seem simple but can make the difference between good and great outputs. Some of this is basic, but if you’re not yet taking advantage of these best practices, you’ll see a big boost in productivity.

Fully Describe the Task

Bad Prompt:

Summarize this paper.

Better Prompt:

Summarize this neuroscience paper in 200 words, highlighting the main hypothesis, methods, and conclusions, for a graduate-level audience.

Always Add the Target Audience

Bad Prompt:

Explain quantum entanglement.

Better Prompt:

Explain quantum entanglement to a high school student using analogies and simple language.

Specify the Output Format

Prompt:

Create a table comparing the effects of climate change on agriculture in the U.S., India, and Brazil using peer-reviewed data.

I like this because the AI knows it’s producing a table, not a paragraph. Formatting is half the battle. You can also add details like “don’t use emojis” or “use bullet points.”

Use Prompts for Repeatable Workflows

I use very verbose prompts when I want the output to match a certain style or format, whether for a report or a common task like drafting the feature list or abstract for a new course. Here are some things you might want to automate.

Recurring tasks are where AI delivers its highest ROI. For workstreams like social publishing, meeting recaps, onboarding, and reporting, the key is to lock in structure, then use AI to adapt to changing inputs. Well-designed prompts serve as operational templates: predictable in output, dynamic in content. The examples below are simple; in practice, I’d add a previous example to the prompt, such as a copy of a past report or a list of posts that performed well on social media, to guide the output.

Social Media Calendar: Generate a 4-week LinkedIn content calendar for a fintech company. Focus on thought leadership, feature highlights, and customer proof points. Alternate post types (text, infographic, carousel) and include suggested publish dates and performance goals.

Meeting Recap via AI Notetaker: Summarize this transcript from our weekly GTM sync captured by Fireflies. Include key decisions, owner-assigned follow-ups, and open issues. Format: bulleted summary with 3 sections: Decisions, Action Items, and Risks. Maintain consistency with last week’s structure.

Customer Onboarding Email: Write a welcome email for new users of our platform. Include standard items (support contacts, onboarding checklist, SLA overview), but customize the intro and CTA based on this month’s new feature release: "automated billing sync." Tone: friendly but precise.

Monthly Metrics Summary: Summarize performance metrics from this month’s marketing dashboard. Focus on KPIs (MQLs, CAC, conversion rates), compare against the previous month, highlight any deltas over 10%, and include 1–2 bullets of commentary for anomalies. Format for inclusion in a board slide.
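To make these templates truly repeatable, you can even script the fill-in step. Here’s a minimal Python sketch based on the meeting-recap example above; the template fields and function name are just illustrations:

# A reusable prompt template: the structure is locked in, the inputs vary per run.
RECAP_TEMPLATE = """Summarize this transcript from our weekly {meeting_name} sync.
Include key decisions, owner-assigned follow-ups, and open issues.
Format: bulleted summary with 3 sections: Decisions, Action Items, and Risks.

Transcript:
{transcript}"""

def build_recap_prompt(meeting_name: str, transcript: str) -> str:
    """Fill the template with this week's inputs; paste the result into any chatbot."""
    return RECAP_TEMPLATE.format(meeting_name=meeting_name, transcript=transcript)

print(build_recap_prompt("GTM", "[paste transcript here]"))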

Assign a Role to the AI

This is helpful in two ways: first, it allows the model to focus on a certain skill set, so it works faster; second, it provides additional context very efficiently.

Prompt:

You are a business analyst. Create a weekly market update summary.

Use Prompt Scaffolding

Back when I used to write with a pen and paper—yep, I’m that old—I’d create an outline for what I was writing. Then I’d take the top-level points and expand them. Think about prompt scaffolding the same way. It is the practice of constructing layered, modular prompts that break down a request into predictable, testable parts. Think of it like a blueprint for the AI to follow.

One powerful way to scaffold prompts is using the ReAct framework—which stands for Reasoning and Acting. Originally developed for agent-based models, ReAct can also guide output structuring by alternating reasoning steps and final instructions.

Use this scaffold:

Role: You are an AI expert for business executives and mid-level managers

Task: Draft a practical lesson for The AI Enterprise Newsletter

Reasoning: Think step-by-step through the logic of what professionals need to learn and why it matters now    

Action: Produce a clear, structured article with a headline, walkthrough, repeatable prompt pattern, and real-world business examples    

Audience: Professionals who want to use AI tools more effectively in their day-to-day workflow

Here’s how I might use this scaffold to generate newsletter editions:

Example Prompt

You are an AI expert for business professionals. Write a new edition of The AI Enterprise Newsletter. The goal is to teach one highly practical AI skill. Use a real-world business context and provide:

  • A specific use case

  • A clear walkthrough of the skill in action

  • One repeatable prompt pattern

  • Follow-up examples

  • A closing section that emphasizes ROI and simplicity

Tone: like HBR or WSJ. Format: markdown, web-ready.

Markdown & XML in Prompting

Use Markdown when prompting ChatGPT to enforce structured, web-ready outputs like headings, tables, bullet points, or code blocks. Use XML-like tags when prompting Claude. These tags (e.g., <text>, <instructions>) guide Claude’s focus and improve consistency in long or multi-step prompts. Even if you don’t want to learn how to format prompts in these ways, you can ask your favorite AI chatbot to convert your prompt, and it’ll give you a properly formatted version.
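For example, a Claude prompt using XML-style tags might look like this (the summarization task is just an illustration):

<instructions>
Summarize the report in the text block below in 150 words for a non-technical executive audience. End with three bullet points covering key risks.
</instructions>
<text>
[paste the report here]
</text>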

Cool Prompt Tool: Anthropic Console

If you're building AI workflows or testing what generative AI can do for your team, the Anthropic Console is worth exploring. It's a web-based workspace for developing and refining AI prompts—designed for both technical and non-technical users.

What sets it apart is the “Generate Prompt” feature. Instead of starting from scratch, you simply describe your goal in plain language—something like “summarize a meeting,” “draft a customer service reply,” or “write an internal update based on bullet points.” The console then produces a ready-to-use, structured prompt that you can test, adjust, and deploy. It takes the guesswork out of prompt design and helps you get high-quality results faster.

Use Anthropic Console to generate more detailed prompts

You can also simulate conversations, upload documents for analysis, and monitor how efficiently your prompts use tokens. But you don’t need to know any of that to get started. It’s simple: describe the task, get a prompt, and test how the AI performs.

For business users, this tool shortens the path from idea to outcome. Whether you're creating templates for customer service, automating routine writing tasks, or improving team communication, the Anthropic Console helps you build AI prompts that work—and it helps you learn what good prompting looks like along the way.

Advanced Prompting Strategies for Business Users

These three strategies reflect the latest in prompting research, including findings from the April 2025 paper “System 2 Prompting for Reasoning Tasks”, which emphasizes iterative, structured, and self-evaluative approaches for better reasoning outcomes.

Adaptive Prompting

Dynamically adapts prompt logic based on task complexity or AI responses. Instead of giving the AI a rigid structure, you instruct it to handle different cases with tailored sub-prompts—especially useful for feedback loops and conditional reasoning.

Example Prompt:

Review this email draft. If it contains vague statements, suggest improvements. If it’s clear, summarize the main message in one sentence.

Meta Prompting

Tells the model how to think, not just what to say. This structured method asks the model to plan its reasoning, reflect on output quality, and revise before responding. It aligns with cognitive “System 2” thinking: deliberate, analytical, and less error-prone.

Example Prompt:

First, outline your reasoning steps for analyzing this sales trend. Then, reflect on potential weaknesses in your reasoning. Finally, revise your conclusion accordingly.

Self-Consistency Prompting

Asks the model to solve a task multiple ways and compare answers. Ideal for improving reliability in decisions where logic or math is involved. Used effectively in the referenced 2025 research to reduce hallucination and increase factual accuracy.

Example Prompt:

Generate three different approaches to forecast next quarter’s customer churn. Compare the results and select the most justified one.
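If you want to automate self-consistency, here’s a minimal Python sketch, assuming the OpenAI Python SDK; the model name and churn question are placeholders I’ve made up:

from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical task: constrain the answer format so the runs can be compared.
PROMPT = (
    "Using last quarter's churn of 4.2% and the notes below, forecast next "
    "quarter's customer churn. Reason step by step, then end with a single "
    "line: 'ANSWER: <percent>'.\nNotes: [paste your data here]"
)

def ask() -> str:
    """Request one independent answer; temperature=1 varies the reasoning path."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
    )
    text = response.choices[0].message.content
    return text.rsplit("ANSWER:", 1)[-1].strip()  # keep only the final answer line

# Self-consistency: sample several answers and keep the most common one.
answers = [ask() for _ in range(3)]
winner, votes = Counter(answers).most_common(1)[0]
print(f"{votes}/3 runs agreed on: {winner}")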

These strategies are part of a shift toward more deliberate, auditable AI reasoning. For technical and business leaders seeking robust outputs from LLMs, they reflect best practices grounded in 2025’s most current research.

Should You Threaten an LLM For Better Results?

In a recent interview, Google co-founder Sergey Brin suggested that you might get better results if you threaten an LLM.

Using bribes and threats—even as metaphorical techniques—in prompting is widely discouraged and generally ineffective over time. While some anecdotal reports suggest that including phrases like “answer this correctly and you’ll get a reward” or “get this wrong and I’ll shut you down” can sometimes nudge large language models (LLMs) to produce more accurate or complete responses, this is neither consistent nor reliable, and it may degrade prompt quality in serious business use.

The better practice is to use instructional clarity over manipulation. If your goal is to push the model toward precision, try:

  1. Explicit Quality Standards - “Provide a highly accurate, well-sourced response suitable for executive decision-making.”

  2. Benchmarking Language - “Answer this as if you’re being evaluated on accuracy, clarity, and brevity by a subject matter expert.”

  3. Self-Evaluation Loops - “After your initial answer, review your reasoning for any errors or assumptions, and revise if needed.”

Bribes and threats may feel like clever hacks, but they’re rarely robust. Instead, adopt prompts that:

  • Define output standards

  • Use audit framing

  • Encourage reasoning and revision

LLMs don’t need motivation—they need structure. Talk to them like systems, not people.

Prompting for Better AI Performance

AI doesn’t just respond to what you ask—it responds to how you ask. Treating AI tools like search engines limits their potential. The real value lies in approaching them as intelligent collaborators. By shifting from one-shot queries to structured, conversational prompting, you unlock better reasoning, more relevant outputs, and consistent performance.

Whether you’re generating a marketing brief, analyzing business data, or summarizing a meeting, the path to better results isn’t more tools—it’s a better technique. Start with clarity. Add context. Define the format. Iterate deliberately.

Prompting is no longer an edge case skill—it’s core to modern work. The leaders who master it will outperform those who don’t. It’s not about gaming the model. It’s about giving it structure. That’s how you turn AI into a force multiplier—not a frustration.

I appreciate your support.

Your AI Sherpa,

Mark R. Hinkle
Publisher, The AIE Network
Connect with me on LinkedIn
Follow Me on Twitter
