Part 3 of 6

What is Prompt Engineering?

Prompt engineering is the art and science of crafting input text (prompts) that get the best possible outputs from AI language models. It's the primary way you "program" modern AI systems.

Unlike traditional programming where you write explicit instructions in code, with AI you write natural language instructions that guide the model toward your desired outcome. The quality of your prompts directly determines the quality of the AI's responses.

Think of it this way: AI models are incredibly capable but they need clear direction. A vague prompt gets a vague response. A well-crafted prompt gets exactly what you need.

The Prompt Engineering Process

Writing a good prompt is rarely a one-shot activity. Treat it as an iterative loop:

  1. Draft — Write your best first attempt at a prompt for the task.
  2. Test — Run it against a representative set of real inputs (aim for 10–20 examples that cover typical cases and likely edge cases).
  3. Evaluate — Score the outputs against your success criteria. Was the format right? Was the answer accurate? Did it stay on topic?
  4. Refine — Adjust the prompt to fix the failure cases you found, then repeat.

At scale, prompt engineering becomes an engineering discipline in its own right. Version-control your prompts the same way you version-control code — a prompt is a first-class artifact that determines product behavior. For high-volume features, systematic A/B testing of prompt variants (measuring accuracy, hallucination rate, or task completion) is worth the investment. A 5% improvement in a prompt that runs a million times a day is significant.
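The draft–test–evaluate–refine loop can be sketched as a tiny harness. Everything here is illustrative: `runPrompt` stands in for whatever model call your stack uses, and `checkOutput` encodes your own success criteria.

```javascript
// Minimal prompt-evaluation harness (illustrative sketch).
// runPrompt: (prompt) => output, your actual model call.
// checkOutput: (output, testCase) => boolean, your success criteria.
function evaluatePrompt(promptTemplate, testCases, runPrompt, checkOutput) {
  const failures = [];
  for (const testCase of testCases) {
    const prompt = promptTemplate.replace("{input}", testCase.input);
    const output = runPrompt(prompt);
    if (!checkOutput(output, testCase)) failures.push({ testCase, output });
  }
  return {
    passRate: (testCases.length - failures.length) / testCases.length,
    failures, // inspect these to drive the next refinement
  };
}
```

Re-run the same test set after every refinement and only adopt a new prompt version when its pass rate improves.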

Anatomy of an Effective Prompt

Great prompts typically include several key components. You won't always need all six, but knowing each one helps you diagnose why a prompt isn't working:

  1. Role / Persona — Tell the model what perspective to adopt. "You are a senior JavaScript developer reviewing code for a production system." This primes the model's tone, expertise level, and area of focus.
  2. Context / Background — Provide the information the model needs that it wouldn't otherwise have. Your app's tech stack, the user's subscription tier, a relevant policy document.
  3. Task instruction — A clear, specific statement of what you want done. "Summarize", "Classify", "Translate", "Rewrite in the style of…".
  4. Input data (delimited) — The actual content to work on, separated from the instruction with delimiters (""", ---, or XML-style tags) to reduce the risk of prompt injection.
  5. Output format specification — Exactly how you want the result structured: JSON with specific keys, a bulleted list, two sentences maximum, a markdown table.
  6. Constraints / Guardrails — What the model should not do. "Never mention competitor products." "If the question is outside the topic of cooking, politely decline."

Annotated example combining all six components:

// [1] Role
You are a customer support agent for an online bookstore.
Your tone is friendly, concise, and helpful.

// [2] Context
The customer has a Premium membership (free returns, priority shipping).
Today's date is [DATE].

// [3] Task
Answer the customer's question below using only the information provided.
If you cannot answer from the provided information, say so clearly and
offer to escalate to a human agent.

// [4] Input data (delimited)
Customer message:
"""
[CUSTOMER_MESSAGE]
"""

// [5] Output format
Respond in plain text. Keep your reply under 80 words.

// [6] Constraints
Do not discuss pricing, discounts, or promotions.
Do not mention other bookstores.
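Assembling these components can be mechanical. A minimal sketch (`buildPrompt` and its field names are illustrative, not any library's API):

```javascript
// Assemble a prompt from the six components; omitted fields are skipped.
// Input data is always wrapped in triple-quote delimiters.
function buildPrompt({ role, context, task, input, format, constraints }) {
  const sections = [
    role,
    context,
    task,
    input !== undefined ? `Input:\n"""\n${input}\n"""` : undefined,
    format,
    constraints,
  ];
  // Keep only the sections that were provided, in anatomy order.
  return sections.filter(Boolean).join("\n\n");
}

const prompt = buildPrompt({
  role: "You are a support agent.",
  task: "Answer the question.",
  input: "Where is my order?",
});
```

Keeping the components as separate fields also makes them easy to vary independently when you test prompt variants.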

Core Prompting Techniques

1. Be Specific and Clear

Vague prompts get vague results. The more specific you are, the better.

Examples:

❌ Bad (too vague):
"Write about coffee"

✅ Better:
"Write a 150-word blog intro about the health benefits of drinking coffee,
targeting health-conscious millennials. Tone should be informative but
conversational."

❌ Bad:
"Summarize this"

✅ Better:
"Summarize this customer review in one sentence, focusing on the main complaint
and whether they recommend the product."

2. Assign a Role or Persona

Tell the AI what perspective or expertise to adopt. This often dramatically improves response quality.

// Generic response
"Explain machine learning"

// With role - much better response
"You are a patient teacher explaining to a 10-year-old. Explain machine
learning using simple analogies they would understand."

// Another role example
"You are a senior JavaScript developer reviewing code. Point out potential
bugs and suggest improvements."

3. Provide Examples (Few-Shot Prompting)

Show the AI examples of the format or style you want. This is incredibly powerful.

// Zero-shot (no examples)
"Extract product name and price from this text"

// Few-shot (with examples) - much more accurate
"Extract product name and price from product descriptions. Return as JSON.

Examples:
Input: "The UltraWidget 3000 is on sale for just $29.99!"
Output: {"product": "UltraWidget 3000", "price": 29.99}

Input: "Get the ProGadget today - only $149"
Output: {"product": "ProGadget", "price": 149.00}

Now extract from this:
Input: "
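If you build few-shot prompts programmatically, keep the examples as data and assemble the prompt from them. A sketch (`buildFewShotPrompt` is a hypothetical helper, not a library function):

```javascript
// Build a few-shot prompt from (input, output) example pairs.
// Outputs are serialized with JSON.stringify so the model sees
// exactly the JSON shape you expect back.
function buildFewShotPrompt(instruction, examples, newInput) {
  const shots = examples
    .map((ex) => `Input: ${ex.input}\nOutput: ${JSON.stringify(ex.output)}`)
    .join("\n\n");
  return `${instruction}\n\nExamples:\n${shots}\n\nNow extract from this:\nInput: ${newInput}`;
}
```

Storing examples as data also lets you grow or swap the example set as you find new edge cases, without rewriting the prompt text.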

4. Break Down Complex Tasks (Chain of Thought)

For complex problems, ask the AI to think step by step. This often improves reasoning on multi-part tasks.

❌ Without chain of thought:
"Is this customer review positive or negative?
'The product arrived late but the quality exceeded expectations.'"

✅ With chain of thought:
"Analyze this customer review step by step:
1. Identify positive points
2. Identify negative points
3. Determine overall sentiment
4. Provide final rating (1-5)

Review: 'The product arrived late but the quality exceeded expectations.'"
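If you reuse this pattern across tasks, the step scaffold can be generated from a list. A small illustrative helper:

```javascript
// Wrap any input in an explicit step-by-step analysis scaffold.
// The steps are supplied by the caller; numbering is generated.
function chainOfThoughtPrompt(taskDescription, steps, input) {
  const numbered = steps.map((s, i) => `${i + 1}. ${s}`).join("\n");
  return `${taskDescription} step by step:\n${numbered}\n\nInput: '${input}'`;
}
```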

5. Use Delimiters for Structure

Clearly separate different parts of your prompt, especially user input from instructions.

// Using delimiters to reduce prompt-injection risk
You are a product description generator.

Instructions:
- Keep descriptions under 100 words
- Highlight key benefits
- Use enthusiastic but professional tone

Product details:
"""
[PRODUCT_DETAILS]
"""

Generate the description:
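One wrinkle: if the user's text itself contains your delimiter, it can close the block early and smuggle instructions into your prompt. A sketch of defensive wrapping (stripping the delimiter is one simple mitigation, not a complete defense against injection):

```javascript
// Wrap untrusted text in delimiters, stripping any delimiter sequences
// the text itself contains so it cannot close the block early.
function delimit(untrustedText, delimiter = '"""') {
  const cleaned = untrustedText.split(delimiter).join("");
  return `${delimiter}\n${cleaned}\n${delimiter}`;
}
```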

6. Specify Output Format

Tell the AI exactly how you want the response formatted. This is critical for structured data.

// Specifying JSON output
"Analyze the sentiment of this customer feedback and return ONLY valid JSON
with this exact structure (no additional text):

{
  "sentiment": "positive" | "negative" | "neutral",
  "confidence": 0-100,
  "key_themes": ["theme1", "theme2"],
  "summary": "one sentence summary"
}

Customer feedback: "
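Even with an explicit format request, models sometimes wrap the JSON in markdown fences or add stray prose, so parse defensively on the way back. A sketch whose validation rules mirror the schema above:

```javascript
// Defensively parse a model response that should be JSON:
// strip markdown fence lines, parse, and verify the expected fields.
function parseSentimentResponse(raw) {
  // Remove fence lines (three backticks, optionally followed by a tag).
  const stripped = raw.replace(/^`{3}[a-z]*\s*$/gim, "").trim();
  let data;
  try {
    data = JSON.parse(stripped);
  } catch {
    return null; // unparseable: caller can retry or fall back
  }
  const validSentiments = ["positive", "negative", "neutral"];
  if (!validSentiments.includes(data.sentiment)) return null;
  if (typeof data.confidence !== "number") return null;
  return data;
}
```

Returning null on any mismatch gives the caller one clear signal to retry the request or fall back to a default.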

7. Add Constraints and Guardrails

Tell the AI what NOT to do, or set boundaries on its behavior.

// Adding constraints
"You are a homework helper for 8th grade math students.

Rules:
- Never give direct answers to homework problems
- Instead, guide students through the problem-solving process
- Use encouraging, patient language
- If a question is not about math, politely decline and redirect
- If a problem seems beyond 8th grade level, say so

Student question: "

Advanced Prompting Patterns

1. Multi-Step Prompts

Break complex tasks into multiple AI calls, where each step informs the next.

// Step 1: Extract key information
const keyInfo = await ai.generate(`
  Extract the main topics discussed in this article:
  ${article}
`);

// Step 2: Generate summary using extracted info
const summary = await ai.generate(`
  Write a summary focusing on these topics: ${keyInfo}

  Article: ${article}
`);

2. Self-Critique / Reflection

Ask the AI to review and improve its own output.

// First generation
const draft = await ai.generate("Write a product description for a smartwatch");

// Self-critique
const final = await ai.generate(`
  Review this product description and improve it:

  Original: ${draft}

  Check for:
  - Clarity and readability
  - Compelling benefits
  - Any marketing clichés to remove
  - Grammar and tone

  Provide an improved version.
`);

3. Retrieval-Augmented Generation (RAG)

Provide relevant context/data in your prompt that the AI can reference.

// RAG pattern - inject your own data
const relevantDocs = searchDatabase(userQuery); // Your search logic

const response = await ai.generate(`
  Answer the user's question using ONLY the information provided below.
  If the answer is not in the provided information, say "I don't have
  information about that."

  Context:
  ${relevantDocs.join('\n---\n')}

  User question: ${userQuery}

  Answer:
`);

Prompt Engineering for Web Development

Code Generation Prompts

// Specific, with constraints
"Generate a React component that:
- Displays a list of blog posts
- Each post shows title, excerpt, and publish date
- Uses TypeScript with proper types
- Includes loading and error states
- Styled with Tailwind CSS
- Include prop types and JSDoc comments

Return only the component code, no explanations."

Data Extraction Prompts

// Extracting structured data from unstructured text
"Extract structured information from this user message and return as JSON:

Required fields:
{
  "intent": "question" | "complaint" | "feedback" | "request",
  "product_mentioned": string or null,
  "urgency": "low" | "medium" | "high",
  "action_required": boolean,
  "summary": string (max 50 chars)
}

User message: ${userMessage}"
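Extraction output should be validated before anything downstream trusts it. A sketch that mirrors the required fields above (`isValidExtraction` is a hypothetical helper):

```javascript
// Validate a model's extraction result against the required schema
// before trusting it downstream.
const INTENTS = ["question", "complaint", "feedback", "request"];
const URGENCIES = ["low", "medium", "high"];

function isValidExtraction(data) {
  return (
    data !== null &&
    typeof data === "object" &&
    INTENTS.includes(data.intent) &&
    (data.product_mentioned === null ||
      typeof data.product_mentioned === "string") &&
    URGENCIES.includes(data.urgency) &&
    typeof data.action_required === "boolean" &&
    typeof data.summary === "string" &&
    data.summary.length <= 50 // enforce the max-50-chars constraint
  );
}
```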

Content Moderation Prompts

// Safety and moderation
"Analyze this user-generated content for policy violations.

Check for:
- Spam or promotional content
- Offensive language
- Personal information (emails, phone numbers, addresses)
- Misinformation or scams

Return JSON:
{
  "is_safe": boolean,
  "violations": string[],
  "confidence": 0-100,
  "sanitized_version": string (if needed)
}

Content: ${userContent}"
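A cheap local pre-check can catch obvious personal information before (or alongside) the model-based pass. The regexes below are rough heuristics for illustration, not a complete PII detector:

```javascript
// Cheap local pre-check for obvious personal information.
// These patterns are rough heuristics; a model-based or dedicated
// PII-detection pass should still run for anything it misses.
function findObviousPII(text) {
  const findings = [];
  if (/[\w.+-]+@[\w-]+\.[\w.]+/.test(text)) findings.push("email");
  if (/\+?\d[\d\s().-]{8,}\d/.test(text)) findings.push("phone");
  return findings;
}
```

Running a check like this first also means you avoid sending clearly unsafe content to the model at all.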

Common Prompting Mistakes to Avoid

  • Being too vague – "Write something good" → Be specific about what you want
  • Not providing context – AI doesn't know your business domain unless you tell it
  • Overcomplicating – Sometimes simple prompts work best; don't overengineer
  • Ignoring output format – If you need JSON, explicitly request it
  • Not iterating – First prompt rarely perfect; refine based on results
  • Trusting blindly – Always validate AI outputs, especially for critical tasks
  • Not setting boundaries – Tell AI what NOT to do, not just what to do

Testing and Optimizing Prompts

Systematic approach to prompt improvement:

  1. Create a test set – Collect 10-20 example inputs that represent real use cases
  2. Establish success criteria – What makes a good output? (accuracy, format, tone, etc.)
  3. Test variations – Try different phrasings, structures, examples
  4. Measure performance – Track how often you get the desired result
  5. Iterate – Refine based on failures and edge cases
  6. Document – Keep a library of proven prompts for different tasks

A/B testing example:

// Version A - Direct
const promptA = `Summarize this article in 2 sentences: ${article}`;

// Version B - With constraints
const promptB = `Summarize this article in exactly 2 sentences.
Focus on the main conclusion and key supporting point: ${article}`;

// Test both, measure which produces better summaries
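To "measure which produces better summaries" you need at least one automatic check per criterion. A sketch for the two-sentence constraint (counting terminal punctuation is approximate, but good enough to compare variants):

```javascript
// Approximate sentence count: counts runs of terminal punctuation
// followed by whitespace or end-of-string. A rough heuristic.
function countSentences(text) {
  return (text.match(/[.!?]+(\s|$)/g) || []).length;
}

// Fraction of outputs that meet the "exactly 2 sentences" constraint.
function complianceRate(outputs) {
  const passing = outputs.filter((o) => countSentences(o) === 2).length;
  return passing / outputs.length;
}
```

Run both prompt versions over the same test set and compare compliance rates (plus any accuracy checks you have) before picking a winner.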

Key Takeaways

  • Prompt engineering is how you "program" AI models—the quality of your prompts determines output quality.
  • Effective prompts combine up to six components: role/persona, context, task instruction, delimited input data, output format, and constraints.
  • Be specific and clear—vague prompts get vague results.
  • Assign roles to get expert perspectives ("You are a senior developer...")
  • Use few-shot prompting (provide examples) for significantly better accuracy.
  • Break down complex tasks into steps (chain of thought reasoning).
  • Use delimiters (""", ---, etc.) to clearly separate instructions from data.
  • Always specify output format, especially for structured data (JSON, tables, etc.).
  • Add constraints and guardrails to prevent unwanted behaviors.
  • Advanced patterns: multi-step prompts, self-critique, RAG (providing your own context).
  • Test prompts systematically with real examples and iterate based on results.
  • Keep a library of proven prompt templates for common tasks.

Now that you know how to craft effective prompts, let's learn how to integrate AI APIs into your web applications.