LLM Prompt Optimizer
This skill transforms weak, vague, or inconsistent prompts into precision-engineered instructions that reliably produce high-quality outputs from any LLM (Claude, Gemini, GPT-4, Llama, etc.).
Overview
This skill applies systematic prompt engineering frameworks, from zero-shot to few-shot, chain-of-thought, and structured output patterns.
When to Use This Skill
- Use when a prompt returns inconsistent, vague, or hallucinated results
- Use when you need reliably structured/JSON output from an LLM
- Use when designing system prompts for AI agents or chatbots
- Use when you want to reduce token usage without sacrificing quality
- Use when implementing chain-of-thought reasoning for complex tasks
- Use when prompts work on one model but fail on another
Step-by-Step Guide
1. Diagnose the Weak Prompt
Before optimizing, identify which problem pattern applies:
| Problem | Symptom | Fix |
|---|---|---|
| Too vague | Generic, unhelpful answers | Add role + context + constraints |
| No structure | Unformatted, hard-to-parse output | Specify output format explicitly |
| Hallucination | Confident wrong answers | Add "say I don't know if unsure" |
| Inconsistent | Different answers each run | Add few-shot examples |
| Too long | Verbose, padded responses | Add length constraints |
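The diagnosis table above can be expressed as a small lookup; `diagnose` and `FIXES` are hypothetical names for illustration, not part of any library:

```python
# Map each problem pattern from the table above to its recommended fix.
FIXES = {
    "too_vague": "Add role + context + constraints",
    "no_structure": "Specify output format explicitly",
    "hallucination": 'Add "say I don\'t know if unsure"',
    "inconsistent": "Add few-shot examples",
    "too_long": "Add length constraints",
}

def diagnose(problem: str) -> str:
    """Return the recommended fix for a known problem pattern."""
    return FIXES.get(problem, "No known pattern; inspect the prompt manually")
```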
2. Apply the RSCIT Framework
Every optimized prompt should have:
- R — Role: Who is the AI in this interaction?
- S — Situation: What context does it need?
- C — Constraints: What are the rules and limits?
- I — Instructions: What exactly should it do?
- T — Template: What should the output look like?
Contrast a weak prompt like "Summarize this article" (no role, context, or format) with an optimized version that names a role, supplies the situation, states constraints, gives precise instructions, and defines the output template.
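A minimal sketch of the before/after contrast; `build_rscit_prompt` is a hypothetical helper, and the editor/newsletter scenario is invented for illustration:

```python
def build_rscit_prompt(role, situation, constraints, instructions, template):
    """Assemble a prompt from the five RSCIT components."""
    return (
        f"Role: {role}\n"
        f"Situation: {situation}\n"
        f"Constraints: {constraints}\n"
        f"Instructions: {instructions}\n"
        f"Output template: {template}"
    )

# Before (weak): the model must guess audience, depth, and format.
weak = "Summarize this article."

# After (optimized): every RSCIT component is explicit.
optimized = build_rscit_prompt(
    role="You are a technical editor for a developer newsletter.",
    situation="The article below covers a new database release.",
    constraints="Maximum 100 words. No marketing language.",
    instructions="Summarize the three most important changes.",
    template="A markdown list with exactly 3 bullet points.",
)
```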
3. Chain-of-Thought (CoT) Pattern
For reasoning tasks, instruct the model to think step-by-step:
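One common shape for this instruction, sketched as a reusable suffix (`with_chain_of_thought` is a hypothetical helper name):

```python
COT_SUFFIX = (
    "Think step by step. Show your reasoning before the final answer, "
    "then give the final answer on a line starting with 'Answer:'."
)

def with_chain_of_thought(task: str) -> str:
    """Append an explicit step-by-step reasoning instruction to a task."""
    return f"{task}\n\n{COT_SUFFIX}"

prompt = with_chain_of_thought(
    "A train leaves at 14:05 and arrives at 17:50. How long is the trip?"
)
```

Asking for the final answer on a marked line also makes the reply easy to parse programmatically.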
4. Few-Shot Examples Pattern
Provide 2-3 examples to establish the pattern:
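A sketch of assembling few-shot examples into a prompt; `few_shot_prompt` and the sentiment task are illustrative, not prescribed:

```python
def few_shot_prompt(task, examples, query):
    """Build a prompt that shows input/output pairs before the real query."""
    shots = "\n\n".join(
        f"Input: {inp}\nOutput: {out}" for inp, out in examples
    )
    return f"{task}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = few_shot_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    examples=[
        ("The battery lasts all day.", "positive"),
        ("It broke after one week.", "negative"),
    ],
    query="Setup took five minutes and everything just worked.",
)
```

Ending the prompt with a bare "Output:" invites the model to complete the established pattern rather than add commentary.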
5. Structured JSON Output Pattern
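One common shape for this pattern, as a sketch with a made-up schema: state the exact keys and types, demand JSON only, and validate the reply with `json.loads` before trusting it:

```python
import json

JSON_PROMPT = (
    "Extract the product name and price from the text below.\n"
    "Return ONLY valid JSON matching this schema, with no extra prose:\n"
    '{"name": "<string>", "price_usd": <number>}\n\n'
    "Text: The new UltraWidget retails for $49.99."
)

def parse_reply(reply: str) -> dict:
    """Validate that the model's reply is parseable JSON with the expected keys."""
    data = json.loads(reply)  # raises a ValueError subclass on malformed output
    assert {"name", "price_usd"} <= data.keys(), "missing required keys"
    return data

# Simulated model reply, for illustration only:
result = parse_reply('{"name": "UltraWidget", "price_usd": 49.99}')
```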
6. Reduce Hallucination Pattern
7. Prompt Compression Techniques
Reduce token count without losing effectiveness:
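A rough sketch of mechanical compression; the filler phrases and replacements below are illustrative examples, not a canonical list:

```python
import re

# Illustrative filler phrases mapped to shorter equivalents.
REPLACEMENTS = {
    "please make sure that you": "",
    "it is very important that": "",
    "in order to": "to",
}

def compress(prompt: str) -> str:
    """Strip or shorten common filler phrases, then collapse whitespace."""
    out = prompt
    for phrase, repl in REPLACEMENTS.items():
        out = re.sub(re.escape(phrase), repl, out, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", out).strip()

before = "Please make sure that you summarize the text in order to help readers."
after = compress(before)
```

Always re-test the compressed prompt on the same inputs; shorter is only better if quality holds.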
Best Practices
- ✅ Do: Always specify the output format (JSON, markdown, plain text, bullet list)
- ✅ Do: Use delimiters (```, ---) to separate instructions from content
- ✅ Do: Test prompts with edge cases (empty input, unusual data)
- ✅ Do: Version your system prompts in source control
- ✅ Do: Add "think step by step" for math, logic, or multi-step tasks
- ❌ Don't: Use negative-only instructions ("don't be verbose") — add positive alternatives
- ❌ Don't: Assume the model knows your codebase context — always include it
- ❌ Don't: Use the same prompt across different models without testing — they behave differently
Prompt Audit Checklist
Before using a prompt in production:
- [ ] Does it have a clear role/persona?
- [ ] Is the output format explicitly defined?
- [ ] Are edge cases handled (empty input, ambiguous data)?
- [ ] Is the length appropriate (not too long/short)?
- [ ] Has it been tested on 5+ varied inputs?
- [ ] Is hallucination risk addressed for factual tasks?
Troubleshooting
Problem: Model ignores format instructions
Solution: Move format instructions to the END of the prompt, after examples. Use strong language: "You MUST return only valid JSON."
Problem: Inconsistent results between runs
Solution: Lower the temperature setting (0.0-0.3 for factual tasks). Add more few-shot examples.
Problem: Prompt works in playground but fails in production
Solution: Check if system prompt is being sent correctly. Verify token limits aren't being exceeded (use a token counter).
Problem: Output is too long
Solution: Add explicit word/sentence limits: "Respond in exactly 3 bullet points, each under 20 words."
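Two of the fixes above can be sketched together: place the format rule last, after everything else, and make the length limit explicit (`finalize_prompt` is a hypothetical helper):

```python
def finalize_prompt(body: str, format_rule: str) -> str:
    """Place the format rule LAST so it is the most recent instruction seen."""
    return f"{body}\n\n{format_rule}"

prompt = finalize_prompt(
    body="Summarize the incident report below.\n<report text here>",
    format_rule=(
        "You MUST respond in exactly 3 bullet points, each under 20 words. "
        "Return nothing else."
    ),
)
```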