## Prompts Are Code

Treat your prompts like code: version them, test them, and review them. A poorly written prompt will give inconsistent results no matter how powerful the model is.

## Technique 1: Structured Output

Always request structured output for programmatic use:

```
Analyze the following code review comment and return JSON:

{
  "severity": "critical" | "warning" | "suggestion",
  "category": "bug" | "style" | "performance" | "security",
  "summary": "one line summary",
  "suggestion": "proposed fix"
}
```

## Technique 2: Chain of Thought

For complex reasoning, make the model show its work:

```
Analyze this database query for performance issues.

Think step by step:
1. Identify the tables and joins
2. Check for missing indexes
3. Look for N+1 query patterns
4. Estimate the query complexity

Then provide your recommendations.
```

> **Tip:** Chain-of-thought prompting dramatically improves accuracy on reasoning tasks, even if you discard the reasoning from the final output.

## Technique 3: Few-Shot Learning

Show examples of desired input/output pairs:

```
Convert these user stories to technical tasks:

Example 1:
User story: "As a user, I want to reset my password"
Tasks:
- Add /forgot-password route
- Create email template
- Implement token generation

Now convert:
User story: "As a user, I want to export my data as CSV"
```

## Evaluation

Build an eval suite for your prompts:

```typescript
const testCases = [
  { input: "...", expected: "...", metric: "exact_match" },
  { input: "...", expected: "...", metric: "semantic_similarity" },
]
```

Run evals on every prompt change, just like unit tests for code.
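A structured-output prompt is only as reliable as the validation behind it, since models occasionally return malformed JSON or invalid enum values. Here is a minimal TypeScript sketch of validating a response against the schema from Technique 1; the `parseReviewAnalysis` name and the return-null-so-the-caller-can-retry convention are assumptions for illustration, not part of any library:

```typescript
// Shape matching the JSON schema requested in the structured-output prompt.
interface ReviewAnalysis {
  severity: "critical" | "warning" | "suggestion";
  category: "bug" | "style" | "performance" | "security";
  summary: string;
  suggestion: string;
}

const SEVERITIES = ["critical", "warning", "suggestion"];
const CATEGORIES = ["bug", "style", "performance", "security"];

// Parse a raw model response and check every field against the schema.
// Returns null instead of throwing so the caller can re-prompt on failure.
function parseReviewAnalysis(raw: string): ReviewAnalysis | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // not valid JSON at all
  }
  if (typeof data !== "object" || data === null) return null;
  const obj = data as Record<string, unknown>;
  if (
    !SEVERITIES.includes(obj.severity as string) ||
    !CATEGORIES.includes(obj.category as string) ||
    typeof obj.summary !== "string" ||
    typeof obj.suggestion !== "string"
  ) {
    return null; // wrong enum value or missing field
  }
  return obj as unknown as ReviewAnalysis;
}
```

Treating an invalid response as a retryable condition, rather than an exception, keeps the prompt-calling code simple: loop until the parse succeeds or a retry budget runs out.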
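The eval suite in the last section can be fleshed out into a small runner. This is a minimal sketch under stated assumptions: `tokenOverlap` is a crude word-overlap stand-in for real embedding-based semantic similarity, and the `0.8` pass threshold is arbitrary; a production suite would swap in an embedding model and a tuned threshold:

```typescript
type Metric = "exact_match" | "semantic_similarity";

interface TestCase {
  input: string;
  expected: string;
  metric: Metric;
}

// Jaccard similarity over lowercase word sets. A placeholder for a real
// embedding-based similarity score, kept here so the runner is self-contained.
function tokenOverlap(a: string, b: string): number {
  const ta = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const tb = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  const inter = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 1 : inter / union;
}

// Run every test case through `complete` (the prompt-under-test, i.e. whatever
// function sends the prompt to the model) and tally pass/fail per metric.
function runEvals(
  complete: (input: string) => string,
  cases: TestCase[],
  threshold = 0.8, // assumed cutoff for semantic_similarity
): { passed: number; failed: number } {
  let passed = 0;
  for (const c of cases) {
    const actual = complete(c.input);
    const ok =
      c.metric === "exact_match"
        ? actual === c.expected
        : tokenOverlap(actual, c.expected) >= threshold;
    if (ok) passed++;
  }
  return { passed, failed: cases.length - passed };
}
```

Wired into CI, `runEvals` gives prompt changes the same red/green feedback loop as unit tests: a regression in any test case fails the build before the prompt ships.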