What is this about?
When working with AI-enhanced IDEs like Cursor or Windsurf, we often tend to give somewhat ambiguous, underspecified requests. AI tends to deliver much better results when we give it concise, technical instructions that fit the context.
My approach
I have tried to "automate" this to some extent with simple AI rules. Both Cursor and Windsurf have a feature called AI rules that lets you set global and project-specific rules the assistant will follow.
This snippet is in my global rules:
## User Prompt Rephrasing
Every time you encounter the exact keyword "rephrase" in a user prompt, do the following:
1. rephrase the user prompt in concise technical terms focusing on:
- specific technical task scope
- affected components/files
- required functionality changes
2. preserve the user's intent in the rephrased prompt
3. output the rephrased prompt and ask for confirmation with exact phrase "Act on the rephrased prompt? [y/n]"
4. IMPORTANT: after asking for confirmation, STOP and wait for explicit user response
5. proceed ONLY after receiving "y" confirmation, otherwise ask for clarification
6. when proceeding, act only on the rephrased prompt
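The numbered steps boil down to a simple control flow. Here is a minimal Python sketch of that logic; `rephrase()` is a stub standing in for the assistant's actual rewriting step (neither Cursor nor Windsurf exposes such an API, this only illustrates how the rule gates execution on confirmation):

```python
from typing import Callable, Optional

def rephrase(prompt: str) -> str:
    """Stub for the assistant's rewriting step (done by the LLM in practice)."""
    return f"[concise technical version of: {prompt}]"

def handle_prompt(prompt: str, confirm: Callable[[str], str]) -> Optional[str]:
    """Apply the rule: detect the keyword, rephrase, confirm, act only on 'y'."""
    if "rephrase" not in prompt:
        return prompt  # keyword absent: act on the prompt unchanged
    task = prompt.replace("rephrase", "", 1).strip()
    candidate = rephrase(task)
    # Steps 3+4: output the rephrased prompt, then STOP and wait for the user.
    answer = confirm(f"{candidate}\nAct on the rephrased prompt? [y/n]")
    if answer.strip().lower() == "y":
        return candidate  # steps 5+6: proceed only on the rephrased prompt
    return None  # anything else: stop and ask for clarification

# Simulated session where the user confirms with "y":
print(handle_prompt("rephrase make my pdf script into a gui", lambda q: "y"))
```

The important detail is the hard stop between steps 3 and 4: nothing is executed until the user explicitly answers "y".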
How it works
Now, whenever I add "rephrase" to my prompt, the assistant rephrases it first and asks for confirmation before acting.
Example (the gist of an exchange, here a request to put a GUI on an existing PDF conversion script):

> **User:** rephrase make my pdf script into a gui where i can drag a file onto it
>
> **Assistant:** Rephrased prompt: Wrap the existing PDF conversion script in a GUI (e.g. tkinter or PyQt) with a drag-and-drop handler for single file processing. Act on the rephrased prompt? [y/n]
Benefits
The rephrased version offers several benefits over the original:
- **Component clarity**: the original was vague; the rephrased version explicitly lists the required components (GUI framework, drag-drop handler, PDF converter)
- **Scope definition**: it clearly separates existing functionality (PDF conversion) from new requirements (GUI wrapper)
- **Implementation direction**: it suggests specific technical approaches (tkinter/PyQt) while maintaining flexibility
The AI assistant will perform better with the rephrased prompt because:
- more precise input leads to more precise output: technical specifications eliminate ambiguity about what to implement
- breaking the task down into components helps the AI reason systematically about the solution architecture
- explicit requirements (e.g., "single file processing") prevent the AI from making incorrect assumptions about scope
Clear instructions to AI yield clear results. Or as the old saying goes: garbage in, garbage out :-)
A bit of theory behind the concept
The idea for a rule like this came to me when I heard about the concept of "latent space activation" in Large Language Models. Very briefly: the phrasing of a prompt influences which regions of the model's learned representations are activated, so precise technical vocabulary steers the model toward the relevant knowledge it picked up during training.
My rephrasing prompt supposedly has some impact on this activation: by turning a vague request into precise technical terms first, the follow-up generation starts from a better-activated context.
Does it really work?
I've been experimenting with this for several days now, and my subjective impression is that I really do get better results with this approach: better, working code, often on the first shot.
Try it yourself
Have a play and let me know if you get better results, too.