Keep conversations short
LLM output quality tends to degrade as the context window fills up, which means long conversations can lead to worse results. Open new conversation windows often and keep each prompt fairly self-contained and focused, rather than running one long, open-ended conversation.
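If you talk to a model through an API or a script rather than a chat window, the same principle applies. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name, file path, and tasks are placeholders, not a prescribed setup. Each task runs as its own short, self-contained exchange instead of being appended to one ever-growing history.

```python
# Minimal sketch: each task is a fresh, single-turn exchange that carries its own
# context, rather than one long conversation whose history keeps growing.
# Assumes the OpenAI Python SDK; the model name and file path are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(task: str, context: str) -> str:
    """Run one self-contained request: everything the model needs is in this prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you normally use
        messages=[{"role": "user", "content": f"{context}\n\nTask: {task}"}],
    )
    return response.choices[0].message.content


# Two focused requests instead of one long, open-ended conversation.
source = open("date_utils.py").read()  # placeholder file
print(ask("Write a unit test for parse_date()", source))
print(ask("Make parse_date() accept ISO 8601 timestamps", source))
```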
Don't ask for too many things at once
Break work down into smaller steps and ask for one thing at a time, rather than requesting a batch of new features or implementations at once. Begin the conversation with “Start with…” or a similar phrase to signal that you’ll be making multiple requests in sequence.
Confirm successful output
When an output looks good, let the LLM know with something like “that looks good” or “let’s use that” before moving on to the next step. Confirming good output can prevent the model from unnecessarily reworking something it already got right.
Avoid negative prompting
Focus your prompts on what the behavior should be rather than on what it should not be. LLMs tend to follow positive instructions better than negative ones, so describe the desired state instead of listing things to avoid. For example, “return only valid JSON” usually works better than “don’t add any explanation around the JSON.”
Include examples of what you're looking for
Supply examples that show the sort of output you’d like from the agent: links to documents, file uploads, pasted text, and so on.
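When you’re prompting through an API, the same idea becomes few-shot prompting: put a couple of worked input/output pairs ahead of the real request so the model can match their shape. A minimal sketch follows, again assuming the OpenAI Python SDK; the model name and the changelog task are placeholders chosen only for illustration.

```python
# Minimal sketch of supplying examples: two worked input/output pairs precede the
# real request, so the model can match their format. Assumes the OpenAI Python SDK;
# the model name and the example task are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system", "content": "Turn a release note into a one-line changelog entry."},
        # Examples of the output we want:
        {"role": "user", "content": "Fixed a crash when saving empty files."},
        {"role": "assistant", "content": "fix: prevent crash when saving empty files"},
        {"role": "user", "content": "Added dark mode to the settings page."},
        {"role": "assistant", "content": "feat: add dark-mode toggle in settings"},
        # The actual request:
        {"role": "user", "content": "Upgraded the database driver to v3."},
    ],
)
print(response.choices[0].message.content)
```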
Ask "why?"
Asking why the LLM chose the approach it did often yields better results, in addition to helping you learn. The LLM will frequently come to “realizations” while reflecting on its choices and make improvements from there.