Prompt like a pro

Great prompt writing = great collaboration

LLMs are improving rapidly, and understanding where they are strongest will improve your collaboration with Korey, and with AI agents in general. Consider the following tips the next time you're chatting with Korey.

1. Keep conversations short

LLMs are prone to context window degradation: as a conversation grows and the context window fills up, output quality tends to drop. Open new conversations often and keep each prompt fairly self-contained and focused, rather than carrying on long-running, open-ended conversations.

2. Don't ask for too many things at once

Break work down into smaller steps and ask for one thing at a time, rather than requesting a batch of new features or implementations at once. Begin your conversation with "Start with…" or a similar phrase to signal that you'll be making several requests in sequence (see the sketch after the next tip).

3. Confirm successful output

When an output looks good, tell the LLM so, with something like "that looks good" or "let's use that", before moving to the next step. This prevents unnecessary rework of outputs you were already happy with.
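
Tips 2 and 3 work together as a simple loop: one focused ask, review the result, confirm it, then move on. The sketch below shows the shape of that loop in plain Python; ask_agent() is a hypothetical stand-in for however you send a message to Korey, not a real API.

```python
# Hypothetical sketch of the "one ask at a time, confirm, continue" loop.
# ask_agent() is a placeholder, not a real Korey API.

def ask_agent(prompt: str) -> str:
    """Stand-in for one chat turn; a real agent would return its reply."""
    return f"(agent reply to: {prompt!r})"

# One focused request per turn, in the order you want them handled.
steps = [
    "Start with a data model for user profiles.",
    "Next, add an endpoint that creates a profile.",
    "Finally, add validation for the email field.",
]

for step in steps:
    print(ask_agent(step))
    # Confirm before moving on (tip 3), so the agent builds on accepted
    # work instead of reworking it.
    ask_agent("That looks good - let's use it and move on to the next step.")
```

The wording matters less than the structure: one request per turn, an explicit confirmation, then the next request.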

4. Avoid negative prompting

Focus your prompts on what the behavior should be rather than what it should not be. LLMs tend to do better with positive instructions than negative ones, so describe the state you want (for example, "collapse the sidebar by default") rather than the state you don't ("don't leave the sidebar expanded").

5. Include examples of what you're looking for

Supply examples that show the sort of output you'd like from the agent: links to documents, uploads, pasted text, and so on.
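
Even two or three concrete examples go a long way toward pinning down format and tone. Below is a minimal sketch, in plain Python with no Korey-specific API, of how examples might be folded into a prompt; build_prompt_with_examples() and the sample tickets are purely illustrative.

```python
# Illustrative only: a helper that folds a few input/output examples
# into a prompt so the agent can see the format you expect.

def build_prompt_with_examples(task: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a prompt that shows the agent the output you're looking for."""
    lines = [task, "", "Here are examples of the format I'm looking for:"]
    for source, desired in examples:
        lines += [f"Input: {source}", f"Output: {desired}", ""]
    lines.append("Please follow the same format for the new input below.")
    return "\n".join(lines)

prompt = build_prompt_with_examples(
    task="Summarize each customer ticket in one sentence.",
    examples=[
        ("Ticket: App crashes when exporting PDFs on Windows 11.",
         "Summary: PDF export crashes on Windows 11."),
        ("Ticket: Login emails arrive about 20 minutes late.",
         "Summary: Login confirmation emails are delayed by roughly 20 minutes."),
    ],
)
print(prompt)
```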

6. Ask "why?"

Asking why the LLM chose a particular approach often yields better results, in addition to what it teaches you. The LLM will often come to "realizations" while reflecting on its reasoning and make improvements from there.