Building AI agents is new to most of us, and we all start at a similar level. The great thing is that AI is a very accessible technology: not only developers can build agents, but so can people in marketing, legal, or HR. You can get results quickly by tweaking prompts and instructions. However, after running workshops with hundreds of people, I’ve discovered a few patterns that help write effective agents beyond the first prompts.
When I start building an agent, I write down the problem. So many people ask, “What can I use an agent for? Give me examples.” That’s great for inspiration.
However, the great thing about GenAI is that this technology is so accessible. Because you use plain English, German, or French to instruct an agent, everyone can do it. So we don’t require an out-of-the-box solution; we can adapt agents to our needs and solve our unique problems.
However, it takes some experience to solve your problem with an agent, or with multiple agents.
It’s like teaching an intern what to do. If you give them 50 instructions at once, they get confused. The results will differ. One time, the intern does exactly what you expect, and other times, you wonder why they’ve solved the task that way.
Same with agents. If you write 100 lines of instructions, the results may differ, or you may need a lot of time to fine-tune the agent to do what you expect. The fix is pretty obvious: when writing code, you also break the problem down into multiple classes and functions.
So I build an agent for every small task, which lets me test each agent separately. In the end, you might want to combine them all. There are different AI orchestration frameworks, like LangGraph and Rivet, or you can combine Atlassian Rovo agents with automation.
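To make the one-agent-per-task idea concrete, here’s a minimal sketch in plain Python. Everything in it is a hypothetical placeholder: `call_agent` stands in for whatever API your platform offers, and an orchestration framework like LangGraph would replace the hand-written chaining.

```python
# A minimal sketch: one small agent per task, chained by plain Python.
# call_agent() and the instruction strings are hypothetical placeholders;
# an orchestration framework would replace this hand-wiring.

def call_agent(instructions: str, task_input: str) -> str:
    # Replace this stub with a call to your agent platform.
    return f"[agent: {instructions[:25]}...] -> {task_input[:30]}..."

def summarize(report: str) -> str:
    return call_agent("Summarize this bug report in two sentences.", report)

def classify(summary: str) -> str:
    return call_agent("Classify this summary as UI, backend, or infra.", summary)

def draft_reply(category: str) -> str:
    return call_agent("Draft a short reply to the reporter for this category.", category)

# Each step is a small agent you can test on its own;
# only here do we combine them into a pipeline.
report = "App crashes when toggling dark mode on iOS 17 ..."
print(draft_reply(classify(summarize(report))))
```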
Let AI help us with small and large challenges, one agent at a time.
It has happened to me quite a few times when building a new agent: I have an idea in mind, I create the agent, I try it out, I tweak it until I get a good result, and I release it.
However, when I use it daily, I realize I get mixed results with real-world data, and I end up doing manual work on the agent’s results. In other words, I released the agent too quickly and only tested the happy path.
I’m a developer and should know better: Start with the expected outcome and have defined test cases to prove you get what you want.
Starting with the solution is never a good idea; we must start with the problem. So, for every agent I build, I start by thinking about a couple of test cases, creating some test data, and defining the expected result.
However, you can’t test an agent the way you test traditional software, because GenAI is non-deterministic and the result can differ every time you call the agent. I know vendors are building test suites that try to validate these non-deterministic outputs.
For now, I only want to ensure that my agent works in more than just the happy path. Developing a test plan and test data helps me validate the results manually while building the agent.
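Because the output varies between runs, I check properties of the result rather than exact strings. Here’s a minimal sketch of what such a manual test plan could look like; `run_agent` and the test cases are hypothetical placeholders, not a real API.

```python
# A minimal sketch of a test plan for an agent that reviews bug reports.
# run_agent() is a hypothetical placeholder for your agent platform's API.

TEST_CASES = [
    # (name, input bug report, phrases a useful answer should mention)
    ("weak report", "App broken, pls fix", ["steps to reproduce", "expected"]),
    ("good report", "Steps: 1) open settings 2) ... Expected: ... Actual: ...", []),
]

def run_agent(report: str) -> str:
    # Replace this stub with a real call to your agent.
    return "Please add steps to reproduce and the expected behavior."

def missing_phrases(answer: str, required: list[str]) -> list[str]:
    """Return the required phrases that don't appear in the answer."""
    return [p for p in required if p.lower() not in answer.lower()]

for name, report, required in TEST_CASES:
    answer = run_agent(report)
    missing = missing_phrases(answer, required)
    print(f"{name}: {'OK' if not missing else f'missing {missing}'}")
```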
No matter what prompt framework you use, give your agent structure.
Imagine teaching your new intern what to do by starting with detailed instructions, burying their goal in the middle of the conversation, and saving the most important point for the end. Your intern will stare at you, looking confused.
Your agent won’t give you that irritated look, but might give you strange results.
You can use different frameworks, but I use the following for my instructions in this specific order: Role/Purpose, Goal, Character, Tasks, Outputs, Examples.
Don’t write instructions that leave room for interpretation; choose your words wisely so the instructions are clear and concise.
Even though agents often like to write long text, they prefer bullet points in their instructions.
Here’s a pro tip: use upper-case letters when you want to grab the agent’s attention, like “DON’T REPEAT THE QUESTION.”
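Put together, an instruction skeleton in that order might look like the following. I’m expressing it as a Python string for illustration; the content is placeholder text borrowed from the bug-report reviewer I describe later in this post, not a prescribed format.

```python
# A skeleton of the instruction order Role/Purpose, Goal, Character,
# Tasks, Outputs, Examples. The content is illustrative placeholder text.
AGENT_INSTRUCTIONS = """
ROLE/PURPOSE: You review bug reports in Jira.
GOAL: Help authors turn reports into something a developer can act on.
CHARACTER: Friendly, direct, concise.
TASKS:
- Check for steps to reproduce, expected behavior, and actual behavior.
- Suggest concrete improvements as bullet points.
- DON'T REPEAT THE QUESTION.
OUTPUTS: A short bullet list of suggestions, or "No changes needed."
EXAMPLES:
- Weak report -> list the missing sections by name.
- Complete report -> answer only with "No changes needed."
"""
```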
You can also use a prompt generator like the one for Rovo that is built into Rovo Studio. Some start a conversation with you, so you don’t miss an essential detail for your agent prompt.
This is the best tip from a friend: “Ask the agent why.”
I often get frustrated when an agent doesn’t do what I want. I think: I told you NOT to do it this way, but why are you still doing it?
So I change the agent’s instructions and test again. It can take some time until I find the right words, or the right order of instructions, so that the agent does precisely what I planned.
But with a simple prompt that tells the agent what I want and asks how I should instruct it, I can solve the problem.
For example, I built an agent that reviews bug reports in Jira and tells the author how to improve them. It worked great with very weak reports, but I also wanted the agent not to give suggestions on good bug reports, and it didn’t listen to me.
So I told the agent, “I think this is an excellent bug report. Please tell me how I need to change your instructions so you don’t give me any suggestions. Just answer with: No changes needed.”
And voilà: the agent told me the exact phrase, I added it to its instructions, and boom, it worked.
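In code, this “ask the agent why” trick is just one extra call with a meta-prompt. Here’s a minimal sketch, where `call_agent` is a hypothetical placeholder for your platform’s API and the returned phrase is invented for illustration.

```python
# "Ask the agent why": a minimal sketch of the meta-prompting trick.
# call_agent() is a hypothetical placeholder for your agent platform's API.

def call_agent(message: str) -> str:
    # Replace this stub with a real call to your agent.
    return ("Add to my instructions: 'If a report already contains steps to "
            "reproduce, expected and actual behavior, answer: No changes needed.'")

good_report = "Steps: 1) open settings 2) ... Expected: ... Actual: ..."

# Instead of blindly rewording the instructions, ask the agent directly.
meta_prompt = (
    "I think this is an excellent bug report:\n"
    f"{good_report}\n"
    "Please tell me how I need to change your instructions so you don't give "
    "me any suggestions. Just answer with: No changes needed."
)
print(call_agent(meta_prompt))  # paste the suggested phrase into the instructions
```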
AI models might see the world differently from humans. Most of us don’t have time to learn how an AI model ticks, but we can ask the AI for help by explaining what we want to achieve. Over time, we’ll understand the models better, know what context windows are, and write better prompts from the get-go. But when you’re stuck, ask the AI how to get out of it.
It’s like with humans: The more info you have, the better you can solve a problem.
Don’t expect GenAI to know everything. Provide as much guidance as needed: like humans, agents learn from good documentation and examples.
AI models are trained on all kinds of information. The problem for companies is that you don’t know whether that information aligns with the way you deal with things.
That’s why it’s important to document your processes and guidelines so that the AI can learn from them.
When I build an agent, I always write a document that provides good and bad examples. That way, the agent will better understand what I expect.
I also try to find good guidelines that we use in our organization. That way, I have more control over the agent's recommendations and decisions for its users.
With some agent technologies, you can point to specific documents; with others, you’ll add context to the instructions. With Atlassian Rovo, I often create a Confluence page or Google Doc and point the agent directly to those documents.
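Where you can’t point the agent at a document directly, you can inline the guidelines and examples into the context yourself. Here’s a minimal sketch; the guideline text and examples are illustrative placeholders for your organization’s real documents.

```python
# A minimal sketch of grounding an agent with your own guidelines and examples.
# The documents below are illustrative placeholders.

GUIDELINES = """
- Every bug report needs steps to reproduce, expected and actual behavior.
- Suggestions must name the missing section explicitly.
"""

GOOD_EXAMPLE = "Steps: 1) open settings 2) toggle dark mode. Expected: ... Actual: crash."
BAD_EXAMPLE = "App broken, pls fix"

context = (
    "Follow these guidelines when reviewing bug reports:\n"
    f"{GUIDELINES}\n"
    f"A good report looks like this: {GOOD_EXAMPLE}\n"
    f"A bad report looks like this: {BAD_EXAMPLE}\n"
)
print(context)  # add this as context to the agent's instructions
```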
Agents won’t solve everything for you, but with the right approach, they can help you tackle big and small challenges—one agent at a time.
Would love to hear your own tips!
Sven Peters