I've been experimenting with the Rovo OKR Generator Agent, and while it was helpful, it hallucinated a couple of the key results. I was working on a single Objective, so it wasn't pulling KRs in from somewhere else. This happened twice within the first 45 minutes. Is this common? Also, does anyone know if there are plans to automatically update Atlas goals? That isn't supported today.
Hi @Ted Henry
Rovo uses commercially available models from several providers, including OpenAI's GPT-4, and routes each request to whichever model suits the task. Its tendency to hallucinate follows whichever underlying model it selects.
I've found that current gen-AI is best at operations that summarize and categorize, and less good at generating new content. Rovo is similar.
Hi @Ted Henry
I occasionally do free OKR reviews to support the community, so if you still need a second pair of eyes on your OKRs, feel free to reach out: https://www.linkedin.com/in/margosakova/
Also, here's a brief intro to how one of our customers leveraged AI to support their internal team with OKRs:
I can also ask the customer for an introduction to share some advice with you directly.
Good luck with your journey!
Yes, AI tools like Rovo can hallucinate when prompts are ambiguous or lack context; this is a known limitation. Atlassian recently added a Deep Research feature to improve accuracy by pulling structured insights from the web, Atlassian apps, and connected tools.
You're also right that automatic updates to Atlas goals aren't supported yet (feel free to submit a feature request).