When a Forge custom Rovo Agent is asked to generate a large volume of structured content (e.g. a full course with multiple sections and components) in a single conversation turn, the LLM that powers the Rovo Agent silently truncates its output mid-generation. Because action inputs are constructed from the LLM's raw output, the truncated content results in malformed (non-parseable) JSON being passed to the action handler, causing the action to fail or receive incomplete data. So I need to know if:
@Habib ZOUARI this isn't just a "token limit" issue. What you're hitting is a mix of LLM output limits and Rovo action payload constraints, and there's no documented hard limit or truncation signal today. Large single-turn JSON outputs aren't reliable. The current pattern is to generate in chunks or stages, then pass smaller, validated payloads to actions.
Unfortunately, there isn’t a way yet to preserve full volume + quality in one pass without tradeoffs.
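For reference, the chunk-and-validate pattern can be sketched roughly as below. This is a minimal illustration, not a real Forge or Rovo API: `Section`, `parseSectionChunk`, and `assembleCourse` are hypothetical names, and the shape of each chunk is an assumption.

```typescript
// Illustrative sketch only. Section and the functions below are hypothetical,
// not part of the Forge or Rovo SDKs.
interface Section {
  title: string;
  body: string;
}

// Validate one chunk of LLM output before accepting it. A truncated chunk
// fails JSON.parse, so the caller can re-request that chunk alone instead
// of discarding the whole course.
function parseSectionChunk(raw: string): Section | null {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed.title === "string" && typeof parsed.body === "string") {
      return parsed as Section;
    }
    return null; // parsed, but not the expected shape
  } catch {
    return null; // malformed (likely truncated) output
  }
}

// Assemble only the validated chunks into the payload handed to the action.
function assembleCourse(chunks: string[]): Section[] {
  const sections: Section[] = [];
  for (const raw of chunks) {
    const section = parseSectionChunk(raw);
    if (section) sections.push(section);
    // a null here signals "re-request this chunk", not a fatal error
  }
  return sections;
}
```

The point of the split is that a truncation only invalidates one small chunk, which is cheap to regenerate.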
@Dr Valeri Colon _Connect Centric_
Thank you for the detailed response — really appreciate you taking the time to clarify this.
We're already working with a chunked approach, but it still comes with its own set of limitations and edge cases that are tricky to handle reliably on the client side. Hopefully the platform evolves to offer better guarantees around output integrity and structured truncation signals in the future.
Thanks again!
@Habib ZOUARI appreciate you sharing that; what you're seeing is pretty consistent with how others are hitting the edges right now. You're right, chunking solves the truncation problem but shifts complexity to orchestration and consistency. Today, the most reliable pattern I've seen is combining chunked, staged generation with smaller, validated payloads.
It’s not elegant, but it’s stable.
Agree with you on the need for structured truncation signals or streaming-safe outputs; that would remove a lot of this overhead. Definitely worth sharing your feedback with Atlassian, as this is a real builder pain point.
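To keep that orchestration overhead contained, one option is a small retry wrapper around each chunk. This is only a sketch: `generateChunk` stands in for whatever invokes the agent for one chunk and is not a real Forge API.

```typescript
// Sketch of a per-chunk retry layer. generateChunk is a hypothetical
// stand-in for the call that asks the agent for one chunk of output.
async function generateWithRetry<T>(
  generateChunk: () => Promise<string>,
  validate: (raw: string) => T | null,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await generateChunk();
    const parsed = validate(raw);
    if (parsed !== null) return parsed;
    // truncated or malformed output: re-request just this chunk
  }
  throw new Error(`chunk still malformed after ${maxAttempts} attempts`);
}
```

Because each retry only regenerates one chunk, a truncation costs one extra call rather than a full regeneration.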
This is a classic 'Token Limit' issue. When Rovo generates a large JSON object, it hits the maximum output token threshold and simply stops mid-sentence, leaving the JSON string unclosed and 'corrupted.'
To fix this, try these three strategies in your Agent's prompt or configuration:
1. Explicitly Limit the Array Size
LLMs struggle to estimate how much space a JSON list will take. Add this to your System Prompt:
"Limit your JSON output to a maximum of 10 items. If there are more, stop at 10 and do not attempt to summarize the rest within the JSON block."
2. Use a 'Compacted' JSON Format
Ask the agent to strip whitespace to save tokens:
"Output the JSON in a minified/compact format without extra spaces or newlines to maximize data density."
3. The 'Validation' Prompt (The most effective fix)
Add this specific instruction to the end of your Agent's prompt:
"Crucial: Ensure the JSON object is always syntactically correct and properly closed with } or ]}. If you run out of space, truncate the data items, but never truncate the closing syntax."
4. Check for 'Nested' Blobs
If you are pulling in issue.description or comment.body, these often contain hidden characters or are too long. Use a smart value to limit the input text: {{issue.description.left(500)}} so the agent has more 'room' to write the JSON response.
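On the handler side, strategies 1–3 can be backed up by a cheap check that flags likely-truncated output before it is passed on. The sketch below assumes plain JSON with double-quoted strings; it is a heuristic, not a full parser.

```typescript
// Heuristic sketch: flag a probably-truncated JSON string by checking for
// an unclosed string literal or unbalanced braces/brackets. Assumes the
// model emits standard JSON with double-quoted strings.
function looksTruncated(raw: string): boolean {
  let depth = 0;
  let inString = false;
  let escaped = false;
  for (const ch of raw) {
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') inString = true;
    else if (ch === "{" || ch === "[") depth++;
    else if (ch === "}" || ch === "]") depth--;
  }
  // an unclosed string or container means the output was almost
  // certainly cut off mid-generation
  return inString || depth !== 0;
}
```

When this returns true, re-prompting for the affected chunk is usually cheaper than trying to repair the string.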
Thanks for your response @Founder - NewSysRS
The main challenge I’m facing is balancing content quality and volume. I experimented with batching as a workaround, and while it successfully resolved the JSON truncation issue, it introduced a new problem: it negatively impacts the richness of the generated content.
Specifically, when I increase the number of items, the token budget allocated to each item becomes smaller, which directly reduces the depth and quality of the output.
That’s why I’m currently exploring alternative approaches that can eliminate truncation issues while preserving both the quality and completeness of the generated content, rather than trading one for the other.
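The tradeoff described above can be made concrete with rough arithmetic. Both constants below are assumptions for illustration; the actual Rovo output limit is undocumented.

```typescript
// All numbers here are illustrative assumptions, not documented Rovo limits.
const MAX_OUTPUT_TOKENS = 4000; // assumed single-turn output budget
const OVERHEAD_PER_ITEM = 30;   // rough guess for JSON keys and punctuation

// Tokens left for actual content in each item of an N-item batch.
function tokensPerItem(itemCount: number): number {
  return Math.floor(MAX_OUTPUT_TOKENS / itemCount) - OVERHEAD_PER_ITEM;
}
```

Because the fixed per-item overhead does not shrink, doubling the item count more than halves each item's content budget, which matches the drop in depth described above.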