When a Forge custom Rovo Agent is asked to generate a large volume of structured content (e.g. a full course with multiple sections and components) in a single conversation turn, the LLM that powers the Rovo Agent silently truncates its output mid-generation. Because action inputs are constructed from the LLM's raw output, the truncated content results in malformed (non-parseable) JSON being passed to the action handler, causing the action to fail or receive incomplete data. So I need to know whether this can be prevented or worked around.
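To make the failure mode concrete, here is a minimal sketch (with a hypothetical payload, not the actual agent output) of what the action handler sees: a JSON string cut off mid-generation simply fails `JSON.parse`.

```javascript
// Simulate a payload cut off mid-generation and show it fails to parse.
const complete = '{"course":{"title":"Intro","sections":[{"name":"Basics"}]}}';
const truncated = complete.slice(0, 40); // e.g. '{"course":{"title":"Intro","sections":[{'

function tryParse(text) {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}

console.log(tryParse(complete).ok);  // true
console.log(tryParse(truncated).ok); // false — "Unexpected end of JSON input"
```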
This is a classic 'Token Limit' issue. When Rovo generates a large JSON object, it hits the maximum output token threshold and simply stops mid-sentence, leaving the JSON string unclosed and 'corrupted.'
To fix this, try these three strategies in your Agent's prompt or configuration:
1. Explicitly Limit the Array Size
LLMs struggle to estimate how much space a JSON list will take. Add this to your System Prompt:
"Limit your JSON output to a maximum of 10 items. If there are more, stop at 10 and do not attempt to summarize the rest within the JSON block."
2. Use a 'Compacted' JSON Format
Ask the agent to strip whitespace to save tokens:
"Output the JSON in a minified/compact format without extra spaces or newlines to maximize data density."
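The savings are easy to demonstrate: the same object serialized with and without pretty-printing differs substantially in character (and therefore token) count.

```javascript
const data = { sections: [{ name: "Basics", components: ["video", "quiz"] }] };

const pretty = JSON.stringify(data, null, 2); // indented, with newlines
const compact = JSON.stringify(data);         // minified

// Compact form carries identical data in fewer characters.
console.log(pretty.length > compact.length); // true
```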
3. The 'Validation' Prompt (The most effective fix)
Add this specific instruction to the end of your Agent's prompt:
"Crucial: Ensure the JSON object is always syntactically correct and properly closed with } or ]}. If you run out of space, truncate the data items, but never truncate the closing syntax."
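Even with that instruction, the model can still be cut off, so a defensive repair step in the action handler is worth having. The sketch below is a best-effort heuristic, not a full JSON parser: it trims trailing partial tokens and appends whatever closing brackets are still open, backing off character by character until a prefix parses.

```javascript
// Compute the closing characters needed to balance an (assumed valid) JSON prefix.
function closersFor(text) {
  const stack = [];
  let inString = false;
  let escaped = false;
  for (const ch of text) {
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === "{") stack.push("}");
    else if (ch === "[") stack.push("]");
    else if (ch === "}" || ch === "]") stack.pop();
  }
  return (inString ? '"' : "") + stack.reverse().join("");
}

// Try progressively shorter prefixes until one parses after balancing.
// Returns the parsed object, or null if nothing recoverable remains.
function repairTruncatedJson(text) {
  for (let end = text.length; end > 0; end--) {
    let candidate = text.slice(0, end).replace(/,\s*$/, "");
    candidate += closersFor(candidate);
    try {
      return JSON.parse(candidate);
    } catch {
      // keep trimming
    }
  }
  return null;
}

const complete = '{"course":{"title":"Intro","sections":[{"name":"Basics"}]}}';
const repaired = repairTruncatedJson(complete.slice(0, 40));
console.log(repaired); // { course: { title: 'Intro', sections: [ {} ] } }
```

Note the trade-off: repaired output is syntactically valid but still missing data, so the handler should treat it as a partial result rather than a complete one.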
4. Check for 'Nested' Blobs
If you are pulling in issue.description or comment.body, these often contain hidden characters or are too long. Use a smart value to limit the input text: {{issue.description.left(500)}} so the agent has more 'room' to write the JSON response.
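If you assemble the prompt in code rather than via smart values, the same trick can be applied there. A minimal sketch (the 500-character budget mirrors the smart-value example; the field name is hypothetical):

```javascript
// Trim long free-text inputs before they reach the agent, leaving more of
// the output token budget for the JSON response.
const MAX_INPUT_CHARS = 500;

function trimForPrompt(text) {
  if (typeof text !== "string") return "";
  return text.length > MAX_INPUT_CHARS
    ? text.slice(0, MAX_INPUT_CHARS) + "…"
    : text;
}

const longDescription = "x".repeat(2000);
console.log(trimForPrompt(longDescription).length); // 501 (500 chars + ellipsis)
```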