I am reporting a persistent failure with the Opus 4.6 model in Rovo Dev that began around February 5, 2026. Every attempt to generate a response with this specific model results in a "Failed to generate an LLM response" error. Please help!
Yes, that is correct. All other models, including Opus 4.5, Haiku 4.5, and GPT-5.2, are working perfectly for me. The "Failed to generate an LLM response" error only occurs when Opus 4.6 is selected. I have tested this across multiple tasks on my paid plan, and the failure is consistent regardless of the prompt complexity. Any assistance in restoring 4.6 stability would be greatly appreciated, as its improved agentic capabilities are highly anticipated for our workflows.
We just resolved some issues that might have been causing this for you. When you get a chance, can you try again and let us know whether the problem persists?
Unfortunately, I'm still encountering the same issue.
EDIT: I switched to a new chat session and it worked. I'll keep this forum posted if anything changes.