We’ve recently released Code Reviewer Customisation to help teams enforce custom standards in PRs, and many Atlassian engineering teams and limited beta users are already using it.
However, we know it’s tough to nail the perfect review prompt, and waiting for remote feedback is a drag. That’s why we’re bringing you Rovo Dev CLI + Code Reviewer: your shortcut to creating, testing, and tweaking custom review instructions right from your own machine, so you can instruct Code Reviewer to deliver better results.
Rovo Dev CLI installed and authorized on your device (installation guide)
You understand how Code Reviewer Customisation works (get started)
Create a '.rovodev' folder in your repository root.
Add a '.review-agent.md' file inside '.rovodev'. This file will contain your team’s code review standards and instructions.
Structure your standards for clarity:
Group by major categories (e.g., Code Quality, Architecture, QA).
Use subcategories (e.g., Naming, Error Handling, API Design).
Write clear, actionable instructions.
Optionally, specify topics to exclude (e.g., “Do not comment on formatting issues covered by linters”).
Tips:
Leverage existing standards in your company, and use an LLM to help generate initial standards and instructions for Code Reviewer.
Include an explicit format for both what to review and what to ignore. (The format I used: “When you spot <a pattern that violates a standard>, then you should <suggest alternative pattern/solution>, because <reasons to help the agent understand the intention>.”)
Start small if you can. Once you’ve found a successful pattern, you can then apply it to more standards you want to add.
# User role
I am an engineering director in a software company, and I am about to build a new group of engineering teams. My goal is to make sure my team writes good-quality code; therefore, I want to create a set of standards and make sure my team adheres to them in their day-to-day development.
# Your role
I want you to act as a senior technical architect and help me ideate a comprehensive list of standards that cover all the major aspects of developing good software. Go broad here before we narrow down to the ones we want to keep.
Once you have identified a list of categories and standards, translate them into a more detailed playbook for my junior developers to follow in code reviews.
# Requirements
It should follow a structure of: When <problematic patterns are identified>, you should <use a better pattern or solution> instead of the current approach, because <why it’s a better pattern or solution>. Here's an <example of how to better write this code>.
# Code Review Guidelines: Anti-Patterns and Better Solutions
## Code Quality & Structure
### Naming Conventions
**When you spot unclear or abbreviated variable names**, you should **suggest the author use descriptive, meaningful names that follow camelCase convention** instead of abbreviations or generic names.
### Code Organization
**When you spot functions exceeding 50 lines**, you should **suggest the author break them into smaller, single-purpose functions** instead of having one large function.
**When you spot nested conditional statements exceeding 3 levels**, you should **suggest the author use early returns and guard clauses** instead of deep nesting.
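Illustrative example (hypothetical code, shown only to make the guideline concrete):
```typescript
// Deeply nested conditionals (avoid) – names are illustrative
function getDiscount(user?: { isActive: boolean; orders: number }): number {
  if (user) {
    if (user.isActive) {
      if (user.orders > 10) {
        return 0.2;
      }
    }
  }
  return 0;
}

// Early returns / guard clauses (prefer)
function getDiscountWithGuards(user?: { isActive: boolean; orders: number }): number {
  if (!user || !user.isActive) return 0;
  if (user.orders <= 10) return 0;
  return 0.2;
}
```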
### Error Handling
**When you spot empty catch blocks or silent error suppression**, you should **suggest the author implement proper error logging with context** instead of ignoring errors.
**When you spot generic Error objects being thrown**, you should **suggest the author create specific error classes with meaningful messages** instead of using generic errors.
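Illustrative example (hypothetical error class, for reference only):
```typescript
// Specific error class with a meaningful message (prefer over `throw new Error("failed")`)
class PaymentDeclinedError extends Error {
  constructor(public readonly orderId: string, reason: string) {
    super(`Payment for order ${orderId} was declined: ${reason}`);
    this.name = "PaymentDeclinedError";
  }
}
```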
### Performance Standards
**When you spot expensive operations inside loops**, you should **suggest the author move calculations outside loops or use array methods** instead of recalculating values repeatedly.
**When you spot missing cleanup in useEffect hooks**, you should **suggest the author add cleanup functions for event listeners and subscriptions** instead of leaving resources attached.
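Illustrative example (hypothetical React hook, for reference only):
```typescript
import { useEffect } from "react";

// The returned cleanup function removes the listener on unmount (prefer);
// omitting it would leave the listener attached (avoid).
function useResizeListener(onResize: () => void) {
  useEffect(() => {
    window.addEventListener("resize", onResize);
    return () => window.removeEventListener("resize", onResize);
  }, [onResize]);
}
```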
## Architecture & Design
### Design Patterns
**When you spot classes or functions with multiple responsibilities (God Object anti-pattern)**, you should **suggest the author separate concerns into focused, single-purpose modules** instead of having one module handle everything.
### API Design
**When you spot inconsistent API response structures**, you should **suggest the author use a standardized response format** instead of different structures for different endpoints.
**When you spot missing input validation**, you should **suggest the author implement proper validation schema** instead of trusting user input.
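Illustrative example (hypothetical request handler, for reference only; a schema library could be used instead of the hand-rolled checks):
```typescript
// Validate untrusted input before using it
class ValidationError extends Error {}

interface CreateUserRequest {
  email: string;
  age: number;
}

function parseCreateUserRequest(body: unknown): CreateUserRequest {
  const input = (body ?? {}) as Partial<CreateUserRequest>;
  if (typeof input.email !== "string" || !input.email.includes("@")) {
    throw new ValidationError("email must be a valid email address");
  }
  if (typeof input.age !== "number" || input.age < 0) {
    throw new ValidationError("age must be a non-negative number");
  }
  return { email: input.email, age: input.age };
}
```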
### Security Standards
**When you spot unsanitized user input**, you should **suggest the author validate and sanitize all inputs** instead of trusting user data.
## Quality Assurance
### Monitoring & Observability
**When you spot console.log statements used for debugging**, you should **suggest the author implement proper logging with levels and context** instead of cluttering code with console statements.
---
# Review topics to exclude
Below are the topics you should avoid commenting on in your code reviews.
## Avoid providing feedback on formatting
**When you find non-critical code formatting issues**, you should **avoid leaving any suggestions** because we have linter rules in place to cover these issues.
## Avoid providing feedback on inline comments
**When you find inline code comments**, you should **avoid leaving any suggestions regarding their validity, content or formatting** because inline comments are NOT in scope for code reviews.
To test your standards, you need code that intentionally violates them. You can leverage Rovo Dev CLI to generate it for you, or bring your own examples if you want to be more thorough.
Check out a new branch for prompt testing.
Use Rovo Dev CLI to generate violations:
In your terminal, run Rovo Dev CLI in interactive mode:
'acli rovodev run'
Prompt the agent to read your .review-agent.md and generate code examples in your codebase that deliberately violate your standards.
Review the generated report and code changes. Optionally, manually tweak the code to better mimic real-world violations.
I am using an AI code reviewer agent to help my team with PR reviews and enforce coding standards. The Reviewer agent will utilise the instructions stored in .rovodev/.review-agent.md to guide its PR reviews, and I have tried codifying my coding standards in this file.
Now I need to test if these prompt instructions are working well, so I want you to look into these standards and instructions, and generate real-world code changes that violate these given standards, so that I can use these code changes to test the performance of the Code Reviewer agent.
Requirements:
The code changes need to cover all the standards listed in this file: .rovodev/.review-agent.md. There should be one set of code changes for each ruleset.
You should try to create the code changes in the existing files in the codebase, and mimic real-world examples as if they were introduced when a human developer is introducing these changes.
For each set of code changes, make sure you include a // comment explaining which standard the change directly violates and why.
At the end of your work, save the result summary in the .rovodev/ directory, with the file name of CustomReviewViolations-{timestamp}.md, and include:
A total count of all the code hunks you have changed to represent the violations. It's important to make sure the count is accurate so that I can use it to evaluate how many violations the Reviewer agent is capable of capturing in code reviews.
The name and a summary of each standard and violation you've introduced, including the file name and line numbers of the changes.
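For reference, one generated violation might look something like the hypothetical hunk below (function names are made up; your agent’s output will differ):
```typescript
// Hypothetical existing helper the change pretends to call
async function syncUserProfile(userId: string): Promise<void> { /* ... */ }

// VIOLATION: Error Handling – empty catch block silently suppresses the error
async function refreshProfile(userId: string): Promise<void> {
  try {
    await syncUserProfile(userId);
  } catch (e) {
    // intentionally left empty to test the Reviewer agent
  }
}
```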
While the full CodeReviewer runs in a remote workflow, you can mimic its behavior locally:
Prompt Rovo Dev CLI to review the uncommitted changes on your local machine using your standards file.
Review the output:
The CLI will list violations it found and annotate the code to mimic the Reviewer’s comments in real PRs.
Compare the number of violations found to those you generated, so you can easily see which standards and violations were not captured.
Review the agent’s reasoning and note which standards it missed.
Your task is to conduct a review of the local changes provided below by exploring the repo and making comments and code suggestions.
- Run git diff commands to see if there are any uncommitted local changes to review
- Only provide comments and code suggestions on code that has been modified.
- Only provide comments and code suggestions that are in-scope for the changes, don't comment on unrelated issues in the code base.
- You should provide comments based on the modified code itself and also any impacts from running that code that may not be desirable.
Follow this process in conducting your review:
1. Examine the provided changes and call expand_code_chunks on each file that is modified, specifying the range in each hunk.
2. Analyze the expanded code and make additional calls to open_files or expand_code_chunks to expand other relevant code.
3. When you identify a standard violation, add an inline comment above the violating code change with the following details, and try to be succinct and keep the comment short but clear:
- Which standards it violated
- Why you are certain this is a violation
- What could the engineer do to fix the violation
4. If no changes are required to the PR, provide a top-level comment by calling find_and_comment without a find argument, with the comment "LGTM 🚀" (do not add anything else).
Things to look out for:
- Examine the coding standards and instructions provided in `.rovodev/.review-agent.md`, use the given standards as the guide to identify violations in code changes.
- Only comment on code changes that violate these standards, and follow the given instructions to provide suggestions.
Generate the review summary after your code review:
- The review summary should follow this structure:
```
{Count of all the violations you identified}
{Location and Name of the file}
- {code snippet of the violation}, {range of lines in the file}
- {title of the standard}: {summary of the standard}
- Reasoning: {why you think the code change violates the given standard, give detailed reasoning so humans understand}
// Repeat this for all the violations you can identify
```
- Save this summary in the `.rovodev/` directory, with the file name of `CustomReviewTest-{timestamp}.md`
Things you should avoid doing:
- Do NOT try to fix the violations, your job is to identify them and inform engineers.
To open files, use the open_files function. Large files may be opened in a collapsed view, which can be selectively expanded using the expand_code_chunks function.
To add comments and suggest code changes, use the find_and_comment function.
IMPORTANT: Continue calling functions until you have fully completed your review. NEVER stop to ask the user questions.
If you encounter errors, attempt to resolve them. When providing code suggestions, they must be suitable as input to a find-and-replace operation, so be careful not to unintentionally remove code.
If the agent misses violations or gives irrelevant feedback, refine your standards in .review-agent.md.
Repeat Step 3 until the agent reliably detects most or all intended violations.
Repeat Step 2 if new standards are added, or if the standards are fundamentally changed.
Aim for high coverage, but not for perfection (e.g., if you generated 12 violations, the agent should catch ~10).
Commit and push your changes to a remote branch.
Open a PR and let the full AutoReview agent run in Bitbucket Pipelines.
Compare results: Check if the production agent finds additional issues or misses any compared to your local run.
Refine further if needed.
(When you create a PR off a branch, CodeReviewer will leverage the '.review-agent.md' file in that branch to conduct reviews.)
Once you are reasonably happy with the result, you can use Rovo Dev CLI to add good and bad code examples to your standards file, to help the agent use few-shot learning techniques for better accuracy.
You don’t have to add code samples for all the standards, just the complex ones that are difficult to define in words, because overloading the file could distract the agent.
Now in `.rovodev/.review-agent.md`, there are many standards and instructions for the AI code review agent to use as guidelines for its code reviews. I want you to update this document to include good and bad code examples for each standard, so that the AI agent can better identify standard violations. Keep your code examples short but clear to illustrate the point.
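After running this prompt, an annotated standard might end up looking something like the sketch below (hypothetical output under the Naming Conventions standard; yours will vary):
```typescript
const user = { createdAt: new Date("2024-01-01") }; // hypothetical record

// Bad: abbreviated, generic names
const d = new Date();
const tmp = d.getTime() - user.createdAt.getTime();

// Good: descriptive camelCase names
const now = new Date();
const accountAgeMs = now.getTime() - user.createdAt.getTime();
```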
This is only the first step in making it easier to add custom standards to Code Reviewer.
Next up: we will invest in experiments that use Rovo Dev Agent to translate your existing coding standards on Confluence so you can get started faster, and we’ll provide a list of out-of-the-box standards.
We need your help to discover better ways to instruct the agent to deliver better reviews! We’d love for you to share your setbacks and successes in our community posts, or just leave a comment on this post to help others learn and improve.
Ryan Jiang