We get asked this question quite a bit at Atlassian. One of our customers inspired me to write this question (and the answer) because he helped me see that we hadn't been clear about how we use the JIRA ("JRA") project on jira.atlassian.com (a.k.a. "JAC" in internal Atlassian-speak). Here, I'm going to focus on the JIRA team, though a lot of the notions below apply to all of Atlassian's products.
Q. How does the JIRA team use jira.atlassian.com?
Ever since "JRA-1", jira.atlassian.com (JAC) has been Atlassian's (and JIRA's) public issue tracker, with thousands of bugs, feature requests, and improvements both raised and fixed. And up until 2009, JAC was actually where the JIRA team tracked all their work. Today, that's no longer the case, and the JIRA project on JAC is dedicated to only three things:
We don't use JAC to track our day-to-day feature development work, our sprints, our internal bugs (those discovered before they make it into a release), our tasks, etc. Since we've never said "we don't use JAC for our day-to-day development work," it's no surprise that you might make that assumption. And, if you make that assumption, you'd naturally look at some things in JAC and wonder if the JIRA team really uses their own product!
Q. So, why doesn't JIRA use JAC for everything?
Primarily, we believe in "dogfooding" at Atlassian, so we're always using the latest code, hot off the last build, and our team lives on that on a daily basis. This makes all of us painfully aware of lapses in quality because they bring our work to a halt. So it forces us to be vigilant and efficient about automated testing and code review, and to think about the impact of every commit. Even with this approach, our internal dogfood systems are less stable than our published releases, and we want a stable environment for our public issue tracker.
In addition, since most of our teams are practicing variants of agile with fine-grained user stories, the volume of user stories would swamp the bugs, improvements and feature requests that customers want to find and track. This was actually starting to happen when we used JAC as both our public issue tracker and internal issue tracker. By using a separate, internal issue tracker we make sure the important public facing feature requests and bugs are the focus of JAC.
Similarly, there is an internal confluence instance that Atlassian uses, where we live off the latest and greatest and the bleeding edge. Same for all the other Atlassian products.
Q. Why don't all Atlassian teams follow the same approach and process?
Just like each team dogfoods the latest code, our teams experiment with new approaches and processes because that's the business we're in, and because we believe that's where innovation can come from.
Our build engineering team was one of the first to try kanban and reported amazing success. Confluence was the first to introduce continuous deployment to Atlassian. The JIRA team was the first to release to both our Cloud and Server offerings simultaneously.
Our development approach is designed to encourage teams to innovate, specifically with the aim to achieve results, rather than simply having a consistent and unified process.
Q. Are votes what determine the priority of a feature?
Feedback from JIRA customers is an incredibly important part of our prioritization. Feedback comes from a variety of sources:
The amount of feedback is massive, especially when you have tens of thousands of active customers for JIRA alone, and hundreds of people every day trying your product.
So votes are not the only data we consider when prioritizing a feature.
Q. Do votes matter then?
Absolutely. Votes do matter, and the number of votes matters. The product management team reviews all issues with a significant number of votes on a quarterly basis, and reviews and triages every single feature request that gets created. Since JIRA 4.3, we've satisfied over 6,000 votes.
However, votes are not a trump card, and we don't translate votes directly into priority. This would ignore all the other sources of feedback we have on JIRA that I mentioned above.
Even when an issue has been around for a long time, if it has a large number of votes, we still plan on resolving it. For example: JRA-9, a request for user time zones, was resolved after nine years. So we are always reviewing the list and our roadmap.
We look at feature requests and how to solve our customers' overall goals rather than just implementing what is described in the feature request. In many cases, we've created solutions (like being able to use JIRA as a directory server for other applications) for something that wasn't a specific feature request but that built the foundation to address a number of issues (LDAP support, multiple directories, etc). And we aim to be better about providing more insight into our direction around highly voted feature requests.
Sometimes, no matter how many votes a request has or how much overall feedback we have, we will "Close - Won't Fix" a feature request. It is always hard to say no, but we'd rather be direct if we have no intent to fix something. We want to be clear when we don't have any plans to satisfy a feature request.
Sometimes the cost of a new feature in terms of development effort is larger than the benefit of the feature. In many cases, we decide that adding a feature may help one set of customers but hurt a much larger set of customers by making the product more complex. One of our main goals for JIRA is to make sure we don't carelessly increase the complexity of the product, but that we continue making JIRA the most powerful issue tracker possible while making it easier and easier for new users and administrators to adopt.
I hope the message gets across that we do care about votes and comments on JAC. But they aren't the only things that factor into our decisions.
Q. So how do you decide how to prioritize features?
In addition to all of the customer feedback, we have strategies, goals, and a direction for our product. But we also take the overall health of the product into consideration, and we always have a budget. For those top voted issues, we love to find ways to fit them into a broader strategy: what is the real customer goal that drives this set of related feature requests? We want to solve it holistically rather than on a one-off basis. A feature might be important to a small set of customers, while other features might have a broader impact across all of our different customer groups and segments.
In grouping similar features together, we get higher velocity. We've seen this directly: when a team is motivated to deliver a broad improvement in JIRA with a big mission, it can deliver something incredibly exciting. One example is our revamp of search in JIRA 5.2 (an area where we've received a lot of feedback over the years from a broad set of customers). That's a specific example of why you see a feature implemented in JIRA before another feature with a higher vote count.
We have a published approach for feature prioritization on our Implementation of New Features page.
Q. Why don't you publish a feature roadmap on JAC for what you plan to implement?
Atlassian has been agile for quite a while, and as a result, our approach is to combine long term strategy with a continuous feedback cycle. While we do set out a strategy and long term plan, we don't build a backlog of the 1,000 stories to accomplish that mission. Before we finished that backlog it would have changed!
We also care a lot about setting and meeting any expectations and commitments we make - and when we make statements in public forums, we know our customers, partners, and ecosystem begin to plan based on that data. So committing publicly to a fix version or general time frame is something we don't take lightly.
There are also some painful issues around roadmaps and revenue recognition. Having a public facing roadmap actually impacts whether you can recognize revenue, and if you decide to change your roadmap, you can end up screwing up your company's financials. In general, it's also not a great idea to tell all of your competitors where you are heading.
Q. So how do you decide which bugs to fix?
We review and triage bugs on a daily basis. As a general rule, we look at the overall impact of a bug on our customer base. How big is the impact? How many customers does it affect? Issues that are blockers for customers get the highest priority, and customer support helps us understand the number of customers who are being affected, as do customer votes on a bug. So the majority of our bug fix backlog is prioritized by customer impact.
But we still fix a number of smaller issues that may not prevent use of JIRA but that hamper the experience, because we want to prevent the "death by 1,000 paper cuts" that can come if we only budget for the criticals, blockers, and majors.
Q. Is there anything you want to change or improve?
First, hopefully this answer helps. My goal was to make it easier to understand, and much clearer, how we use JAC.
Specifically in product management, we want to do a better job of communicating our intent for the top voted JAC issues. Since 2011, we have been providing regular updates on the top voted issues in JAC in order to let customers know our stance on a particular issue. Since we are implementing a number of those top issues with each release, the "top" requests are always changing, and we want to make sure we give you a clear picture of our intentions for those top requests.
Thanks to those customers who have asked (rather passionately in many cases) about how we prioritize features in the JIRA team.
Are votes what determine the priority of a feature?
If you're getting all this other feedback, why isn't it being tracked on the JIRA board? You're meant to be a big promoter of agile; the whole point of it is to promote greater visibility and the ability to adapt to customer needs.
I have no idea how you're prioritising your tickets; it's black magic right now - this is not agile.
We have no say in the prioritising - not agile again. Where is the customer representative input?
You're saying you're getting feature requests via other mediums; why aren't they tracked? Again, not agile.
You're not agile at all; it's very reminiscent of the waterfall days, where IT did what they thought best regardless of what the business was telling them, or a new feature request took years to develop because IT wanted to roll out some big new feature no one asked for first.
I'm extremely dissatisfied by the lack of action on our requests. Of all the tickets that I care about, which others have already raised and which have hundreds to thousands of votes, not one has been done. In fact, you've got a feature request from 2002 that's still not done.
The current way that Atlassian is handling feature requests is abysmal. I started out a fan of the product, which we only switched to because it was recommended by a colleague. At this point, I would not recommend the product anymore because of the lack of visible improvements. An item like drag and drop for a part of the UI - which someone actually wrote a browser plugin to fix, and which is the 5th most-voted feature request - has been sitting for years with a note that it won't be changed. What is going on at Atlassian?
For https://jira.atlassian.com/browse/JRA-28730 case, I agree with Atlassian, though I don't quite like JIRA (I prefer Trello much more, but now it is acquired...lol).
Honestly, I believe what a true product team does is carefully stick to the core of the product, instead of just making their customers happy. That's funny, but it takes courage to do so.
I must agree with the full-name approach, because I always found myself searching among dozens of people with the same first name and having to type the rest of the name. I believe the consideration is that a small startup team will adopt Trello or a whiteboard, but the kind of enterprise JIRA is aiming at is meant to have a huge number of employees, making recognizing people by first name alone amazingly difficult - so difficult that even forcing everyone to upload a profile picture with their real face doesn't help. And maybe you never get the chance to learn what they look like.
Anyway, nothing is perfect, and I'll keep suggesting that people adopt a whiteboard instead of JIRA unless they really must use it.
I can't agree more with Benjamin's comment above, and I'd like to add one issue to the list:
It's disappointing to see that such a simple feature hasn't been addressed for more than 11 years. If it takes more than a few hours to solve, something seems wrong in the development process: simply expose the existing field type to the custom fields, implementing it as a plugin if necessary (JIRA ships custom field types already, so it should be straightforward for Atlassian to do).
But it's a pattern: Even the simplest suggestions reported in the public tracker are ignored despite heavy user interest.
I really don't get the idea of a public issue tracker if you don't pay attention to any suggestions posted on it. Restrict the tracker to bug reports as the sole issue type and close all issues not reporting a bug as invalid. This would at least be more transparent than the current situation!
Thanks for considering
Followup question: for issues which are closed as "Won't Fix", how do we add our vote? There are things that I'd really like to see done in Jira, and others have already asked for the same, but the request was closed - sometimes years ago. Shouldn't these "closed" issues be able to live on and collect votes until there's some sort of actual action? The need doesn't magically go away.
How does Jira Servicedesk (aka support.atlassian.com) play into all of this? I'd like to know how an issue that comes in on Support finds its way to the developer and back to the customer. The same goes for issues coming in on JAC. Do you duplicate the issue in an internal JIRA instance? Do developers have issues assigned to them from JAC, or, as asked previously, do you have a duplicate - and in that case, who updates the JAC issue? How do you remember to do that? So I am interested in the practical process you are using.