Performance testing of Jira, Confluence and Bitbucket with dc_app_performance_toolkit, Part 3

You can read Part 2 here.

In Part 3 we will talk about preparing test data for dc-app-performance-toolkit.

Prepare test data

There are two options to prepare data:

  • use backups prepared by Atlassian. This option suits you if you develop your own app and want to test its performance.
  • use your own data. This option suits you if you have your own Jira, Confluence or Bitbucket instance which you have modified with out-of-the-box features, scripts, apps or any other means.

Atlassian backups

Atlassian provides backups for Jira, Confluence and Bitbucket which can be used for testing your instance.

Go to dc-app-performance-toolkit/app/util/jira if you test Atlassian Jira, dc-app-performance-toolkit/app/util/confluence if you test Atlassian Confluence, or dc-app-performance-toolkit/app/util/bitbucket if you test Atlassian Bitbucket. These folders contain the files needed to restore Atlassian backups.

Here are the files:

  • index-sync.sh (Jira and Confluence only) - this script searches the atlassian-jira.log (atlassian-confluence.log) file for phrases like "indexes - 100%" to make sure that indexing finished on Jira/Confluence. Run it after you have restored the Atlassian backup and triggered a reindex.
  • populate_db.sh - this script downloads a Postgres dump and recreates the Jira/Confluence/Bitbucket database from that backup.
  • upload_attachments.sh - this script downloads attachments prepared by Atlassian and moves them to the data folder.
  • index-snapshot.sh (Confluence only) - this script checks index snapshot generation.

All these scripts must be run on the virtual machine where your Jira, Confluence or Bitbucket runs. They detect the version of your Jira/Confluence/Bitbucket and download the backup data and attachments for that version.
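The indexing check performed by index-sync.sh can be sketched in Python like this. This is a hypothetical re-implementation for illustration only; the real script greps the application log on the server, and the file name used here is a stand-in.

```python
from pathlib import Path

def indexing_finished(log_path, marker="indexes - 100%"):
    """Return True if the application log reports completed indexing."""
    text = Path(log_path).read_text(encoding="utf-8", errors="ignore")
    return marker in text

# Illustration with a tiny fake log file (the real script reads atlassian-jira.log
# or atlassian-confluence.log on the instance itself).
Path("fake-atlassian-jira.log").write_text("JIRA index check: indexes - 100%\n")
print(indexing_finished("fake-atlassian-jira.log"))  # True
```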

Well, what should I do if I do not use Postgres, given that populate_db.sh works only with Postgres?

In this case, for Jira you can download the XML backup and restore it with the Jira restore backup feature. Here is the URL for downloading:

https://centaurus-datasets.s3.amazonaws.com/jira/${Jira_Version}/large/xml_backup.zip - supported versions are 8.0.3, 7.13.6 and 8.5.0.

There are no XML backups for Confluence and Bitbucket, so you need to use the populate_db.sh script.
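The URL template above can be filled in like this. The helper below is illustrative and not part of the toolkit; it only substitutes one of the supported versions into the template.

```python
# Illustrative helper (not part of the toolkit): build the dataset URL
# for one of the supported Jira versions.
SUPPORTED_VERSIONS = ("8.0.3", "7.13.6", "8.5.0")

def xml_backup_url(version):
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"no prepared XML backup for Jira {version}")
    return f"https://centaurus-datasets.s3.amazonaws.com/jira/{version}/large/xml_backup.zip"

print(xml_backup_url("8.5.0"))
```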

Use your own data

If you have your own Jira, Confluence or Bitbucket instance, there is a good chance you have your own data and want to test Jira, Confluence or Bitbucket on that dataset. This is possible. All you need to do is make sure that all entities which dc-app-performance-toolkit uses for testing are available in your instance.

dc-app-performance-toolkit runs the prepare-data.py script during the Prepare stage of the Taurus lifecycle. prepare-data.py creates CSV files which are used later during testing.

To understand which data you need in your Jira/Confluence/Bitbucket to run the tests without errors, explore the prepare-data.py file for the product you intend to test.

Let's explore these files.

Jira

You can find the prepare-data.py file for Jira in the dc-app-performance-toolkit/app/util/data_preparation/jira/ folder.

The script selects data from Jira and creates CSV files with this data, which is used later for testing. Here is the list of the files:

  • issues.csv - contains issues.
  • jqls.csv - contains JQL queries.
  • kanban-boards.csv - contains kanban boards.
  • project-keys.csv - contains project keys.
  • scrum-boards.csv - contains scrum boards.
  • users.csv - contains users.

We will not explore the whole prepare-data.py file, but just this function:

def __create_data_set(jira_api):
    dataset = dict()
    dataset[USERS] = __get_users(jira_api)
    software_project_keys = __get_software_project_keys(jira_api, PROJECTS_COUNT_LIMIT)
    dataset[PROJECT_KEYS] = software_project_keys
    dataset[ISSUES] = __get_issues(jira_api, software_project_keys)
    dataset[SCRUM_BOARDS] = __get_boards(jira_api, 'scrum')
    dataset[KANBAN_BOARDS] = __get_boards(jira_api, 'kanban')
    dataset[JQLS] = __generate_jqls(count=150)

    return dataset

As you can see, this function selects the data needed for all these files:

dataset[USERS] = __get_users(jira_api)

We select users from the Jira instance. User names must start with the "performance_" prefix. We need as many users as the concurrency parameter in the jira.yml file specifies. If no users are found, or not enough of them, new users are created in your Jira with the "performance_" prefix and the password "password".
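The naming rule above can be sketched like this. This is a hypothetical helper for illustration; the real toolkit looks up and creates users through the Jira REST API, and the exact numbering scheme it uses may differ.

```python
# Hypothetical sketch of the naming rule: one "performance_" user per
# concurrent virtual user configured in jira.yml.
def performance_user_names(concurrency, prefix="performance_"):
    return [f"{prefix}{i}" for i in range(concurrency)]

print(performance_user_names(3))  # ['performance_0', 'performance_1', 'performance_2']
```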

software_project_keys = __get_software_project_keys(jira_api, PROJECTS_COUNT_LIMIT)
dataset[PROJECT_KEYS] = software_project_keys

We select project keys for software projects.

__get_issues(jira_api, software_project_keys)

We select at most 8000 issues from software projects which are not in the Closed status.
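A hedged sketch of the kind of JQL such a selection could use; the exact query inside prepare-data.py may differ, and the limit constant here is only named for illustration.

```python
# Illustrative only: JQL matching the description "issues from software
# projects which are not in the Closed status", capped at 8000 results.
ISSUES_LIMIT = 8000

def open_issues_jql(software_project_keys):
    projects = ", ".join(software_project_keys)
    return f"project in ({projects}) AND status != Closed"

print(open_issues_jql(["PROJ", "TEST"]))
```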

dataset[SCRUM_BOARDS] = __get_boards(jira_api, 'scrum')

We select at most 250 scrum boards.

dataset[KANBAN_BOARDS] = __get_boards(jira_api, 'kanban')

We select at most 250 kanban boards.

dataset[JQLS] = __generate_jqls(count=150)

We generate JQL queries like 'text ~ "abc*"'.
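The generation step can be sketched as follows. This mirrors the shape of the output described above ('text ~ "abc*"' with a random three-letter term); the toolkit's own implementation may pick terms differently.

```python
import random
import string

def generate_jqls(count=150):
    """Generate full-text JQL queries of the form: text ~ "abc*"."""
    return ['text ~ "{}*"'.format("".join(random.choices(string.ascii_lowercase, k=3)))
            for _ in range(count)]

print(generate_jqls(3))
```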

This means that for the tests to work you need a software project with issues, a scrum board and a kanban board.

We will continue the topic of preparing test data in Part 4.
