Performance testing of Jira, Confluence and Bitbucket with dc-app-performance-toolkit, Part 1

Hello!

In this series of articles I want to talk about performance testing of Atlassian Jira, Confluence and Bitbucket.

I will not discuss the methodology of performance testing; I will cover only the technical aspects of performance testing with the dc-app-performance-toolkit provided by Atlassian.

This toolkit lets you test Atlassian products such as Jira, Confluence and Bitbucket for performance. I love this toolkit because you do not have to spend hours making it work: it works out of the box.

This toolkit uses Taurus, JMeter and Selenium for testing.

You can use the toolkit for the following purposes:

  • If your company develops apps for the Atlassian Marketplace, you can use this toolkit for Data Center certification.
  • If your company uses Atlassian Jira, Confluence or Bitbucket internally or externally, you can use this toolkit to test your current configuration with all your scripts and installed apps and, most importantly, on your own dataset. As mentioned in the documentation, this toolkit works only with certain versions of Jira, Confluence and Bitbucket, but I believe it will run successfully on all recent versions of these products with no or only minor modifications. I will show you how to modify the toolkit in this series of articles.

The steps needed for performance testing of Jira, Confluence and Bitbucket are the same, which is why I will provide examples only for Jira performance testing. If something is specific to a particular product, I will mention it.

You will need git installed on your PC to reproduce the examples in this tutorial.

Installation

First you need to clone the toolkit with the following command:

git clone https://github.com/atlassian/dc-app-performance-toolkit.git

This creates the dc-app-performance-toolkit folder. Move to this folder:

cd dc-app-performance-toolkit

Now you need to install all the dependencies and tools which are used for performance testing by the dc-app-performance-toolkit.

To accomplish this task, please follow the instructions in the path_to_dc_app_performance_toolkit/README.md file.

Config files

Before running performance tests you should read the documentation provided in the path_to_dc_app_performance_toolkit/doc folder.

The contents of the folder are:

  • In the root of this folder there are three md files (one for Jira, one for Confluence and one for Bitbucket) which explain how to use this toolkit for Data Center certification.
  • Three folders (jira, confluence and bitbucket) with information on how to run performance tests for each product.

Please read this documentation before running performance tests. I will cover only the important points which are not mentioned in the documentation.

jira.yml

Before running performance tests you should provide information about your instance in the jira.yml, confluence.yml or bitbucket.yml file, which are located in the path_to_dc_app_performance_toolkit/app folder. These files are Taurus configuration files. You can find more information on Taurus configuration files here.

I will explain the jira.yml file in detail.

---
settings:

settings is a section in the Taurus configuration file. It contains top-level settings for Taurus. You can find more information on the settings section here.

  artifacts-dir: results/jira/%Y-%m-%d_%H-%M-%S

artifacts-dir is a path template that defines where artifact files are saved (see the example after the list). Here is the list of artifact files:

  • bzt.log - log of the bzt run.
  • error_artifacts - folder with screenshots and HTML sources of Selenium failures.
  • jmeter.err - JMeter error log.
  • kpi.jtl - JMeter raw data.
  • pytest.out - detailed log of the Selenium execution, including stack traces of Selenium failures.
  • selenium.jtl - Selenium raw data.
  • results.csv - consolidated results of the execution.
  • results_summary.log - detailed summary of the run.
  • jira.yml - the jira.yml file that was used for the test run.
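For example, with the template above, a test run started on 7 May 2020 at 14:30:00 saves its artifacts in the results/jira/2020-05-07_14-30-00 folder.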

  aggregator: consolidator

aggregator contains the module alias for the top-level results aggregator to be used for collecting results and passing them to reporters. You can read more about aggregators here.

  verbose: false

verbose lets you run Taurus in debug mode. We do not use the debug mode.

  env:

env sets environment variables. These variables are referenced in other sections of the file as ${variable_name}. You can read more here.

    application_hostname: localhost   # Jira DC hostname without protocol and port e.g. test-jira.atlassian.com or localhost
    application_protocol: http      # http or https
    application_port: 2990            # 80, 443, 8080, 2990, etc
    application_postfix: /jira           # e.g. /jira in case of url like http://localhost:2990/jira
    admin_login: admin
    admin_password: admin

This set of parameters contains information about your Jira instance. I will test on my localhost Jira instance, so I set all the parameters accordingly. These parameters are used in the JMeter, Selenium and other scripts.
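For example, if your Jira instance were available at https://jira.mycompany.com/jira (a hypothetical URL), the settings would look like this:

    application_hostname: jira.mycompany.com   # hypothetical hostname
    application_protocol: https
    application_port: 443
    application_postfix: /jira
    admin_login: admin                         # a user with admin permissions
    admin_password: admin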

    concurrency: 200
    test_duration: 45m

These parameters will be passed to the execution engine of Taurus. I will explain the meaning of these parameters later in the execution section.

    WEBDRIVER_VISIBLE: false

WEBDRIVER_VISIBLE sets the visibility of the Chrome browser during Selenium execution. We make the Chrome browser invisible.

    JMETER_VERSION: 5.2.1

JMETER_VERSION defines the version of JMeter which will be used for testing.

    allow_analytics: Yes            # Allow sending basic run analytics to Atlassian. These analytics help us to understand how the tool is being used and help us to continue to invest in this tooling. For more details please see our README.
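allow_analytics defines whether basic run analytics are sent to Atlassian. If you do not want to send analytics, set this parameter to No.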

services:

services is a section in the Taurus configuration file. It provides information about services which perform some actions before the test starts, after the test ends, or in parallel with the running test. You can read more about services here.

  - module: shellexec

Shell executor is used to perform additional shell commands at various test execution phases.

    prepare:
      - python util/environment_checker.py
      - python util/data_preparation/jira/prepare-data.py
    shutdown:
      - python util/jmeter_post_check.py
      - python util/jtl_convertor/jtls-to-csv.py kpi.jtl selenium.jtl
    post-process:
      - python util/analytics.py jira
      - python util/cleanup_results_dir.py

prepare, shutdown and post-process are Taurus lifecycle stages. You can read more about the Taurus lifecycle here. Each stage runs certain scripts; here is a short description of each one (see the example of adding your own script after the list):

  • util/environment_checker.py - checks the Python version and throws an error if the version is wrong.
  • util/data_preparation/jira/prepare-data.py - prepares test data. I will cover it in detail later.
  • util/jmeter_post_check.py - checks that kpi.jtl exists. If this file does not exist, something went wrong with the JMeter testing.
  • util/jtl_convertor/jtls-to-csv.py kpi.jtl selenium.jtl - creates the results.csv file out of the kpi.jtl and selenium.jtl files. The results.csv file contains aggregated information from these files: the average, median, 90% line, maximum and minimum times of the JMeter and Selenium test executions, and some other metrics.
  • util/analytics.py jira - sends analytics to Atlassian. You can turn it off with the allow_analytics parameter.
  • util/cleanup_results_dir.py - removes temporary files generated during the test run.
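If you need to run your own script at one of these stages, you can simply append it to the corresponding list. For example, a hypothetical cache warm-up script (my_scripts/warm_up.py is a made-up name) could be added to the prepare stage like this:

    prepare:
      - python util/environment_checker.py
      - python util/data_preparation/jira/prepare-data.py
      - python my_scripts/warm_up.py   # hypothetical custom script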

execution:

execution is a section of the Taurus configuration file. Execution objects represent actual underlying tool executions. You can find more information here.

  - scenario: jmeter
    concurrency: ${concurrency}
    hold-for: ${test_duration}
    ramp-up: 3m

JMeter execution parameters. You can find more information here.

concurrency - the number of target concurrent virtual users. It means that JMeter will execute scripts emulating 200 simultaneous users.

ramp-up - the time to reach the target concurrency. In performance testing it is good practice to reach the target concurrency gradually.

hold-for - the time to hold the target concurrency. Once the target concurrency is reached, the tests keep running at that level for this amount of time.
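With the values above, Taurus adds on average about one virtual user per second during the ramp-up (200 users / 180 seconds) and then keeps all 200 users running for 45 more minutes, so the whole JMeter execution takes about 48 minutes. While you are debugging your configuration, it makes sense to reduce the load first. Here is a minimal sketch of a smoke run with values of my own (they are not part of the toolkit):

  - scenario: jmeter
    concurrency: 5      # 5 virtual users instead of 200
    hold-for: 5m        # hold the load for 5 minutes only
    ramp-up: 1m         # reach 5 users within 1 minute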

  - scenario: selenium
    executor: selenium
    runner: pytest
    hold-for: ${test_duration}

Selenium execution parameters. You can find more information here.

executor - the execution engine to use; here it is the selenium executor.

runner - test runner. We use pytest.

hold-for - the time to keep the Selenium tests running.

scenarios:

scenarios is a section of the Taurus configuration file. It provides parameters for all scenarios declared in the execution section.

  selenium:
    script: selenium_ui/jira_ui.py

script provides the path to the Selenium tests.

  jmeter:
# provides path to the jmeter project file
    script: jmeter/jira.jmx
    properties:
      application_hostname: ${application_hostname}
      application_protocol: ${application_protocol}
      application_port: ${application_port}
      application_postfix: ${application_postfix}
      # Workload model
# the number of actions per hour
      total_actions_per_hr: 54500
# the percentage of each action within one hour; the sum of all perc_ parameters must equal 100%
      perc_create_issue: 4
      perc_search_jql: 13
      perc_view_issue: 43
      perc_view_project_summary: 4
      perc_view_dashboard: 12
      perc_edit_issue: 4
      perc_add_comment: 2
      perc_browse_projects: 4
      perc_view_scrum_board: 3
      perc_view_kanban_board: 3
      perc_view_backlog: 6
      perc_browse_boards: 2
      perc_standalone_extension: 0 # By default disabled

script provides the path to the JMeter project file.

total_actions_per_hr sets the number of actions performed within one hour.

The perc_ parameters set the percentage of each operation within the hourly total. The sum of all perc_ parameters must equal 100%.
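For example, with total_actions_per_hr: 54500 and perc_view_issue: 43, JMeter will perform 54500 × 0.43 = 23435 view issue actions within one hour, spread across all virtual users.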

modules:
  consolidator:
    rtimes-len: 0 # CONFSRVDEV-7631 reduce sampling
    percentiles: [] # CONFSRVDEV-7631 disable all percentiles due to Taurus's excessive memory usage

modules is a section in the Taurus configuration file. This section contains a list of classes to load and the settings for these classes.

  jmeter:
    version: ${JMETER_VERSION}
    detect-plugins: true
    memory-xmx: 8G  # allow JMeter to use up to 8G of memory
    plugins:
      - bzm-parallel=0.4
      - bzm-random-csv=0.6
      - jpgc-casutg=2.5
      - jpgc-dummy=0.2
      - jpgc-ffw=2.0
      - jpgc-fifo=0.2
      - jpgc-functions=2.1
      - jpgc-json=2.6
      - jpgc-perfmon=2.1
      - jpgc-prmctl=0.4
      - jpgc-tst=2.4
      - jpgc-wsc=0.3
      - tilln-sshmon=1.0
      - jpgc-cmd=2.2
      - jpgc-synthesis=2.2
    system-properties:
      server.rmi.ssl.disable: true
      java.rmi.server.hostname: localhost
      httpsampler.ignore_failed_embedded_resources: "true"

jmeter provides properties for the JMeter module. You can read more about JMeter properties here.

detect-plugins - the JMeter Plugins Manager installs the plugins necessary for your jmx file automatically. Yes, we want the required plugins installed automatically.

plugins - a list of JMeter plugins you want to use.

system-properties - system properties for JMeter. You can find more information on JMeter system properties here.

  selenium:
# version of the chrome driver
    chromedriver:
      version: "80.0.3987.106" # Supports Chrome version 80. You can refer to http://chromedriver.chromium.org/downloads

selenium provides Selenium settings.

chromedriver - the version of ChromeDriver we will use for testing. The major version of ChromeDriver must match the version of Chrome installed on the machine where the tests run.

reporting:
- data-source: sample-labels
  module: junit-xml

reporting is a section in the Taurus configuration file which provides analysis and reporting settings. Here we say that we want JUnit XML reporting. You can find more information here.
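Taurus supports other reporter modules as well. For example, if you also want an aggregated summary printed to the console at the end of the run, you could extend this section with the final-stats module (an optional addition of mine, not part of the toolkit's default jira.yml):

reporting:
- data-source: sample-labels
  module: junit-xml
- module: final-stats   # prints aggregated statistics to the console after the run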

Part 2
