Reports
Besides the files above, you can also generate reports. These reports are aimed at getting your app approved for Data Center.
Performance Report
This report shows how your Jira, Confluence, or Bitbucket instance performed with and without your app. First you make a test run with your app installed, then you make a test run without your app. After that, you provide paths to both results in the performance_profile.yml file:
# Defines which column from test runs is used for aggregated report. Default is "90% Line"
column_name: "90% Line"
runs:
  # fullPath should contain a full path to the directory with run results. E.g. /home/$USER/dc-app-performance-toolkit/jira/results/2019-08-06_17-41-08
  - runName: "without app"
    fullPath: "/Users/alexm/PycharmProjects/easymigration/dc-app-performance-toolkit/app/results/confluence/noplugin"
  - runName: "with app"
    fullPath: "/Users/alexm/PycharmProjects/easymigration/dc-app-performance-toolkit/app/results/confluence/4_nodes"
# Chart generation config
index_col: "Action"
title: "DCAPT Performance Testing"
image_height_px: 1000
image_width_px: 1200
Then you run the following command:
python3 csv_chart_generator.py performance_profile.yml
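A mistyped fullPath only surfaces once the generator runs, so it can be worth sanity-checking the profile first. Here is a minimal sketch using PyYAML (assumed available, since the toolkit reads .yml configs); the embedded profile_text and its paths are placeholders, not real results directories:

```python
import os
import yaml  # PyYAML, assumed installed alongside the toolkit

# Placeholder profile; in practice, read performance_profile.yml from disk
profile_text = """
column_name: "90% Line"
runs:
  - runName: "without app"
    fullPath: "/tmp/results/confluence/noplugin"
  - runName: "with app"
    fullPath: "/tmp/results/confluence/4_nodes"
index_col: "Action"
title: "DCAPT Performance Testing"
"""

profile = yaml.safe_load(profile_text)

# Warn about any run directory that does not exist before launching
# csv_chart_generator.py
for run in profile["runs"]:
    if not os.path.isdir(run["fullPath"]):
        print(f'Warning: results dir for "{run["runName"]}" not found: {run["fullPath"]}')
```

The same check works for scale_profile.yml, since both files share the runs/fullPath layout.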
The report consists of three files: performance_profile_summary.log, performance_profile.csv, and performance_profile.png.
performance_profile_summary.log contains summary information about the two test runs:
Scenario status: OK
========================================================================================================================
Run name: without app
Summary run status OK
Artifacts dir 2020-07-24_13-29-26
OS macOS
DC Apps Performance Toolkit version 3.0.1
Application confluence 7.4.1
Dataset info 975407 pages
Application nodes count 4
Concurrency 200
Expected test run duration from yml file 2700 sec
Actual test run duration 2710 sec
Finished (True, 'OK')
Compliant (True, 'OK')
Success (True, 'OK')
Action Success Rate Status
jmeter_comment 100.0 OK
jmeter_create_blog 100.0 OK
jmeter_create_blog_editor 100.0 OK
jmeter_create_page 100.0 OK
jmeter_create_page_editor 100.0 OK
jmeter_edit_page 100.0 OK
jmeter_like_page 100.0 OK
jmeter_login_and_view_dashboard 100.0 OK
jmeter_open_editor 100.0 OK
jmeter_recently_viewed 100.0 OK
jmeter_search_results 100.0 OK
jmeter_upload_attachment 100.0 OK
jmeter_view_attachment 100.0 OK
jmeter_view_blog 100.0 OK
jmeter_view_dashboard 100.0 OK
jmeter_view_page 100.0 OK
jmeter_view_page_tree 100.0 OK
selenium_a_login 100.0 OK
selenium_create_comment 100.0 OK
selenium_create_page 100.0 OK
selenium_edit_page 100.0 OK
selenium_view_blog 100.0 OK
selenium_view_dashboard 100.0 OK
selenium_view_page 100.0 OK
selenium_z_log_out 100.0 OK
========================================================================================================================
Run name: with app
Summary run status OK
Artifacts dir 2020-07-24_13-29-26
OS macOS
DC Apps Performance Toolkit version 3.0.1
Application confluence 7.4.1
Dataset info 975407 pages
Application nodes count 4
Concurrency 200
Expected test run duration from yml file 2700 sec
Actual test run duration 2710 sec
Finished (True, 'OK')
Compliant (True, 'OK')
Success (True, 'OK')
Action Success Rate Status
jmeter_comment 100.0 OK
jmeter_create_blog 100.0 OK
jmeter_create_blog_editor 100.0 OK
jmeter_create_page 100.0 OK
jmeter_create_page_editor 100.0 OK
jmeter_edit_page 100.0 OK
jmeter_like_page 100.0 OK
jmeter_login_and_view_dashboard 100.0 OK
jmeter_open_editor 100.0 OK
jmeter_recently_viewed 100.0 OK
jmeter_search_results 100.0 OK
jmeter_upload_attachment 100.0 OK
jmeter_view_attachment 100.0 OK
jmeter_view_blog 100.0 OK
jmeter_view_dashboard 100.0 OK
jmeter_view_page 100.0 OK
jmeter_view_page_tree 100.0 OK
selenium_a_login 100.0 OK
selenium_create_comment 100.0 OK
selenium_create_page 100.0 OK
selenium_edit_page 100.0 OK
selenium_view_blog 100.0 OK
selenium_view_dashboard 100.0 OK
selenium_view_page 100.0 OK
selenium_z_log_out 100.0 OK
========================================================================================================================
performance_profile.csv contains summary response-time information for each of the Selenium and JMeter actions:
Action,without app,with app
jmeter_login_and_view_dashboard,92764,90969
jmeter_view_page,20174,21212
jmeter_view_page_tree,1027,1182
jmeter_recently_viewed,6537,6309
jmeter_search_results,20103,20091
jmeter_view_blog,20552,19301
jmeter_comment,1497,1581
jmeter_upload_attachment,3758,3895
jmeter_create_blog_editor,6201,6335
jmeter_create_blog,12925,11319
jmeter_view_attachment,1394,1281
jmeter_like_page,798,866
jmeter_view_dashboard,4971,4775
jmeter_create_page_editor,8401,8508
jmeter_create_page,8620,8801
jmeter_open_editor,4513,4541
jmeter_edit_page,9528,9225
selenium_login:open_login_page,703,790
selenium_login:login_and_view_dashboard,2115,2057
selenium_login,3007,3132
selenium_view_page,1065,1269
selenium_create_page:open_create_page_editor,9562,9378
selenium_create_page:save_created_page,5603,5202
selenium_create_page,14663,14980
selenium_edit_page:open_create_page_editor,4059,3950
selenium_edit_page:save_edited_page,2965,2870
selenium_edit_page,6393,6959
selenium_create_comment:write_comment,4763,4575
selenium_create_comment:save_comment,1378,1178
selenium_create_comment,6284,6698
selenium_view_blog,9723,9937
selenium_view_dashboard,5365,5007
selenium_log_out,1543,1110
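Because the CSV has one response-time column per run, it is easy to compute how much each action slowed down (or sped up) with the app installed. A quick sketch using only the standard library; the embedded rows are copied from the CSV above for illustration:

```python
import csv
import io

# A few rows from performance_profile.csv, embedded for illustration
csv_text = """Action,without app,with app
jmeter_view_page,20174,21212
jmeter_edit_page,9528,9225
selenium_view_dashboard,5365,5007
"""

deltas = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    without_app = float(row["without app"])
    with_app = float(row["with app"])
    # Positive delta means the action got slower with the app installed
    deltas[row["Action"]] = round((with_app - without_app) / without_app * 100, 1)

# Print actions sorted from largest slowdown to largest speedup
for action, delta in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{action}: {delta:+.1f}%")
```

For these rows, jmeter_view_page is about 5% slower with the app, while the other two actions are actually a few percent faster, which is within normal run-to-run noise.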
performance_profile.png contains the graph for the two test runs.
Scale Report
This report shows how your Jira, Confluence, or Bitbucket instance performed with 4, 2, and 1 nodes. Before generating this report, you need to run all tests on 4, 2, and 1 nodes. After that, you provide paths to the three results in the scale_profile.yml file:
# Defines which column from test runs is used for aggregated report. Default is "90% Line"
column_name: "90% Line"
runs:
  # fullPath should contain a full path to the directory with run results. E.g. /home/$USER/dc-app-performance-toolkit/jira/results/2019-08-06_18-41-08
  - runName: "Node 1"
    fullPath: "/Users/alexm/PycharmProjects/easymigration/dc-app-performance-toolkit/app/results/confluence/1_node"
  - runName: "Node 2"
    fullPath: "/Users/alexm/PycharmProjects/easymigration/dc-app-performance-toolkit/app/results/confluence/2_nodes"
  - runName: "Node 4"
    fullPath: "/Users/alexm/PycharmProjects/easymigration/dc-app-performance-toolkit/app/results/confluence/4_nodes"
# Chart generation config
index_col: "Action"
title: "DCAPT Scale Testing"
image_height_px: 1000
image_width_px: 1200
Again, you will get three files: scale_profile_summary.log, scale_profile.csv, and scale_profile.png.
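Since scale_profile.csv uses the same layout (one response-time column per run), you can sketch a quick check that response times hold up as nodes are added. The rows below are made-up illustrative values, and the 20% threshold is just a number I picked for the example, not an official acceptance criterion:

```python
import csv
import io

# Illustrative rows in the scale_profile.csv layout: one column per run
csv_text = """Action,Node 1,Node 2,Node 4
jmeter_view_page,21500,20800,21212
selenium_create_page,15400,15100,14980
"""

THRESHOLD_PCT = 20.0  # example threshold only, not an official criterion

regressions = []
for row in csv.DictReader(io.StringIO(csv_text)):
    baseline = float(row["Node 1"])
    for run in ("Node 2", "Node 4"):
        # Degradation relative to the single-node baseline, in percent
        degradation = (float(row[run]) - baseline) / baseline * 100
        if degradation > THRESHOLD_PCT:
            regressions.append((row["Action"], run, round(degradation, 1)))

print("regressions:", regressions)
```

With the sample rows above, no action degrades beyond the threshold, so the list comes back empty; on real data, each tuple would name the action, the run, and the percentage slowdown.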
These reports are primarily intended for certifying your app for Data Center, but you can also use them to get summary information about any two or three test runs.
That is all for the reports and graphs. I believe you now have enough information to start using dc-app-performance-toolkit.
Alexey Matveev
software developer
MagicButtonLabs
Philippines