API to download assets

Rohit Bankoti April 17, 2023

I'm currently working on automating a process that uses the API to retrieve either all assets or a single .CSV file containing all assets, which will be stored as a backup.

I am not looking for alternative options such as the Jira Cloud Backup & Restore app.

 

Has anyone succeeded in automating this with the API?

5 answers

1 accepted

3 votes
Answer accepted
Ian Carlos
Rising Star
April 18, 2023

Yes, I'm currently automating Jira using Python 3.

 

To retrieve the Assets, check the following links (after that I'll give you a step-by-step guide):

Manage API tokens for your Atlassian account 

Get assets workspaces 

Get aql objects 

 

So, the steps are:

  1. Get a token (the first link I listed above contains the link to create one: API Tokens) and save it

    ** Code for the next steps at the bottom **
  2. Get your current Assets workspace id and save it
  3. Get all your Assets (pagination is set to 25 by default, so you'll need to iterate)

 

Here is the code for steps 2 and 3

 

# This sample is written in Python 3

# Using https://pypi.org/project/requests/
import requests as req
import json

# Define some important variables
# Basic auth with your Atlassian account email and the API token you just created
jira_auth = ("your_jira_user_email", "the_token_you_just_created")
jira_headers = {
    "accept": "application/json",
    "content-type": "application/json",
}

# Get your workspace id with a REST API call
response = req.request(
    "GET",
    url="https://your-domain.atlassian.net/rest/servicedeskapi/assets/workspace",
    headers=jira_headers,
    auth=jira_auth,
)

# If you have more than one workspace, I'd recommend you
# print the whole response.text and pick the workspaceId you need

# But assuming you have only one, do this
jira_workspace_id = json.loads(response.text)["values"][0]["workspaceId"]


# Now, let's get all the Assets

# Some control variables
# According to the docs, asset pagination starts at page 1
# asset_counter and total_counter start with these values so that
# the while loop below runs at least once
page_counter = 1
asset_counter = 0
total_counter = 1

# Iterate until asset_counter reaches total_counter
while asset_counter < total_counter:

    # REST API call to get the Assets; the query is empty so all of them are retrieved
    response = req.request(
        "GET",
        url="https://api.atlassian.com/jsm/assets/workspace/" + jira_workspace_id + "/v1/aql/objects?qlQuery=",
        headers=jira_headers,
        auth=jira_auth,
        params={"page": page_counter},
    )

    # If the call was OK
    if response.status_code == 200:

        # Load the Asset response as a JSON dict (this is what you want)
        # Then update the control variables:
        # total_counter = total number of assets matching the filter (no filter/query this time)
        # asset_counter increases by 25, the maximum page size the API returns
        # page_counter increases by 1, so the next iteration returns the assets of the next page
        jra_aql = json.loads(response.text)
        total_counter = jra_aql["totalFilterCount"]
        asset_counter += 25
        page_counter += 1

        # And just for visualization, pretty-print the Asset JSON
        # This is what you want; later you can decide how to write it to a CSV, etc.
        print(json.dumps(jra_aql, indent=4, separators=(",", ": ")))

    # In case the call failed, set these values to avoid an infinite loop
    else:
        asset_counter = 1
        total_counter = 0

As you can see, I never saved the jra_aql response dicts into a list or anything like that, because you may want to adjust the data before saving it.
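
If you do want to keep everything in memory, a minimal sketch could look like this (assuming the variables defined above, and that each page's assets sit under objectEntries, as later replies in this thread also show):

# A variant of the loop above that accumulates the assets instead of printing them
all_assets = []
page_counter = 1
asset_counter = 0
total_counter = 1

while asset_counter < total_counter:
    response = req.request(
        "GET",
        url="https://api.atlassian.com/jsm/assets/workspace/" + jira_workspace_id + "/v1/aql/objects?qlQuery=",
        headers=jira_headers,
        auth=jira_auth,
        params={"page": page_counter},
    )
    if response.status_code != 200:
        break
    jra_aql = json.loads(response.text)
    # objectEntries holds the asset objects of the current page
    all_assets.extend(jra_aql.get("objectEntries", []))
    total_counter = jra_aql["totalFilterCount"]
    asset_counter += 25
    page_counter += 1

# all_assets now contains one dict per retrieved asset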

Well, I hope this helps; let me know if you need anything else.

 

Good luck automating!

Rohit Bankoti April 20, 2023

Thanks @Ian Carlos :)

Dirk De Mal May 5, 2023

@Ian Carlos 

Big thanks... I tested it and it worked like a charm. I would like my output to be in CSV file(s).

Any idea on how to script this in Python?

Thx for the help.

Kind regards,

Dirk

Ian Carlos
Rising Star
May 7, 2023

I'm glad to read that it worked for you!

Hmm, making it into a CSV could be a little complex, but I would do it this way.

This is just pseudocode; it has to be adjusted before actually using it:

# Declare a list where the lines of the CSV will be saved
# Save the header of the CSV with the columns you want to have
list_of_strings = []
first_string = "objectTypeId,name,attribute1,attribute2,...,attributeN"
list_of_strings.append(first_string)

# Then iterate over the retrieved assets and their attributes; for each asset
# build a string line containing the asset data you want to save
for asset in assets:
    asset_string = ""
    for asset_attr in asset["attributes"]:
        asset_string += asset_attr["value"] + ","
    list_of_strings.append(asset_string)

# Write the CSV
file2write = open("csvName.csv", "w")

for ln in list_of_strings:
    file2write.write(ln + "\n")
file2write.close()
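
As an alternative sketch using the csv module (so values containing commas get quoted correctly), still assuming assets is the list of asset dicts retrieved earlier and that each attribute exposes a plain "value" key as in the pseudocode above:

import csv

# Hypothetical column names; replace them with the attributes you actually export
header = ["objectTypeId", "name", "attribute1", "attribute2"]

with open("csvName.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(header)
    for asset in assets:
        # Assumes the same flat attribute structure as the pseudocode above
        row = [asset_attr["value"] for asset_attr in asset["attributes"]]
        writer.writerow(row)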

 

I hope this helps!

Rick Westbrock September 15, 2023

I have a working script which first exports objects matching an AQL query to a list, then writes the list to a CSV file. Below is the file-writing function, where the header row of attributes is written first, followed by the objects.

 

def write_output_file(attributes: list, objects: List[dict], target_file: pathlib.Path) -> None:
    """
    Write list of Assets objects to CSV file

    Args:
        attributes (list): list of attributes to use as the header row
        objects (List[dict]): list of Assets objects dicts
        target_file (pathlib.Path): Posix path of file to be written

    Returns:
        None
    """
    try:
        with open(target_file, 'w') as csv_file:
            writer = csv.DictWriter(csv_file, fieldnames=attributes)
            writer.writeheader()
            for obj in objects:
                writer.writerow(obj)
    except IOError as err:
        logger.error(f'Failed to write output file due to {err}')
        sys.exit(1)
    logger.info(f'Requested assets objects written to {str(target_file)}')
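
For illustration only (these attribute names and values are made up, not taken from the real script), a call could look like:

# Hypothetical example data; in the real script the attributes come from a config file
attributes = ['Name', 'Serial Number', 'Owner']
objects = [
    {'Name': 'Laptop-001', 'Serial Number': 'SN-12345', 'Owner': 'ABC'},
    {'Name': 'Laptop-002', 'Serial Number': 'SN-67890', 'Owner': 'DEF'},
]
write_output_file(attributes, objects, pathlib.Path('hardware_export.csv'))
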
Rohit Bankoti September 19, 2023

@Rick Westbrock: I think the API response contains nested JSON objects, which are not in a flat sequence since they sit under objectEntries.

Your CSV solution is working fine, but the missing piece is how to get this information from the JSON objects into the attributes, otherwise it will not work. Can you explain in more detail?

        attributes (list): list of attributes to use as the header row ---> which attributes have you used? The required attributes for my use case are:

label : "2345"

displayValue : "26/Feb/23 11:15 PM"
displayValue : "26/Feb/23 11:17 PM"
name : "ABC"



        objects (List[dict]): list of Assets objects dicts ---> is this the API response?

My objective is to export a file per asset type, e.g. Hardware_20Sep (containing information about Laptops, Mobile Phones, etc.) or Software_20Sep, and each CSV file must contain the consolidated information.

  • Hardware
    • Laptops
    • Mobile Phones
  • Software
    • ABC
    • DEF
Rick Westbrock September 20, 2023

@Rohit Bankoti  My apologies, that snippet doesn't give you the context of the entire script. The list of attributes is in the config file for the script (where the script accepts a report name argument to figure out which list of attributes to read from the config file).

I have a method which gets the objects and just passes the API response back to the script, but the script then runs the list of objects through a parse_object function to strip out all attributes that are not listed in the config file. Only then is the list of objects passed to the write_output_file function.

My use case was to export objects representing physical hardware where there are multiple nested child object types; however, the script currently uses the same list of attributes for all exported objects. As such it can't handle attributes unique to a child object type at this time (although I may refactor it later to add that as an enhancement).

For your use case of hardware and software, if the list of attributes desired in the CSV file is common across all object types, then using a static list of attributes would work fine.

 

The script calls the get_objects_list method below which calls the fetch_objects method. I also included the aforementioned parse_object method. (The post_api method just uses the requests module to call the API and read the response code to raise an exception in case of an error.)

 

    def get_objects_list(self, query_details: dict) -> List[dict]:
        """
        Query Assets for objects matching an AQL query and return as list of dicts

        Args:
            self (undefined): JsmSession object
            query_details (dict): schema ID, class ID and AQL query

        Returns:
            List[dict]: list of JSON definitions of Assets objects
        """
        objects_list = []
        page_num = 1
        get_next_page = True
        while get_next_page:
            payload = {
                "objectTypeId": query_details['class_id'],
                "page": page_num,
                "asc": 1,
                "resultsPerPage": 25,
                "includeAttributes": True,
                "objectSchemaId": query_details['schema_id'],
                "qlQuery": query_details['aql']
            }
            response = self.fetch_objects(payload=payload)
            if response['totalFilterCount'] == 0:
                return None
            elif response['pageSize'] == 1:
                return response['objectEntries']
            else:
                objects_list += response['objectEntries']
                if response['pageNumber'] == response['pageSize']:
                    get_next_page = False
                else:
                    page_num += 1
        return objects_list

    def fetch_objects(self, payload: dict) -> dict:
        """
        Fetch Assets objects matching an AQL query via the object navlist API

        Args:
            self (undefined): JsmSession object
            payload (dict): JSON payload for JSM Assets API call

        Returns:
            dict: JSON definition of Assets objects
        """
        uri = 'object/navlist/aql'
        return self.post_api(uri, payload)

    def parse_object(self, obj: dict, attributes: list) -> dict:
        """
        Parse full JSON response from get object and return selected attributes

        Args:
            self (undefined): JsmSession object
            obj (dict): JSON definition of object returned by JSM Assets
            attributes (list): list of attributes to return

        Returns:
            dict
        """
        obj_dict = {}
        obj_attrs = obj['attributes']
        if 'Class' in attributes:
            obj_dict['Class'] = obj['objectType']['name']
        for attr in obj_attrs:
            attr_name = attr['objectTypeAttribute']['name']
            if attr_name in attributes:
                if 'referencedObject' in attr['objectAttributeValues'][0]:
                    attr_value = attr['objectAttributeValues'][0]['referencedObject']['label']
                else:
                    attr_value = attr['objectAttributeValues'][0]['value']
                obj_dict[attr_name] = attr_value
        return obj_dict
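
To show how these pieces could fit together, here is a rough sketch only (it assumes a JsmSession class exposing the post_api helper mentioned above, plus the write_output_file function from earlier in the thread; the names and values are illustrative, not from the actual script):

# Hypothetical glue code
import pathlib

session = JsmSession()  # assumed to handle auth and expose post_api()

query_details = {
    'schema_id': '7',                   # example values only
    'class_id': '59',
    'aql': 'objectType = "Laptops"',
}
attributes = ['Class', 'Name', 'Serial Number']  # example header columns

raw_objects = session.get_objects_list(query_details)
parsed = [session.parse_object(obj, attributes) for obj in (raw_objects or [])]
write_output_file(attributes, parsed, pathlib.Path('laptops_export.csv'))
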
Rohit Bankoti September 21, 2023

@Rick Westbrock: That's fine.

 

One quick question: is there any identifier in the API response that can tell me that the main asset type is "Hardware" and that all the child elements are associated with "Hardware"?

 

  • Hardware
    • Laptops
    • Mobile Phones
    • Printer

 

When I was analysing the API response, it seemed this association information was the missing piece. I am able to export the child elements' details, but there is no information identifying that the child elements all belong to "Hardware".

The Hardware details in the API response look the same as the child elements.

Rick Westbrock September 21, 2023

I am using the Post object navlist aql API https://developer.atlassian.com/cloud/assets/rest/api-group-object/#api-object-navlist-aql-post and the response for each object is enormous: one object is 1139 lines, so I can't paste the entire thing here, unfortunately. Part of that is my fault: I had a short timeline to develop the script, so I am not passing the attributesToDisplay parameter in the request body, which would allow me to exclude attributes I don't care about (like the icon URLs).

I do see, however, that one of the nested dictionaries under each object is the object type; see the excerpt of a response below. I think this is what you need.


{
    "workspaceId": "b902f1c3-d6b9-46a0-aa4c-c61a4bf896b1",
    "globalId": "b902f1c3-d6b9-46a0-aa4c-c61a4bf896b1:18871",
    "id": "18871",
    "label": "10003",
    "objectKey": "FOUND-18871",
    "avatar": {
        "workspaceId": "b902f1c3-d6b9-46a0-aa4c-c61a4bf896b1",
        "objectId": "18871"
    },
    "objectType": {
        "workspaceId": "b902f1c3-d6b9-46a0-aa4c-c61a4bf896b1",
        "globalId": "b902f1c3-d6b9-46a0-aa4c-c61a4bf896b1:59",
        "id": "59",
        "name": "Person",
        "type": 0,
        "description": "",
        "icon": {
            "id": "131",
            "name": "User"
        },
        "position": 0,
        "created": "2023-04-18T16:28:34.850Z",
        "updated": "2023-05-04T20:26:41.074Z",
        "objectCount": 0,
        "objectSchemaId": "7",
        "inherited": false,
        "abstractObjectType": false,
        "parentObjectTypeInherited": false
    }
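
If that objectType block is enough for your grouping, a small sketch (my own illustration, assuming exported_objects is a list of object dicts shaped like the excerpt above) could bucket objects by their object type name; note that resolving the parent type ("Hardware" vs "Software") would still require a separate lookup of the object type hierarchy, which is not shown in this excerpt:

# Hypothetical grouping helper, not part of the script discussed above
from collections import defaultdict

def group_by_object_type(exported_objects):
    """Group object dicts by their objectType name (e.g. "Laptops", "Mobile Phones")."""
    groups = defaultdict(list)
    for obj in exported_objects:
        type_name = obj.get("objectType", {}).get("name", "Unknown")
        groups[type_name].append(obj)
    return groups

# groups = group_by_object_type(exported_objects)
# groups["Laptops"] -> every exported object whose type is Laptops
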
0 votes
Adam Kassoff November 14, 2023

@Rick Westbrock or @Ian Carlos  Hi, do you know of a way to adapt your script to fill object relationships? My objects have foreign keys as an attribute, and I add an attribute on the object with a reference to that key, in order to systematically fill in the reference attribute. My objects are being synced from an external source, but the reference attribute is incompatible.

0 votes
Rohit Bankoti September 15, 2023

@Ian Carlos: I was trying to test the provided APIs using Postman.

 

Can you check and confirm whether any changes are required for the APIs below?

1.  GET request : https://your-domain.atlassian.net/rest/servicedeskapi/assets/workspace

This API works fine and returned the workspace id.

 

2. GET request :  https://api.atlassian.com/jsm/assets/workspace/" + jira_workspace_id + "/v1/aql/objects?qlQuery=

 

I'm getting a 400 response from the server. Are any changes required?

 

Is it possible to export the results from the JSON response and convert them to create a .csv file?

0 votes
kevincrpratp July 31, 2023

Hi, Thanks for the script.

 

How can we import the dump please?

 

Best regards

Rick Westbrock September 15, 2023

In your Assets schema, go to the Import tab and create an import structure. Documentation for that is here: https://support.atlassian.com/jira-service-management-cloud/docs/create-an-import-structure/

0 votes
Rick Westbrock May 9, 2023

I just happened to have written a Python script to do this very thing in the past week or so (along with an associated module containing a JsmSession class, since I prefer to use OOP for this type of thing), and I wrote a function using csv.DictWriter to write the dict of object data to a file.

def write_output_file(attributes: list, objects: List[dict], target_file: Path) -> None:
    try:
        with open(target_file, 'w') as csv_file:
            writer = csv.DictWriter(csv_file, fieldnames=attributes)
            writer.writeheader()
            for obj in objects:
                writer.writerow(obj)
    except IOError as err:
        logger.error(f'Failed to write output file due to {err}')
        sys.exit(1)

 

The script already knows what the header row should be because I made it extensible: it reads that attributes variable from a config file.

The objects variable is a list where each element is the JSON definition of an exported object; I have a parse_object method in the JsmSession class to which I pass the full JSON export of an object and the attributes list, and it returns a dict of just those attributes.
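
For example (purely illustrative; the thread does not show the actual config format), the per-report attribute lists could live in a small JSON config file selected by report name:

# config.json (hypothetical layout):
# {
#     "hardware": {"attributes": ["Class", "Name", "Serial Number", "Owner"]},
#     "software": {"attributes": ["Class", "Name", "Version"]}
# }

import json

def load_attributes(config_path: str, report_name: str) -> list:
    """Read the attribute list for the requested report from the config file."""
    with open(config_path) as f:
        config = json.load(f)
    return config[report_name]["attributes"]

# attributes = load_attributes("config.json", "hardware")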
