Hi, I've got a space that hasn't been updated for a long time, and I need to migrate some of its contents to another space. The problem is that since the content hasn't been updated in years, I strongly suspect that many of the links are already dead.
Can anybody suggest what I could try to check the links?
So far my approach is to get body.view from all pages in a given space and extract all the links from it. The number of links makes it unfeasible to check them manually, so I want to write a Python script that iterates over them.
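For the extraction step, this is roughly what I have in mind. It's only a sketch that assumes a Confluence Cloud site, the /rest/api/content endpoint with expand=body.view, and BeautifulSoup for pulling out the anchors; the site URL, space key, and credentials below are placeholders:

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://your-site.atlassian.net/wiki"  # placeholder site URL
SPACE_KEY = "SPACEKEY"                              # placeholder space key
AUTH = ("user@example.com", "api_token")            # placeholder credentials

def get_links_in_space(space_key):
    """Collect every <a href> target from body.view of all pages in a space."""
    links = []
    start, limit = 0, 25
    while True:
        r = requests.get(
            f"{BASE_URL}/rest/api/content",
            params={"spaceKey": space_key, "expand": "body.view",
                    "start": start, "limit": limit},
            auth=AUTH,
        )
        r.raise_for_status()
        data = r.json()
        for page in data["results"]:
            html = page["body"]["view"]["value"]
            soup = BeautifulSoup(html, "html.parser")
            links.extend(a["href"] for a in soup.find_all("a", href=True))
        if data["size"] < limit:  # last page of results
            break
        start += limit
    return links

links = get_links_in_space(SPACE_KEY)
print(len(links))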
import requests
from requests.auth import HTTPDigestAuth

link = "https://example.com/some-page"   # placeholder: one of the extracted links
user, password = "username", "password"  # placeholder credentials

# Request the link and print the HTTP status code it returns.
r = requests.get(link, auth=HTTPDigestAuth(user, password))
print(r.status_code)
My first attempt was to simply use the requests library to get the status codes, but it seems that no matter what link I pass in, I get a 200 status, even when I look up a non-existing page.
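In case it helps, here is how I'm thinking of poking at a single link to figure out where the 200 comes from. This is just a guess that the server redirects everything (e.g. to a login or "page not found" page) that itself answers with 200; I also switched to HTTPBasicAuth here since I believe that's what Confluence Cloud expects, but that's an assumption, and the link and credentials are placeholders:

import requests
from requests.auth import HTTPBasicAuth

link = "https://your-site.atlassian.net/wiki/spaces/FOO/pages/123"  # placeholder link
user, password = "user@example.com", "api_token"                    # placeholder credentials

# Without following redirects: a 3xx status plus a Location header here
# would mean the original URL never answers 200 itself.
r = requests.get(link, auth=HTTPBasicAuth(user, password), allow_redirects=False)
print(r.status_code, r.headers.get("Location"))

# Following redirects: show the redirect chain and the final URL,
# to see which page is actually producing the 200.
r = requests.get(link, auth=HTTPBasicAuth(user, password))
print(r.status_code, [resp.status_code for resp in r.history], r.url)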