When I boot up an Elastic Bamboo instance, the Linux console log prints this:
ci-info: +---------+-------------------------------------------------+---------+---------------+
ci-info: | Keytype | Fingerprint (md5)                               | Options |    Comment    |
ci-info: +---------+-------------------------------------------------+---------+---------------+
ci-info: | ssh-rsa | a1:a1:a1:03:6c:11:a1:4d:a1:4d:a1:4d:a1:4d:a1:1a |    -    | elasticbamboo |
ci-info: +---------+-------------------------------------------------+---------+---------------+
But the fingerprint for the "elasticbamboo" key pair in "AWS Console > EC2 > Network & Security > Key Pairs" is different.
How do I configure Bamboo to authorize the correct key pair for SSH access to the node?
It looks like Bamboo may be using a stale key from a key pair I deleted and recreated, rather than reading the current value of the "elasticbamboo" key pair from AWS.
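One way to check which key the instance is actually using is to compare fingerprints locally. Note that cloud-init prints the MD5 of the public key, while the EC2 console shows a different digest depending on how the key pair was made: SHA-1 of the DER-encoded private key for key pairs created by AWS, MD5 of the DER-encoded public key for imported ones. A sketch, assuming your downloaded private key is something like `elasticbamboo.pem` (a throwaway key is generated here so the commands are self-contained):

```shell
# Throwaway RSA key standing in for elasticbamboo.pem (hypothetical filename).
ssh-keygen -t rsa -b 2048 -m PEM -N '' -f demo_key -q

# Fingerprint as cloud-init prints it: MD5 of the public key.
ssh-keygen -l -E md5 -f demo_key.pub

# Fingerprint EC2 shows for an *imported* key pair: MD5 of the DER public key.
openssl rsa -in demo_key -pubout -outform DER 2>/dev/null | openssl md5 -c

# Fingerprint EC2 shows for a key pair *created* in the console:
# SHA-1 of the PKCS#8 DER-encoded private key.
openssl pkcs8 -in demo_key -topk8 -nocrypt -outform DER | openssl sha1 -c
```

If neither local digest matches what the console shows for "elasticbamboo", the instance really is running with a different key than the one AWS has on record.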
I've found the issue for my particular case.
The "instance startup script" was causing the system to hang, which is a separate issue.
The authorized_keys file gets initialized only after the instance startup script runs, so a hung startup script blocks SSH access entirely. When nothing goes wrong, Atlassian Bamboo reads the latest value of the "elasticbamboo" key pair and writes it to authorized_keys successfully.
The Elastic Bamboo instance bootstrap should be changed so that the authorized_keys file gets initialized first; that way users can SSH into boxes whose instance startup scripts have hung and check the logs to see what's broken.
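Until the ordering changes, one workaround is to make the startup script itself hang-proof by wrapping its real work in `timeout`, so initialization (and the authorized_keys setup that follows it) always completes. A minimal sketch; the wrapped script path and the 300-second limit are assumptions, not Bamboo defaults:

```shell
#!/bin/sh
# Hypothetical wrapper for an Elastic Bamboo instance startup script.
# Runs the real work under a hard time limit so a hang cannot block the
# rest of instance initialization (including the authorized_keys setup).
timeout 300 /opt/startup/real-startup.sh >> /var/log/startup.log 2>&1 \
    || echo "startup script failed or timed out; continuing boot" >> /var/log/startup.log
```

`timeout` sends SIGTERM when the limit expires and returns a non-zero status, so the boot sequence keeps moving instead of waiting forever.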
@Przemyslaw Bruski, could you make that happen? I currently can't debug my own instance startup script issue.
Did you check in the correct region? Keys are region-bound.
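To rule that out, the AWS CLI can list the fingerprint EC2 has on record for "elasticbamboo" in each region; the region names below are examples, not an exhaustive list:

```shell
# Show the registered fingerprint for "elasticbamboo" per region.
# Requires configured AWS CLI credentials; region list is an example.
for region in us-east-1 us-west-2 eu-west-1; do
    printf '%s: ' "$region"
    aws ec2 describe-key-pairs --region "$region" --key-names elasticbamboo \
        --query 'KeyPairs[].KeyFingerprint' --output text 2>/dev/null \
        || echo '(key not found)'
done
```

A key pair with the same name can exist in several regions with different fingerprints, which is exactly the mismatch described above.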