Error: Status 404: {"message":"No such container: ****_system_docker"}

We're facing an issue while running our pipeline, which is built with the atlassian/ssh-run pipe. It had been working smoothly until just recently. Here is the stack trace we're seeing in the bitbucket-pipelines-runner docker log:

[2022-10-04 18:31:00,606] An error occurred whilst inspecting container.
com.github.dockerjava.api.exception.NotFoundException: Status 404: {"message":"No such container: 6886979a-4d81-5af5-8793-4d69d1fe32ea_d409412b-a384-4d15-a566-f3e33493d5ac_system_docker"}

at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:97)
at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:32)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
at io.netty.channel.epoll.EpollDomainSocketChannel$EpollDomainUnsafe.epollInReady(EpollDomainSocketChannel.java:138)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Unknown Source)
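
For reference, the failing step is just a standard ssh-run invocation on a self-hosted Linux Docker runner. A minimal sketch of that kind of step is below; the pipe version, host, user, and command are placeholders rather than our exact values:

# bitbucket-pipelines.yml (sketch, placeholder values)
pipelines:
  default:
    - step:
        name: Deploy over SSH
        runs-on:
          - self.hosted
          - linux
        script:
          - pipe: atlassian/ssh-run:0.4.1
            variables:
              SSH_USER: 'deploy'
              SERVER: 'host.example.com'
              COMMAND: './deploy.sh'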

 

We recently implemented Server Protection with Sophos, so we strongly suspect it is related to this new issue. What is strange is that running the uninstall script provided by Sophos doesn't fix the problem.
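
If it helps with triage, we can also check on the runner host whether the container from the error was ever created at all (standard Docker CLI; the name filter is just the suffix from the error above):

# look for the runner's internal containers, including exited ones
docker ps -a --filter "name=system_docker"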

Is this familiar?

1 answer

Answer accepted
Theodora Boudale
Atlassian Team
Oct 07, 2022

Hi @aslane,

  • Based on your post, I assume you are using Bitbucket Linux Docker runners for your build; is that correct? What version of the runner are you using? (One quick way to check is sketched after this list.)

  • What is the configuration you have in your bitbucket-pipelines.yml file?

  • Do you get an error when the command for atlassian/ssh-run pipe is running? If so, what is the output of the command in the Pipelines build log?
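
One way to check the runner version is to look at the image tag of the runner container on the host, for example (standard Docker CLI; the tag shown on the bitbucket-pipelines-runner image corresponds to the runner version):

# show running containers together with their image tags
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"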

Please make sure to sanitize private/sensitive info, if any, prior to sharing it.


Kind regards,
Theodora

@Theodora Boudale ,

Interestingly, we've discovered that the problem is the Sophos installation interfering with AWS ECS and its process of deploying new tasks.

Right after Sophos is installed, the amazon-ecs-volume-plugin.service is still fine, but as soon as an ECS task is launched, the service becomes inactive. I'm going to report this to both AWS and Sophos.

Restarting the service (systemctl restart amazon-ecs-volume-plugin) fixes it, and curiously, from what I can tell the problem doesn't come back afterwards. Restarting the service also fixed the pipeline failure we were seeing.
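
In case it's useful to others, the unit state and logs can be inspected with standard systemd tooling, and a drop-in could in principle keep the plugin running automatically. This is an untested sketch; if Sophos stops the unit explicitly through systemd rather than killing its process, the drop-in won't help and the manual restart above remains the workaround:

# check the unit state and recent logs
systemctl is-active amazon-ecs-volume-plugin
journalctl -u amazon-ecs-volume-plugin --since "1 hour ago"

# optional drop-in at /etc/systemd/system/amazon-ecs-volume-plugin.service.d/restart.conf
[Service]
Restart=always
RestartSec=5s

# apply the drop-in
sudo systemctl daemon-reload
sudo systemctl restart amazon-ecs-volume-plugin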

I just wanted to let you know about the situation, in case other users encounter this.

In any case, this ticket can be closed.

Thank you for your time!

Theodora Boudale
Atlassian Team
Oct 10, 2022

Hi @aslane,

It's good to hear that you figured this out, and thank you for sharing more details about what was happening and the resolution.

Please feel free to reach out if you ever need anything else!

Kind regards,
Theodora
