I (shortly will) have Stash instances on several disconnected networks (no Internet, no connection to each other), each containing a large number of projects and repos. What's the "Atlassian" best way to synchronize these repos on a weekly (or daily) basis?
Currently this isn't really something Atlassian has needed to address. By that I mean we usually have any given repository in a single place, whether that's Bitbucket or Stash, and people access the repositories directly from their canonical location.
You might be interested in the following issue, which is about mirroring in Stash:
I am curious though, when you say 'synchronize', do you mean that changes are made to the same repository/branch but in different instances? If so, how do you plan on merging between those two instances? Sadly there is nothing Stash can do to help with that problem; someone will need to resolve the merge manually. Ideally you have a single Stash instance that everyone can write/push to (for a given repository), but which is mirrored read-only to other geographical instances for caching.
In any case, for now you could easily write a simple script (or Stash plugin) that fetches the latest changes from all the repositories in one instance and pushes them to another. Again, it's the pushing step that will be where you hit problems - if there have been any divergent changes someone will need to be notified to fix the merge. That someone is ideally a person involved in the actual code changes. You could hypothetically have your scripts detect this merge, raise a pull-request and assign all the users with commits on this instance as reviewers.
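As a minimal sketch of that fetch-and-push script (the base URLs and repository names below are made-up placeholders; in practice you could pull the repository list from Stash's REST API instead of hard-coding it):

```shell
#!/bin/sh
# Hypothetical sketch: mirror each repository from one Stash instance into
# another. SRC_BASE, DST_BASE, and REPOS are assumptions; adjust for your setup.
SRC_BASE="https://stash-a.example.com/scm/proj"
DST_BASE="https://stash-b.example.com/scm/proj"
REPOS="repo-one repo-two"

for repo in $REPOS; do
  # Keep a local bare mirror so all branches and tags travel, not one checkout.
  if [ ! -d "$repo.git" ]; then
    git clone --mirror "$SRC_BASE/$repo.git" "$repo.git"
  fi
  git --git-dir="$repo.git" fetch --prune origin
  # A plain (non-forced) push refuses non-fast-forward updates, which is how
  # you detect that someone needs to resolve divergent changes by hand.
  if ! git --git-dir="$repo.git" push "$DST_BASE/$repo.git" --all; then
    echo "Divergent history in $repo: manual merge needed" >&2
  fi
  git --git-dir="$repo.git" push "$DST_BASE/$repo.git" --tags || true
done
```

Note the deliberate choice of `push --all` rather than `push --mirror`: `--mirror` force-updates refs and would silently clobber divergent work on the destination, whereas a plain push fails and tells you a merge is needed.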
I'd love to hear more about what your requirements are for this situation.
Currently, source code 'lives' in its home repo on a given network, and gets transferred / copied / synchronized (via sneaker net - the networks are not physically connected, so mirroring probably? won't work here) to repos on other, disconnected networks. Usually, the synchronized code is just checked out of the destination repo only so that other code may be built, but sometimes, bug fixes and changes are checked into the destination repo, and rolled back to the source repo. Ideally, we would just compile all our source on its home network, and transfer artifacts around, but we're not there yet - this is related to my other recent question: https://answers.atlassian.com/questions/163199/stash-using-managing-multiple-repos-simultaneously
There isn't really much in the way of help that Stash can provide if the networks are separated like that. But you can utilise the distributed nature of Git to make things easier. You could keep a repository on the drive used for copying, pull changes from the home network onto the device, and then push them to the destination. This will work just fine as long as you don't make changes on the destination; my comment above still stands when you do, and you may have to deal with merge conflicts. The nice thing about using Git like this, rather than just copying files, is that Git will tell you when you get conflicts without you having to think about it.
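That drive-based round trip might look something like this (a sketch only — the mount point, instance URLs, and repo name are hypothetical):

```shell
#!/bin/sh
# Hypothetical sneaker-net sketch: a bare repo lives on a removable drive and
# carries history between disconnected networks. DRIVE and the URLs are
# placeholders for your actual mount point and Stash clone URLs.
DRIVE=/mnt/usb

# Step 1, on the home network: seed the drive on first use, refresh it after.
if [ ! -d "$DRIVE/repo.git" ]; then
  git clone --mirror "https://stash-home.example.com/scm/proj/repo.git" "$DRIVE/repo.git"
else
  git --git-dir="$DRIVE/repo.git" fetch --prune origin
fi

# Step 2, after carrying the drive to the destination network: push into the
# destination Stash. A non-forced push refuses non-fast-forward updates, so a
# failure here is Git telling you a manual merge is required.
git --git-dir="$DRIVE/repo.git" push "https://stash-dest.example.com/scm/proj/repo.git" --all
git --git-dir="$DRIVE/repo.git" push "https://stash-dest.example.com/scm/proj/repo.git" --tags
```

Fixes rolled back from the destination would travel the same way in reverse: fetch them onto the drive from the destination instance, carry it home, and push.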
I hope that answers your question?