There is a great post about how to use the Atlassian tools in a Salesforce development context (https://developer.atlassian.com/sfdc), but I am missing something.
When using a Git repo with branches, forks, pull requests and so on, and also Bamboo or Bitbucket Pipelines to automatically deploy your code to a Salesforce org, where does your code live?
I mean, with a classic IDE (say Sublime Text + MavensMate, or the Force.com IDE), the code lives in the Salesforce org. When I press Cmd+S, the code is compiled and stored inside the org, which I consider to be the master branch (so the workflow is broken at that point).
But if I understand the post properly, you create a branch for the feature, develop your code, open a pull request, and once the commit lands it is Pipelines'/Bamboo's job to deploy the code from the branch to the Salesforce org.
So where does your code live?
And more importantly, which IDE do you use that does not interfere with this workflow? I would need an IDE that is Salesforce-aware (autocompleting the system classes, for instance) but that saves the code to a Git repo rather than directly to the org.
And a final question: how do you test that your code works properly BEFORE committing to master, if the code does not live in the org?
Thank you!
I'm quite new to the world of CI/CD, for Salesforce and in general, so I might not be the most qualified person to answer your question. However, I've been putting together the entire toolchain based on the Atlassian tools and the Cookbook you mentioned, and I'm close to understanding how it all works.
I've actually performed most of the tests I needed and they were successful, so keep that in mind when reading my answer.
Your first question took me a while to understand as well.
Where does the code live?
The SHORT answer is: in either Bitbucket Server or Cloud. For my tests I used Server, but it works exactly the same in Cloud. Let me explain in a bit more detail to get to your point.
When you first set up the toolchain, you start from the Production org (for my tests I used a Developer Sandbox named sfdc-prod, like in the Cookbook). So you have three orgs (Prod, UAT and DEV). You will have to use Sublime + MavensMate (my choice), Eclipse, or any other IDE with a Force.com plugin to connect to your org and pull down all the metadata. Again, we're talking about the Production org. Once you've pulled it down to your local machine, you go to the root folder, initialise it as a git repo (see the git documentation on how to do that; Atlassian also has some great tutorials) and make your first commit (we are still local at this point).
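Those local steps can be sketched roughly like this. The folder name sfdc-prod follows the Cookbook, and the placeholder package.xml just stands in for the metadata your IDE actually pulled down:

```shell
# Sketch: turn the pulled-down Production metadata into a local git repo.
# "sfdc-prod" is the Cookbook's folder name; the placeholder file stands in
# for the real metadata retrieved by your IDE.
mkdir -p sfdc-prod/src
touch sfdc-prod/src/package.xml
cd sfdc-prod
git init
git add -A
# The -c flags keep this example self-contained; normally your global
# git config already has user.name and user.email set.
git -c user.name=dev -c user.email=dev@example.com \
    commit -m "Initial commit: production metadata"
```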
Now that you have the prod metadata in a local git repo, you add a remote origin pointing at your Bitbucket repo (I forgot to mention that you have to create the corresponding repos in Bitbucket beforehand) and push the repo to that remote. At this point you have all of your production metadata in a Bitbucket Server (or Cloud) repo. Assuming you don't do anything crazy directly in your Salesforce production org, you should never really have to deal with this step again; the prod metadata will live here.
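Wiring the local repo to Bitbucket looks roughly like this. The URL below is a made-up placeholder; substitute your own Bitbucket Server/Cloud clone URL, and note the push line is commented out here precisely because the remote is hypothetical:

```shell
# Sketch: from inside the local repo you just initialised, register your
# Bitbucket repo as "origin" and push the prod metadata to it.
git init -q sfdc-prod && cd sfdc-prod   # stand-in for the repo from the previous step
git remote add origin https://bitbucket.example.com/scm/sf/sfdc-prod.git
git remote -v                           # verify the remote is registered
# git push -u origin master             # run this once your real Bitbucket repo exists
```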
Now that you have the prod metadata as we want it, all your other orgs should be a perfect mirror of production, and that's where forking and fork syncing come into play. You go to your Prod repo and fork it into the UAT repo, then fork that new UAT repo into the DEV repo. Once that's done, you have a copy of your prod metadata in all three repos.
Now we're getting to the confusing part:
Each developer has their own repo, which initially should also be a copy of prod. To actually start development work, each developer clones the shared DEV repo from Bitbucket to their local machine, adds that clone to Eclipse or Sublime as a project, and then authenticates it with Salesforce to connect to their own developer org. So although your IDE connects to a developer org, and saving updates the metadata in that org, it is your local clone of the shared DEV repo that holds the changes; when you push to origin, those changes land in the shared DEV repo. That push is what triggers the Bamboo jobs to test and validate the metadata against DEV, and eventually deploy to DEV, then UAT, and finally Prod.
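A sketch of that per-developer loop. The local bare repo below is just a stand-in for the shared DEV repo on Bitbucket (in real life you would clone that repo's HTTPS/SSH URL), and the branch and file names are invented for illustration:

```shell
# Sketch: clone the shared DEV repo, do feature work, push back to origin.
git init -q --bare dev-remote.git        # stand-in for the DEV repo on Bitbucket
git clone -q dev-remote.git sfdc-dev
cd sfdc-dev
git checkout -q -b feature/my-change
# ...in reality, MavensMate/Eclipse points at this folder, and Cmd+S also
# compiles the class into your personal developer org...
echo "// Apex change" > MyClass.cls
git add MyClass.cls
git -c user.name=dev -c user.email=dev@example.com commit -q -m "Add MyClass"
git push -q origin feature/my-change     # on Bitbucket, this push triggers Bamboo
```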
So to sum up, the code lives in Bitbucket (or whatever other Git server you use). Developer machines each hold a clone of the DEV repo and keep their own developer orgs updated with that code.
I personally use MavensMate + Sublime and it works really well. I can save directly to my development org, do all my testing, and use all the autocompletion and syntax-highlighting plugins; once I'm confident my work is correct and complete, I just push my code to the DEV repo and Bamboo does the rest for me.
As for your final question: the entire beauty of this toolchain is that you don't have to worry about that. Once it's in place, your three main orgs (Prod-UAT-DEV) should always remain in sync, which means that when your tests pass in DEV and then in UAT, you can be 99.9% sure the code will work in PROD as well. You can also run a validation-only deploy against PROD before doing the actual deploy. It's all in the Bamboo plans you set up.
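For reference, the validation-only deploy can be done with the Force.com Migration Tool's checkOnly flag, which runs the full deploy (including tests) without committing anything to the org. This is a sketch of an Ant target under that assumption; the target and property names are illustrative, not from the Cookbook:

```xml
<!-- Sketch: validate-only deploy target for the Force.com Migration Tool.
     checkOnly="true" means nothing is actually saved to the org. -->
<target name="validateProd">
  <sf:deploy username="${sf.prod.username}"
             password="${sf.prod.password}"
             serverurl="${sf.serverurl}"
             deployRoot="src"
             checkOnly="true"/>
</target>
```

A Bamboo plan can run this target against PROD as a gate before the real deploy stage.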
Hope this answers your questions and clarifies the overall concept a bit.
I'm at the very beginning as well, so I totally understand what you are going through. Good luck!