The definition of a "large file" is really up to you. In short, the problem with large files in Git is that if a file is modified across several commits, every clone downloads every historical version of it. With LFS enabled, a clone checks out the large file only at the commit you're on, not all previous versions; earlier commits store just a small text pointer file with instructions for fetching the large file, so prior versions are downloaded on demand (i.e. only if you check out that commit or branch).
You can declare whichever files you like to be "large files", and those will then be handled this way by Git when people clone the repo.
In-depth details: Atlassian Git LFS Tutorial