Block committing new large files


Hello!

We are looking to block people from committing files that are large (e.g. over 10 MB) or binary (e.g. zip, jar, exe). However, we don't want to block updates to existing files of that nature - we only want to block new additions. Right now we're working with Adaptavist ScriptRunner to accomplish this.

There's a pre-receive hook that is similar (Restrict file size), but its condition statement wouldn't serve our purpose, because the condition is an all-or-nothing check that decides whether the hook runs at all. We also recently asked about trying to block binary files and got a good response (see https://community.atlassian.com/t5/Adaptavist-questions/Detect-and-block-a-binary-file/qaq-p/643782), but we'll need to adapt it further.

I did find the ChangeType enum that we could fetch per file, but I am not sure how we can check file sizes outside of the built-in pre-receive hook. If we had a way to check file sizes within custom scripts, we could probably handle the rest.
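For reference, here is a rough sketch of the part we do have working - streaming the changes for a ref update and recording each file's ChangeType (repository, refChange, and commitService come from the hook's binding and component lookup):

// Rough sketch: stream the changes between the two hashes of a ref update
// and inspect each file's ChangeType
ChangesRequest request = new ChangesRequest.Builder(repository, refChange.toHash)
        .sinceId(refChange.fromHash)
        .build()
commitService.streamChanges(request, new AbstractChangeCallback() {
    @Override
    boolean onChange(Change change) {
        if (change.type == ChangeType.ADD) {
            // a newly added file - a candidate for blocking, if only we
            // could also get its size here
        }
        return super.onChange(change)
    }
})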

Any thoughts? Thanks in advance!

EDIT: We could also make this work if we could execute the git command cat-file and pass in the hash of each blob we care about, but I am not sure whether running git commands within these scripts makes sense.
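To illustrate the cat-file idea - a rough, untested sketch that assumes the script could shell out to a git binary on the PATH and that we somehow knew the repository's directory on disk (which is exactly the part I'm unsure about):

// "git cat-file -s <hash>" prints the object's size in bytes
long blobSize(File repoDir, String blobHash) {
    Process proc = ["git", "cat-file", "-s", blobHash].execute(null, repoDir)
    String output = proc.text.trim()
    proc.waitFor()
    return output.toLong()
}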

1 answer

1 accepted

I managed to come up with a solution, for anybody who comes across this in the future and needs similar behavior:

import com.atlassian.bitbucket.commit.Commit
import com.atlassian.bitbucket.commit.CommitService
import com.atlassian.bitbucket.content.AbstractChangeCallback
import com.atlassian.bitbucket.content.AbstractContentTreeCallback
import com.atlassian.bitbucket.content.AbstractDiffContentCallback
import com.atlassian.bitbucket.content.AbstractFileContentCallback
import com.atlassian.bitbucket.content.Change
import com.atlassian.bitbucket.content.ChangeType
import com.atlassian.bitbucket.content.ChangesRequest
import com.atlassian.bitbucket.content.ContentTreeNode
import com.atlassian.bitbucket.content.DiffRequest
import com.atlassian.bitbucket.content.Path
import com.atlassian.bitbucket.hook.HookResponse
import com.atlassian.bitbucket.repository.RefChange
import com.atlassian.bitbucket.repository.RefChangeType
import com.atlassian.bitbucket.repository.Repository
import com.atlassian.bitbucket.scm.Command
import com.atlassian.bitbucket.scm.DirectoryCommandParameters
import com.atlassian.bitbucket.scm.FileCommandParameters
import com.atlassian.bitbucket.scm.ScmService
import com.atlassian.bitbucket.util.PageRequest
import com.atlassian.bitbucket.util.PageRequestImpl
import com.atlassian.sal.api.component.ComponentLocator
import com.onresolve.scriptrunner.canned.bitbucket.util.BitbucketCannedScriptUtils

import javax.annotation.Nullable

CommitService commitService = ComponentLocator.getComponent(CommitService)
ScmService scmService = ComponentLocator.getComponent(ScmService)

long maxFileSizeAllowed = 10*1024*1024
String maxHumanReadableSize = "10 MB"

Repository repository = repository as Repository
Collection<RefChange> refChanges = refChanges as Collection<RefChange>
HookResponse hookResponse = hookResponse as HookResponse

StringBuilder msg = new StringBuilder()

Map<String,ChangeType> pathChangeTypeMap = [:]
Map<String,OptionalLong> pathFileSizeCache = [:]

try {
    refChanges.each { refChange ->
        if (!refChange.getCommits(repository)) {
            // No commits = empty ref, move on to the next ref
            return
        }

        // Populate the path map with each file's change type
        ChangesRequest changesRequest = new ChangesRequest.Builder(repository, refChange.toHash)
                .sinceId(refChange.fromHash)
                .build()
        commitService.streamChanges(changesRequest, new AbstractChangeCallback() {
            @Override
            boolean onChange(Change change) {
                pathChangeTypeMap[change.getPath().toString()] = change.getType()
                return super.onChange(change)
            }
        })

        pathChangeTypeMap.each { String filePath, ChangeType changeType ->
            // Continue past files we know are not candidates to block
            if (!changeType.equals(ChangeType.ADD) && !changeType.equals(ChangeType.COPY)) {
                return
            }

            // Add blocking messages for each binary file added
            FileCommandParameters fileParams = new FileCommandParameters.Builder()
                    .commitId(refChange.toHash)
                    .path(filePath)
                    .build()
            PageRequest pageRequest = new PageRequestImpl(0, PageRequest.MAX_PAGE_LIMIT)
            AbstractFileContentCallback fileCallback = new AbstractFileContentCallback() {
                @Override
                void onBinary() {
                    msg.append("Cannot push ${filePath} because it is a binary file\n")
                    super.onBinary()
                }
            }
            scmService.getCommandFactory(repository).file(fileParams, fileCallback, pageRequest).call()

            // Find the directory containing this file
            String fileDir = ""
            int finalSeparator = filePath.lastIndexOf('/')
            if (finalSeparator > 0) {
                fileDir = filePath.substring(0, finalSeparator + 1)
            }

            // Fetch file sizes, but only if they are not already cached
            if (!pathFileSizeCache.containsKey(filePath)) {
                DirectoryCommandParameters dirParams = new DirectoryCommandParameters.Builder()
                        .commitId(refChange.toHash)
                        .withSizes(true)
                        .recurse(false)
                        .path(fileDir)
                        .build()
                pageRequest = new PageRequestImpl(0, PageRequest.MAX_PAGE_LIMIT)
                AbstractContentTreeCallback contentCallback = new AbstractContentTreeCallback() {
                    @Override
                    boolean onTreeNode(ContentTreeNode node) {
                        if (node.getType() == ContentTreeNode.Type.FILE) {
                            com.atlassian.bitbucket.content.File file = (com.atlassian.bitbucket.content.File) node
                            pathFileSizeCache[fileDir + file.getPath().toString()] = file.getSize()
                        }
                        return super.onTreeNode(node)
                    }
                }
                scmService.getCommandFactory(repository).directory(dirParams, contentCallback, pageRequest).call()
            }

            // Add blocking messages for files that are too large; guard against a
            // missing cache entry so the size lookup cannot throw
            OptionalLong fileSize = pathFileSizeCache[filePath]
            if (fileSize?.isPresent() && fileSize.getAsLong() > maxFileSizeAllowed) {
                msg.append("Cannot push ${filePath} because it exceeds the size limit of $maxHumanReadableSize\n")
            }
        }
    }
} catch (Exception e) {
    msg.append("Exception thrown while validating against binary files - please contact <support team>!\n")
    msg.append("${e.toString()}\n")
}

if (msg) {
    // Add any other information that should be shown, e.g. a help URL
    hookResponse.out().print(BitbucketCannedScriptUtils.wrapHookResponse(msg))
    return false
}

return true

Also note that refChange.fromHash can sometimes be the all-zero hash (for example, when a brand-new branch is pushed), in which case this script will probably crash and burn as it did in our case. Future users may want to detect this case and handle it appropriately.
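For example, a guard along these lines at the top of the ref loop could skip or special-case refs that were just created - a sketch only, since the right behavior for new refs depends on your policy (RefChangeType is already imported in the script above):

String zeroHash = "0" * 40
refChanges.each { refChange ->
    if (refChange.type == RefChangeType.ADD || refChange.fromHash == zeroHash) {
        // A brand-new ref has no meaningful fromHash to diff against;
        // e.g. compare against the default branch tip instead, or scan
        // every commit reachable only from the new ref
        return
    }
    // ... existing checks ...
}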

Is it possible to restrict push commit size? Thank you!
