Bamboo build log summary limitations

Scott Goodwin June 11, 2012

Maybe I'm just missing some pieces of the puzzle here, but Bamboo feels like a bit of a let-down for what we want it to do. That may be because we're not using it as intended, or because I'm overlooking something, so let me explain.

First off, we are not using Bamboo to test Java code or to do unit testing. All we use it for is to check out the source code, call msbuild to build it, then kick off a Perl test harness script, which simply runs our compiled application with different combinations of command line arguments (we're using Perl instead of batch because batch is god-awful). The Perl script then collects the app's exit code, along with any error and warning messages from the application's log file. The end result is that each iteration of the Perl script's loop runs our application with a specific combination of arguments, and we get a summary log containing not just errors but also warnings, notes, and asserts.
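
For illustration, here is a stripped-down sketch of what the harness loop does (the app name, log path, and argument combinations are all made up, not our real script):

    use strict;
    use warnings;

    # hypothetical argument combinations to exercise the app with
    my @combos = ('--mode=a --verbose', '--mode=b', '--mode=b --force');

    for my $args (@combos) {
        system("our_app.exe $args");    # run the compiled app
        my $exit = $? >> 8;             # collect its exit code

        # pull notable lines out of the application's log file
        open my $log, '<', 'app.log' or die "cannot open app.log: $!";
        while (my $line = <$log>) {
            print "[$args] $line" if $line =~ /ERROR|WARNING|NOTE|ASSERT/;
        }
        close $log;

        print "[$args] exited with code $exit\n";
    }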

Originally, I just wrote all of this to a separate summary log file on the machine running the remote agent. I wanted to show this information in the Bamboo summary log window, but couldn't figure out how.

Recently, we found that we could use the JUnit Parser (among others) as a subsequent task in Bamboo, as long as our Perl script writes its error summary to a JUnit-compliant file. So that is what we did, and on a basic level Bamboo is doing what we need. What is really puzzling is how limited Bamboo seems in what it can display in its build summary window. All we can get the JUnit XML files to display are the number of errors and a brief error summary. There is no concept of warnings, or any other "non-error" notes that can be displayed, unless I am missing something obvious. There isn't even a summary of how many tests were performed or how many passed... just the number of errors. (See the attached screenshot for an example.) In the screenshot, you see a grand total of one error and a very brief summary. Now, I realize that Bamboo wasn't the one running the tests, the Perl script was, but I feel like there has got to be a way to pass that information back into Bamboo. Am I wrong?
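
To illustrate, the kind of file our Perl script produces is a minimal JUnit result along these lines (the names are invented); as far as I can tell, the only thing Bamboo surfaces from it is the failure entries:

    <?xml version="1.0" encoding="UTF-8"?>
    <testsuite name="cli-arg-combos" tests="3" failures="1" errors="0">
      <testcase name="args: --mode=a --verbose"/>
      <testcase name="args: --mode=b"/>
      <testcase name="args: --mode=b --force">
        <failure message="exit code 2">ERROR: could not open input file</failure>
      </testcase>
    </testsuite>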

Does anyone have any insight on this? Yes, we can certainly use it as is: whenever we see an error in the summary log, we can navigate to the Perl script's output log and pull it up manually. But with a tool like Bamboo, the whole point is to be able to display errors, warnings, notes, etc. about your test runs (not just errors!), and I'm not finding a way to do this. I've looked through every plugin and am not seeing one that takes care of it.

Do I have to write my own plugin to get this functionality?

Thanks!

Scott

7 answers

1 accepted

2 votes
Answer accepted
PiotrA
June 12, 2012

I think you could add a task (or tasks) to your plan that parses your .log files for an arbitrary regexp and outputs the occurrences to STDERR (see the attached screenshot).

That way you could 'intercept' the errors from the files generated during the build (you would see the errors/warnings later in the Job Result Summary; I'm not sure whether they also propagate up to the Build Result Summary). These 'errors' wouldn't fail the build.
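
A minimal sketch of such a task in Perl (the log file name and the pattern are just examples, nothing Bamboo-specific):

    use strict;
    use warnings;

    # scan a log generated during the build for an arbitrary regexp
    # and echo the occurrences to STDERR
    open my $log, '<', 'build-output.log' or die "cannot open log: $!";
    while (my $line = <$log>) {
        print STDERR $line if $line =~ /ERROR|WARNING/;
    }
    close $log;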

However, to make that solution more usable, I think you should use the 'artifact' facility in Bamboo to store the log files for further examination...

Scott Goodwin June 13, 2012

Thanks for the info -- this is some really helpful advice.

Outputting messages to STDERR does accomplish what we want, for the most part. I'll just make it known to the devs that the Job Summary log is what they should look at instead of the Build Summary log (it seems STDERR doesn't propagate up to that window).

You mentioned that your JUnit parser tries to handle the "skipped" node, among others. That looks to be true, but it literally just skips the entire test case and doesn't even show that it was attempted, so unfortunately I won't be able to leverage it in the Build Summary. It was worth a try, though!

I just have one more question about all of this. Is there a way to increase the number of lines the Build Summary log window and the Job Summary log window display before they cut off and prompt you to download the entire log? I like how the summary windows color-code the errors in red; it makes it much easier for the devs to spot the notable lines. But once the entire log is downloaded, it is displayed as plain text and is harder to read. The current limit seems to be 1000 lines? I don't mind editing source files, as long as I have an idea of where to start looking... For that matter, I may even snoop around in the source and see if there is an easy way to alter the formatting of the summary windows. Any quick pointers there?

I will try out the artifact facility you mentioned. Now that I think about it, if I placed all my custom parsing and error interpretation in another job, which pulled the logs via the artifact facility, it would "unlink" the failures from the test case I actually ran on our app. That means I could pass any test run I wanted, but still show the errors in a separate job. (Not sure if that makes sense the way I described it, but I know what I mean in my head :-) )

Thanks again!

--scott

PiotrA
June 13, 2012

1000 seems to be a hard-coded limit. See /bamboo-src/components/bamboo-web/src/main/java/com/atlassian/bamboo/build/ViewBuildLogs.java -> MAX_LINES_TO_SHOW (I assume you have access to the Bamboo sources).

Patrick Aikens June 13, 2012

We use Sahi to perform integration testing on our app suite and do almost exactly what's been suggested. We have a couple of jobs that run our various test suites, and we generate JUnit-compatible results files along the way (either directly from Sahi or using a couple of custom log parsers to handle failures during install steps that fall outside Sahi). We then have a final job with a script task that collects all the resulting log files and arranges them into a nice folder structure, and we declare the root of that folder tree as an artifact. That final job also has the JUnit parser step to check for failures in the resulting JUnit logs. That way the full test suite always runs to its fullest completion, but the plan as a whole will still fail if necessary, with full output logs available to browse from the results page.
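
The collection step itself is nothing fancy; conceptually it's just something like this (the paths here are illustrative, not our real layout):

    use strict;
    use warnings;
    use File::Basename qw(basename);
    use File::Copy     qw(copy);
    use File::Path     qw(make_path);

    # gather per-suite result logs under one root, which the job
    # then declares as its artifact
    my $root = 'artifact-root';
    for my $log (glob 'suite-*/results/*.xml') {
        my ($suite) = $log =~ m{^(suite-[^/]+)/};
        make_path("$root/$suite");
        copy($log, "$root/$suite/" . basename($log))
            or die "copy of $log failed: $!";
    }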

Scott Goodwin June 14, 2012

Thanks for the feedback. I think we're now able to use Bamboo much more effectively than before, and I'm glad for the good advice I've gotten here. Thank you!

I marked this topic as answered, but I want to put one more thing out there in case anyone has any input on it.

I must be dense, but I only discovered last night that you can view the successful runs separately from the failed runs (see the yellow highlight in the screenshot).

The links weren't really obvious, I guess, because I never noticed them. This takes care of the annoyance of not being able to see which tests passed! I know I asked earlier in this thread how to add "notes" or other text to a run without it being marked as failed, but now that I've discovered this screen (where it shows all the successful results), THIS is the screen I'd love to be able to add custom text to. For example, under each successful line item, maybe a short line of text where I can give info relevant to that specific test run.

If this were possible, I would probably record myself dancing a jig in a leopard print toga, send it to the person who told me how to do it, and allow that person to post it all over the interwebs for whatever purposes they like. How's that for a bounty?!

Cheers all, and thanks again for all the help.

--scott

0 votes
PiotrA
June 12, 2012

I see... If you do go down the road of writing your own plugin, one thing you could try first is the "skipped" test node in the JUnit XML files. I'm not sure if this works, but I see in the Bamboo code that we handle (or at least try to handle) the following nodes in JUnit test XML files: "testsuite", "testcase", "error", "failure", "skipped".

So maybe adjusting your Perl scripts to generate "skipped" tests instead of "error"/"failure" ones would do what you need? See https://jira.atlassian.com/browse/BAM-8983 for more details...
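
That is, a test case marked along these lines (the name here is made up):

    <testcase name="args: --mode=b --force">
      <skipped/>
    </testcase>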

0 votes
Scott Goodwin June 12, 2012

It also occurred to me that maybe there are other workarounds. For example, can Bamboo hook into Confluence and create some sort of constantly updating build results page, complete with full log data and formatting? That would work for us too, if possible!

0 votes
Scott Goodwin June 12, 2012

I have limited ability to photoshop up what I am envisioning, but see below for a screenshot of Jenkins (formerly Hudson). It has a plugin called Log Parser, with which you can format the log summary via regular expressions (and tag lines as errors, warnings, etc.). This would be ideal for Bamboo to have, but I think I'd have to write it myself, since I don't see anything like it in the marketplace.
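
For anyone who hasn't seen it, that Jenkins plugin is driven by a small rules file of regexes; from memory the rules look roughly like this (the patterns are just examples, and I may be off on the exact syntax):

    error   /^ERROR:/
    warning /^WARNING:/
    info    /^NOTE:/
    ok      /.*/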

The crux of the problem is this: there is a whole slew of warnings, and even some errors, that my Perl script extracts from the application log files that should NOT signal a failed build, but that we still need to know about. As I see it, there are two ways to display this info: 1) my Perl script places the info into an error tag in the XML summary file, and Bamboo uses the JUnit parser (for example) to display it as an error in the summary windows; the problem with this is that it fails the build. Or 2) be able to format the output in the (more complete) log window, much like you see in the Jenkins screenshot. Honestly, either of these would work for me. I just need to be able to see the warnings, errors, and other info I care about at a glance.

The icing on the cake would be if Bamboo supported a third state besides "Pass" and "Fail". Jenkins uses "Unstable"; "Passed with Warnings" would also suffice.

In the end, I realize that Bamboo is geared more toward unit testing many different units that can each pass or fail, and that is probably why it is limited this way. But the fact is, we just aren't using it that way: we're using it to run our compiled app with different combinations of command line parameters and then extract interesting bits of info from the app log. The app's return code is currently how I signal to Bamboo whether the test should pass or fail, but we'd really love a summary of the other tidbits of info the Perl script finds.

Hopefully I've clarified this a little? If not, let me know!

thx,

scott

0 votes
PiotrA
June 11, 2012

"I wanted to show this stuff in the Bamboo summary log window"

Can you elaborate a little more on that? Exactly what stuff, and where would you like to see it? Maybe you could illustrate it with a 'photoshopped' screenshot of some Bamboo page?

I'm asking because I'd like to check whether there might be some feasible workarounds for your case, but I'm not sure what you expect...

0 votes
Scott Goodwin June 11, 2012

Thanks for the note! That is what I was afraid of....

Perhaps I should post a feature request for Bamboo.

0 votes
tkhduracell
June 11, 2012

I wrote a custom test result parser, and I ended up writing the full log as a TestResultError in order to get log data into the web GUI. This is clearly a deficiency!
