How can I get functional test code coverage reports for a MapReduce program?

jin yang April 16, 2012

I want to generate functional test coverage reports for a MapReduce program.

First, I generate the instrumented artifacts along with the clover.db file:

  • Set up the Clover configuration in the main pom file:
<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <version>2.5.1</version>
  <configuration>
    <includesTestSourceRoots>false</includesTestSourceRoots>
    <generateXml>true</generateXml>
    <generateHtml>true</generateHtml>
    <licenseLocation>/Users/renhy/clover/clover.license</licenseLocation>
    <statementContexts>
      <log>^(logger|log|LOG|LOGGER)\..*</log>
      <logcheck>^if *\((logger|log|LOG|LOGGER)\.is.*</logcheck>
    </statementContexts>
    <contextFilters>log,logcheck</contextFilters>
  </configuration>
</plugin>
Then:
  • Run the following command to instrument the sources, generate the instrumented jars, and create the Clover registry file (clover.db):
mvn clean clover2:setup install -Dmaven.test.skip=true -Dmaven.clover.singleCloverDatabase=false
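Note that the instrumented classes need the Clover runtime (clover.jar) on the classpath when they execute, so the job has to be submitted with it. Roughly like this, assuming a ToolRunner-based driver (jar names and paths below are illustrative):

# Submit the instrumented job with the Clover runtime on the task classpath
# (-libjars requires a driver that goes through GenericOptionsParser/ToolRunner)
hadoop jar target/myjob-0.1-SNAPSHOT.jar com.example.MyDriver \
  -libjars /path/to/clover.jar \
  input/ output/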
After that, I run the functional tests and expect the code coverage information to be written to clover.db. But because the program is a MapReduce program, it does not run locally.

What should I do to make the MapReduce job write code coverage information to clover.db when it runs?

4 answers

0 votes
Marek Parfianowicz
Atlassian Team
January 27, 2014

Update: I've read more details about HDFS in the documentation, and it's rather unlikely that Clover will work on it without modifications. I expected HDFS to work transparently like NFS, but it looks like it is a completely different architecture and API. Sorry for the confusion.

Clover has no API for handling HDFS. However, I have raised a feature request for this; feel free to vote on it:

0 votes
Marek Parfianowicz
Atlassian Team
June 13, 2012

Yes, you can use a shared network drive so that all TaskTracker nodes write to the same location. You can either:

1) specify the clover.db path during source code instrumentation in your pom.xml by defining the following property for the clover2:instrument or clover2:setup goal:

<cloverDatabase>/full/path/to/hdfs/location/clover.db</cloverDatabase>

2) or override this value at runtime with a Java system property (for passing it down to the MapReduce task JVMs, see the sketch after this note):

-Dclover.initstring=/full/path/to/hdfs/location/clover.db

Please note that if your application consists of multiple Maven modules, you should compile the sources with the single Clover database option (unless you want to deal with multiple databases in multiple locations and merge them after testing):

<singleCloverDatabase>true</singleCloverDatabase>
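For a MapReduce job, the coverage is recorded inside the task JVMs spawned by the TaskTrackers, so the property from option 2) has to reach those child JVMs, not just the client. Assuming Hadoop 1.x and a ToolRunner-based driver, a minimal sketch (the shared mount path is illustrative):

# Pass clover.initstring down to the map/reduce task JVMs.
# Note: setting mapred.child.java.opts replaces the default task JVM options,
# so re-specify the heap size as well.
hadoop jar target/myjob-instrumented.jar com.example.MyDriver \
  -D mapred.child.java.opts="-Xmx512m -Dclover.initstring=/mnt/shared/clover/clover.db" \
  input/ output/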

Regards
Marek

0 votes
jin yang June 12, 2012

Thanks, Marek.

I saw the tutorial page, but it does not seem to help.

The program is a MapReduce program. When it runs, it runs on the TaskTracker nodes.

We tried letting the program write the coverage info to a temporary cache db file on the TaskTracker nodes, but when the job finished, the temporary cache files were cleaned up, so we could not get the coverage info.

So far, I think that if the Clover-instrumented program could read and write the db file on HDFS, that would solve the problem.
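One possible stopgap, sketched below under the assumption that a shutdown hook in the task JVM still runs before node cleanup, is to copy the node-local Clover files to HDFS ourselves via the Hadoop FileSystem API (the class name, target path, and property lookup are illustrative, not something Clover provides):

import java.net.InetAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CoverageUploader {
    // Call from the mapper/reducer setup(); conf is the job configuration.
    public static void install(final Configuration conf) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                try {
                    // clover.initstring points at the node-local database; the
                    // per-test recording files sit next to it, so copy the whole dir.
                    Path localDb = new Path(System.getProperty("clover.initstring"));
                    Path target = new Path("/coverage/"
                            + InetAddress.getLocalHost().getHostName());
                    FileSystem.get(conf).copyFromLocalFile(
                            false, true, localDb.getParent(), target);
                } catch (Exception e) {
                    // Best effort only; never fail the task over coverage upload.
                }
            }
        });
    }
}

The per-node directories under /coverage could then be pulled back off HDFS and merged with Clover's merge tooling after the job completes.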

Thanks,

Jin

0 votes
Marek Parfianowicz
Atlassian Team
June 11, 2012

Hi Jin,

You're using quite an old version of Clover-for-Maven (2.5.1); I recommend upgrading to the latest version, which is 3.1.5.
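That is just a version bump in the plugin declaration from your question:

<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <version>3.1.5</version>
  <!-- the rest of the <configuration> stays the same -->
</plugin>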

Have you seen the tutorial on the CLOVER/Working+with+Distributed+Applications page?

Regards
Marek
