Hi,
We are using the latest Clover and Surefire plugins (with JMockit), but we have been seeing a lot of coverage drops lately.
I do not have any conclusive evidence yet about the possible root cause.
We tried a few suggestions available online for the Clover configuration, but none has worked so far:
<flushPolicy>threaded</flushPolicy>
<flushInterval>500</flushInterval>
As we have more than 10k tests, we run them with Surefire (12 forks in parallel).
When I tried to debug the possible root cause, I saw this error during test execution:
Exception in thread "CloverFlushThread" java.lang.UnsatisfiedLinkError: java.lang.System.currentTimeMillis()J
at java.lang.System.currentTimeMillis(Native Method)
at com.atlassian.clover.recorder.BaseCoverageRecorder$CloverFlushThread.run(BaseCoverageRecorder.java:159)
The above error could be logged because we partially mock the System class's currentTimeMillis method, like this:
public void testNewConnection() throws IOException, NoSuchMethodException {
    new MockUp<System>() {
        @Mock
        long currentTimeMillis() {
            return 1438357206679L;
        }
    };
    new Expectations() {{
        mockedHttpServletRequest.getRemoteAddr(); result = "hostname";
        mockedHttpServletRequest.getRemotePort(); result = 22;
        mockedContainerResponseContext.getHeaders(); times = 0;
    }};
......
I was wondering if anyone else has witnessed the same.
Thanks,
Digant
Hi,
I can think of a couple of reasons why the coverage _drops_:
* The JVM doesn't flush data to the hard drive because it's getting killed (Clover adds a shutdown hook), but since you've changed flushPolicy to _threaded_, that's not the case here.
* Clover captures the coverage data, but since the test case failed (due to the linkage error) the coverage data is not included in the report. You can tweak this behavior by using a reportDescriptor and setting includeFailedTestCoverage to true.
* You're using parallel test execution. Clover was never designed to be run with parallel test execution. We know it sometimes causes problems and may be the reason for your data loss.
Regarding the UnsatisfiedLinkError: it seems that Clover tries to flush data while you have overridden System.currentTimeMillis(). Clover doesn't know about JMockit and simply tries to invoke the _native_ method. This can cause errors like the one you're experiencing.
My recommendation is to replace all direct invocations of System.currentTimeMillis() with an interface, e.g. CurrentTimeProvider, whose default implementation looks like:
public class DefaultCurrentTimeProvider implements CurrentTimeProvider {
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }
}
This will give you the benefit of dependency injection, and it will be easier to mock time-based operations in tests.
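To make that concrete, here is a self-contained sketch of the interface plus a fixed-time test stub. The interface and default implementation follow the names suggested above; the FixedTimeProvider stub name is my own invention:

```java
// Clock abstraction as suggested above; inject it instead of calling
// System.currentTimeMillis() directly.
interface CurrentTimeProvider {
    long currentTimeMillis();
}

// Production implementation: delegates to the real clock.
class DefaultCurrentTimeProvider implements CurrentTimeProvider {
    @Override
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }
}

// Test stub (hypothetical name): returns a fixed instant, so no
// MockUp<System> is needed and the native currentTimeMillis() stays intact.
class FixedTimeProvider implements CurrentTimeProvider {
    private final long fixedMillis;

    FixedTimeProvider(long fixedMillis) {
        this.fixedMillis = fixedMillis;
    }

    @Override
    public long currentTimeMillis() {
        return fixedMillis;
    }
}

public class Main {
    public static void main(String[] args) {
        CurrentTimeProvider clock = new FixedTimeProvider(1438357206679L);
        System.out.println(clock.currentTimeMillis()); // prints 1438357206679
    }
}
```

With this in place, tests never touch the real System class, so Clover's flush thread can call the native method undisturbed.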
Cheers,
Grzegorz Lewandowski
Hi Grzegorz,
Thanks for your reply.
Regarding parallel test execution, there is another thread here https://community.atlassian.com/t5/Clover-questions/support-of-parallelism-with-per-test-coverage-data/qaq-p/318130 that says that global test coverage will work fine in parallel mode.
We are seeing a drop in the global test coverage. We run multiple JVMs in parallel and for the most part the global coverage is reported correctly. It is only on some occasions that coverage drops sharply.
Just to add more info, we run 12 JVMs (Surefire forkCount=12). But sometimes after the test run we only see 11 recording files (https://confluence.atlassian.com/clover/managing-the-coverage-database-72253456.html). It is in these cases that we see that coverage has dropped. Could it be possible that two forks are creating a coverage file with the same name?
Regarding parallel test execution, there is another thread here https://community.atlassian.com/t5/Clover-questions/support-of-parallelism-with-per-test-coverage-data/qaq-p/318130 that says that global test coverage will work fine in parallel mode.
Generally speaking that's true, but it depends on whether you run only the tests in parallel or the whole application, and on how you handle the coverage files.
Just to add more info, we run 12 JVMs (Surefire forkCount=12). But sometimes after the test run we only see 11 recording files (https://confluence.atlassian.com/clover/managing-the-coverage-database-72253456.html). It is in these cases that we see that coverage has dropped. Could it be possible that two forks are creating a coverage file with the same name?
I don't think the number of coverage files is tied in any way to the number of JVMs. The file name is based mostly on the Clover db name, the current timestamp, and the JVM's java.lang.Object#hashCode().
You can verify the coverage recording names by running Clover with debug logging enabled (-Dclover.logging.level=debug) and looking for log entries:
* Clover.getRecorder()
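On the file-name question: if the recording name really does incorporate Object#hashCode(), collisions are possible in principle, because hash codes are not unique. A hypothetical illustration follows; the naming scheme below is a stand-in I made up, not Clover's actual format:

```java
// Hypothetical stand-in for a hash-based recording-file name; NOT Clover's
// real naming scheme. It shows that two distinct objects with colliding
// hash codes (plus an equal timestamp) would yield the same file name.
public class Main {
    static String recordingName(String dbName, long timestamp, int hash) {
        return dbName + Long.toHexString(timestamp) + "_" + Integer.toHexString(hash);
    }

    public static void main(String[] args) {
        long ts = 1438357206679L;
        // "Aa" and "BB" are a well-known String.hashCode() collision (both 2112).
        String first = recordingName("clover.db", ts, "Aa".hashCode());
        String second = recordingName("clover.db", ts, "BB".hashCode());
        System.out.println(first.equals(second)); // prints true
    }
}
```

The debug log entries above would let you check the actual names and rule this in or out.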
I suspect one of two issues might be at play:
* Clover ignores coverage resulting from failed tests. I've pointed out above how to check this; have you tried it?
* Not all coverage is flushed. You can track potential flushing problems by running Clover with debug logging enabled and looking for log entries:
** flush recorder
** flushing coverage for recorder
Thanks Grzegorz for your reply.
We have verified that all tests are run successfully but the coverage is still reported as low. We also increased the surefire wait time for JVM shutdown after the tests complete execution.
I have tried increasing the log level (-Dclover.logging.level=debug) and verbose mode. I also tried changing the logging adapter (clover.logging.adapter) to log4j and jdk, but I don't see anything in the console logs or the Surefire test output files.
I see. So in order to debug this case further, I'd recommend the following:
Best regards,
Grzegorz Lewandowski
My company is also experiencing problems with dropped Clover coverage, which used to work reliably, and we recently implemented parallel unit test execution. We've been unable to definitively tie the start of the problem to parallel execution, but it's our likeliest suspect. We're only concerned with global, not per-test, coverage in this case. I have some questions about what I'm seeing in this and the other linked coverage thread.
You've mentioned that Clover ignores coverage resulting from failed tests. We've already confirmed that we don't have failing tests related to the classes that are missing coverage (which are different every time). But how can Clover ignore the coverage from a particular failed test if it doesn't know which coverage came from each test? (As explained in the other linked thread, Clover can't do per-test coverage in parallel because it ties coverage to each test only by timing.)
Is it possible that this is causing our problem? Here's what I imagine happening, based on the comment about ignoring coverage from failed tests, combined with not knowing which coverage came from which test. Tests A, B, and C all run at the same time. Test A fails, but tests B and C pass. Clover tries to ignore coverage data from test A, but it also inadvertently ignores coverage from tests B and C, because it discards all coverage data from the window when test A was running (and doesn't actually know which of it came from other tests). We then get surprising coverage failures on classes B and C, even though we can confirm elsewhere that tests B and C do cover those classes, that tests B and C did run, and that no tests touching classes B and C failed.
Does this seem plausible? Can we change a setting to have Clover not drop coverage for failed tests, in order to work around it? If another test fails, we'll be fixing that anyway, so the coverage drop won't be needed to bring it to our attention. We're currently using Clover 4.0.6, by the way, which might be relevant.
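If it helps, the misattribution scenario above can be modeled in a few lines. This is purely an illustration of the hypothesis (the attribution rule and all names here are invented), not Clover's actual algorithm:

```java
import java.util.List;

// Toy model of time-window attribution: a coverage hit at time t is dropped
// if ANY failed test's [start, end] window contains t -- even though the
// passing tests B and C were also running and may have produced the coverage.
public class Main {
    static final class TestRun {
        final String name; final long start; final long end; final boolean passed;
        TestRun(String name, long start, long end, boolean passed) {
            this.name = name; this.start = start; this.end = end; this.passed = passed;
        }
    }

    public static void main(String[] args) {
        List<TestRun> runs = List.of(
            new TestRun("A", 0, 100, false),  // the one failing test
            new TestRun("B", 10, 90, true),
            new TestRun("C", 20, 80, true));
        long hitAt = 50; // a coverage hit recorded while all three overlap
        boolean dropped = runs.stream().anyMatch(
            t -> !t.passed && t.start <= hitAt && hitAt <= t.end);
        System.out.println(dropped); // prints true: B and C lose this hit too
    }
}
```

Under this rule, any failure during an overlapping window would silently erase coverage from concurrent passing tests, which would match the symptom of classes losing coverage despite their tests running and passing.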