Hello,
We'd like to unit-test some of our plugin components in a clustered environment. The tests would specifically target components that deal with shared state, caches, etc.
We'd like to be able to write unit tests in a rather compact way, not much different from how ordinary tests are written. I could imagine getting an Executor for a specific node, writing Runnable test code for that node, which uses some special synchronization functions to sync between the nodes, and then somehow collecting and analyzing the result.
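To make the idea concrete, here is a minimal in-process sketch of what such an API might feel like: one single-thread Executor per "node", a CyclicBarrier as the "special synchronization function", and a queue to collect results. This simulates nodes as threads in one JVM purely for illustration; none of these names come from any real AMPS or JDC API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;

public class TwoNodeSketch {

    public static List<String> runScenario() throws Exception {
        ExecutorService node1 = Executors.newSingleThreadExecutor();
        ExecutorService node2 = Executors.newSingleThreadExecutor();
        CyclicBarrier sync = new CyclicBarrier(2);          // the "sync between nodes" primitive
        BlockingQueue<String> results = new LinkedBlockingQueue<>();

        Future<?> f1 = node1.submit(() -> {
            // ... mutate shared state "on node 1" ...
            await(sync);                                    // let node 2 proceed to observe the write
            results.add("node1: wrote");
        });
        Future<?> f2 = node2.submit(() -> {
            await(sync);                                    // blocks until node 1 has written
            // ... read shared state "on node 2" ...
            results.add("node2: read");
        });
        f1.get();
        f2.get();
        node1.shutdown();
        node2.shutdown();
        return new ArrayList<>(results);                    // "collecting the result"
    }

    private static void await(CyclicBarrier barrier) {
        try {
            barrier.await();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runScenario().size());           // prints 2
    }
}
```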
I think this is technically doable, with some scaffolding or fixtures on the JUnit side, plus a special testing plugin installed on the JDC instances that would actually run the tests and manage synchronization.
Did anyone try to create something like that?
@Atlassian - how do you test JDC components?
Thanks,
Igor
We have several mechanisms for testing components in a JDC environment:
The interesting one for you is #2, which is in the test code for what, for historic reasons, is called jira-ha-plugin. This lives at https://bitbucket.org/atlassian/jira-ha-plugin but appears to be closed-source.
We run the tests as what looks much like a cross-product integration test, starting two "products" called "node1" and "node2" that are both JIRA. Forgive me dumping a huge blob of XML from the pom on you, but it looks like this:
<products>
  <product>
    <id>jira</id>
    <instanceId>node1</instanceId>
    <httpPort>5991</httpPort>
    <ajpPort>8010</ajpPort>
    <contextPath>/node1</contextPath>
    <productDataPath>${basedir}/src/test/resources/node1-plugin-test-resources-6.0.zip</productDataPath>
    <systemPropertyVariables>
      <jira.ha.webfilter.disabled>true</jira.ha.webfilter.disabled>
      <atlassian.ehcache.config>${basedir}/target/node1/home/ehcache.xml</atlassian.ehcache.config>
      <atlassian.cache.jmx>true</atlassian.cache.jmx>
      <atlassian.cluster.scale>true</atlassian.cluster.scale>
    </systemPropertyVariables>
    <dataSources>
      <dataSource>
        <jndi>jdbc/JiraDS</jndi>
        <url>${jdbc.url}</url>
        <username>${db.username}</username>
        <password>${db.password}</password>
        <driver>${jdbc.driver}</driver>
        <libArtifacts>
          <libArtifact>
            <groupId>${db.groupId}</groupId>
            <artifactId>${db.artifactId}</artifactId>
            <version>${db.version}</version>
          </libArtifact>
        </libArtifacts>
      </dataSource>
    </dataSources>
    <sharedHome>/tmp/jiraha_shared</sharedHome>
  </product>
  <product>
    <id>jira</id>
    <instanceId>node2</instanceId>
    <httpPort>5992</httpPort>
    <ajpPort>8011</ajpPort>
    <contextPath>/node2</contextPath>
    <productDataPath>${basedir}/src/test/resources/node2-plugin-test-resources-6.0.zip</productDataPath>
    <systemPropertyVariables>
      <jira.ha.webfilter.disabled>true</jira.ha.webfilter.disabled>
      <atlassian.ehcache.config>${basedir}/target/node2/home/ehcache.xml</atlassian.ehcache.config>
      <atlassian.cache.jmx>true</atlassian.cache.jmx>
      <atlassian.cluster.scale>true</atlassian.cluster.scale>
    </systemPropertyVariables>
    <dataSources>
      <dataSource>
        <jndi>jdbc/JiraDS</jndi>
        <url>${jdbc.url}</url>
        <username>${db.username}</username>
        <password>${db.password}</password>
        <driver>${jdbc.driver}</driver>
        <libArtifacts>
          <libArtifact>
            <groupId>${db.groupId}</groupId>
            <artifactId>${db.artifactId}</artifactId>
            <version>${db.version}</version>
          </libArtifact>
        </libArtifacts>
      </dataSource>
    </dataSources>
    <sharedHome>/tmp/jiraha_shared</sharedHome>
  </product>
</products>
<testGroups>
  <testGroup>
    <id>ha</id>
    <productIds>
      <productId>node1</productId>
      <productId>node2</productId>
    </productIds>
    <includes>
      <include>it/com/atlassian/jira/plugins/ha/*Test.java</include>
    </includes>
  </testGroup>
</testGroups>
shudder
But what this does is let you use atlas-run to start each of the "products" with your plugin:
atlas-run --product node1
atlas-run --product node2
The only thing we have on top of this is a helper class that extends FuncTestCase and wraps page object factories for each of the nodes, such as this:
protected final JiraTestedProduct node1Instance = TestedProductFactory.create(JiraTestedProduct.class, "node1", null);
And it has a bit of switching logic so that the test can direct the page objects at the intended target node:
protected static enum NodeSelection { RANDOM, DIFFERENT, SAME }
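Guessing at how that switching logic might work (the real helper wraps a JiraTestedProduct per node rather than plain indices, and this selection function is purely illustrative), the enum could drive node selection like this:

```java
import java.util.Random;

public class NodeSwitching {

    public enum NodeSelection { RANDOM, DIFFERENT, SAME }

    /** Picks the index of the node the next page-object call should target. */
    public static int pick(NodeSelection mode, int previousNode, int nodeCount, Random random) {
        switch (mode) {
            case SAME:
                return previousNode;
            case DIFFERENT:
                // Any node except the previous one.
                return (previousNode + 1 + random.nextInt(nodeCount - 1)) % nodeCount;
            default: // RANDOM
                return random.nextInt(nodeCount);
        }
    }

    public static void main(String[] args) {
        Random random = new Random();
        System.out.println(pick(NodeSelection.SAME, 0, 2, random));      // prints 0
        System.out.println(pick(NodeSelection.DIFFERENT, 0, 2, random)); // prints 1
    }
}
```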
I think that this plugin-turned-test-suite is only locked down for historic reasons, so I'll raise the question about getting it opened up, but hopefully this gives you some idea as to how we have structured this.
Of course, these are clearly integration tests, not unit tests. I'm not sure that unit testing really makes that much sense when testing for cluster safety, as you would by definition need two copies of your component and its dependencies, and that is no longer really a "unit". It also requires you to mock out how interaction with one copy of the dependencies links to the behaviour of components in the other copy, and that level of knowledge about the behaviour of other components is beyond the normal scope of a unit test. As a unit testing purist, I would argue that all you really want to do is prepare a single unit and test its sending and receiving behaviours independently as true unit tests. If you want to test end-to-end, you really should work with the full system.
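To illustrate the purist approach with a toy example (the cache, its message bus, and all names here are invented for illustration, not from any Atlassian API): the "sending" side is tested with a recording fake in place of the second node, and the "receiving" side by invoking the handler directly.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class CacheUnit {
    // A toy cross-cluster-invalidating cache; the cluster message bus is
    // stood in for by a plain Consumer<String>.
    private final Map<String, String> local = new HashMap<>();
    private final Consumer<String> invalidationSender;

    public CacheUnit(Consumer<String> invalidationSender) {
        this.invalidationSender = invalidationSender;
    }

    public void put(String key, String value) {
        local.put(key, value);
        invalidationSender.accept(key);      // sending behaviour: tell the other nodes
    }

    public void onRemoteInvalidation(String key) {
        local.remove(key);                   // receiving behaviour: drop the stale entry
    }

    public String get(String key) {
        return local.get(key);
    }

    public static void main(String[] args) {
        // Test the send side: a recording fake instead of a second node.
        List<String> sent = new ArrayList<>();
        CacheUnit sender = new CacheUnit(sent::add);
        sender.put("k", "v");
        System.out.println(sent.equals(Arrays.asList("k")));   // prints true

        // Test the receive side: invoke the handler directly.
        CacheUnit receiver = new CacheUnit(key -> { });
        receiver.put("k", "v");
        receiver.onRemoteInvalidation("k");
        System.out.println(receiver.get("k"));                 // prints null
    }
}
```

Each half stays a true unit test; only the integration suite described above exercises the two halves across real nodes.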
Hi Chris,

Thank you very much for the detailed answer, it helps a lot, including the long XML. I also hope that Atlassian will be able to open jira-ha-plugin to the public. Or maybe I can get a sneak peek ;)

Anyway, I'm not sure what the role of that plugin is. Is it similar to jira-testkit-plugin?

I agree these are going to be functional tests, not unit tests: we'll need to test the functionality that mutates and caches shared state. But that may require not only sending requests to different nodes via HTTP, but also some sneaky synchronization to hit the "right" moment. For example, we have optimistic locking when we incrementally update the database. To have the "lock" fail and go to the second cycle, we need to simulate a specific race outcome. We'll have this part unit-tested with in-process synchronization, but (and as a unit testing purist, you might not like what I'm going to say) it would be great to see this work in an environment closer to real life, which might have a million things that can be overlooked when one focuses on a single unit.

Some questions on your setup, if I may:
- What's the point of starting both nodes with atlas-run? Shouldn't they both be started automatically by atlas-integration-test?
- Do you just start the database manually for this testing procedure, or is it also started by AMPS?
- Do you run these tests on a CI server? How is the database set up for CI builds?

Thanks again for your help, very much appreciated!
Igor
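The in-process synchronization Igor mentions for forcing the "lock failed, go to the second cycle" path can be made deterministic with a latch: hold "node B" until "node A" has bumped the version, so B's first compare-and-set is guaranteed to fail. A minimal sketch, in which an AtomicInteger stands in for the hypothetical database row version:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticRetry {

    /** Forces the retry cycle; returns {finalVersion, attemptsByB}. */
    public static int[] forceRetry() throws InterruptedException {
        AtomicInteger version = new AtomicInteger(0);      // stands in for the DB row version
        CountDownLatch aWrote = new CountDownLatch(1);
        AtomicInteger bAttempts = new AtomicInteger(0);

        Thread b = new Thread(() -> {
            int seen = version.get();                      // B reads version 0
            try {
                aWrote.await();                            // deterministically lose the race to A
            } catch (InterruptedException e) {
                return;
            }
            while (!version.compareAndSet(seen, seen + 1)) {
                bAttempts.incrementAndGet();               // first attempt fails: A got there first
                seen = version.get();                      // second cycle: re-read and retry
            }
            bAttempts.incrementAndGet();                   // count the attempt that succeeded
        });
        b.start();
        version.incrementAndGet();                         // A wins the race
        aWrote.countDown();
        b.join();
        return new int[] { version.get(), bAttempts.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] result = forceRetry();
        System.out.println(result[0] + " " + result[1]);   // prints "2 2"
    }
}
```

Whether the same trick is portable to the two-node functional setup depends on having a synchronization backdoor on each node, which is exactly the scaffolding being discussed.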
> I'm not sure what is the role of that plugin - is it similar to jira-testkit-plugin?

It wasn't originally intended to be, but as JDC development progressed, it turned out that there wasn't really any production code that belonged to it, so it grew into something more like testkit. The JIRA Agile team uses it as a dependency for their own cluster-safe testing, and it may be that if you just specify it as a dependency then you'll be able to get started from that:

    <dependency>
      <groupId>com.atlassian.jira.plugins</groupId>
      <artifactId>jira-ha-plugin</artifactId>
      <version>${jira.ha.plugin.version}</version>
    </dependency>
    <dependency>
      <groupId>com.atlassian.jira.plugins</groupId>
      <artifactId>jira-ha-plugin</artifactId>
      <version>${jira.ha.plugin.version}</version>
      <classifier>tests</classifier>
      <scope>test</scope>
    </dependency>
    ...
    <jira.ha.plugin.version>1.1</jira.ha.plugin.version>

> What's the point of starting both nodes with atlas-run...?

You don't want to do that when you're just writing the test, do you? That would be very inconvenient. :)

> Do you just start the database manually for this testing procedure, or is it also started by AMPS?

We use the maven-antrun-plugin and <exec/> ant tasks to control the database's lifecycle.

> Do you run these tests on a CI server?

Yes. They run on ordinary Bamboo agents.
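For readers unfamiliar with the antrun approach, a pom fragment in roughly this shape could bind database start/stop to the integration-test phases. The executable, arguments, and paths below are illustrative guesses (shown with PostgreSQL's pg_ctl), not the plugin's actual configuration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>start-db</id>
      <phase>pre-integration-test</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <!-- hypothetical: start a local database before the cluster tests -->
          <exec executable="pg_ctl">
            <arg line="start -D ${basedir}/target/pgdata"/>
          </exec>
        </target>
      </configuration>
    </execution>
    <execution>
      <id>stop-db</id>
      <phase>post-integration-test</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <!-- hypothetical: tear the database down afterwards -->
          <exec executable="pg_ctl">
            <arg line="stop -D ${basedir}/target/pgdata"/>
          </exec>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
```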
An example test:

    public class SmokeTest extends AbstractHaTest {
        private static final String PROJECT_KEY = "HSP";

        @Before
        public void setup() {
        }

        @Test
        public void testIndexing() throws InterruptedException {
            try {
                login(node1Instance);
                node1Backdoor.project().addProject("Homosapiens", PROJECT_KEY, "admin");
                node1Backdoor.issues().createIssue(PROJECT_KEY, "New HA Issue");
                waitForSync();
                login(node2Instance);
                SearchResult response = node2Backdoor.search().getSearch(new SearchRequest().jql("summary ~ HA"));
                Assert.assertThat("Should get 1 issue", response.issues.size(), is(1));
            } finally {
                // Cleanup our stuff.
                safelyDeleteProject(PROJECT_KEY);
            }
        }
    }
Wonderful! Thanks again, Chris!