What's a good strategy for running integration tests via the CI server?

I am setting up my build to run integration tests from Hudson and wondered what the best practices are for doing this (the question may be better suited to a Hudson forum, but I thought I'd start here).

In our case, to run the integration tests we need our application deployed to Glassfish, so we have a Hudson job with the following 3 steps:

  1. run a Gradle build (to compile, jar, war, ear, and unit test)
  2. deploy the ear to Glassfish
  3. run a Gradle build to execute the integration tests

I have followed the sample shipped with Gradle in samples\java\withIntegrationTests, but this leaves me with a problem. As the Hudson job performs the deployment, the step that runs the integration tests executes in a separate VM, so the integration tests no longer have the dependencies and build artifacts from the first step. For now I have an ugly hack whereby the integration tests reference the build artifacts from the first step (i.e. a hard-coded classpath). The only other choices I can think of are:

  1. Run the integration tests in a separate Hudson job, and have that job check out the code again, re-compile, then run the integration tests (seems a bit wasteful)

  2. Have the deploy step performed in the Gradle script? This would resolve my classpath issue, but is it possible to wait for a deployment to complete before my integration tests run?

Any advice appreciated - how are other people doing this?

I’m not very keen on coupling the integration tests to the CI tool, so I’d recommend option #2. Another advantage of going that path is that anyone can run those integration tests on their own machine, improving operability and maintainability.

There should be a way to wait for your server to start up. You can start with a simple sleep and iterate toward a smarter solution. I remember that on one project we performed an HTTP GET from the build to find out whether the server was up. However, it was a long time ago and I don’t remember the details.
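That HTTP-GET idea can be sketched in the build script itself. To be clear, everything below is an assumption (task and URL names, timeouts, the `integrationTest` task), not anything from an actual setup:

```groovy
// Sketch: poll an HTTP endpoint until the deployed app answers, then
// let the integration tests run. URL and timings are made-up values.
task waitForServer {
    doLast {
        def healthUrl = new URL('http://localhost:8080/myapp/')   // hypothetical app URL
        def deadline = System.currentTimeMillis() + 120 * 1000    // give up after 2 minutes
        while (true) {
            try {
                def conn = healthUrl.openConnection()
                conn.connectTimeout = 2000
                conn.readTimeout = 2000
                if (conn.responseCode == 200) break               // server is up
            } catch (IOException ignored) {
                // not reachable yet; keep polling
            }
            if (System.currentTimeMillis() > deadline) {
                throw new GradleException('Server did not come up in time')
            }
            sleep 3000                                            // poll every 3 seconds
        }
    }
}

// assuming an integrationTest task exists, make it wait for the server
integrationTest.dependsOn waitForServer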

Hope that helps!

You can implement option #2, but there is a more fundamental and common problem here. You often want multiple Hudson jobs for the same build to get the feedback cycles down. Let’s say that in your case you often have compile errors after commits. You want to give this feedback ASAP. If one job is doing compile, unit tests, and integration tests, many commits may accumulate before the job finishes, so the feedback cycle for compile errors is too long. The same could be true of the feedback cycle for unit tests.

Having said that, there is no good solution for this yet. As Hudson/Jenkins jobs have separate work areas, you can’t make use of the incremental build feature of Gradle. So if you have multiple jobs you have the wastefulness you described above. We have started a discussion with some folks from Jenkins about how to make this better. We are still in the exploration phase.

I guess there are more ways to achieve this (there are a lot of different Jenkins plugins that can be used in combination with Gradle’s flexibility). Some things we use in our continuous deployment, which would be applicable to integration testing as well:

To ensure that different jobs are locked to the same SVN revision, we use the Parameterized Trigger plugin.

At some point we build a deploy zip which, for starters, we only archive in Jenkins. Then, in a subsequent Gradle-driven deploy, we use the following repository definition to grab the release zip directly from Jenkins:

repositories {
   ivy {
      name = 'jenkins'
      artifactPattern "http://jenkins-master/job/${ciJob}/${ciBuildNumber}/artifact/build/distributions/[artifact]-[revision].[ext]"
   }
}

The deploy build gets some extra properties set on it with the job name and the build number.
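For illustration, those properties might be read in the build like this. The property names `ciJob` and `ciBuildNumber` just mirror the placeholders in the repository pattern above; the exact wiring and the default values are assumptions:

```groovy
// Jenkins would invoke the deploy build with something like:
//   gradle deploy -PciJob=my-build-job -PciBuildNumber=123
// The fallback values here are made up, for local runs only.
def ciJob = project.hasProperty('ciJob') ? project.ciJob : 'my-build-job'
def ciBuildNumber = project.hasProperty('ciBuildNumber') ? project.ciBuildNumber : 'lastSuccessfulBuild'
```

These are the values the `artifactPattern` above interpolates to locate the archived zip for a specific build.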

There are a number of Jenkins plugins that can be used to copy artifacts between workspaces or other locations, which you could then use in combination with flat directory repositories (and some build parameterization to select a different repository or to add an extra flat-file repository).
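As a sketch, such a flat directory repository might look like this; the directory path is made up and would be wherever the copy plugin drops the artifacts:

```groovy
repositories {
   flatDir {
      name = 'copiedArtifacts'
      // hypothetical location populated by a Jenkins artifact-copying plugin
      dirs "${rootDir}/incoming-artifacts"
   }
}
```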

Another thing we use is STAF, which we use to monitor logs during deploys and to copy things around from slaves to deploy servers. E.g. when a continuous deploy is running, we monitor and filter the log files on the remote server for interesting logging, which we then display in Jenkins, so when a deploy/test fails we can see what happened directly inside Jenkins without pulling logs from some random server.

STAF is a pretty convenient piece of glue, although the command syntax is a bit weird. With the Java bindings we use it directly from a Gradle build. If this were seamless with the copy tasks etc. it would be really great, but I haven’t worked up the courage to try to make it into a plugin yet.

Hope you can get some inspiration from these…


This is somewhat off topic, but have you considered these for your CI?

  1. Make a trade-off and use something less production-like but faster, such as Jetty connecting to an embedded database, and/or
  2. Spin up a VM (or VM stack) to deploy your product to for the duration of your integration tests (e.g. Amazon EC2 with CloudFormation).

#1 has the bonus of being fast and not dependent on a network connection, which IMO would lead to a large number of benefits. #2 has the advantage of helping you figure out your release automation, as it can equally be applied to your production environment. You could then also run multiple environments in parallel if need be.

Thanks for all the responses - a lot of useful information and options to consider. For now I will keep the ugly hack I mentioned; I’ll try to implement option 2 when time permits and look into the various Jenkins plugins.

Hans - thanks for the info - I’ll be interested to see what comes out of the discussions with Jenkins.

Merlyn - I want to move to something like Jetty when I can. At the moment we are deploying an EAR file and several WARs, and Jetty probably won’t support an EAR file (as far as I know it’s a web server rather than a J2EE app server). However, I should be able to rewrite our build to remove the EAR (we used to build a J2EE app using EJBs but have converted to Spring/Hibernate, so I should be able to get rid of the EAR).

Actually it is possible to make use of the incremental build feature with separate jobs. By utilizing the Clone Workspace SCM Plugin, you can copy all or part of a workspace from one job to another.

Of course it would mean some time “wasted” on the copying, but you gain the time used for building. I guess it depends on the build whether the copying is faster than a full build.