Multi-Project Build Versioning for Continuous Deployment

I’m setting up a new project, and I think it’s a good fit for a multi-project build. But I’m having trouble figuring out how to configure continuous deployment. We have a Nexus server that our artifacts are published to and Jenkins as the build server. I have it working great for a single project.

I’m looking for a way to configure versions in Gradle that avoids version collisions, which cause Nexus to reject the upload and fail the build. I’d also like to minimize publishing redundant artifacts.

My version strategy is to use <major>.<minor>.<build number>.
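A minimal sketch of deriving that version in the root build.gradle, assuming Jenkins exposes the build number through its standard BUILD_NUMBER environment variable (the major and minor values here are placeholders):

```groovy
// Root build.gradle — hypothetical version derivation
def buildNumber = System.getenv('BUILD_NUMBER') ?: '0' // '0' for local builds
version = "1.0.${buildNumber}"                         // <major>.<minor>.<build number>
```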

And my intended project structure is:

  • MyProject
    • PublicWeb
    • AdminWeb
    • DomainAPI

My summarized build script is:

./gradlew clean
./gradlew build
./gradlew uploadArchives

My intention for the multi-project build is that this build script would run from MyProject for every commit on master. (Technically on every push, since that’s how the webhooks are configured.)

But under this plan, if I make a change to PublicWeb and only PublicWeb, a new artifact of DomainAPI will still be built and deployed. Once my API stabilizes, changes to that project will be very infrequent, so there will be loads of redundant published artifacts for that subproject.

Is there a way around that, or is that just a cost of using multi-projects with automated continuous deployment?

There are different opinions on this question (and “holy wars” about this :wink:).

I have experience with both options, and setting one version for all sub-modules was the simplest. E.g. you find a bug in one module and need to roll back to the previous version, but that version is incompatible with another module, etc.
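Setting one version for all sub-modules can be sketched in the root build.gradle like this (the group name is a placeholder):

```groovy
// Root build.gradle — one version shared by every subproject
allprojects {
    group = 'com.example.myproject'
    version = '1.0.42' // or derive it from the CI build number
}
```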

Maybe the following solution will work for you (I don’t know your real use case).
You can set up two Nexus repositories: staging (published to on every commit) and production (published to only for tagged releases). Periodically you run a Nexus scheduled task to clean up the staging repository (I think it’s called Remove Releases From Repository). That way you can keep the number of artifacts in the production repository small.

I, personally, think that repository implementations (Nexus, Artifactory) should be “smarter” about how they store artifacts. If you have 30 versions of a 10 MB WAR, it’s likely taking up 300 MB in Nexus when probably only one or two class files have changed in a couple of JARs. Transitive dependencies (e.g. the Spring JARs) are likely stored 30 times with no changes between versions.

If the filesystem were “smarter”, the actual size on disk could be reduced to just the deltas. This would mean a new artifact version with no actual changes would be close to “free” in terms of disk space.

Unfortunately, this is not how repositories currently work: there’s a lot of redundancy, which needs a lot of disk space.