Dependency Locking breaks when used with plugins that execute recursive builds

I’m using the Scala Multi-Version Plugin from Nathan Alderson to compile against different versions of Scala. The plugin works by using the GradleBuild task to re-run the build for each additional Scala version, each time with a slightly different set of dependencies (the ones appropriate for that version).

Unfortunately, the approach the plugin takes doesn’t work very well with dependency locking. After the first build creates the locks (using --write-locks), subsequent builds fail because each recursive invocation overwrites the lock files, so they end up containing only the dependencies resolved for the last Scala version in the run.

For a detailed example of what I’m talking about, see this bug that I filed against the Scala Multi-Version Plugin.

This issue could be resolved if there were a way to merge new dependencies into the lock files rather than completely replacing them each time the build recurses for the next version of Scala. That way, the dependencies for all of the different Scala versions would end up in the lock files.

Is there any way to customize how --write-locks works to leave behind any dependencies that were previously resolved but not resolved on the next run?
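In the meantime, the merge could be done outside Gradle: after each recursive pass, union the freshly written lock entries with the previous ones before the next pass overwrites them. Below is a minimal sketch, assuming the single-file lockfile format where each non-comment line looks like `group:artifact:version=configuration1,configuration2`; `merge_lockfiles` is a hypothetical helper script, not a Gradle API.

```python
def parse_entries(text):
    """Map module coordinate -> set of configurations that locked it."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and header comments
        coord, _, configs = line.partition("=")
        entries.setdefault(coord, set()).update(c for c in configs.split(",") if c)
    return entries

def merge_lockfiles(old_text, new_text):
    """Union the entries of two lockfile bodies instead of letting
    the newer one clobber the older one."""
    merged = parse_entries(old_text)
    for coord, configs in parse_entries(new_text).items():
        merged.setdefault(coord, set()).update(configs)
    return "\n".join(
        f"{coord}={','.join(sorted(configs))}"
        for coord, configs in sorted(merged.items())
    )

# Example: locks written for Scala 2.12 and 2.13 both survive the merge.
scala_212 = "org.scala-lang:scala-library:2.12.10=compileClasspath"
scala_213 = "org.scala-lang:scala-library:2.13.1=compileClasspath"
print(merge_lockfiles(scala_212, scala_213))
```

A wrapper script could run `gradlew build --write-locks`, snapshot the lock files, and re-apply this merge after every recursive pass, so the final lock files accumulate entries for every Scala version.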

Another solution would be a way to configure the folder name Gradle uses to save the lockfiles. Then, my multi-version plugin could be updated to set the folder name to include the Scala version. This would keep the lockfiles for each version separate and avoid the clobbering behavior that happens today.

We have been using a similar workaround to combine the multi-version plugin with the Nebula dependency lock plugin:

project.dependencyLock {
  // Give each Scala version its own lock file so the recursive
  // builds don't clobber each other's locks.
  if (project.plugins.hasPlugin("com.adtran.scala-multiversion-plugin")) {
    lockFile = "dependencies_${project.ext.scalaVersion}.lock"
  }
}

Unfortunately, it looks like the dependency lock directory path is currently hardcoded. See Line 38 of LockFileReaderWriter.
