Should the tooling API be used for functional plugin tests?

Bear with the explanation; I want to make sure our use case is clear…

We create a plugin JAR which needs to maintain compatibility across multiple versions of Gradle (like many of you probably do). To verify that we haven’t caused major problems (such as the build blowing up with a certain version of Gradle), we set up a compatibility test suite in a subproject.

We run one test task per supported version of Gradle, which passes in the distro URL as well as the classpath to use for our plugins. Inside the JUnit tests, we use the tooling API to run the different build scenarios we want to test.
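To make that concrete, here is a rough sketch of the kind of JUnit test I mean. It is illustrative rather than our exact code: the system property names, the project directory, and the ‘pluginClasspath’ project property are all assumptions about how the values get handed through.

    import java.io.File;
    import java.net.URI;

    import org.gradle.tooling.BuildLauncher;
    import org.gradle.tooling.GradleConnector;
    import org.gradle.tooling.ProjectConnection;
    import org.junit.Test;

    public class CrossVersionCompatibilityTest {

        @Test
        public void buildSucceedsAgainstTargetGradleVersion() throws Exception {
            // The test task passes these in; the property names are illustrative.
            URI distribution = new URI(System.getProperty("compat.gradle.distributionUrl"));
            String pluginClasspath = System.getProperty("compat.plugin.classpath");

            GradleConnector connector = GradleConnector.newConnector()
                    .useDistribution(distribution)
                    .forProjectDirectory(new File("build/compatTest/sample-project"));

            ProjectConnection connection = connector.connect();
            try {
                BuildLauncher launcher = connection.newBuild();
                // Hand the plugin classpath to the build under test, e.g. as a project
                // property that the test project's buildscript block reads.
                launcher.withArguments("-PpluginClasspath=" + pluginClasspath);
                launcher.forTasks("build");
                launcher.run();
            } finally {
                connection.close();
            }
        }
    }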

Our discomfort comes from the options for kicking off the Gradle build from the tooling API.

Option 1:

Our current choice is to set the ‘embedded’ property on the GradleConnector. This is an internal property, so that’s the first point of discomfort. The other is that the build then runs within the test process rather than in a forked one. I don’t know if this will ever cause us problems, but it doesn’t seem ideal.
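For reference, this is roughly what that looks like; the cast to the internal DefaultGradleConnector class is the part that feels wrong. A sketch only, assuming that internal type is how the ‘embedded’ property is reached:

    import java.io.File;

    import org.gradle.tooling.GradleConnector;
    import org.gradle.tooling.internal.consumer.DefaultGradleConnector;

    public class EmbeddedConnectorFactory {

        // 'embedded' has no public API, so we reach through to the internal
        // connector implementation. The build then runs inside the test JVM
        // rather than in a forked process.
        public static GradleConnector newEmbeddedConnector(File projectDir) {
            GradleConnector connector = GradleConnector.newConnector()
                    .forProjectDirectory(projectDir);
            ((DefaultGradleConnector) connector).embedded(true);
            return connector;
        }
    }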

Option 2:

Use the GradleConnector default (the daemon). This results in one daemon per supported version of Gradle (currently 3). I don’t see that as bad in itself, as long as those daemon processes don’t cause any further problems.
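To be clear, Option 2 is just the plain connector with no internal casts, something like the sketch below, where connecting with each distinct distribution ends up with its own daemon process. The distribution URLs and project directory are illustrative:

    import java.io.File;
    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;

    import org.gradle.tooling.GradleConnector;
    import org.gradle.tooling.ProjectConnection;

    public class DaemonPerVersionExample {

        public static void main(String[] args) throws Exception {
            // Illustrative distribution URLs; in our suite each test task supplies one of these.
            List<URI> distributions = Arrays.asList(
                    new URI("https://services.gradle.org/distributions/gradle-1.0-bin.zip"),
                    new URI("https://services.gradle.org/distributions/gradle-1.1-bin.zip"));

            for (URI distribution : distributions) {
                ProjectConnection connection = GradleConnector.newConnector()
                        .useDistribution(distribution)
                        .forProjectDirectory(new File("build/compatTest/sample-project"))
                        .connect();
                try {
                    // Each distinct distribution gets its own daemon process,
                    // which stays alive after the build finishes.
                    connection.newBuild().forTasks("build").run();
                } finally {
                    connection.close();
                }
            }
        }
    }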

However, because the daemon is long-lived and holds locks on both the directory it was last used in and any JARs it loaded into its classpath, we run into issues on the next execution when we try to clean up the results of the previous build. This can happen in two places:

  1. The plugin JAR created in the root project. We are directly using the one built into the ‘libs’ directory.
  2. The temp directories that each test project is run from. These are locked presumably as the working directories of the daemons.

What now?

There are probably some things that would help; for instance, copying the plugin JAR to a different location so that we can clean up the ‘libs’ dir. Another thing that could help would be if the daemon didn’t use the settings dir as its process working directory. Maybe it could use something under GRADLE_HOME (e.g. /daemon/wrkdir/).
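For the first of those, the copy is easy to bolt onto the test setup. A rough sketch; the paths and class name are illustrative, and it assumes Java 7’s java.nio.file (a plain stream copy would do just as well):

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.StandardCopyOption;

    public class PluginJarStager {

        // Copy the built plugin JAR out of the root project's 'libs' directory into
        // a staging location. The daemon then locks the copy instead, so 'libs'
        // can be cleaned between runs.
        public static File stage(File builtJar, File stagingDir) throws Exception {
            stagingDir.mkdirs();
            File staged = new File(stagingDir, builtJar.getName());
            Files.copy(builtJar.toPath(), staged.toPath(), StandardCopyOption.REPLACE_EXISTING);
            return staged;
        }
    }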

However, I get the wider feeling that we’re either misusing the tooling API or just on the completely wrong track… Does anyone have thoughts on how to approach this situation?

Option 2 is the way this should work.

Any chance you could send us something based on what you have, so we can dig further into this? Ideally, we can improve the daemon so that it doesn’t suffer from these locking problems.

The long-term goal is for the daemon to be the basis of such testing. We do this kind of thing internally in the Gradle build.

The plugin JAR created in the root project. We are directly using the one built into the libs directory.

How are you using the jar?

The temp directories that each test project is run from. These are locked presumably as the working directories of the daemons.

The daemon changes its working directory back to ~/.gradle/daemon/ when each build is finished, so it’s probably not that. Our daemon integration tests delete the project directory after each build and these tests are working fine.

I completely missed these responses (notifications don’t seem to work for my work email address), but I came back to this while I was testing out 1.1 on our plugins.

Some change in 1.1 broke the use of embedded for us. I don’t consider this a bug, since we really shouldn’t be using embedded, but it does mean I’ll want to resolve the original issue sooner rather than later. Please note that we can get past this for now by building our plugins against 1.0, so it isn’t holding anything up.

I’ll try to pare our use case back to a simple example (hopefully today) to help you debug this.

I ended up submitting an issue, since that was the easiest way to upload the example. It’s in GRADLE-2415.