I wrote a Java library as a subproject in my Gradle multi-project build. This library uses org.gradle:gradle-tooling-api to let me launch Gradle programmatically. This actually works, but I made a slight goof along the way: I didn't keep the version of my gradle-tooling-api dependency (4.9) in sync with the version of Gradle I run from my Gradle wrapper (which floated between many versions of Gradle 3, 4, and 5 while I did my testing).
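For context, the library boils down to something like this minimal sketch (the class and method names are just illustrative; the API calls are from org.gradle:gradle-tooling-api):

```java
import org.gradle.tooling.GradleConnector;
import org.gradle.tooling.ProjectConnection;

import java.io.File;

public class GradleLauncher {
    // Run a single task in the given project directory via the Tooling API.
    public static void runTask(File projectDir, String task) {
        ProjectConnection connection = GradleConnector.newConnector()
                .forProjectDirectory(projectDir)
                .connect();
        try {
            connection.newBuild()
                    .forTasks(task)
                    .setStandardOutput(System.out)
                    .setStandardError(System.err)
                    .run();
        } finally {
            connection.close();
        }
    }
}
```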
I run with Gradle caching enabled. With continuous mode turned off, my functional tests run fine. But with continuous mode turned on, whenever a functional test failed, the functional test task's build got into an infinite loop: the build scanner kept deciding that the subproject hosting my calls to org.gradle:gradle-tooling-api was out of date, and so kept cleaning its build folder after each failed functional test run.
I ultimately understood and cured my problem by killing all daemons, clearing the .gradle/caches and .gradle/daemon folders, and then rerunning my functional tests. At that point I saw the Tooling API download a mismatched Gradle distribution, and I understood I had a version mismatch. Once I brought my Gradle wrapper's version in sync with my org.gradle:gradle-tooling-api dependency's version, the build scanner no longer cleared the build folder when a functional test failed, and continuous mode worked properly, going to sleep after the first failure. I can reproduce the problem by bringing the Gradle wrapper and org.gradle:gradle-tooling-api versions out of sync again.
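For anyone else who hits this: besides syncing the versions by hand, the connector itself can be told which distribution to launch, which takes the guesswork out. A sketch under my setup's assumptions (the "4.9" literal matches my dependency version):

```java
import org.gradle.tooling.GradleConnector;

import java.io.File;

public class Connectors {
    // Pin the launched distribution to the same version as the
    // gradle-tooling-api dependency (4.9 in my case).
    static GradleConnector pinned(File projectDir) {
        return GradleConnector.newConnector()
                .forProjectDirectory(projectDir)
                .useGradleVersion("4.9");
    }

    // Or defer to whatever the target build's wrapper specifies,
    // so the wrapper and the launched distribution can't drift apart.
    static GradleConnector fromWrapper(File projectDir) {
        return GradleConnector.newConnector()
                .forProjectDirectory(projectDir)
                .useBuildDistribution();
    }
}
```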
It seems to me that it would be great if org.gradle:gradle-tooling-api could get its own parallel-universe Gradle cache, so as not to confuse any concurrently running daemon of a different Gradle version. What I'm describing is a case of Gradle reentrancy involving continuous mode, the Gradle cache, the Tooling API, and daemons of multiple Gradle versions.
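In the meantime, something close to that isolation can be done from the client side by giving the connector its own Gradle user home, which is where the caches, daemon registry, and downloaded distributions live. A sketch (the directory name is made up):

```java
import org.gradle.tooling.GradleConnector;

import java.io.File;

public class IsolatedConnector {
    // Point the Tooling API at a dedicated Gradle user home so its
    // caches and daemons can't collide with the outer build's.
    // The ".gradle-tooling" directory name is just an example.
    static GradleConnector create(File projectDir) {
        return GradleConnector.newConnector()
                .forProjectDirectory(projectDir)
                .useGradleUserHomeDir(
                        new File(System.getProperty("user.home"), ".gradle-tooling"));
    }
}
```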
It's definitely my fault for having the version mismatch and creating the reentrancy situation to begin with. But it would be super if there were some better protections or affordances for pilot errors like mine going forward.
If you're curious as to why I'm torturing Gradle in this way: I was using the Gradle Application plugin to make another subproject runnable as a Java application, and my functional tests needed a way to simulate launching that production subproject from the command line. Well, I don't like shadow-jar/uber-jarring as a means of rolling up dependencies, and Gradle is so good at prepping classpaths that I figured I'd give the Tooling API a go.
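Concretely, the functional test does something along these lines (the ":app" path and the output handling are illustrative, not my real project layout):

```java
import org.gradle.tooling.GradleConnector;
import org.gradle.tooling.ProjectConnection;

import java.io.ByteArrayOutputStream;
import java.io.File;

public class AppLaunchExample {
    // Simulate a command-line launch of the application subproject by
    // running its Application-plugin 'run' task. ':app' is a made-up
    // subproject name; substitute the real one.
    static String launchApp(File buildRoot) {
        ByteArrayOutputStream stdout = new ByteArrayOutputStream();
        ProjectConnection connection = GradleConnector.newConnector()
                .forProjectDirectory(buildRoot)
                .connect();
        try {
            connection.newBuild()
                    .forTasks(":app:run")
                    .withArguments("--quiet")   // keep Gradle's own logging out of the capture
                    .setStandardOutput(stdout)
                    .run();
        } finally {
            connection.close();
        }
        return stdout.toString();
    }
}
```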