Why is the build script cache global?

Hello, this is probably a stupid question and I’ve probably misunderstood something about the caching.

First of all, I’m experimenting with our build pipeline at the moment. It starts with a single job, followed by several jobs running in parallel on different machines. The workspace from the first job is copied to each following job. Basically, 1 job -> 20 parallel jobs that may or may not be running on the same machine.

From what I’ve seen, there are three caches: one for the task output hashes etc. in project.home/.gradle/1.7, and two in gradle.user.home, one for dependencies and one for the compiled build scripts. Looking into the build script cache, it seems these caches use absolute paths for the build scripts. I just wonder if it would be better to keep the script caches in the project directory with relative paths instead. That would make it possible to reuse the compiled script classes between concurrent jobs that each have a copy of the workspace.

Putting the gradle.user.home into the workspace won’t do it, since the caches seem to work on absolute paths.

I see pros and cons of both approaches, neither really strong. It’s how we have implemented it historically. In practice, I don’t think it matters that much where the script cache lives. Keeping it global means fast builds even if you drop the .gradle dir. Keeping it local might mean some small performance benefits for scenarios like yours. However, I don’t think it’s quite worth implementing at this stage (even though I like the idea of keeping a project’s scripts in the project’s .gradle - it feels natural). Perhaps if we have stronger use cases or run out of more important items :slight_smile:

Hope that helps!

Thanks for a quick response (as usual).

Yes, it helps; now I don’t have to investigate this area further for the build pipeline. :slight_smile: In our environment the extra compilation of the scripts adds two to three minutes to the build time, mostly because many jobs run at once and the CPU spikes.

Would it be of interest if I submitted something that changes this behaviour? For example, a new option that switches the build script cache from global to local, with relative paths? I think it might help larger projects with large build pipelines a little.

We can certainly work with you on the contribution. However, we would need to first focus on the use case. There might be other ways to solve the problem (for example, smarter cache keys). There might be some implications of moving the script cache.

Can you write an email to the dev list describing your use case (or linking to this forum entry)?

I’ll add my 2c as I think a behavior we’ve been experiencing for quite a while might be related.

Use case:

  • You start a long-running gradle build process A. In our case this starts a jboss server instance. This call does not return, but stays up.

  • You edit one of the gradle build files.

  • You start a second gradle build process B in the same project. Say you want to run the functional web tests against the executing jboss instance via gradle.

    This leads to the second process being blocked (basically at startup, before any tasks get executed) and never returning. I assume it is waiting for the script cache entry or a similar lock to be released so it can go ahead and recompile the build script. The behavior is quite counterproductive in this scenario, as we now have to kill and restart the jboss server (with potentially complex and time-consuming-to-replicate state) in order to run the functional tests.

It’s worth mentioning that in our CI we purposefully isolate each build by setting GRADLE_USER_HOME=$WORKSPACE/.gradle. It avoids any issues with a shared cache. The only downside we experience is that it uses more storage, but for us storage is cheap.
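As a sketch, the isolation described above amounts to a per-job shell step along these lines ($WORKSPACE is the directory the CI server provides for the job; the fallback to the current directory is only for illustration):

```shell
#!/bin/sh
# Give this job a private Gradle user home inside its workspace, so its
# dependency and script caches never contend with other jobs' caches.
WORKSPACE="${WORKSPACE:-$(pwd)}"      # CI-provided workspace, or cwd
GRADLE_USER_HOME="$WORKSPACE/.gradle"
export GRADLE_USER_HOME
mkdir -p "$GRADLE_USER_HOME"

# Every Gradle invocation in this job now reads and writes its caches
# here instead of the shared ~/.gradle.
echo "Gradle user home for this job: $GRADLE_USER_HOME"
```

The trade-off is exactly the one mentioned: full isolation per job, at the cost of duplicated caches on disk.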

Hey Matias,

This behavior seems like a bug. When the long-running jboss process is started, you should still be able to run as many builds in that project as you want. Can you run the second process with -d and submit the log to gist?

@Szczepan Hi, I’ll try to get something to the development list as soon as possible. I just need some time to polish my use cases a little. It might also be that this is less of a problem with gradle 1.8, which should be faster.

I’m also using the “set individual cache for each ci job”-strategy, just to get rid of the locked cache problem.

But we’re using the --gradle-user-home= flag instead of the tick box in Jenkins. The reason is that with the tick box, GRADLE_USER_HOME is set in the environment, which means that jobs won’t share the downloaded gradle wrapper. Might be unnecessary though, since storage is, as Justin says, cheap.
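The difference between the two approaches can be sketched as follows; the commands are illustrative, and the point is only where the per-job home gets declared:

```shell
#!/bin/sh
WORKSPACE="${WORKSPACE:-$(pwd)}"

# Tick-box style: GRADLE_USER_HOME lands in the environment, so the
# wrapper script also downloads its Gradle distribution into the
# per-job home instead of sharing it:
#   GRADLE_USER_HOME="$WORKSPACE/.gradle" ./gradlew build

# Flag style: only the Gradle process itself uses the per-job home;
# the environment variable stays unset, so the wrapper keeps its
# downloaded distribution under the shared ~/.gradle.
CMD="./gradlew --gradle-user-home=$WORKSPACE/.gradle build"
echo "$CMD"   # the command the build step would actually run
```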

Gist with the relevant gradle debug log on gradle 1.8 created:


We are having the same problem and I am still investigating the best way to work around it. We could use --gradle-user-home=$WORKSPACE/.gradle, but we have information in gradle.properties that needs to be shared between our builds (proxy settings, for example). It is tough to maintain if I have to replicate that for every gradle home directory.

I am also in the camp that considers this behavior to be a bug and I’m hopeful that there will be a fix in a future version.

Just a follow-up on how we fixed our issue with shared properties between builds. In Jenkins, we did two things for each project.

  1. Check the box “Force GRADLE_USER_HOME to use workspace”.

  2. Add a shell script step that runs before the Gradle build step and executes the following command:

cp ~/.gradle/gradle.properties $WORKSPACE/

This solves our problem. I would still prefer not to have to work around the locking issue, but this solution works.
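Put together, the pre-build step might look something like this (a sketch; it assumes the shared gradle.properties lives in the build user’s ~/.gradle, and that the tick box points Gradle at the workspace for its properties lookup):

```shell
#!/bin/sh
# Pre-build step: with GRADLE_USER_HOME forced to the workspace, Gradle
# looks for gradle.properties there, so seed it from the shared copy
# (proxy settings etc.) before the Gradle build step runs.
WORKSPACE="${WORKSPACE:-$(pwd)}"
if [ -f "$HOME/.gradle/gradle.properties" ]; then
    cp "$HOME/.gradle/gradle.properties" "$WORKSPACE/"
fi
```

Guarding the copy keeps the step from failing on machines where the shared file does not exist yet.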