Confusion about Gradle Daemon vs Workers and JVM settings

We just went through an upgrade from Gradle 4 (4.10.2) to Gradle 5 (5.2.1), and one thing we immediately discovered was that the time to run our tests dramatically increased. We finally pinned this down to the fact that Gradle 4 had a larger default heap than Gradle 5. The somewhat surprising part was that we didn’t think we were using the default heap values at all: we had set the JVM args for the Daemon via “org.gradle.jvmargs” in a “gradle.properties” file at the root of our project.
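For reference, the property looks something like this (the values here are illustrative, not our exact ones):

    # gradle.properties at the project root
    # These args apply to the Gradle Daemon JVM
    org.gradle.jvmargs=-Xmx4g -XX:MaxMetaspaceSize=512m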

Based on the documentation and the description of how the Daemon works, we always assumed that these were the settings being used when running our tests. Not true. The tests of course aren’t run by the Daemon itself but by one of its workers, and those workers do not respect that property at all. In fact, they only seem to respect the settings on the tasks they run.

Knowing this now, it is all perfectly obvious, and once we updated our Test tasks to use the desired settings (roughly as sketched below), everything went back to its previous state.
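In case it helps anyone else, the fix amounts to configuring the fork options on the Test tasks themselves rather than relying on the Daemon property. A rough sketch in the Kotlin DSL (the sizes are placeholders, not our actual values):

    // build.gradle.kts -- the test JVM is a separate forked process,
    // so it only sees the settings configured on the Test task itself
    tasks.withType<Test>().configureEach {
        minHeapSize = "512m"
        maxHeapSize = "2g"                    // this, not org.gradle.jvmargs, sizes the test heap
        jvmArgs("-XX:MaxMetaspaceSize=256m")  // any extra flags for the forked test JVM
    }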

Is there a reason why the workers don’t use the Daemon’s settings by default? Is there something in the docs about this that I missed? And what work does the Daemon itself actually do? (I’m trying to figure out how much memory it might need.)


Because their work is highly task-dependent. Making them all respond to the same property would be like saying “every Java program on my machine should have 2g of RAM, regardless of what that program is”. For example, if you have a single module with 1 million lines of code and forked compilation and testing, then the daemon itself has almost nothing to do (it only needs to orchestrate the build of a single project), while the compiler and test workers have to deal with your 1 million lines of code. On the other hand, if you have 500 submodules with hundreds of dependencies each, the daemon has a lot of orchestration work to do, while the compiler and test runner only get very small work packets for each subproject.

Calculating the model of your build, resolving dependencies, figuring out which tasks to run. It also does a lot of actual task work in most builds; for example, in-process Java compilation is currently still the default. Build scans will tell you how much time your build spends collecting garbage, which is usually a good indicator that you should increase the heap size (or look into badly behaving plugins).
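To make that concrete: if in-process compilation is what is eating the Daemon’s heap, compilation can be pushed out to a forked process with its own memory settings, roughly like this in the Kotlin DSL (the size is illustrative):

    // build.gradle.kts -- fork javac so compilation memory is separate from the Daemon heap
    tasks.withType<JavaCompile>().configureEach {
        options.isFork = true
        options.forkOptions.memoryMaximumSize = "1g"  // heap for the forked compiler process
    }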

I guess that depends on what you are expecting to find. For example, the testing chapter mentions that tests are run in a different process and shows an example of adjusting the memory of that process.

Thanks for the info. I understand that default settings won’t be appropriate for all tasks, but they are often useful to have nonetheless. In fact, when launching a worker the Daemon must decide what memory settings (if any) to use, and if the user doesn’t tell it, it must (and does) pick a default. It was a change to this default that triggered our problem in the first place. In my poking about I had seen a note about the change, but didn’t think it affected our situation, since I thought we were explicitly setting those values. Turns out I was wrong; live and learn. FWIW, I actually agree that the default should be as small as is reasonable, so you don’t chew up memory you don’t need in most cases.

As for the documentation, “expecting” might not be exactly the right word, but I was thinking it might be nice to cover some of this (perhaps a summary of your first paragraph) in the Daemon documentation, maybe as one of the FAQs, with a reference to the testing section. (I knew that I could customize the settings for a test; what was less clear was that the Daemon’s default settings had no relevance in that case.) Googling for “Gradle Daemon Workers memory” (and similar) just nets you that page and the Build Environment page, both of which describe memory settings, none of which are used by the workers.

BTW, you mention build scans, which I was using to try to figure out this issue, but here again they seem to ignore the workers. The performance page shows GC stats, which would have been nice to have for the processes actually running the tasks, but it only seems to show the Daemon. Is there a way to get similar information for the workers? If not, is that something on the roadmap? (It seems like it would be generally useful.)


Showing memory stats for workers is something that we plan to add, yes. I don’t have a target date for that yet though.