Max workers and memory usage

Hi all, I’m investigating some memory issues with our rather large Android build. I have a couple of questions about how memory usage behaves with different options.

From what I understand, the org.gradle.workers.max property limits the total number of worker processes Gradle spawns. Does this include JVMs forked to run tests, JavaCompile tasks, dex, etc.?

My second question is about the org.gradle.jvmargs property – does it affect only the Gradle client? My understanding is that it does, and that’s why e.g. the test task has its own max heap settings. Does that mean maximum memory usage might equal max_workers * heaviest_task_usage, assuming the build can parallelize the heaviest task enough to run it max_workers times? And would we then have to limit each of those tasks, or reduce parallelism?
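To put invented numbers on that worst case (these are not our real settings, just an illustration):

# gradle.properties – invented numbers for illustration
org.gradle.jvmargs=-Xmx2g      # heap for the daemon itself
org.gradle.workers.max=8       # cap on concurrent workers

# If each forked test JVM got -Xmx1g, the worst case would be roughly
# 8 workers * 1g per fork + 2g daemon = ~10g of heap.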

Do you have any general pointers for identifying the proper combination of options to avoid OOM errors? For example, our machines have 64 cores (the build scan shows max 64 workers). Could it be that too many tasks run in parallel whose JVMs follow settings other than org.gradle.jvmargs, thus depleting the memory? I’m pretty sure 64 workers is way overkill anyway. Or is there some way to identify heavy JVM forks?

Thanks for any help!

It does – the worker limit covers those forked JVMs as well.

I guess you mean the Gradle daemon. The client is what we call the small command-line app that connects to the daemon, which does the actual build.

And yes, this setting is for the daemon only.

Yes, but Gradle will stop forking more processes if there isn’t enough system memory. If that doesn’t work in some cases, feel free to open an issue.

In general, you’ll want to specify a reasonable limit for every forking task. See also Reduce default memory settings for daemon and workers · Issue #6216 · gradle/gradle · GitHub.
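For example, something along these lines – the values below are placeholders you’d tune for your build:

tasks.withType(Test) {
    maxHeapSize = "1g"       // cap each forked test JVM
    maxParallelForks = 2     // and how many of them run concurrently
}
tasks.withType(JavaCompile) {
    options.fork = true
    options.forkOptions.memoryMaximumSize = "1g"  // cap forked compiler JVMs
}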

Stefan, thanks a lot for the answer!

I guess you mean the Gradle daemon.

That’s right.

Yes, but Gradle will stop forking more processes if there isn’t enough system memory.

Not sure if that might’ve been the reason, but our virtual machines have 64 cores and only 16GB of memory. If the build scan always shows all workers, then I see ~14 worker threads created by Gradle, and the build usually crashes during one of several thousand unit tests.

But I’m still not clear on one thing. For example, when I serialize the tests – set max workers to 1, and configure tests like this:

tasks.withType(Test) {
    maxParallelForks 1   // run test JVMs strictly one at a time
    maxHeapSize = "16m"  // deliberately tiny heap for this experiment
}

then consistently 45 tests pass and the 46th crashes with an OOM (java.lang.OutOfMemoryError: GC overhead limit exceeded) – that’s understandable and reproducible. However, even when I specify e.g. forkEvery 5, the results are the same – 45 tests pass, the 46th crashes. The daemon has plenty of memory (8g, absurdly high, as I now know) and the failing tests pass when they are the only ones that run. I’m aware 16m of memory for tests isn’t enough, but I want to understand the various options and how they work together. Shouldn’t every 5 tests get their own JVM with 16m of memory in this case? I’d expect that running 5 tests manually and running 100 tests with forkEvery 5 would yield the same results memory-wise.

Ha! My bad – forkEvery forks a new JVM every x test classes, not every x tests as in methods. forkEvery 1 works as expected and forks a fresh VM that’s capable of running further tests.
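Spelled out for anyone else who hits this (the heap size here is made up):

tasks.withType(Test) {
    forkEvery = 5         // fresh JVM after every 5 test *classes*, not methods
    maxHeapSize = "512m"  // each fresh JVM gets its own heap of this size
}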

This leaves the last question, then – how to determine appropriate memory settings for the Gradle daemon (e.g. Android dex runs in-process, and it needs ~1–2g I think), for tests, and for all JavaExec tasks. I suppose the only way is to properly profile the build. Right now I’m planning to run the profiler using Linux perf to see all processes and play with different values. Does that make sense to you?
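Concretely, I imagine ending up with something like this once I’ve measured – all numbers below are guesses to be validated by profiling:

# gradle.properties – guesses, to be validated by profiling
org.gradle.jvmargs=-Xmx2g    # daemon: hopefully enough for in-process dex
org.gradle.workers.max=16    # don't let 64 cores translate into 64 JVMs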

Crashed with what exception? Maybe the problem is not using too much overall memory, but actually giving the test VM too little?

Note that using forkEvery is generally a bad idea, as it will greatly slow down testing. Forking a new VM costs a lot of time.

If overall machine memory is the problem, I’d just limit overall parallelism on a machine with such a discrepancy between CPU and RAM. Something like --max-workers 16 should probably do. If possible, I’d change machines to something more balanced, e.g. 16 cores and 32GB of RAM.
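That is, either per invocation or pinned in gradle.properties:

# per invocation:
#   ./gradlew build --max-workers=16
# or pinned for the whole team in gradle.properties:
org.gradle.workers.max=16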