Gradle build in Docker container taking up too much memory

Hello,

I need your help limiting the amount of memory Gradle builds take on a build machine. I am building an Android project with almost 100 modules: a single com.android.application module and many com.android.library modules. A single Gradle command runs the build, lint, and the tests (which include Robolectric tests, if that matters).

I run the build in a Docker container based on jetbrains/teamcity-agent. The container is limited to 10GB of the host’s 64GB of RAM using the mem_limit option.
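For reference, the relevant part of the container configuration looks roughly like this (the service name is illustrative; I use the Compose v2 file format, where mem_limit is a service-level key):

```yaml
# Sketch of the container setup (service name is illustrative)
version: "2.4"
services:
  build-agent:
    image: jetbrains/teamcity-agent
    mem_limit: 10g   # hard cap; exceeding it triggers the kernel OOM killer
```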

The builds have recently started to fail because they take up too much memory and one of the processes gets killed by the kernel’s OOM killer, which I can see by running dmesg on the host. A typical entry looks like this:

[3377661.066812] Task in /docker/3e5e65a5f99a27f99c234e2c3a4056d39ef33f1ee52c55c4fde42ce2f203942f killed as a result of limit of /docker/3e5e65a5f99a27f99c234e2c3a4056d39ef33f1ee52c55c4fde42ce2f203942f
[3377661.066816] memory: usage 10485760kB, limit 10485760kB, failcnt 334601998
[3377661.066817] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
[3377661.066818] kmem: usage 80668kB, limit 9007199254740988kB, failcnt 0
[3377661.066818] Memory cgroup stats for /docker/3e5e65a5f99a27f99c234e2c3a4056d39ef33f1ee52c55c4fde42ce2f203942f: cache:804KB rss:10404288KB rss_huge:0KB shmem:0KB mapped_file:52KB dirty:12KB writeback:0KB inactive_anon:1044356KB active_anon:9359932KB inactive_file:348KB active_file:72KB unevictable:0KB
[3377661.066826] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[3377661.066919] [ 6406]     0  6406     5012       14    81920       74             0 run-agent.sh
[3377661.066930] [ 8562]     0  8562   569140     2449   319488     6762             0 java
[3377661.066931] [ 8564]     0  8564     1553       20    53248        7             0 tail
[3377661.066936] [ 9342]     0  9342  2100361    92901  2437120    36027             0 java
[3377661.067178] [31134]     0 31134     1157       17    57344        0             0 sh
[3377661.067179] [31135]     0 31135     5012       83    77824        0             0 bash
[3377661.067181] [31145]     0 31145  1233001    21887   499712        0             0 java
[3377661.067182] [31343]     0 31343  4356656  2412172 23494656        0             0 java
[3377661.067195] [13020]     0 13020    56689    39918   413696        0             0 aapt2
[3377661.067202] [32227]     0 32227  1709308    30383   565248        0             0 java
[3377661.067226] Memory cgroup out of memory: Kill process 31343 (java) score 842 or sacrifice child
[3377661.067240] Killed process 13020 (aapt2) total-vm:226756kB, anon-rss:159668kB, file-rss:4kB, shmem-rss:0kB

I observed the memory usage in certain scenarios using top and noticed that a single java process (the Gradle daemon) slowly grew to over 8GB, even though gradle.properties had org.gradle.jvmargs=-Xmx4g, so that is far more than I would expect.
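For completeness, the relevant line in my gradle.properties is just the following (as far as I understand, -Xmx caps only the Java heap; metaspace, the JIT code cache, and native allocations by processes like aapt2 sit on top of it):

```
# gradle.properties
org.gradle.jvmargs=-Xmx4g
```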

I tried a few ways to configure Gradle to limit the memory usage, but I failed. I tried the following settings in various combinations:

  1. I found out the JVM doesn’t know about Docker memory limits and needs -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap, so I added those and removed -Xmx4g. It didn’t help. Using -Xmx4g gave almost the same result.
  2. I found -XX:MaxRAMFraction, but I don’t think it changed much.
  3. I tried -Xmx2g, but the build failed at some point with OutOfMemoryError: GC overhead limit exceeded.
  4. I tried -XX:MaxPermSize=1g -XX:MaxMetaspaceSize=1g, which hung the build and threw OutOfMemoryError: Metaspace, so I increased them to 2g, which didn’t limit the overall memory usage as far as I can tell.
  5. I tried limiting the number of workers to 4 and then to 1.
  6. I found out the Kotlin compiler can use a different execution strategy: -Dkotlin.compiler.execution.strategy="in-process". I guess it could help a little because there were fewer memory-consuming processes, but the daemon process still took up over 8GB.
  7. Using GRADLE_OPTS set to -Dorg.gradle.jvmargs="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=10" -Dkotlin.compiler.execution.strategy="in-process" seemed to limit the memory a little (the daemon took about 4.5GB, so it looked promising), but it just hung the build with Expiring Daemon because JVM heap space is exhausted. I will try another fraction, then. I suppose 4 is the default, right?

I’m also confused about how the parameters are passed to the Gradle daemon versus the Gradle CLI, so I tried various combinations (though not every possible one, so I could easily have missed something) of gradle.properties, JAVA_OPTS, and GRADLE_OPTS with org.gradle.jvmargs inside. It would be helpful to be able to print the actual JVM args used by the daemon somehow. The only way I found to figure it out was running ps -x and checking the arguments passed to the java processes.
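One trick I’m considering to answer the “which args does the daemon actually use” question is a snippet at the top of the root build.gradle that prints the arguments of the JVM evaluating the build script (which should be the daemon process):

```groovy
import java.lang.management.ManagementFactory

// Prints the flags the current (daemon) JVM was actually started with
println("Daemon JVM args: ${ManagementFactory.runtimeMXBean.inputArguments.join(' ')}")
```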

I should probably mention I tried all of this with the --no-daemon option and Gradle 6.2.2. I’m now trying out Gradle 6.3, but I don’t expect it to help.

If I run a build locally using Android Studio or the Gradle command line on a MacBook with 16GB of RAM, it never fails due to memory limits. This issue happens only on the build server.

I have no clue what to do next to limit the amount of memory used by the build process. The only ideas I haven’t tried yet are:

  1. Setting limits in the Gradle tasks, e.g. tasks.withType(JavaCompile) { ... }. Would that help?
  2. Removing the less relevant Gradle plugins I use, e.g. com.autonomousapps.dependency-analysis.
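For idea 1, what I have in mind is something along these lines (a sketch only; the sizes are guesses):

```groovy
// Sketch: cap the heap of forked compile and test JVMs (values are guesses)
tasks.withType(JavaCompile).configureEach {
    options.fork = true
    options.forkOptions.memoryMaximumSize = '1g'
}
tasks.withType(Test).configureEach {
    maxHeapSize = '1g'
    forkEvery = 50 // recycle test JVMs so memory from earlier classes is released
}
```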

Unfortunately, experimenting is very time-consuming because a single build may take 30–60 minutes depending on the conditions, so I would like to avoid testing changes blindly.

Am I doing all of this wrong? Are there other options to limit the memory? Is there a way to analyze what takes up so much memory? Is it possible to force GC during the build? Should I be asking a different question instead?