C/cpp plugin - GC overhead limit exceeded

OS: RedHat 6.5
RAM: 16g
CPU: 2x Core i7 - 20 nproc

org.gradle.jvmargs=-Xmx8g -Xms4g

The project consists of 758 sub-projects which are a mix of C and C++. There is one executable being built, with everything else being a shared object that is, for the most part, loaded on demand. Our build files have been auto-generated from existing make files, which is why I haven't upgraded to Gradle 2.10 or 2.11 yet; there was a breaking change in 2.10 when I tried it.

I’m guessing that my JVM options are the issue, as I am mostly guessing/googling to try and figure out what to use for them.

Your org.gradle.jvmargs lines overwrite each other. I think you want:

org.gradle.jvmargs=-Xmx8g -Xms4g -XX:MaxPermSize=1g -XX:MaxHeapSize=2g

Which version of the JDK are you using?

Do you know if any of the executables add the root directory as a source directory?

I’ll make the changes to the args when I get to work. As for the root dir, no, none of them do. All of the sub-projects have basically the same build files with only minor variations: some of them run Qt moc before assemble/build, others run a copy task to copy the built .so to a plugins folder as a .po, some have both C and C++, and most generate some symbolic links for the .so’s. As for the JDK, it is the default OpenJDK 7 that came with RedHat 6.5. That was one of the things I was going to look at today. I am planning to try Oracle JDK 8, as well as 7 if 8 doesn’t give better results.

Some other notes: I’ve run it with --stacktrace and --info. Sometimes the stacktrace will show the heap has run out of memory; other times it won’t. The --info output has shown that some of the projects seem to take a very long time to build. One of the projects took about 5 minutes to build, but it depends on about 20 other projects. So the question there is: is the 5 minutes the total time for all the projects it had to build, or just the one project? If it is the single project then we should look into that, as only a few of our sub-projects are of any significant size. Most are well under 5k LOC.
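One way to tell those two cases apart (this is my suggestion, not something already tried in the thread): Gradle’s built-in `--profile` flag writes an HTML report that breaks the build down per project and per task, so you can see whether the 5 minutes belongs to the one project or is spread across its ~20 dependencies.

```shell
# Generate a timing report; look under build/reports/profile/
# for per-project configuration and task-execution times.
./gradlew build --profile
```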

Steps I’m planning to try:

  1. Apply suggested jvmargs changes
  2. Try Oracle JDK 8 / upgrade to Gradle 2.11
  3. Restructure the project to match what Gradle expects
  4. Install another 16g of RAM :smiley:
  5. Look at building the project in phases somehow vs gradlew build
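For step 5, a phased build can be as simple as invoking specific project paths instead of the root `build` task. A sketch (the project names below are made up for illustration):

```shell
# Build the shared-object projects first, then the executable,
# rather than everything in one `gradlew build`.
./gradlew :someLib:build :otherLib:build
./gradlew :mainApp:build
```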

As a little background, this project is about 20 years old, has not been well maintained, is under continuous development, has no tests, is on the order of 3M LOC, and its directory structure does not match what Gradle expects; I could go on but I think you get the idea. The structure is rootDir/projectDir/subProject(s)/ srcFiles & headerFiles. On top of that there are random files everywhere, with header files that have been copied around. I’m trying to fix all this but it takes time. :wink: If you have suggestions I’m all ears!

I’m in a similar situation. I ported the repository of a project that I am working on over to Gradle from Make & Visual Studio build scripts. I still can’t say I’m using the model rule-based approach the most efficiently, but one thing that I was able to exploit (and I’m not sure this will help you) was that a lot of the executables being built used a good number of the same object files. Encapsulating this code into separate libraries helped decrease the Gradle build time drastically. This also cut down on the memory being used.

Originally when I did the port, Make was much faster than Gradle because all the object files were being written to the same directory, so they didn’t need to be built more than once, while the first setup I chose with Gradle took much more time since those object files had to be built for each project. By isolating the commonly used files into additional libraries, I was able to get Gradle to be consistently faster than Make (using the Gradle daemon, parallel builds, and configuration-on-demand features). I’m sure this solution seems obvious and isn’t that impressive, but the original project structure just used a massive src directory with all the files. Refactoring all of this took some time, but we can now build and release the software much faster than before.
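For anyone reading along, a minimal sketch of that refactoring in the Gradle 2.x software model might look like this (the component names are invented, and the default `src/<component>/cpp` source layout is assumed):

```groovy
apply plugin: 'cpp'

model {
    components {
        // The formerly duplicated sources live in one library,
        // so they are compiled once...
        common(NativeLibrarySpec)

        // ...and each executable links against that library instead of
        // recompiling the same files into its own object directory.
        app(NativeExecutableSpec) {
            sources {
                cpp.lib library: 'common'
            }
        }
    }
}
```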

Kevin, so you’re saying that if I have subproject A, which B, C, and D depend on, then when I do a full build it will build A separately for each of B, C, and D?

It looks like upgrading to Gradle 2.11 has mostly resolved this issue. I am about 50% through the build now, just resolving little compile issues that pop up. I have noticed, however, that it is very slow to build from the rootDir vs a projectDir.

The setup I started with was different. As an example, subprojects A, B, C, D all use files F1, F2, F3, F4. With projects A, B, C, and D all including those files in their sourceSets, the four files would be compiled four times (once for each project). I was just saying I took those four files and put them into a library. The project structure at the time didn’t have those files as a separate library.

Ok, got it, that makes sense. I actually have a different problem now.

I have everything compiled now, but when Gradle goes to link the executable I get `libA.so, needed by libB.so, not found`, where libB.so is used by the executable. I can fix this by adding A as a lib of the executable, but it seems like Gradle should handle this?
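Until that is handled automatically, the workaround is exactly what you describe: declare the transitive library on the executable yourself. A sketch in the 2.x software model (the component and library names are placeholders):

```groovy
model {
    components {
        main(NativeExecutableSpec) {
            sources {
                cpp.lib library: 'B'
                // B links against A, but the link-time dependency is not
                // propagated, so the executable declares A as well.
                cpp.lib library: 'A'
            }
        }
    }
}
```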

I found the answer to the issue: Native cpp: automatically handle transitive include dependencies

Do you mean that gradle build from the root directory and gradle build from the project directory are significantly different?

Yes, that is correct. It is faster to build each of the subprojects one by one than it is to try to build them all at once.

Just wanted to update this post for anyone searching.

All of these issues have gone away in 2.11. 2.11 is significantly better for native projects than 2.9. Additionally, I added a root-project clean task that scans the project for stray files/symbolic links and cleans them up. I’m not sure if Gradle was somehow interacting with these links/files, but it seems to have helped.
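In case it helps anyone searching later: dangling symbolic links can be found and removed with GNU find. This is my own sketch, not the clean task from the post:

```shell
# -xtype l matches symlinks whose target no longer exists (GNU find);
# -print lists each one before -delete removes it.
find . -xtype l -print -delete
```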