Why is Gradle so much slower on Windows / NTFS?

I have a dual-boot laptop on which I noticed a significant performance difference on my Android work project between Windows 10 and Ubuntu 14. I isolated the main difference to the Java compile phase, so I ran a controlled experiment: I generated a pure Java project with 500+ files and timed the compiles. The results are illustrated in this image:

“javac” is a pure command-line compile with the Java compiler outside of Gradle, while “gradle” is running “./gradlew assemble --rerun-tasks”. All tests were done 4 times; I discarded the first run and took the average of the last 3.
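
A minimal PowerShell sketch of that timing procedure on the Windows side (the loop and averaging mirror the description above; adapt the task to your project):

$times = 1..4 | ForEach-Object {
    (Measure-Command { .\gradlew assemble --rerun-tasks }).TotalSeconds
}
# Discard the first (cold) run and average the remaining three.
$avg = ($times[1..3] | Measure-Object -Average).Average
"Average of last 3 runs: {0:N1}s" -f $avg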

The repo with the test code is here.

In short, it looks like the NTFS file system makes Gradle slow. The difference is much bigger when using Gradle than when using pure javac (141% slower versus 21%). Therefore I dare say Gradle has a part to play in the slowdown on Windows/NTFS versus Ubuntu here - or rather in the lack of performance gain.

The 141% compile time penalty is not one I’m prepared to take in my daily work.

In both experiments I had the laptop plugged in to power, I was using 64-bit Java 1.8.0_73, and I had parallelization enabled in Gradle, with an identical gradle.properties on both systems:

org.gradle.parallel=true
org.gradle.configureondemand=true
org.gradle.workers.max=2
org.gradle.daemon=true
org.gradle.jvmargs=-Xmx5g
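
(Depending on your Gradle version, the same settings can also be passed as command-line flags for a one-off run; this invocation is a sketch based on the documented flag names, not something I verified on every version:)

.\gradlew assemble --rerun-tasks --parallel --configure-on-demand --max-workers=2 --daemon
# Note: org.gradle.jvmargs has no direct command-line flag; it stays in gradle.properties.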

Does anyone have a theory as to why Gradle's Java compilation is so much slower on NTFS? Is there any configuration I can do to improve it? Or is there something intrinsic in the Gradle code that needs to be improved?


Do you have anti-virus running? This can be a cause of slowdown due to constant scanning of JARs. If so, try to exclude the Gradle cache and the project directories and see if there is an improvement.
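
For example, on Windows 10 with Windows Defender, exclusions can be added from an elevated PowerShell prompt along these lines (the paths are placeholders - substitute your own Gradle user home and project directory):

Add-MpPreference -ExclusionPath "$env:USERPROFILE\.gradle"    # Gradle user home / caches
Add-MpPreference -ExclusionPath "C:\work\my-project"          # your project directory
(Get-MpPreference).ExclusionPath                              # verify the exclusions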


Oh my god. That was it - at least a great portion of it. Build time went down to a 3.2 second average from 6.2. That's almost a 100% performance increase.

THANK YOU!!! I've been wondering about this for about half a year, and switched to Ubuntu for that period of time for this reason alone. Not that I hate Ubuntu, but there are a bunch of other quirks with Ubuntu which still make me want to develop on Windows. I might go back now :slight_smile:

That said, there's still a bit to go to reach Ubuntu's 2.5 second average. It might just be the NTFS file system that is slower - at least the project now builds at the same speed whether the NTFS volume is mounted on Ubuntu or on Windows.

edit: 4.2s was the javac build. Gradle was even better: 3.2s! I guess because of parallelization?

Table updated with data on Windows without antivirus:

I used to be on Windows and I can attest to NTFS being another bottleneck. It’s just not good at listing files quickly. Tools like git also suffer from this.

The raw NTFS filesystem itself is fast. However, Windows adds a bunch of system-level services that can slow it down considerably. Some of these can be configured at a path level and some can only be configured at a volume level (e.g. C:, D:). Always check the following:

  • The most common item is virus scanning, which can slow down system writes and reads by 100-500%. Not so bad when reading a Word document, but a killer for any build system. Windows Defender (and most other virus-scanning software) allows you to add path-level exclusions if you are an administrator.
  • Windows Search indexes the contents of many files on the file system for use in Windows search (and Cortana in Windows 10). You can control the search paths in “Indexing Options” and exclude your working directory. Note that a build constantly creates and modifies files, which Windows would otherwise keep reindexing (see below for a heavier-handed alternative).
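
If you don't rely on Windows Search at all, a heavier-handed alternative is to stop and disable the indexing service entirely; a sketch from an elevated PowerShell prompt, assuming the standard service name WSearch:

Stop-Service WSearch                        # stop the Windows Search service now
Set-Service WSearch -StartupType Disabled   # keep it from starting on reboot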

At the volume level there are several performance tweaks that can be done if you are NOT on the system volume. For performance-critical servers and workstations I made sure there was a separate volume for the build, configured as follows (example commands for the first two items appear after this list):

  • Disable 8.3 filename support (this causes multiple extra reads/writes every time a file is created or renamed, and the performance impact grows non-linearly with the number of items in a directory).
  • Disable last-access timestamp updates (which prevents a write to the Master File Table on every access).
  • If you are not on an SSD, make sure the disk is defragmented frequently, as builds can cause massive fragmentation quickly.
  • If your builds put lots of small files on the volume, you may also want to boost the Master File Table (MFT) size from the default 12.5% of the volume to something larger. Files under 512 bytes are stored directly in the MFT, and an excess number of small files can cause fragmentation of this table as it is dynamically grown; however, this is only an issue if your use pattern causes it.
  • Finally, some tools (and/or IT departments) add file system watchers that you may want to disable. For example, some GUI source code management tools add plugins to Windows Explorer that can massively slow down the file system (I've seen 10x slowdowns), and my IT department once thought it would be great to put an intrusion detection system on our build servers and workstations that was painfully slow (it took us six months to get permission to bypass that one).
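
A sketch of the first two tweaks using the built-in fsutil tool, assuming the dedicated build volume is D: (run from an elevated prompt; note the last-access setting is system-wide, not per volume):

fsutil 8dot3name query D:                  # show the current 8.3 short-name setting
fsutil 8dot3name set D: 1                  # 1 = disable 8.3 name creation on D:
fsutil behavior set disablelastaccess 1    # stop last-access timestamp updates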

There are other optimizations that I generally don't recommend, such as modifying the default cluster size (generally only useful when streaming very large files, e.g. for video editing). You can also move your paging file to a separate physical device, if you have one, to prevent conflicts with your builds.

Hope this helps.


Something that can be useful for debugging poor Windows NTFS file performance is the Sysinternals suite (https://technet.microsoft.com/en-us/sysinternals), in particular the DiskMon and Process Monitor utilities. You can use these to discover what (other than your build process itself) is using the disk and CPU during your builds. This can point to indexing or other “extensions” that you may not have been aware of. I've often found that things become enabled with Windows Updates or third-party software that suck the performance out of the system.

The strength and weakness of Windows is that it is so extensible. Virtually anyone can create a Windows Explorer plugin, NTFS monitor, or device driver extension that “adds” functionality to Windows. Many of these are targeted at low-performance environments (word processing, web browsing, etc.) where the extra overhead is not noticeable. I've found that a clean install of Windows from the Microsoft media, plus the system tuning above, closes 90% of the gap with Linux / macOS environments. However, often we are stuck with the “IT build”, and to push back and get that build changed you need to root-cause what is causing the performance issue in the first place.

Finally, make sure you have an SSD, lots of memory, and good drivers on your system. Windows does a good job of opportunistically caching file system IO if it has enough spare memory. I've worked with development groups where simply plugging in a new SSD doubled build performance, and adding additional memory doubled it again.

If all else fails, consider using a RAMDISK utility to create a temporary virtual drive that is automatically backed up to the disk drive or SSD. I've found that IT groups sometimes allow a RAMDISK that is automatically deleted on power down, which bypasses some of the policy-driven virus/intrusion detection systems required on long-term storage. Some RAMDISK utilities can be configured to write through to a real disk in the background as a backstop against system crashes or power failures. A properly configured RAMDISK without virus scanners, etc. can be tens of times faster than even an SSD, making your system 100% CPU-limited.
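
As a sketch, assuming your RAMDISK utility mounts the drive as R: (a hypothetical drive letter), you could point Gradle's caches at it via the GRADLE_USER_HOME environment variable:

setx GRADLE_USER_HOME "R:\gradle-home"     # persists for future shells
$env:GRADLE_USER_HOME = "R:\gradle-home"   # or just for the current PowerShell session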


You saved my day!!!

Not directly related to Windows, but I noticed disk I/O performance overhead when benchmarking a Gradle build on macOS vs a Linux/Ubuntu machine.

Ubuntu performed better than macOS when it came to unpacking cache artifacts.