Gradle reports that "dynamic revision cache is expired" for static revision

We have a multi-project build in which all the dependencies are concrete versions. When debugging the build, it constantly logs “Resolved revision in dynamic revision cache is expired” for each of the dependencies. It goes through all of the dependencies for each of the subprojects and re-resolves them. Having profiled it, this part of the build takes over 30s, which seems pretty long. Each of the subprojects has a different subset of a common set of dependencies; what is the best way of sharing them among the subprojects? How can we speed up this part of the build?
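For the version-sharing part of the question, one common approach is to declare the shared coordinates once in the root project's extra properties and have each subproject pick the subset it needs. This is a sketch, not the only way; every group:artifact:version string and the repository URL below are made up for illustration:

```groovy
// --- root build.gradle ---
// Declare the shared dependency coordinates once (all coordinates illustrative).
ext.libs = [
    slf4j : 'org.slf4j:slf4j-api:1.7.2',
    guava : 'com.google.guava:guava:14.0',
    junit : 'junit:junit:4.11',
]

subprojects {
    apply plugin: 'java'
    repositories {
        maven { url 'http://repo.example.com/releases' } // hypothetical in-house repo
    }
}

// --- a subproject's build.gradle ---
// Extra properties are inherited from the root project, so 'libs' is visible here.
dependencies {
    compile libs.slf4j, libs.guava
    testCompile libs.junit
}
```

This keeps the version numbers in a single place while letting each subproject declare only the dependencies it actually uses.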

We are using Gradle 1.4 on Windows with the daemon, fetching dependencies from a few in-house Maven repositories.

Many thanks,

Thanks for the report: you’ve actually pointed out a minor bug in our dependency resolution logic. An earlier refactoring failed to update the ‘DefaultCachedModuleVersion.isDynamicVersion()’ method.

This bug results in the extra logging that you’re seeing, but it would likely have no significant impact on performance. At the caching level, Gradle does not first check whether a version is dynamic; it will always look in the dynamic version cache to see if there’s a previous resolve. This is by design: think of this step as resolving a “version selector” to a single “version”; we plan to use this in the future for all sorts of goodness.
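To make the selector-versus-version distinction concrete (coordinates below are illustrative): a static version names exactly one module version, while a dynamic selector such as ‘1.+’ or ‘latest.release’ must be resolved against the repository, and the outcome of that resolution is what the dynamic version cache stores. If a build does use dynamic versions, how long a cached result is trusted can be tuned through the resolution strategy:

```groovy
dependencies {
    compile 'org.slf4j:slf4j-api:1.7.2'    // static version: names one exact module
    compile 'com.google.guava:guava:14.+'  // dynamic selector: resolved to a version, result cached
}

// How long a resolved dynamic version is trusted before Gradle checks the
// repository again (the default is 24 hours):
configurations.all {
    resolutionStrategy.cacheDynamicVersionsFor 10, 'minutes'
}
```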

So the only impact of this bug is the extra debug logging and a few extra objects being created.

The likely cause of your slow resolution is actually cache locking: this is a hotspot (particularly on Windows) that we are currently working on addressing. So thanks for the report, and we feel your performance pain!

Thanks for the detailed response. What are the plans to improve the locking? Is that going to be in 1.5?

There are a few parts to this: for 1.5 we’re hoping to eliminate cache locking for local repositories, which don’t use the cache anyway. Next we’re planning to keep more state in memory, to reduce the number of times the cached metadata files are parsed. Spikes of these two changes have demonstrated a significant improvement in resolution performance when everything is cached.

Later we’d like to reduce the number of times the cache is locked/unlocked by using a more optimistic (but still process-safe) model.

No guarantees on when this stuff will be complete, but it’s being actively worked on at this time.

Cheers. Looking forward to it.