In addition to mavenCentral, I have 2 maven repositories in s3. For some reason, the HEAD requests gradle does are failing there with a 403 (still looking into that). Because this is a medium-size multi-project build, it takes up to 45 seconds just to “resolve dependencies” for all the projects (so even a simple gradle tasks is unbearably slow).
I ran with --debug and it appears it is going through each dependency, for each project (since I set up the repository config in a subprojects block of the root project).
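Roughly, the repository configuration looks like this (a sketch; the bucket URL here is a placeholder, not our real one):

// root build.gradle (sketch, placeholder URL)
subprojects {
    repositories {
        mavenCentral()
        maven {
            // each subproject evaluating this block gets its own repository instance
            url "http://s3.amazonaws.com/example-bucket/snapshots"
        }
    }
}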
It appears there are 2 problems here. First, how can the custom repositories be shared across projects in a multi-project setup? Using subprojects, it is creating unique repository objects, right? With mavenCentral(), I believe it gets a shared object, so a dependency on Maven Central gets resolved just once. But for artifacts in custom repos, it has to go through the process for every subproject.
The second issue is that it should really be possible to say “use this repository only for these dependencies”. I believe that is this issue: https://issues.gradle.org/browse/GRADLE-1066
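What I have in mind is something like this (purely illustrative syntax, not something I know Gradle to support):

repositories {
    maven {
        url "http://s3.amazonaws.com/example-bucket/snapshots"
        // hypothetical: only consult this repository for these groups
        content {
            includeGroup "org.apache.lucene"
            includeGroup "com.carrotsearch.randomizedtesting"
        }
    }
}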
@rjernst a repository configuration with a url "https://..." is an HTTP-based repository and is expected to respond to HTTP requests in the same manner as Artifactory or Nexus would. The HTTP endpoints for S3 buckets do not obey the same contract. Most likely, each HEAD request to S3 is eventually timing out and slowing down resolution. There is native support for S3-backed repositories in Gradle, which would likely perform much better.
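For example, something like this (the bucket path and credential values here are placeholders):

repositories {
    maven {
        // the s3:// scheme uses Gradle's native S3 transport instead of plain HTTP
        url "s3://download.elasticsearch.org/lucenesnapshots/rr-pr202"
        credentials(AwsCredentials) {
            accessKey = "someAccessKey"  // placeholder
            secretKey = "someSecretKey"  // placeholder
        }
    }
}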
Regarding the sharing of repositories across projects: each repository configuration is lightweight and simply tells Gradle where to look for dependencies when it resolves them. Gradle will also cache dependencies, so subsequent builds will be faster.
@Adrian_Kelly Thanks for looking. s3 appears to require credentials, but these are public s3 buckets? I also tried changing to plain http, but that was slower. If Gradle is caching, why would it be doing HEAD requests per project (which is what I am seeing with --info)?
Looks like you own the S3 buckets (download.elasticsearch.org), so you could generate an accessKey and secretKey and verify whether it makes any difference locally.
When Gradle does a HEAD request it is looking for the ETag and checksum of the artifact so that it can cache it. For subsequent resolves (depending on the resolutionStrategy) it will use that checksum to decide whether or not a locally available artifact has changed. My suspicion is that because those HEAD requests, for those snapshot artifacts, have never succeeded, Gradle is unable to cache the artifacts.
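For what it’s worth, how often Gradle goes back to the repository to re-check cached modules is tunable via the resolutionStrategy, e.g.:

configurations.all {
    resolutionStrategy {
        // how long to trust cached 'changing' modules (e.g. snapshots) before re-checking
        cacheChangingModulesFor 4, 'hours'
        // how long to trust cached dynamic versions such as '1.+'
        cacheDynamicVersionsFor 24, 'hours'
    }
}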
How is it not able to cache the artifacts, yet the artifacts are usable? The project builds, and those dependencies (lucene and randomized runner) are definitely used.
Taking a step back, these are special snapshots, so they would not be found in mavenCentral or sonatype repositories. How do those cache the fact that “the artifact doesn’t exist here”? Is the behavior different if a 404 is returned?
@rjernst turns out the HEAD requests are being served correctly and my suspicion was wrong. e.g.
curl -I http://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/rr-pr202/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.2.0-snapshot-pr202/randomizedtesting-runner-2.2.0-snapshot-pr202.pom
curl -X GET http://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/rr-pr202/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.2.0-snapshot-pr202/randomizedtesting-runner-2.2.0-snapshot-pr202.pom
Digging a bit further, there is something in your build scripts or gradle plugins interfering with dependency resolution and preventing the cached dependencies from being used. Here’s how to see the correct resolution behaviour:
Change settings.gradle to the following:
rootProject.name = 'elasticsearch'
include 'rest-api-spec'
include 'core'
1. Run gradle compileJava -i --refresh-dependencies and you will see the dependencies being downloaded. There will be some logs with Resource missing. [HTTP HEAD:.. because Gradle looks for a dependency in each repository in the order they are specified, stopping at the first repository that contains the requested module.
2. Run gradle build -i and you will see the dependencies are cached and not downloaded again.
By the way, it’s a best practice to use the Gradle wrapper.
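A minimal way to set it up with Gradle 2.x (the version number here is just an example):

// root build.gradle — pins the Gradle version the build is meant to run with
task wrapper(type: Wrapper) {
    gradleVersion = '2.7'  // example; use whatever version the build is tested against
}

Run gradle wrapper once, commit the generated files, and invoke builds with ./gradlew from then on so everyone uses the same Gradle version.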
@Adrian_Kelly I tried again this morning, and the 403s I saw are gone (not sure what changed). Starting a build from the root is much faster, but still slow. And note that this isn’t just with running a build: something as simple as gradle tasks has the same slowness. Right now, after “configuring projects” finishes, it takes an additional 10-15 seconds to “resolve dependencies”. And running with --debug, I see the caching is definitely working. FWIW, if I run gradle tasks in a subproject (e.g. plugins:analysis-icu), it runs much faster (about 2-3 seconds, vs 15-25).
And I’m running with the daemon, and running multiple times to get these timings. Finally, if I run gradle tasks --offline from the root project, I get an expected 2-3 seconds like when running in a subproject.
Note that adding 15-20 seconds to running, say, gradle build probably doesn’t matter in the long run. But when I’m running a couple of small tasks in each project (from the root project), each taking less than a second, it is crazy to need all this time when the dependencies are clearly already cached.
@rjernst I’d have to disagree with you there: 15-20 seconds added to every Gradle invocation is not cool, especially when you need to run lightweight tasks such as gradle tasks.
I’ve figured out what’s happening: the following subprojects do not specify a version for the compile dependency on httpclient: discovery-ec2 and repository-s3.
This causes Gradle to try to resolve that dependency on every invocation. Despite not being able to resolve the non-versioned dependency org.apache.httpcomponents:httpclient, Gradle was able to transitively resolve org.apache.httpcomponents:httpclient:4.3.6, but at the cost of reaching out to the remote repositories every time.
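In build-script terms, the difference looks like this (using the version that ended up being resolved transitively):

dependencies {
    // no version: Gradle has to search the remote repositories on every invocation
    compile "org.apache.httpcomponents:httpclient"

    // fixed version: satisfiable from the local cache, no remote round-trips
    compile "org.apache.httpcomponents:httpclient:4.3.6"
}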
Wow @Adrian_Kelly, thank you! The fix is great; coincidentally, I had just added versions.httpclient last night (without realizing it would fix this issue). But I am curious about the technical reasons it could not resolve more quickly, since this seems a little trappy. If in a multi-project build the entire project must be configured, why did running a build from a subproject not trigger this issue?
Resolving the dependencies does not happen at configuration time (unless you force it by asking for the resolved dependencies). It happens at execution time, triggered by the first task (e.g. compile) that asks for the resolved dependencies.
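To make that concrete (the task name here is made up):

// declaring a dependency resolves nothing at configuration time
dependencies {
    compile 'org.apache.httpcomponents:httpclient:4.3.6'
}

// resolution happens at execution time, when the first task asks for
// the resolved files of a configuration
task listDeps {
    doLast {
        configurations.compile.each { println it }
    }
}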
I am also curious, then, what could be causing “more resolves” from the root? When I had the slowness, running gradle tasks from within a subproject was much faster (it clearly did not resolve dependencies for all projects).
Since we added dependency substitution in 2.5, we’ve had to split dependency resolution into two steps, so we do resolve dependencies when determining task dependencies. The first step does the resolution and the second step does the download. When you run gradle tasks, we have to do the first step to tell if there are any external dependencies that were replaced with local project dependencies (so that cross-project dependencies work).
So I think you’d see the first step (resolving what we would download), but not the second (downloading artifacts) when running gradle tasks, even if you’re doing everything ‘right’.
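For context, a substitution rule looks something like this (the module and project names here are just examples):

configurations.all {
    resolutionStrategy.dependencySubstitution {
        // example only: swap an external module for a local project build
        substitute module('org.utils:api') with project(':api')
    }
}

Whether any such rule applies can change the task graph, which is why the first (resolve) step has to run even for gradle tasks.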
In this particular case, since there wasn’t a version set for one of the dependencies, we searched all the repositories for it every time. I think if there hadn’t been a transitive way of resolving the same dependency, the build would have failed. There was some discussion this morning about whether we should be searching remote repositories at all for dependencies that have no version.