Timeouts on plugins.gradle.org


Is there a place to report issues about plugins.gradle.org? We have a build that runs a check every hour and we see a timeout from plugins.gradle.org about 5 times a day. Here is a snippet from our logs. It’s always the same artifact timing out:

Caused by: org.gradle.internal.resolve.ModuleVersionResolveException: Could not resolve com.crashlytics.sdk.android:crashlytics:2.6.6.
Caused by: org.gradle.api.resources.ResourceException: Could not get resource 'https://plugins.gradle.org/m2/com/crashlytics/sdk/android/crashlytics/2.6.6/crashlytics-2.6.6.pom'.
Caused by: org.gradle.internal.resource.transport.http.HttpRequestException: Could not GET 'https://plugins.gradle.org/m2/com/crashlytics/sdk/android/crashlytics/2.6.6/crashlytics-2.6.6.pom'.
Caused by: java.net.SocketTimeoutException: Read timed out

https://status.gradle.com/ doesn’t report any ongoing incidents at the moment.

We’re seeing the exact same issue with our builds. It also looks like it’s a recurring situation that happens every 4 hours:

2018-08-28 00:02:15,434+0000 ERROR ... from URL https://plugins.gradle.org/m2/, this is 10 (re)try, cause: java.net.SocketTimeoutException: Read timed out
2018-08-28 04:02:14,002+0000 ERROR ... from URL https://plugins.gradle.org/m2/, this is 10 (re)try, cause: java.net.SocketTimeoutException: Read timed out
2018-08-28 08:03:13,561+0000 ERROR ... from URL https://plugins.gradle.org/m2/, this is 10 (re)try, cause: java.net.SocketTimeoutException: Read timed out
2018-08-28 12:02:20,938+0000 ERROR ... from URL https://plugins.gradle.org/m2/, this is 10 (re)try, cause: java.net.SocketTimeoutException: Read timed out
2018-08-28 16:02:57,973+0000 ERROR ... from URL https://plugins.gradle.org/m2/, this is 10 (re)try, cause: java.net.SocketTimeoutException: Read timed out

I submitted an issue here: https://support.gradle.com/hc/en-us/requests/1924

Hey folks,

Thanks for the reports. I’m sorry that your builds are failing unnecessarily. The build tool exceptions are flawed in that they do not properly show where the timeout is occurring.

You see, the Gradle Plugin Portal just redirects requests for “unknown” artifacts to https://repo.jfrog.org/some/path. You can observe this by manually hitting any of the URLs you’ve provided, like https://plugins.gradle.org/m2/com/crashlytics/sdk/android/crashlytics/2.6.6/crashlytics-2.6.6.pom — the Crashlytics library is not hosted on the Gradle Plugin Portal.

Unfortunately, Gradle does not show that this request was redirected and that it timed out downstream. Timeouts from the plugin portal itself do happen, but happen less than once per day. I personally get a message every time this happens, and am monitoring that closely.

So what we ought to do is gather all of this information and submit it to Bintray in such a way that it’s actionable. Log messages with the timestamp and the resource URL are really helpful (like @noggi’s post). We as Gradle will aggregate this information and submit it and work to improve error messaging in cases such as this.



I can supply another data point from today (4:04:00 pm UTC, Friday, September 14, 2018):

* What went wrong:
Error resolving plugin [id: 'org.jetbrains.kotlin.jvm', version: '1.2.61']
> Could not resolve all dependencies for configuration 'detachedConfiguration1'.
   > Could not determine artifacts for org.jetbrains.kotlin.jvm:org.jetbrains.kotlin.jvm.gradle.plugin:1.2.61
      > Could not get resource 'https://plugins.gradle.org/m2/org/jetbrains/kotlin/jvm/org.jetbrains.kotlin.jvm.gradle.plugin/1.2.61/org.jetbrains.kotlin.jvm.gradle.plugin-1.2.61.jar'.
         > Could not HEAD 'https://plugins.gradle.org/m2/org/jetbrains/kotlin/jvm/org.jetbrains.kotlin.jvm.gradle.plugin/1.2.61/org.jetbrains.kotlin.jvm.gradle.plugin-1.2.61.jar'.
            > Read timed out

Here are a couple more. They are all “java.net.SocketTimeoutException: Read timed out” and times are in UTC:

More failures (times are in UTC):

We encounter this problem very often (every one to two days). We most often encounter read timeouts when trying to download com.netflix.nebula:gradle-ospackage-plugin:5.0.2, which seems to be hosted at https://plugins-artifacts.gradle.org.

EDIT: plugins-artifacts.gradle.org is most likely not the culprit. See my next comment below.


Hi eriwen,

Have you contacted Bintray about this issue yet?

We’re continuing to see issues several times a day. Here are the timestamps, in UTC, for the last 24 hours:

Notice how the errors occur at pretty even 4-hour intervals.

I think I found the culprit in my case:

This is part of our build.gradle file:

plugins {
  id "nebula.ospackage" version "4.9.3"
}

When using the plugin this way (the new mechanism), the Gradle plugin system first tries to retrieve this file:
https://plugins.gradle.org/m2/nebula/ospackage/nebula.ospackage.gradle.plugin/4.9.3/nebula.ospackage.gradle.plugin-4.9.3.jar. The path is generated by the name and version of said plugin. For almost all plugins, this jar-file will not actually exist, but there is a pom-file there referencing the real artifact.
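The path convention described above can be sketched as follows. This is a rough model of the marker-artifact naming scheme, not Gradle’s actual resolver code, and the function name is my own:

```python
# Sketch: derive the plugin "marker" coordinates and URLs that the plugins {}
# block resolves first. The group path comes from the plugin id, and the
# artifact name is "<id>.gradle.plugin", per the convention described above.

def marker_urls(plugin_id: str, version: str,
                repo: str = "https://plugins.gradle.org/m2") -> dict:
    artifact = f"{plugin_id}.gradle.plugin"
    base = f"{repo}/{plugin_id.replace('.', '/')}/{artifact}/{version}/{artifact}-{version}"
    return {"jar": base + ".jar", "pom": base + ".pom"}

urls = marker_urls("nebula.ospackage", "4.9.3")
print(urls["jar"])
# -> https://plugins.gradle.org/m2/nebula/ospackage/nebula.ospackage.gradle.plugin/4.9.3/nebula.ospackage.gradle.plugin-4.9.3.jar
```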

Anyway, said URL actually redirects (303) to https://jcenter.bintray.com/nebula/ospackage/nebula.ospackage.gradle.plugin/4.9.3/nebula.ospackage.gradle.plugin-4.9.3.jar, which then redirects (302) to https://repo.jfrog.org/artifactory/libs-release-bintray/nebula/ospackage/nebula.ospackage.gradle.plugin/4.9.3/nebula.ospackage.gradle.plugin-4.9.3.jar?referrer.

As you can see, there is some room for error here. I don’t know which of these three hosts actually times out so often, but it is most likely not plugins.gradle.org, because that would affect all other plugins, too.

How to workaround

In case there is no timeout on any of the mentioned hosts, Gradle finally finds out that the jar-file indeed does not exist (404). Damn, all this work for nothing! Gradle then tries to retrieve the corresponding pom-file instead, which is actually hosted directly at https://plugins.gradle.org/m2/nebula/ospackage/nebula.ospackage.gradle.plugin/4.9.3/nebula.ospackage.gradle.plugin-4.9.3.pom without all this redirect stuff.

The pom-file only exists to be found by the new mechanism to include Gradle plugins (the plugin-block shown above). The pom-file defines com.netflix.nebula:gradle-ospackage-plugin:4.9.3 as its sole dependency, which is the actual plugin that we actually want!

The obvious workaround is to skip all these redirects and reference the actual plugin-artifact (com.netflix.nebula:gradle-ospackage-plugin:4.9.3) directly. In case of the “nebula.os-package”-plugin, this can be done by using the old mechanism to use Gradle plugins:

buildscript {
  repositories {
    maven {
      url "https://plugins.gradle.org/m2/"
    }
  }
  dependencies {
    classpath "com.netflix.nebula:gradle-ospackage-plugin:4.9.3"
  }
}

apply plugin: "nebula.ospackage"

As you can see, the “classpath” directly references the correct artifact that can be retrieved without all these redirects caused by the new plugin mechanism.

Extra information for users of Sonatype Nexus

We use a Sonatype Nexus repository as a proxy to the Gradle Plugin Portal, so that we don’t hammer it with excessive plugin-downloads every few minutes. Unfortunately, Nexus doesn’t work well with the new plugin mechanism of Gradle.

I was curious why we are even affected by all these timeouts, as Nexus should cache all downloaded Gradle plugins. But as explained above, when using the new mechanism to retrieve Gradle plugins, Gradle first tries to download a non-existent jar-file (which is the case for almost all plugins included the new way). Unless you have configured your Nexus proxy to use a “Negative cache” (“Not found cache”), these requests will always hit the actual hosts.

The workaround stated above also applies in this case.

My two cents

I think it is wrong for Gradle to try to directly download the jar-file when using the new mechanism, as this request (as far as I can see) will always result in a 404. When using the new mechanism, Gradle should download the pom-file first, which then references the actual artifact as a dependency. This would prevent a useless 404-request and skip a lot of redirecting through jfrog and Bintray.
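To make the argument concrete, here is a toy model of the two resolution orders. This is purely my own illustration of the request sequences described above, not Gradle’s actual resolver logic:

```python
# Toy model of the two resolution orders for a plugin marker artifact.
# "base" is the marker path without an extension, as in the URLs above.

def jar_first(base: str) -> list:
    # Observed behavior: request the (almost always missing) marker jar,
    # which walks the redirect chain and ends in a 404, then fall back
    # to the pom hosted directly on plugins.gradle.org.
    return [base + ".jar", base + ".pom"]

def pom_first(base: str) -> list:
    # Proposed behavior: request the pom straight away; it names the real
    # plugin artifact as a dependency, so the 404 and redirects never happen.
    return [base + ".pom"]

base = ("https://plugins.gradle.org/m2/nebula/ospackage/"
        "nebula.ospackage.gradle.plugin/4.9.3/"
        "nebula.ospackage.gradle.plugin-4.9.3")
print(jar_first(base))  # two requests, the first doomed to 404
print(pom_first(base))  # one request, no redirects
```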


Thanks for sharing Christian.

Our setup is a little different. We did the following and so far have not had any timeouts all weekend:

  1. Move our automated build off the hour so that it’s offset by a few minutes. We did this because we noticed most of the failures happen on the hour, every 4 hours
  2. Increase the Gradle http timeouts by setting
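The post doesn’t say which settings were used. One way to raise Gradle’s HTTP timeouts is via the `org.gradle.internal.http.*` system properties in `gradle.properties` — note these are internal, undocumented properties, so treat this as an assumption and verify against your Gradle version:

```properties
# gradle.properties -- values are in milliseconds
systemProp.org.gradle.internal.http.connectionTimeout=120000
systemProp.org.gradle.internal.http.socketTimeout=120000
```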

I faced this issue too. I had the impression it was some proxy in my company, but increasing Gradle’s timeout worked. Thank you so much!