How to have multiple uploadArchives?

I’ve got two plugins that both configure uploadArchives{} with a mavenDeployer{} to deploy artifacts to a Nexus repository. The problem is that uploadArchives{} backs the eponymous task (gradle uploadArchives), which creates a conflict when two plugins both try to configure it.

How do I fix or bypass this? Or is there a more elegant solution for having multiple deployment mechanisms?

Your plugins could make their own tasks of type Upload instead of trying to share the uploadArchives task.

https://docs.gradle.org/current/javadoc/org/gradle/api/tasks/Upload.html

Sorry for the late reply.

Are there any examples of this? That page only documents the implementation, not how to actually use the task. The “task: Upload” page is only marginally more descriptive.

I’ve got the following in one task:

// replace the default archive artifacts with our own jar
project.configurations.archives.artifacts.clear()
project.artifacts {
    archives file: project.file("build/libs/myartifact.jar")
}

And then this as a new task:

project.tasks.create('myCustomUpload', Upload.class) {
    configuration project.configurations.archives
    repositories {
        mavenDeployer {
            repository(url: "$project.releasesUrl") {
                authentication(userName: "$project.nexusUser", password: "$project.nexusPassword")
            }
        }
    }
}

But this gives me the following error:

Could not find method configuration() for arguments [configuration ‘:archives’] on task ‘:narArtifactUpload’ of type org.gradle.api.tasks.Upload.

You can use the referenced Upload DSL documentation or the API documentation to see how to configure the task instance. Properties are values that you set with the assignment operator = (or read, e.g. to pass them to functions). Methods are functions you can call with or without parentheses. Script blocks are methods whose last parameter is a Closure: besides their normal parameters, they can take a closure (i.e., a {} code block) at the end.
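
For instance, the three forms look like this in a build script (using Upload’s configuration property, the standard dependsOn method, and the repositories script block):

// property: set with the assignment operator
configuration = configurations.archives

// method: call with or without parentheses
dependsOn 'assemble'

// script block: a method whose last parameter is a Closure
repositories {
    // ...
}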

Looking at the Upload task, it has a repositories script block, which means you can call the function passing it a closure (the closure is its only parameter). You’re doing this correctly. However, we can also see in the documentation that there is no method or script block called configuration; there is only a property. Therefore, instead of using it as a function (i.e., configuration(param1, param2, ...) or configuration param1, param2, ...), you have to assign it as a property:

configuration = project.configurations.archives
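
Applied to your snippet, the whole task would then look like this:

project.tasks.create('myCustomUpload', Upload.class) {
    // configuration is a property, so it is assigned with =
    configuration = project.configurations.archives
    repositories {
        mavenDeployer {
            repository(url: "$project.releasesUrl") {
                authentication(userName: "$project.nexusUser", password: "$project.nexusPassword")
            }
        }
    }
}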

Hope this helps :)


That did the trick, thank you!

It’s true that I still have trouble reading Gradle’s API documentation. Usually I just look for a DSL example and don’t worry about what’s happening behind the scenes at all.

I’ve got another question related to this issue.

Before, I used mavenDeployer to publish my artifacts. It uploaded the files fine, but it caused an error on our Nexus (400: Bad Request), because redeployment is disallowed in our release repositories. My project produces multiple artifacts per build, each uploaded with a unique classifier, but the POM (which carries no classifier) was uploaded every time as well, which triggered the redeployment error. There was no clear way to explicitly skip the POM with mavenDeployer.

So now I’ve tried to migrate to my own deployer using a maven repository instead. The problem I have now, however, is that it doesn’t generate or upload the POM at all. The result I’d like is that the POM gets created once but doesn’t cause an error on redeployment, something like “try to deploy the POM, but don’t throw an error if it fails”, only for the POM, obviously. As a duct-tape solution I’ve simply suppressed all errors, but that isn’t right.

Also, it doesn’t replace the “-SNAPSHOT” placeholder with a timestamp when I upload to a snapshot repository (it doesn’t honor uniqueVersion = true).

This is what I’ve got:

project.tasks.create('narUpload', Upload.class) {
    configuration = project.configurations.archives

    def deployUrl
    if (project.version.endsWith('-SNAPSHOT')) {
        deployUrl = project.snapshotsUrl
    } else {
        deployUrl = project.releasesUrl
    }

    repositories {
        // Causes an error due to POM redeployment
        mavenDeployer {
            repository(url: "$deployUrl") {
                authentication(userName: "$project.nexusUser", password: "$project.nexusPassword")
            }
        }

        // Doesn't generate a POM at all and doesn't honor uniqueVersion for snapshots
        maven {
            url deployUrl

            credentials {
                username project.nexusUser
                password project.nexusPassword
            }
        }
    }
}

Can you instead publish all artifacts in a single publication? That would publish the POM only once. My familiarity with the maven plugin is very limited (we’re using the maven-publish plugin everywhere), but I believe you could accomplish this by adding all artifacts to your archives configuration and having just a single Upload task, as sketched below.
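
As a rough sketch of that idea (the file names and classifiers are made up), every artifact would be registered in the one archives configuration that the single Upload task publishes:

// hypothetical: all artifacts go into the same archives configuration,
// so one Upload task publishes them together with a single POM
project.artifacts {
    archives file: project.file('build/libs/myartifact-variantA.jar'), classifier: 'variantA'
    archives file: project.file('build/libs/myartifact-variantB.jar'), classifier: 'variantB'
}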

Sorry for the vague ideas; hopefully this gives you something to look into.

Sorry for the late reply.

The issue is that I have a distributed build via Jenkins on several slave machines, all of which should deploy their own version to a Nexus repository. I’d have to somehow collect all slave artifacts on the master and then trigger an upload task to publish everything once, which is kind of a bottleneck, because I’d have to upload the artifacts to the master only to have the master upload them to the repository once more.

That’s why I’m currently struggling to find a solution that allows distributed deployment without any POM conflicts.

Can you provide some examples of what each artifact is? Have you explored a solution that involves different artifact IDs instead of using classifiers?
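
For what it’s worth, the legacy maven plugin lets you override the coordinates in the generated POM, so something along these lines might give each slave its own POM instead of a shared one (slaveName is a placeholder you’d have to supply yourself):

mavenDeployer {
    repository(url: "$deployUrl") {
        authentication(userName: "$project.nexusUser", password: "$project.nexusPassword")
    }
    // hypothetical: a per-slave artifactId instead of a shared one with classifiers
    pom.artifactId = "myartifact-${slaveName}"
}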