Is it bad to delegate work to dynamically constructed tasks?

I’ve found it useful in a couple of scenarios to set up convenient “dummy” tasks that create subtasks which the dummy depends on, and which read properties of the dummy task at execution time.
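Roughly, the shape is something like this (the task type and names are just illustrative):

```groovy
// build.gradle sketch; JsBuildTask and its namespace property are hypothetical.
class JsBuildTask extends DefaultTask {
    String namespace
}

// The "dummy" task just carries configuration.
task jsBuild(type: JsBuildTask) {
    namespace = 'tutorial.notepad'
}

// A subtask created dynamically; the dummy depends on it, and it reads
// the dummy's properties only when it executes.
def worker = task("${jsBuild.name}Worker")
worker.doLast {
    println "Building namespace: ${jsBuild.namespace}"
}
jsBuild.dependsOn worker
```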

Is this a good practice? It makes it easy to compose tasks, but it seems slightly awkward.

I’ve also used this to configure Exec tasks with source properties: the “dummy” can extend SourceTask and inherit its configuration, which gets around the fact that Exec cannot be instantiated.
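In sketch form, that looks roughly like this (the task names and the external tool are illustrative):

```groovy
// The "dummy" extends SourceTask, so it picks up source/include/exclude
// configuration for free; the actual work is delegated to an Exec subtask
// that reads the dummy's properties at execution time.
class JsSourceTask extends SourceTask {
    // No actions of its own; the Exec subtask below does the work.
}

task compileJs(type: JsSourceTask) {
    source 'src/main/js'
}

task compileJsExec(type: Exec) {
    executable 'jsc'   // stand-in for a real external compiler
    doFirst {
        // Defer reading the dummy's source until execution time.
        args compileJs.source.files*.absolutePath
    }
}

compileJs.dependsOn compileJsExec
```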

Could you give some concrete examples of what you’ve done? I can’t quite follow.

There’s an example in my fork of gradle-js-plugin.

Yeah, I’d avoid this.

This kind of coordination belongs in a plugin or some kind of controller (an example of this would be how SourceSets are used by the Java plugin).

Tasks are just dumb workers. They should have no knowledge of their environment (e.g. plugins, other tasks). All coordination logic should be elsewhere.
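As a sketch of what I mean (the plugin and task names are placeholders):

```groovy
import org.gradle.api.Plugin
import org.gradle.api.Project

// The plugin owns all of the coordination: the tasks it creates know
// nothing about each other, and only the plugin knows how they relate.
class JsPlugin implements Plugin<Project> {
    void apply(Project project) {
        def compile = project.task('compileJs')   // placeholder for a real task type
        def minify  = project.task('minifyJs')
        minify.dependsOn compile                  // the wiring lives here, not in the tasks
    }
}
```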

So how should tasks be combined in situations like this? One task’s output should be another task’s input, and the execution (and even the existence) of the second task is logically part of the configuration of the first.

I understand that for a “default” task, like the one added by the plugin in the example, the plugin can explicitly create the necessary “subtask” tasks and set up the dependencies. But this won’t allow users to use the task type very easily. Are you saying that it should always be necessary for the user to understand and explicitly declare these subtasks and their dependencies?

An alternative I can think of would be to configure these tasks using a named domain object container, and then have the plugin create the actual tasks dynamically for each entry. This seems more complicated to me, though:

```groovy
// In the Gradle build script
jsbuild {
    main {
        namespace = 'tutorial.notepad'
        source = javascript.source.main.js
    }
}
```

```groovy
// In the plugin
project.afterEvaluate {
    project.jsbuild.each { JsBuildExtension ext ->
        project.task("${ext.name}.build", type: BuildJsTask) {
            namespace = ext.namespace
            source = ext.source
        }
    }
}
```

It sounds like you’re saying the latter option would be your preference. My concern here is that for a language like JavaScript, there are many possible ways to compile and package from source. Named domain object containers seem to make sense for source configuration, but it would then be necessary to duplicate much of that configuration in the build task configuration.

In the above example, you’d have something like:

```groovy
// In the Gradle build script
javascript.source {
    main.js {
        srcDir = javascript.source.main.js
    }
}

jsbuild {
    main {
        namespace = 'tutorial.notepad'
        source = javascript.source.main.js
    }
}
```

Even though that’s more concise than explicitly configuring multiple build tasks, it still seems like it could be confusing. If this is how other plugins do things, though, that makes sense to me.

Unfortunately, it’s difficult to give firm, concrete advice without completely understanding the problem you face.

When you make tasks contextually sensitive, they ultimately end up coupled. Tasks are the most basic, lowest-level unit in a build. The smart wiring that glues tasks together into a more useful function belongs elsewhere. The primary reason for this is that if the user needs to escape your opinionated wiring, they can always manually create task instances and wire them together however they need. As a third-party external plugin it’s your call whether you embrace this or not, but for a core plugin it’s essential.
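That escape hatch looks something like this (BuildJsTask and its outputFile property are assumed, purely for illustration):

```groovy
// A user bypassing the plugin's default wiring and connecting task
// instances by hand. BuildJsTask, namespace, and outputFile are assumed
// properties of your task type.
task customBuild(type: BuildJsTask) {
    namespace = 'tutorial.custom'
    source = fileTree('src/custom/js')
    outputFile = file("$buildDir/custom/app.js")
}

task customCheck {
    dependsOn customBuild
    doLast {
        // One task's output becomes the other's input, wired explicitly.
        assert customBuild.outputFile.exists()
    }
}
```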

The Gradle model is open; you can add as many new concepts as you need. It seems like you are missing a model for something that represents a kind of processing pipeline, something like a controller. Extension objects do not need to be simple structs: they can have methods that perform wiring, create tasks… do anything.
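For example (a rough sketch, with BuildJsTask standing in for your real task type):

```groovy
import org.gradle.api.Project

// An extension that acts as a small controller: declaring a pipeline
// through it creates and configures the backing task.
class JsBuildExtension {
    private final Project project

    JsBuildExtension(Project project) {
        this.project = project
    }

    void pipeline(String name, Closure config) {
        def task = project.task("${name}Build", type: BuildJsTask)  // assumed task type
        project.configure(task, config)
    }
}

// In the plugin:       project.extensions.create('jsbuild', JsBuildExtension, project)
// In the build script: jsbuild.pipeline('main') { namespace = 'tutorial.notepad' }
```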

As for how to wire the values of different tasks together: the way to do this is via convention mapping, which unfortunately is still an internal feature (though lots of external plugins do use it). It avoids the need to use afterEvaluate() to defer value assignment.
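A rough sketch of what that looks like (being internal, the exact API has varied between Gradle versions; BuildJsTask is again a placeholder):

```groovy
// Map a task property to a closure that is evaluated lazily, when the
// property is first read, rather than at configuration time.
def buildTask = project.task('mainBuild', type: BuildJsTask)  // placeholder type
buildTask.conventionMapping.map('namespace') {
    project.jsbuild.main.namespace   // pulled from the extension on demand
}
// No afterEvaluate() needed: the closure runs when the value is first asked
// for, and an explicit assignment to the property still wins.
```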