I must have missed something in the task type definition… How should I code these inputs & outputs? Are there any tools to debug this? (I know about --info to see why a task is executed, but I found nothing to explain why it is up-to-date…)
So, just to try to get a complete understanding of how all this works, I added traces in the closure you proposed and in the afterEvaluate I implemented.
For the record, here are the results:
- the afterEvaluate seems to be called very early (and once for each task)
- the closure is invoked (3 times…?) just before task execution
FileCollections are lazily evaluated. I would expect the closure to be evaluated each time task.getOutputs().getFiles() is iterated, and I don’t see a problem with this occurring more than once. You could log the stack trace if you want to see where it’s being called under the hood.
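For instance, a minimal sketch of logging the stack trace from inside the lazy closure (the output path and the module property are illustrative assumptions, not your actual setup):

```groovy
// Sketch only: the path and 'module' are assumed names.
// Printing a stack trace shows who triggers evaluation of the lazy closure.
outputs.files {
    new Exception("outputs closure evaluated").printStackTrace()
    file("$buildDir/out/$module")
}
```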
You don’t have to do everything in your task’s constructor. You can use the input/output annotations (@Input, @InputFiles, @OutputFile, @OutputDirectory, etc.) to mark fields or methods as inputs/outputs for your task.
I switched this to a SourceTask (vs DefaultTask). It’s basically the same, except a SourceTask will skip execution if no sources are found. We need to lazily evaluate the sourceSets because someone could add/remove/change the srcDirs before and after our task is created. If you use the @OutputDirectory annotation, Gradle will automatically create the path for you. In this case, it doesn’t really matter since your output directory changes based on the value of module, but I also marked module as an @Input. If the properties of the task somehow influenced the output (e.g., command line arguments), you can mark them as inputs and Gradle will include those in up-to-date checks.
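A rough sketch of what that could look like, assuming the names from this thread (MyCompilerTask, module); the output path is purely illustrative:

```groovy
// Sketch only: class/property names follow this thread's example.
class MyCompilerTask extends SourceTask {

    // Part of the up-to-date check: changing 'module' re-runs the task.
    @Input
    String module

    // Gradle creates this directory for you and snapshots its contents.
    @OutputDirectory
    File getDestinationDir() {
        project.file("${project.buildDir}/compiled/${module}")
    }

    @TaskAction
    void compile() {
        // invoke the real compiler over getSource() here
    }
}
```

Because it extends SourceTask, the task is skipped entirely when no sources are found.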
Thanks for this additional answer! This sheds some additional light on my basic understanding…
I still have an issue with the incremental build of the real-life example that I simplified with foo and bar. It is unfortunately too large to post here in full…
One question that I have though is: by adding --info you get traces explaining why a task is considered up-to-date. However, how do you “debug” tasks that are rebuilt when you believe they shouldn’t?
My real-life bug (based on more complex tasks than the provided sample):
* First build => foo and bar are executed
* Second build => foo is executed again and bar is up-to-date
* Third build => foo and bar are up-to-date
I had been hoping that adding @Input to the module, as you suggested above, would fix this, but it did not.
I am sure there must be a way but I could not find it.
When you run your build with --info, you should see something about why the task is executing. It looks like the same output as the up-to-date checks. e.g., it should say something like “Executing $task due to:” and then a reason why Gradle thinks the task is out of date.
I’d check that your output directory isn’t shared with another task or that another task isn’t modifying the output after your ‘foo’ task has run. Both of those things could cause foo to run again if the output files aren’t exactly the same between runs.
You can try running the tasks in isolation (just ‘gradle foo’ multiple times). If it seems to work like that, then it’s probably another task putting files in your output directory or modifying the output somehow.
If you suspect that, then you’ll have to look for tasks that run in the first build and not in the second/third. The hard part might be that the other task is partially broken and doesn’t make it clear that it uses the same output directory as foo (that would explain why it doesn’t run in the second/third builds).
There was an input file (in the sources) that I used to copy inside MyCompilerTask (either a *-prod file or a *-dev file was copied and then used by the compiler).
Now I have moved that into a ‘Copy’ task on which the two compilation steps depend. This way, I benefit from the proper incremental implementation of the Copy task, and my two compilation tasks now see the files as unchanged.
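That split could be sketched roughly like this (the flavour selection, paths, and file patterns are all assumptions for illustration):

```groovy
// Sketch only: the source path, patterns, and prod/dev switch are assumed.
task copyCompilerInput(type: Copy) {
    from('src/config') {
        include project.hasProperty('prod') ? '*-prod' : '*-dev'
    }
    into "$buildDir/compiler-input"
}

// The compile tasks depend on the incremental Copy task,
// so their own inputs only change when the copied file actually changes.
foo.dependsOn copyCompilerInput
bar.dependsOn copyCompilerInput
```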
Your advice of calling ‘foo’ repeatedly with --info was the clue I was missing: I was only focused on the outputs which is why I did not find the issue…
For the record… I wrote: “by adding --info you get traces explaining why a task is considered up-to-date”, which is wrong… the opposite is true: you get an explanation of why a task is re-run, not of why it is considered up-to-date (which folders are considered…).