Technically speaking, you are: you are resolving a project configuration from a context outside of the project.
It depends. Basically, your current process is quite inefficient: it collects all configurations from all projects, which potentially configures projects you don’t need in the build (for example when using configuration on demand). It also triggers, as far as I understand, the resolution of all configurations in all projects, whatever task is called. Dependency resolution, despite a lot of optimizations, is very slow. That’s generally not an issue when it happens at execution time, because we can resolve things in parallel, or resolve different configurations from different projects in parallel. Here, you would always be resolving all configurations, and sequentially.
I would consider a different approach: instead of always resolving everything every time, even when it’s not needed (say you run `help`, `tasks`, or `asciidoc`), you could create, say, one task per project, or one task per configuration in a project, which actually resolves that configuration. There would be multiple advantages:
- first, it would be cacheable: there’s no point in redoing the analysis if the result is always going to be the same. Modelling it as a task makes it possible to optimize and cache. The result of the analysis would be a file (a report, e.g. plain text) that an aggregator task could consume to generate a global report
- second, the task would only be executed if needed
- it could be, in the future, compatible with the configuration cache, meaning that all verifications could run in parallel safely
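To make this concrete, here is a rough sketch of what such a per-configuration verification task could look like (the class name, the task name, and the placeholder analysis are all hypothetical, just for illustration):

```kotlin
import org.gradle.api.DefaultTask
import org.gradle.api.file.ConfigurableFileCollection
import org.gradle.api.file.RegularFileProperty
import org.gradle.api.tasks.CacheableTask
import org.gradle.api.tasks.Classpath
import org.gradle.api.tasks.OutputFile
import org.gradle.api.tasks.TaskAction

// One cacheable task per configuration: it resolves the configuration as an
// input and writes its analysis to a report file that an aggregator task
// could later consume.
@CacheableTask
abstract class VerifyConfigurationTask : DefaultTask() {

    // The resolved files of the configuration under analysis. Declaring them
    // as a classpath input lets Gradle snapshot them for up-to-date checks
    // and build caching.
    @get:Classpath
    abstract val classpath: ConfigurableFileCollection

    @get:OutputFile
    abstract val report: RegularFileProperty

    @TaskAction
    fun verify() {
        // Placeholder analysis: write one line per resolved file.
        report.get().asFile.writeText(
            classpath.files.joinToString("\n") { it.name }
        )
    }
}
```

Registration in a plugin or build script would then wire one instance per configuration, e.g. `classpath.from(configurations.named("runtimeClasspath"))`, with the report under `layout.buildDirectory`. Because resolution happens inside a task, it only runs when that task is actually requested.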
Now, of course it requires calling a task to get the feedback. You have several options here:
- wire the verification task to a particular lifecycle task. For example, you could decide to add it as a dependency of `classes`, so that every time the classes are built, the check is executed first.
- as a variant of the first option, create your own lifecycle task, so that the verification is executed only when that task is called
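Both wiring options are a few lines in a build script; this sketch assumes the hypothetical `VerifyConfigurationTask` type and task names from above:

```kotlin
// Option 1: hook into an existing lifecycle task, so the check runs
// whenever classes are built.
tasks.named("classes") {
    dependsOn("verifyRuntimeClasspath")
}

// Option 2: a dedicated lifecycle task aggregating all verification tasks;
// the checks only run when the user explicitly calls it.
tasks.register("verifyConfigurations") {
    dependsOn(tasks.withType<VerifyConfigurationTask>())
}
```

The second option keeps the default build fast while still giving users (and CI) a single entry point for the analysis.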
I don’t think it makes sense to run the analysis every time, independently of the context. It also raises the complexity of your plugin. I think that by carefully choosing the lifecycle task you hook into, you can benefit from pre-emptive checking too. It wouldn’t make sense for all contexts, though, because you’d basically be resolving everything without really caring what context the configuration is used in, which forces the user to exclude configurations.
So one last approach I wanted to mention: you could use an artifact transform instead. The idea is to carefully select the configurations this has to happen on, for example when you resolve `compileClasspath` or `runtimeClasspath`. You can then alter the configuration’s resolution attributes to ask for a different artifact type. This would cause a transform to be executed which, in your case, would produce the same jar if everything is fine, or fail if the bytecode isn’t what you expect. Just a rough idea of course, but again, the benefit is that the lifecycle is handled by Gradle and cached, so that once the verification has been done for a jar, you don’t have to redo it.
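A rough sketch of that transform idea (the class name, the `verified-jar` type, and the placeholder check are all made up for illustration):

```kotlin
import org.gradle.api.artifacts.transform.InputArtifact
import org.gradle.api.artifacts.transform.TransformAction
import org.gradle.api.artifacts.transform.TransformOutputs
import org.gradle.api.artifacts.transform.TransformParameters
import org.gradle.api.attributes.Attribute
import org.gradle.api.file.FileSystemLocation
import org.gradle.api.provider.Provider

// A transform from artifact type "jar" to a made-up type "verified-jar":
// it either passes the jar through unchanged or fails the resolution.
abstract class VerifyBytecode : TransformAction<TransformParameters.None> {

    @get:InputArtifact
    abstract val inputArtifact: Provider<FileSystemLocation>

    override fun transform(outputs: TransformOutputs) {
        val jar = inputArtifact.get().asFile
        // Placeholder: inspect the bytecode here and throw if it isn't
        // what you expect; otherwise re-emit the same jar.
        outputs.file(jar)
    }
}

// Registration in a build script or plugin:
val artifactType = Attribute.of("artifactType", String::class.java)
dependencies {
    registerTransform(VerifyBytecode::class.java) {
        from.attribute(artifactType, "jar")
        to.attribute(artifactType, "verified-jar")
    }
}
```

Requesting `verified-jar` on the consuming configuration’s attributes would then make Gradle run the check once per jar, cache the result, and skip it on subsequent builds.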