Plugin Portal & Improving Plugin Development

Hi all,

Luke Daley and I worked on a solution to make applying plugin dependencies more concise. This is the syntax we settled on:

apply plugin: 'pluginId', dependency: 'group:artifact:version'

This works by performing a transformation in the first pass which converts the apply call into a call on the ScriptHandler. The second pass then applies the plugin as usual. This solution means that classpath additions are accessible to the build script.
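For reference, this is roughly the longhand a build script author has to write today to get a plugin's implementation onto the classpath (the repository and coordinates below are placeholders):

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        // the plugin implementation and its transitive dependencies
        classpath 'group:artifact:version'
    }
}

apply plugin: 'pluginId'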

The code isn’t ready for a pull request. It is however ready for criticism as regards the overall approach. Would you take a look at it?

The commit is here: https://github.com/curious-attempt-bunny/gradle/commit/f61b82580af0841d346c425b71d9c3b2a116fa43

The integration test is here: https://github.com/curious-attempt-bunny/gradle/blob/f61b82580af0841d346c425b71d9c3b2a116fa43/subprojects/integ-test/src/integTest/groovy/org/gradle/integtests/ApplyPluginDependencyIntegrationTest.groovy

Cheers, Merlyn

I’m reluctant to go with option 1). I think you’re going to have difficulty making it work well. What’s your plan for dealing with some of these cases:

// apply inside a configuration closure
subprojects {
    apply plugin: 'someplugin', dependency: 'some:dep'
}

// dependency notation taken from a script property
libraries = [somedep: 'some:dep:1.0']
apply plugin: 'someplugin', dependency: libraries.somedep

// apply driven by a runtime query over the project hierarchy
groovyProjects = allprojects.findAll { /* ... */ }
groovyProjects.each { it.apply plugin: 'someplugin', dependency: 'some:dep' }

// an apply call that is not a plugin application at all
class SomeType {
    void apply(Map options) { }
}
configure(new SomeType()) { apply dependency: 'this-is-not-an-apply-statement' }

// plugin version mutated by another project during configuration
pluginVersion = '1.2'
dependsOnChildren() // child project mutates pluginVersion
apply plugin: 'some-plugin', dependency: "some:dep:$pluginVersion"

// apply wrapped in a method, with the dependency passed as a parameter
def configureProject(String dependency) {
    apply plugin: 'some-plugin', dependency: dependency
}
configureProject('some:dep:1.2')

I don’t think you can solve this generally with static inspection. You’re going to have to restrict the cases where the classpath is mutated to a subset that you can statically determine.

I’d rather we went with option 3). It will require a little more effort on the part of the plugin author (which we can address in other ways), but without the cost of the complexity that the build script author has to deal with, or the complexity of trying to deal with classpath conflicts between the build script and the various plugins it happens to require.

The plugin would register the type somewhere. For example, it might, by convention, make the type available through its extension, so that you do:

apply plugin: 'myplugin', dependency: 'some:dep'
task customTask(type: myplugin.Custom)

where ‘Custom’ is a property of type ‘Class’.
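A minimal sketch of what that convention might look like on the plugin side (the MyPlugin, MyPluginExtension and CustomTask names are illustrative only, not an existing API):

import org.gradle.api.DefaultTask
import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.tasks.TaskAction

class CustomTask extends DefaultTask {
    @TaskAction
    void run() { /* ... */ }
}

class MyPluginExtension {
    // by convention, expose the task type as a Class-valued property
    Class<CustomTask> getCustom() { CustomTask }
}

class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.extensions.create('myplugin', MyPluginExtension)
    }
}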

We might even make this happen automatically, so that the plugin author does not have to register the type. For example, when you access an unknown property of an extension, we might look for a class with the given name in the plugin’s classpath, so that ‘myplugin.Custom’ finds a class with simple name ‘Custom’ in the plugin’s classpath without the plugin author having to do anything.
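One possible shape for that lookup, sketched with Groovy’s propertyMissing hook (everything here is illustrative; a real implementation would scan the plugin’s classpath rather than assume a package):

class TypeResolvingExtension {
    private final ClassLoader pluginClassLoader
    private final String pluginPackage   // e.g. 'org.example.myplugin'

    TypeResolvingExtension(ClassLoader pluginClassLoader, String pluginPackage) {
        this.pluginClassLoader = pluginClassLoader
        this.pluginPackage = pluginPackage
    }

    // Groovy routes reads of unknown properties here, so 'myplugin.Custom'
    // ends up as propertyMissing('Custom')
    def propertyMissing(String name) {
        pluginClassLoader.loadClass("${pluginPackage}.${name}")
    }
}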

@Adam: good points.

It’s starting to look like this isn’t going to be practical to solve.

An implication of using indirection for Class object references is going to be that it’s harder for IDEs to provide intellisense in build scripts.

Okay. So tweaking the build script classpath is out. Instead I presume we create a classloader for each plugin.

So here’s an idea from a different direction, that Luke and I came up with. How about a naming service that can resolve a plugin id (plus any other relevant information, such as the Gradle API version) to the information needed to resolve the plugin artifact and its dependencies?

That way, as long as the naming service in use knows what “tomcat” maps to, the script author can write:

apply plugin: 'tomcat'

The naming service could be a simple RESTful service, and the Gradle-blessed naming service could either be included by default or made easily available via a method, e.g.

namingService { gradleNamingService() }
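A rough sketch of the contract such a naming service might expose (the interface name and signature are purely hypothetical):

// hypothetical contract for the naming service discussed above
interface PluginNamingService {
    // Resolves a plugin id (and the Gradle API version in play) to the
    // dependency notation for the plugin implementation, e.g.
    // 'tomcat' -> 'org.example.gradle:gradle-tomcat-plugin:1.0'
    String resolve(String pluginId, String gradleApiVersion)
}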

I like this idea. It is pretty much independent of how we choose to deal (or not) with the classpath, so we can do this regardless of whether we go with option 1) or option 3), or some other option with respect to classloading.

Some things we need to think about:

One thing that using a dependency definition in the apply statement gives you is namespacing of the plugin ids, i.e. you’re giving a (group, module, version, plugin-id) tuple which nicely namespaces the plugin. By just supplying an id, this becomes potentially ambiguous. And this ambiguity can change over time, as the set of available plugins grows, so you have problems with reproducibility.

On a similar note, using a dependency definition automatically gives you versioning of the plugin. If you just supply an id, then the version becomes ambiguous. And the ambiguity can change over time.

My thought here is that we model things like this:

  • A plugin has a composite id (group, plugin, version). This is similar to, but not the same as, a dependency module id.

  • A plugin has an implementation classpath.

  • There are 3 steps to applying a plugin (the first two are sketched as interfaces after this list):

      1. Determine the plugin id from the parameters provided to the apply statement.

      2. Given a plugin id, determine the implementation classpath.

      3. Given an implementation classpath, instantiate and apply the plugin object.

  • For the first step, we give you some way to provide values for the missing pieces, i.e. given just a plugin name, determine the group and version.

  • The default policy, baked into the core, knows about the built-in plugins (java, groovy, etc.) and has a hard-coded mapping to provide the missing pieces (group: ‘org.gradle’, version: GradleVersion.current()).

  • Your proposal, above, would be an additional, and optional, policy that fills in the missing bits using a web service.

  • For the second step, in a similar way, we give you some way to provide the implementation classpath for a given plugin id.

  • The default policy would know about the built-in plugins and map their ids to the plugin classloader and/or the distribution lib/plugins directory.

  • The default policy could also have some additional mapping, so that given a plugin id (group, plugin, version), we attempt to resolve the module (group, plugin, version) using project.repositories and use the resulting jars as the implementation classpath.
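A sketch of how the first two steps might be expressed as pluggable policies (all of the names here are hypothetical, not existing Gradle API):

// a fully-qualified plugin id: (group, plugin, version)
class PluginId {
    String group
    String plugin
    String version
}

// Step 1: fill in the missing pieces of a plugin id. The core implementation
// would hard-code (group: 'org.gradle', version: GradleVersion.current()) for
// the built-in plugins; a web-service-backed implementation would look the
// name up remotely.
interface PluginIdResolver {
    PluginId resolve(String pluginName)
}

// Step 2: map a resolved plugin id to an implementation classpath. The core
// implementation would map built-in plugins to the distribution's lib/plugins
// directory; another could resolve the (group, plugin, version) module against
// project.repositories and return the resulting jars.
interface PluginClasspathResolver {
    Collection<File> resolve(PluginId id)
}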

Over time, as we bust up the Gradle distribution, the hard-coded mapping can also start to make use of the web service approach. The distinction here would be that we would guarantee there is no ambiguity for the ‘official’ plugins, so that given a plugin name and gradle version, you always end up with the exact same plugin implementation. So, it is much safer to use defaults.

We can extend this approach as we start to use more meta-data about a plugin:

  • To support auto-apply of a plugin. The idea here is that when I run ‘gradle wrapper’ or ‘gradle eclipse’, the associated plugin should get automatically applied, even though it is not declared in the build script. We need some way to map from an unknown task name to a plugin. Again, one option is to use a web service that, given a task name, returns a plugin id. Or a list of matches when ambiguous, or a list of candidates when there’s no exact match.

  • For error messages. There’s lots of cool stuff we could do here. Say we encounter ‘somePlugin { … }’ where there’s no extension with the name ‘somePlugin’; we could then hit the web service to map from extension id to plugin id, and give you an error message saying something like: ‘No extension ‘somePlugin’ available. Did you mean to apply plugin ‘some-plugin’?’

An implication of using indirection for Class object references is going to be that it’s harder for IDEs to provide intellisense in build scripts.

Not really, if we’re doing the registration declaratively. The idea is that we (via the tooling API) provide the IDE with information about the type of a given identifier, and which identifiers are available. If this content assistance is based on static analysis, it’s not going to be any different whether the classes end up on the script classpath or not. And the same goes if the content assistance is based on some runtime evaluation. Either way, a given identifier has-a runtime type and a given type has-a implementation classpath.

We could also use plugin meta-data in the IDE content assistance, where the web service is one source of plugin meta-data:

  • Given 'apply plugin: ', auto-complete the set of available plugins. This may use the web service to determine the set of available plugins.

  • Given ‘somePlugin { }’, mark ‘somePlugin’ as an unknown extension and offer to add ‘apply plugin: ‘some-plugin’’. This may use the web service to map from extension ‘somePlugin’ to the set of plugins that provide this extension.

And so on…

@Adam/@Merlyn: I think we should move this discussion to a new thread, as it’s one specific aspect of better plugin support and this thread is getting long.

The difference would be that types would need to be imported, and IDEs could leverage that to get some info. But, that’s probably not that valuable as it can’t address everything.

Also, if our long term strategy is to provide rich information via the tooling API then it seems like we should more or less give up on trying to put these things on the classpath proper as suggested.

Very nice… just remember to add caching for this web service :)

I would implement apply plugin: 'id', dependency: '…' as something like the following (a code sketch follows the list):

  1. Create a configuration: ‘project.configurations.detachedConfiguration(project.dependencies.create())’.

  2. Resolve the configuration to end up with a classpath.

  3. If we’ve not seen the classpath before, create a ClassLoader for it with ‘gradle.scriptClassLoader’ as parent. If we have seen the classpath before, reuse the ClassLoader.

  4. Look up the plugin id using the ClassLoader and load the implementation class.

  5. Create the plugin instance and apply it to the target.
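A rough sketch of those steps in build-script Groovy (caching and error handling are omitted, and implementationClassNameFor is a hypothetical helper):

// step 1: create a detached configuration for the requested dependency
def dependency = project.dependencies.create('some.group:artifact:1.0')
def configuration = project.configurations.detachedConfiguration(dependency)

// step 2: resolve it to a classpath
def classpath = configuration.resolve()

// step 3: create (or, in a real implementation, reuse from a cache keyed by
// the classpath) a ClassLoader, parented by the script ClassLoader
def scriptClassLoader = this.class.classLoader   // stand-in for the 'gradle.scriptClassLoader' mentioned above
def loader = new URLClassLoader(classpath.collect { it.toURI().toURL() } as URL[], scriptClassLoader)

// step 4: map the plugin id to its implementation class, e.g. via the
// META-INF/gradle-plugins/someplugin.properties descriptor on the new classpath
// (implementationClassNameFor is a hypothetical helper)
def pluginClass = loader.loadClass(implementationClassNameFor('someplugin', loader))

// step 5: instantiate and apply the plugin to the target project
project.apply plugin: pluginClass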

Merlyn had it pretty much like this originally, but I advised against.

The functional difference between a plugin loaded this way and one loaded the traditional way seems too subtle and intricate to me, and will ultimately confuse people.

It’s also not really practical without infrastructure for hoisting class objects up to the buildscript.

The functional difference between a plugin loaded this way and one loaded the traditional way seems too subtle and intricate to me, and will ultimately confuse people.

I can only see 2 differences:

  • You can’t import the types from the plugin.

  • You won’t run into the weirdness of trying to deal with conflicts between your script classpath and those of the plugins you use, as now they’re all isolated from each other.

This model is exactly the same as that used by apply from: ‘script’, so there’s some goodness we pick up from the consistency there.

It’s also not really practical without infrastructure for hoisting class objects up to the buildscript.

Of course it is. Many builds don’t instantiate any tasks from the plugins they use, they simply configure the tasks or domain objects that the plugin adds.

We don’t necessarily need any infrastructure for people to get started. People have been using this model with applied scripts for quite a while now.

Having said that, perhaps a good way to tackle this problem is to first add some (basic) infrastructure that can work for applied scripts, then add the new apply method which can then make use of this infrastructure, then grow out the infrastructure.

Of course it is. Many builds don’t instantiate any tasks from the plugins they use, they simply configure the tasks or domain objects that the plugin adds.

Granted, but it does happen and it’s something we support / promote. Or at least, we don’t discourage it.

We don’t necessarily need any infrastructure for people to get started. People have been using this model with applied scripts for quite a while now.

Sharing of 3rd party scripts is far less common. I can’t think of any widely used community plugins that are distributed via scripts. My concern is that the picture for loading plugins and build-time dependencies is already non-trivial to understand, and we are adding more complexity here. Sure, the syntax is getting more concise/simpler, but in terms of functionality we are introducing something new.

I’m not against the idea of leaving the compile classpath loader alone, as I think it’s the only real solution given your points. Given that this is the direction we are going, perhaps we should just add this without anything else and deal with any support fallout that might occur, since it’s somewhat temporary. If you’re confident that we’ll eventually have robust solutions for dealing with types introduced by plugins loaded in this manner in a reasonable timeframe, I’ll withdraw my objections.