Apply from vs apply plugin, aka build logic reuse

In the organization I have been working for, I have seen many Gradle build scripts that do the "same" thing over and over again, with only slight variations.

Example: a build script that compiles Java code with a couple of dependencies whose versions are managed by the Nebula dependency management plugin, copies all resources into a target directory structure with some property expansion and filtering, generates a zip file from that directory, builds an RPM from it with the Nebula ospackage plugin, and in the end deploys the jar, zip, and RPM to an Artifactory repository using the maven-publish plugin in combination with the Artifactory plugin.

So the scripts do a lot of not very complex but useful things, driven by useful plugins.

But one can also easily see that for an enterprise there is a lot of standardization potential and, in terms of usage scenarios, not much variability. Some standardization examples: repository naming, repository user and password, directory layout for the RPM, RPM user and group, service naming, etc. Examples of variability: systemd unit name, installation directory name, package name, etc.
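To make the pattern concrete, here is a rough, abbreviated sketch of what such a build script looks like (plugin versions and the ospackage/publishing/Artifactory configuration blocks are left out, since their exact DSL is not the point here); the comments mark what is standardized versus what varies per project:

plugins {
	id 'java'
	id 'maven-publish'
	id 'nebula.ospackage'       // standardized; version omitted in this sketch
	id 'com.jfrog.artifactory'  // standardized; version omitted in this sketch
}

// standardized: copy resources into the target layout with property expansion
task prepareDist(type: Copy) {
	from 'src/main/dist'
	into "$buildDir/dist"
	expand(version: project.version)
}

// standardized: zip the prepared directory for packaging
task distZip(type: Zip) {
	from prepareDist
}

// ospackage { … }, publishing { … } and artifactory { … } blocks omitted;
// repository credentials, RPM user/group and directory layout would be the
// standardized parts there, while package name, installation directory and
// systemd unit name are the few bits that vary per project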

The strategy up to now has been copy & paste, which we are now realizing simply doesn’t scale.

So we are looking for ways to reuse build logic in the purest sense of the word.

Since we never want to reinvent the wheel, we are studying the Gradle documentation in this regard:

The apply from / script plugin feature seems quite straightforward and understandable, see https://docs.gradle.org/current/userguide/plugins.html#sec:script_plugins, as do the restrictions of this approach (caching, versioning, etc.).
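For illustration, a rough sketch of how we picture the script-plugin approach (the file name and URL are made up): the shared configuration lives in a corporate-conventions.gradle script and every project pulls it in with apply from:

// build.gradle of a consuming project
apply from: 'https://repo.example.org/gradle/corporate-conventions.gradle'
// or, when the script is checked into the same repository:
apply from: rootProject.file('gradle/corporate-conventions.gradle')

The remote variant also illustrates the versioning restriction: the URL itself is the only place where a version of the conventions script could be encoded.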
I am slightly disappointed with the recommendations regarding binary plugins:
I find the Gradle documentation about organizing and reusing build logic highly theoretical, see https://guides.gradle.org/designing-gradle-plugins/#reusable_logic_should_be_written_as_binary_plugin, and actually beside the point, because it is more about providing new functionality and not so much about (re)using existing functionality, which in the end boils down to reusing existing plugins.

In the examples at https://github.com/bmuschko/gradle-docker-plugin#provided-plugins I do not see much reuse of build logic, but rather new functionality being provided, which can then be reused. OK, fair enough, that is also important, but in the build world I find the reuse of existing functionality with slight variants much more important. Googling does not turn up much either: https://objectpartners.com/2014/04/24/reuse-your-gradle-logic-across-the-enterprise/

Whereas I completely get writing a plugin to do something new (easy), I do not see how I can apply existing plugins and configure and execute their tasks with any justifiable effort.
I do not see how any real-world scenario of factoring out repeated build logic, with the desire to enforce useful standards, and which differs essentially only in some parameters, can be achieved with a custom binary plugin, which in the above scenario naturally depends on the 'java', 'maven-publish', 'com.jfrog.artifactory', 'nebula.ospackage' and currently also the 'nebula.dependency-recommender' plugins.

Or: apply from and apply plugin are really two very different beasts.

Am I wrong? I would be very happy to be proven wrong. Are there real-world examples for the above scenario?

Or should we stick with copy & paste? :disappointed_relieved:

Many thanks in advance, any feedback is greatly welcome.

Hello,

I think there is some misunderstanding about the capabilities of plugins.
Plugins can provide new functionality, but they can also enforce common patterns.

Let’s say that all the projects in your organization must comply with the following base requirements:

  1. compile with the Java toolchain
  2. be published to a Maven repository
  3. report all warnings during compilation
  4. use AssertJ for tests
  5. consume dependencies from Maven Central
  6. deploy binaries to a Maven repository located at file:///tmp/somewhere

They can add more stuff, have slightly different layouts, or a different conflict-resolution strategy, but these are the non-negotiable key points. All your projects will end up with that kind of copy-pasted configuration:

plugins {
	id 'java'
	id 'maven-publish'
}

tasks.withType(JavaCompile).configureEach {
	options.compilerArgs << '-Xlint:all'
}

repositories {
	mavenCentral()
}

dependencies {
	testImplementation 'org.assertj:assertj-core:3.12.2'
}

publishing {
	repositories {
		maven {
			url = 'file:///tmp/somewhere'
		}
	}
}

You can share that configuration through a script or a binary plugin. Let’s skip directly to the binary plugin.

A settings.gradle file:

rootProject.name = 'example-plugin'

A build.gradle file:

plugins {
	id 'java-gradle-plugin'
	id 'maven-publish'
}

group = 'org.example'
version = '1'

gradlePlugin {
	plugins {
		myPlugin {
			id = 'example-plugin'
			implementationClass = 'org.example.ExamplePlugin'
		}
	}
}

publishing {
	repositories {
		maven {
			url = 'file:///tmp/somewhere'
		}
	}
}

And now the code in ExamplePlugin.java. When applying our plugin, we first configure the end-user project:

package org.example;

import org.gradle.api.NonNullApi;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.logging.Logger;
import org.gradle.api.plugins.ExtensionContainer;
import org.gradle.api.plugins.JavaPlugin;
import org.gradle.api.plugins.PluginContainer;
import org.gradle.api.publish.PublishingExtension;
import org.gradle.api.publish.maven.plugins.MavenPublishPlugin;
import org.gradle.api.tasks.compile.JavaCompile;

@NonNullApi
public class ExamplePlugin implements Plugin<Project> {

	private static final String REPO = "file:///tmp/somewhere";

	@Override
	public void apply(final Project project) {
		final Logger logger = project.getLogger();
		final ExtensionContainer ext = project.getExtensions();
		final PluginContainer plugins = project.getPlugins();
		logger.info("applying java plugin");
		plugins.apply(JavaPlugin.class);
		logger.info("download dependencies from maven central");
		project.getRepositories().mavenCentral();
		logger.info("all java compile tasks will report all warnings");
		project.getTasks().withType(JavaCompile.class).configureEach(t -> t.getOptions().getCompilerArgs().add("-Xlint:all"));
		logger.info("tests will be able to use assertj lib");
		project.getDependencies().add("testImplementation", "org.assertj:assertj-core:3.12.2");
		logger.info("maven publish plugin");
		plugins.apply(MavenPublishPlugin.class);
		logger.info("configuring publishing repository to be a maven type hosted at {}", REPO);
		ext.getByType(PublishingExtension.class).getRepositories().maven(m -> m.setUrl(REPO));
		logger.info("adding an end-user dsl to apply preferences");
		ext.create("myPlugin", ExamplePluginDsl.class, project);
	}
}

Let’s add a custom DSL served by ExamplePluginDsl.java to enhance our plugin capabilities:

package org.example;

import javax.inject.Inject;

import org.gradle.api.Project;
import org.gradle.api.artifacts.dsl.DependencyHandler;
import org.gradle.api.tasks.testing.Test;

public class ExamplePluginDsl {

	private final Project project;

	@Inject
	public ExamplePluginDsl(final Project project) {
		this.project = project;
	}

	public void useJUnit5() {
		final DependencyHandler dependencies = project.getDependencies();
		dependencies.add("testImplementation", dependencies.platform("org.junit:junit-bom:5.4.2"));
		dependencies.add("testImplementation", "org.junit.jupiter:junit-jupiter-api");
		dependencies.add("testRuntimeOnly", "org.junit.jupiter:junit-jupiter-engine");
		project.getTasks().withType(Test.class).configureEach(Test::useJUnitPlatform);
	}
}
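Before a consumer project can use it, the plugin has to be built and published, for example with:

gradle publish

Thanks to the java-gradle-plugin / maven-publish combination, this pushes the plugin jar together with its plugin marker artifact to file:///tmp/somewhere, so the consumer builds below can resolve the plugin by its id.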

And now, to start a new project using our corporate conventions:

In a settings.gradle file:

pluginManagement {
	repositories {
		maven {
			url = 'file:///tmp/somewhere'
		}
	}
}

And the build.gradle file:

plugins {
    id 'example-plugin' version '1'
}

dependencies {
	testImplementation 'junit:junit:4.12'
}

Let's add a unit test, TrueTest.java:

package org.example;

import static org.assertj.core.api.Assertions.assertThat;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class TrueTest {

	@Test
	public void should_be_true() {
		List l = new ArrayList(); // expect rawtype warning at compile
		assertThat(l).isEmpty(); // expect unchecked warning at compile
	}
}

Running gradle test should print the compile warnings and run the tests. The only required configuration was:

  1. provide a way to reach the corporate plugin (the settings.gradle file)
  2. apply the corporate plugin
  3. add JUnit 4.12 to the dependencies, which is not part of the corporate conventions

Now, let's say you want to start a project but use JUnit 5 instead of good old JUnit 4. Same settings.gradle as before. New build.gradle:

plugins {
	id 'example-plugin' version '1'
}

myPlugin {
	useJUnit5()
}

New TrueTest.java:

package org.example;

import static org.assertj.core.api.Assertions.assertThat;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class TrueTest {

	@Test
	void should_be_true() {
		List l = new ArrayList(); // expect rawtype warning at compile
		assertThat(l).isEmpty(); // expect unchecked warning at compile
	}
}

Here you can use the DSL provided by the corporate plugin. What this means is that the use of JUnit 5 is not enforced by default, but it is highly recommended and easy to opt into.

To summarize: if a plugin can add new functionality, it can also do everything else:

  1. pre-configure existing tasks/extensions/…
  2. disable or blacklist some repositories/dependencies/…
  3. enhance logging (for example to log how many tests passed in the console; see the sketch after this list)
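As an illustration of the third point, a minimal sketch (the class name is made up) of a plugin that only enhances logging, using the standard TestListener callback of the Test task to print how many tests passed:

package org.example;

import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.testing.Test;
import org.gradle.api.tasks.testing.TestDescriptor;
import org.gradle.api.tasks.testing.TestListener;
import org.gradle.api.tasks.testing.TestResult;

// Hypothetical plugin: it does not add any task, it only enhances logging
public class TestSummaryPlugin implements Plugin<Project> {

	@Override
	public void apply(final Project project) {
		project.getTasks().withType(Test.class).configureEach(test ->
			test.addTestListener(new TestListener() {
				@Override
				public void beforeSuite(final TestDescriptor suite) {}

				@Override
				public void afterSuite(final TestDescriptor suite, final TestResult result) {
					// the root suite has no parent: log the aggregated counts once per task
					if (suite.getParent() == null) {
						test.getLogger().lifecycle("{} tests passed out of {}",
								result.getSuccessfulTestCount(), result.getTestCount());
					}
				}

				@Override
				public void beforeTest(final TestDescriptor testDescriptor) {}

				@Override
				public void afterTest(final TestDescriptor testDescriptor, final TestResult result) {}
			}));
	}
}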

I won't say that your imagination is the only limit to Gradle plugins; there are limitations. We could list, for example:

  1. A plugin is tied to a Gradle API. For instance, the plugin above uses the configuration avoidance API (configureEach), so it cannot run with Gradle < 4.9, and if that method is removed in Gradle 42.0 you'll have to update the plugin to whatever the new API provides (see the version-check sketch after this list).
  2. Some operations might be rejected by the Gradle engine at runtime (for instance, the configuration avoidance APIs configure and configureEach cannot be used within a lifecycle hook such as project.afterEvaluate).
  3. Plugins cannot trigger tasks; they can prepare the ground for said tasks, but execution will always be up to the end-user.
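To soften the first point, a plugin can at least fail fast on unsupported Gradle versions; a minimal sketch of such a guard at the top of ExamplePlugin.apply():

// requires: import org.gradle.api.GradleException; import org.gradle.util.GradleVersion;
if (GradleVersion.current().compareTo(GradleVersion.version("4.9")) < 0) {
	// configureEach and the rest of the configuration avoidance API appeared in Gradle 4.9
	throw new GradleException("example-plugin requires Gradle 4.9 or newer");
}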

I hope I understood your question correctly and that this answer helps.
Regards.


Brilliant Pierre, exactly what I was looking for. Many thanks.
Looking at the Nebula Gradle plugin code, which has some examples of "opinionated" plugins, I also saw that it seems a good idea to mirror the plugins that are used or are to be standardized. So instead of one plugin doing everything, one would have one custom plugin per underlying plugin, each enforcing the standards. That also seems like a good way to go, and it gives the user greater flexibility. In this respect it is actually quite different from the apply from approach, where you end up with less flexibility. As for the Gradle documentation, it would be great to have an example like yours in there.
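A rough sketch of what I mean (all plugin ids here are hypothetical): an umbrella plugin that does nothing but compose smaller convention plugins, one per underlying plugin:

package org.example;

import org.gradle.api.Plugin;
import org.gradle.api.Project;

// Hypothetical umbrella plugin: it only composes smaller convention plugins.
// Each applied id would stand for a plugin that wraps and configures exactly
// one underlying plugin (e.g. 'org.example.publishing-conventions' wrapping
// 'maven-publish', 'org.example.rpm-conventions' wrapping 'nebula.ospackage').
public class CorporateConventionsPlugin implements Plugin<Project> {

	@Override
	public void apply(final Project project) {
		project.getPluginManager().apply("org.example.java-conventions");
		project.getPluginManager().apply("org.example.publishing-conventions");
		project.getPluginManager().apply("org.example.rpm-conventions");
	}
}

Projects that do not build an RPM could then apply only the convention plugins they actually need.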

But again, many thanks, this helps very much.
Regards, Christoph

I will work through your example in the next few days.