Validating Outputs of Custom Gradle Plugin in Integration Tests


(Rob Winch) #1

I’m writing a custom plugin that adds a custom task. I’d like to perform some integration tests to ensure that the task produces the correct outputs.

I’ve looked into using TestKit, but as far as I can tell this only allows me to validate output written to the console. It appears that the Gradle build itself uses its own internal infrastructure to perform integration tests. I’d like to validate all the inputs and outputs of the task.

I could use the tooling API, but then I would need to manage the classpath for the tests. I’m also not sure if this is the right path since TestKit states it will be adding more features later.

What is the best way to write tests that can validate all the outputs of custom tasks of a custom plugin?


(Benjamin Muschko) #2

TestKit does use the Tooling API under the covers and properly sets up the classpath for you. I think in the long run Gradle core’s internal testing infrastructure will be replaced by TestKit, as it serves the same purpose. We just have additional features in AbstractIntegrationSpec that TestKit doesn’t have yet.

Can you describe what exactly you mean by “that the task produces the correct outputs”? Does that mean checking whether a task writes an expected file with the proper content, for example? If that’s the case, then you are definitely on the right path by using TestKit. Keep in mind that you don’t have to use the TestKit API to assert something. You can always inspect outputs with plain old Java/Groovy code.
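To illustrate, a functional test could run the build with TestKit and then assert on the produced files with ordinary Groovy. This is only a sketch: the plugin id (my.custom.plugin), the task name (myTask), and the output path are hypothetical, and withPluginClasspath() assumes a recent TestKit that can inject the plugin under test onto the build’s classpath (e.g. via the java-gradle-plugin plugin):

import org.gradle.testkit.runner.GradleRunner
import org.junit.Rule
import org.junit.rules.TemporaryFolder
import spock.lang.Specification

class MyPluginFunctionalTest extends Specification {
    @Rule TemporaryFolder testProjectDir = new TemporaryFolder()

    def "task writes expected output file"() {
        given:
        testProjectDir.newFile('build.gradle') << """
            plugins { id 'my.custom.plugin' } // hypothetical plugin id
        """

        when:
        GradleRunner.create()
            .withProjectDir(testProjectDir.root)
            .withArguments('myTask') // hypothetical task name
            .withPluginClasspath()
            .build()

        then:
        // plain Groovy assertion on the build output -- no TestKit API needed here
        new File(testProjectDir.root, 'build/output/report.txt').exists()
    }
}

The `then:` block is where the “plain old Java/Groovy code” comes in: TestKit only drives the build, while the file assertions are ordinary I/O.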

I think it would help if you could give me the concrete use case you want to test.


(Rob Winch) #3

Thanks for the response.

An example: I’m creating a Javadoc task that aggregates multiple subprojects. This is intended to be used for publishing an aggregate Javadoc to a site for users to browse. I would want to validate the output files (as you alluded to).
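For context, the kind of task under test might look roughly like this in the root project. This is a hedged sketch, not my actual implementation; the task name and output directory are illustrative:

// sketch of an aggregating Javadoc task, configured in the root project
task aggregateJavadoc(type: Javadoc) {
    destinationDir = file("$buildDir/docs/aggregateJavadoc")
    subprojects.each { subproject ->
        subproject.plugins.withType(JavaPlugin) {
            source subproject.sourceSets.main.allJava
            classpath += subproject.sourceSets.main.compileClasspath
        }
    }
}

The tests would then need to verify that destinationDir contains the merged documentation for all subprojects.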

I think you have already answered my question when you said I can validate the output without using TestKit APIs. I had thought of this, but I wasn’t sure if this was the best path to take for the long haul.

One thing I don’t like about this approach is that I’d like to be able to obtain the output file locations from the API rather than hard coding them based on the root project’s path. Is this something that will eventually be addressed in TestKit API? Is there something I’m missing?

Thanks!
Rob


(Benjamin Muschko) #4

One thing I don’t like about this approach is that I’d like to be able to obtain the output file locations from the API rather than hard coding them based on the root project’s path. Is this something that will eventually be addressed in TestKit API? Is there something I’m missing?

There are two approaches to this:

  • In the build under test, write a task that uses the Gradle API to assert the files based on the path available to the Javadoc task. You run that task on top of javadoc. It is a bit clunky, but it works. Something like the following:

    task assertJavaDocFiles {
        doLast {
            assert javadoc.destinationDir.isDirectory()
        }
    }

  • Instead of aiming for functional testing, aim for integration testing. For this aspect I’d use ProjectBuilder, though it has its limitations and quirks. I think in the long run we’d like to provide a better story (in terms of usability) for integration testing.
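With the ProjectBuilder route, the task’s outputs are reachable through the Gradle API rather than hard-coded paths. A minimal sketch, assuming a hypothetical plugin id and task name:

import org.gradle.api.tasks.javadoc.Javadoc
import org.gradle.testfixtures.ProjectBuilder
import spock.lang.Specification

class MyPluginTest extends Specification {
    def "plugin registers the aggregating javadoc task"() {
        given:
        def project = ProjectBuilder.builder().build()

        when:
        project.pluginManager.apply('my.custom.plugin') // hypothetical plugin id

        then:
        // the output location comes from the task itself, not a hard-coded path
        def task = project.tasks.findByName('aggregateJavadoc')
        task instanceof Javadoc
        task.destinationDir != null
    }
}

Note that ProjectBuilder only configures the project in-memory; it does not execute tasks, so this style is best for asserting configuration (task wiring, inputs/outputs) rather than generated files.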

(Rob Winch) #5

Thanks for all the suggestions and feedback!

It appears that TestKit with hard-coded relative paths to the build output is my best option.


(uklance) #6

You could add a task which writes the javadoc task’s outputs to a JSON file, and then parse the file in the test and perform assertions on it.

Perhaps a more generic task could be useful for everyone to use, e.g. a task where you specify some Groovy to identify an object (e.g. configurations.runtime or tasks['foo'].outputs) and a file path, and the task writes the result as JSON.
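The first idea might be sketched like this in the build under test. The task name and output file are hypothetical; groovy.json.JsonOutput is part of the Groovy distribution Gradle ships with:

// helper task: dump the javadoc task's output file paths to JSON
task dumpJavadocOutputs {
    def outputFile = file("$buildDir/javadocOutputs.json")
    outputs.file outputFile
    doLast {
        def paths = javadoc.outputs.files.collect { it.absolutePath }
        outputFile.parentFile.mkdirs()
        outputFile.text = groovy.json.JsonOutput.toJson(paths)
    }
}

The test could then read the file with new groovy.json.JsonSlurper().parse(...) and assert on each reported path, so the output locations come from the build rather than being hard-coded in the test.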