Caching tasks that create no output?

I’m looking for help in creating a custom cacheable task. I’d like to use a recommended best practice if possible, so if my approach seems wrong then I’m happy to adapt, but I’ve started down a route already.

So I’m trying to cache an expensive test run written in JavaScript. It doesn’t create any output – it just validates that the code is correct. It’s important for me to know that the tests have been run for the given inputs, and avoid running them if the input hasn’t changed.

The build cache seems like the obvious solution, but Gradle tasks with no declared outputs are always considered out of date. It seems that in order to create a cacheable task, I need to declare at least one output file whose contents vary based on the input.

To that end I’m considering writing out a small token file, whose contents reflect the inputs, as the sole output. That way, Gradle will consider the task cacheable, and I can avoid unnecessary test runs.

If that’s not true, and I can declare a cacheable task which only has inputs, that’d be great. Equally, if I can declare a non-file output, or derive/create an output automatically, that would work too.

Assuming I have to write the file myself, the question is what should go into it. Initially I thought of creating an MD5 hash of all the input files and writing that out, but that sounds an awful lot like recreating the Gradle build cache key, badly. Ideally I’d write that key out, but it seems from the Gradle build cache key topic that it’s not possible.

Pseudocode for my current approach:

// terrible pseudocode to show the point
task test {
  inputs.dir("src").withPathSensitivity(PathSensitivity.RELATIVE)
  outputs.file(".tested.update-tracker")

  outputs.cacheIf { true }

  doLast {
    exec { /* something to run the tests */ }
    // buildCacheKey isn't real API – it's the value I'd like to write out
    new File(".tested.update-tracker").text = task.buildCacheKey
  }
}

Is there some way I can:

  • calculate the build cache key
  • mark the task as cacheable despite it having no output
  • read more about how other people have solved this problem?

Hi Steve,

the output file does not need to depend on the inputs, so you can just create an empty file as the output.

I wonder if the test execution itself doesn’t create some output. Normally I would say that you should write the test output to the output file.

Another suggestion is to use a custom task class instead of an ad-hoc task. This allows declaring the inputs and outputs via annotations and makes it possible to use nice names for the input properties.
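A minimal sketch of what that could look like on a recent Gradle version – the class name JsTestTask, the npm test command and the marker file location are only placeholders for whatever your setup actually uses:

import javax.inject.Inject

@CacheableTask
abstract class JsTestTask extends DefaultTask {

  @InputDirectory
  @PathSensitive(PathSensitivity.RELATIVE)
  abstract DirectoryProperty getSourceDir()

  @OutputFile
  abstract RegularFileProperty getMarkerFile()

  @Inject
  abstract ExecOperations getExecOperations()

  @TaskAction
  void runTests() {
    execOperations.exec {
      // placeholder test command
      commandLine 'npm', 'test'
    }
    // the contents don't matter much – the file just gives Gradle an output to cache
    markerFile.get().asFile.text = 'tests passed'
  }
}

tasks.register('test', JsTestTask) {
  sourceDir = layout.projectDirectory.dir('src')
  markerFile = layout.buildDirectory.file('tested.marker')
}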

Cheers,
Stefan

Hi, Stefan. Thanks for the quick response!

I don’t think my process generates anything itself - it’s a program that writes to stdout and exits 0 if it succeeds or non-zero if it fails.

Your suggestion of an empty file makes me realise I maybe hadn’t described enough of the context of my problem - I didn’t want to get into the whole thing for one little piece of it, but there’s a bit more to it.

We split our code into several packages; something like

/packages
   /utils
   /feature1
   /feature2

where feature1 and feature2 depend on utils.

So I’m currently working on building the lowest-level utils package with Gradle - which is working nicely. When I wrap feature1 in Gradle, I’m going to need to establish a dependency on utils, so that when I build feature1:test it will run utils:test first.

I think I’ve got my head in Makefiles, where I’d need an output file in order to set up the dependency between projects. Can I just ignore this requirement if I’m doing this in Gradle? I’m guessing I need a multi-project build to set up the dependency graph for these projects?
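To make the question concrete, I imagine a settings.gradle along these lines – the project names just mirror our layout:

include ':packages:utils'
include ':packages:feature1'
include ':packages:feature2'
include ':app'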

Thanks again for the help!

I wonder why you need a dependency on utils from feature1 for tests. Do you actually compile the JavaScript files? If you don’t, each test can run independently, right? I guess for the tests themselves you will have some test files which you want to run and some production files which you want to test. You will need to declare both of those as inputs on your test task, I guess.

I wonder why you need a dependency on utils from feature1 for tests. Do you actually compile the JavaScript files?

So we actually don’t compile the JavaScript - we have a top-level package which is our app, and that brings in all the JavaScript in one large compile process, so that’ll execute last. But there’s no point compiling the whole thing if the tests for the individual packages fail, so we run the package tests first, ideally in parallel, ideally with a cache. Only once we’re confident about tests, lint, security vulnerabilities etc. in the packages would we look to compile the source together into one distributable.

The tests work happily against the loose JavaScript files - that’s one of the differences between a language like JavaScript and something like Kotlin or Java: the ‘compile’ step is only for compression, rather than execution.

So we end up with something like app.build dependsOn package.test
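In Gradle terms I picture that wiring as something like the following in the app’s build script – the project paths and task names are just illustrations of our layout:

// app/build.gradle – hypothetical project paths
tasks.named('build') {
  dependsOn ':packages:utils:test'
  dependsOn ':packages:feature1:test'
  dependsOn ':packages:feature2:test'
}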

So what you want to do is to declare the JavaScript files in utils as inputs to the test task for feature1 as well, right?

I’m going to need to establish a dependency on utils, so that when I build feature1:test it will run utils:test first.

You want to add this dependency since you don’t want to test feature1 if some tests in utils fail? The dependency doesn’t seem necessary to me.
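If the goal is just that feature1’s tests re-run (or come out of the cache) when something in utils changes, you could instead declare the utils sources as an additional input of feature1’s test task – a sketch, where the relative path and property name are only examples:

// packages/feature1/build.gradle – path and property name are examples
tasks.named('test') {
  inputs.dir('../utils/src').withPathSensitivity(PathSensitivity.RELATIVE).withPropertyName('utilsSources')
}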

You could write an XML file, e.g.:

<tests>
   <test name="foo.bar.Test1" result="PASSED" />
   <test name="foo.bar.Test2" result="FAILED" />
   ...
</tests>

In future, other tasks could use this task output to perform some logic (e.g. generating a test report), which can still work even if the task is up-to-date or skipped.
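For example, a sketch of producing such a file from the task action with Groovy’s built-in MarkupBuilder – the report location and the result values are placeholders:

doLast {
  // assumes the same file is also declared via outputs.file so Gradle can cache it
  def reportFile = layout.buildDirectory.file('test-results.xml').get().asFile
  reportFile.parentFile.mkdirs()
  reportFile.withWriter { writer ->
    new groovy.xml.MarkupBuilder(writer).tests {
      test(name: 'foo.bar.Test1', result: 'PASSED')
      test(name: 'foo.bar.Test2', result: 'FAILED')
    }
  }
}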
