I created a discussion topic with a similar title in the help section but it did not receive any attention. I am reiterating the issues I have found here to seek advice and suggestions from the Gradle team.
I have created a native test plugin using Boost Test for a large C++ project that I am porting to Gradle from CMake. The existing Boost Test unit tests use the one-test-executable-per-test-suite option, which is terribly inefficient yet necessary for reasons I will not go into here. Using the existing native test plugins as a model, I found that the code for the library under test is compiled and linked into the unit test binary, instead of the test binary having a "uses" dependency on the library under test. Aside from this being an invalid test in my opinion, since it exercises not the compiled library but a simulated binary with the library code embedded, it causes the compilation to happen as many times as there are test suites.
In this project there are over 1,300 test executables, testing as many as 20 separate dynamic libraries. Compiling and linking these is bad enough for the unit test source code alone; adding the library source code to the mix makes build times completely unacceptable. On top of the recompilation issue, the install task behavior is multiplied the same way: if installed, these 1,300 test executables would require 1,300 copies of the library dependencies, which is not a sustainable build model in my opinion.
I addressed these issues in my Boost Test plugin by extracting the relevant code from NativeBinariesTestPlugin and reworking it so that the library under test becomes a library dependency of the generated test components. I also generate one test component per source file, to satisfy Boost's unfortunate executable-per-test-suite mode of operation. Finally, I created a new task that constructs a "test-wrapper" script for each test executable; the script sets LD_LIBRARY_PATH (or PATH on Windows) so that all the library dependencies are accessible at runtime, eliminating the need to "install" the test executable. This has proven successful and gives me high confidence that the final product is being tested, not a simulated binary with the compiled code embedded.
In addition to solving the library dependency problem, I made the plugin create test executables for both shared and static library variants. I managed that by giving each executable a separate type string containing the keyword "static" or "shared".
I would like to contribute all this work back to core Gradle if possible; I still have to discuss that possibility with my employer. I hope we can agree that some of the ideas presented here are valuable, and perhaps adjust the current CUnit and GoogleTest plugins to take similar approaches.
Hi Alexander,
Yep, recompiling all of the production sources into a test suite was a bit of a hack to get things working: I didn't have any experience with CUnit at the time and did the simplest thing that worked. It would be great to fix this so that we can simply link to the library under test.
Are you able to share your solution via a pull request? Or privately send a diff against master? Even better if you can remove the "separate test suite per test source file" functionality to reduce the scope of the change.
Thanks for your interest and your offer to contribute.
Daz
I can share my solution in a pull request. I will have to apply specific parts of it to the current CUnit and GoogleTest plugins; most of it fits the common code in NativeTestSuites.java.
For this change I created an alternate run script, modeled after the script created by InstallExecutable.groovy, in a new task called CreateTestWrapper.groovy. It provides the necessary LD_LIBRARY_PATH or DYLD_LIBRARY_PATH value, computed from the information extracted from the dependencies.
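In rough outline the task does something like this; the sketch below is simplified from my actual implementation, the property names are illustrative, and it shows only the POSIX branch (the Windows variant writes a .bat that sets PATH instead):

import org.gradle.api.DefaultTask
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.OutputFile
import org.gradle.api.tasks.TaskAction

class CreateTestWrapper extends DefaultTask {
    @Input List<String> libraryDirs = []   // locations of all library dependencies
    @Input String testExecutable           // path to the linked test binary
    @OutputFile File wrapperScript

    @TaskAction
    void generate() {
        // Build a multi-path LD_LIBRARY_PATH from the unique library locations
        // instead of copying the libraries next to each test executable.
        def searchPath = libraryDirs.unique().join(File.pathSeparator)
        wrapperScript.text = [
            '#!/bin/sh',
            "export LD_LIBRARY_PATH=\"${searchPath}\${LD_LIBRARY_PATH:+:\$LD_LIBRARY_PATH}\"",
            "exec \"${testExecutable}\" \"\$@\"",
            ''
        ].join('\n')
        wrapperScript.setExecutable(true)
    }
}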
I actually solved another issue you had left in a TODO:DAZ comment in the code.
public void execute(final NativeTestSuiteSpec testSuite) {
    for (final NativeBinarySpec testedBinary : testedBinariesOf(testSuite)) {
        if (testedBinary instanceof SharedLibraryBinary) {
            // TODO:DAZ For now, we only create test suites for static library variants
            continue;
        }
        createNativeTestSuiteBinary(testSuite, testSuiteBinaryClass, typeString, testedBinary, buildDir, serviceRegistry);
    }
}
For this I changed the typeString to include SharedLib or StaticLib, making it possible for both types of executables to coexist: one testing the shared library and one testing the static library.
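In sketch form the loop above becomes something like the following; the variant strings here are illustrative rather than my exact code:

public void execute(final NativeTestSuiteSpec testSuite) {
    for (final NativeBinarySpec testedBinary : testedBinariesOf(testSuite)) {
        // Instead of skipping shared-library variants, derive a distinct
        // type string so the shared and static test binaries can coexist.
        String variantTypeString = (testedBinary instanceof SharedLibraryBinary
                ? "SharedLib" : "StaticLib") + typeString;
        createNativeTestSuiteBinary(testSuite, testSuiteBinaryClass, variantTypeString,
                testedBinary, buildDir, serviceRegistry);
    }
}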
I can adapt these two solutions from my plugin code to the current 2.10 source tree, which is what I am using now, and make a pull request against that.
I think we should be using the existing InstallExecutable task for this purpose. When testing a shared library, this means that the set of runtime dependencies will need to include the tested library, so that this code configures the installation libs correctly.
I presume you're already doing something for linking the test suite executable, so that the correct link-time files are provided here.
It would be better to keep these two changes separate, I think. First, create a test suite by linking to the existing library rather than recompiling the sources; then look at creating test suites for both static and shared libraries.
Thanks for your interest. I look forward to seeing this contribution.
In my solution I did indeed use this code to identify the correct libs that should be on the LD_LIBRARY_PATH. It is similar to the install task's approach of copying the libs into one location and pointing LD_LIBRARY_PATH at it, except that I build a multi-path LD_LIBRARY_PATH from the unique locations of all the libs. The difference is that I did not want N installed copies of the libs, one per test suite executable: as I indicated, our legacy project has upwards of 1,300 test suite executables, and creating as many installed copies of the dependency libs is untenable.
Since you prefer not to introduce the one-executable-per-test-suite option in the first set of commits, I will adjust my solution accordingly.
I will produce separate pull requests: the first addressing linking with the shared or static library, as the case may be; the second introducing the option to build test suites for both static and shared variants side by side, using the typeString to distinguish the variants.
We can further discuss whether the single-executable-per-test-suite mode is an option we could add to the DSL, perhaps to enable using the test frameworks in both modes. My team expressed a preference for one executable per test suite, to isolate side effects between units under test. Perhaps there is a way to express this in the DSL for native tests, although the current implementations work entirely by convention and there is no DSL to speak of; a hypothetical form is sketched below.
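Purely as a strawman, the DSL might look something like this; BoostTestTestSuiteSpec is a type from my plugin, and the executablePerTestSuite flag is entirely hypothetical:

model {
    testSuites {
        mainTest(BoostTestTestSuiteSpec) {
            testing $.components.main
            // Hypothetical option: generate one test executable per test
            // suite source file instead of a single combined executable.
            executablePerTestSuite = true
        }
    }
}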
Thanks for getting the ball rolling on this. I have just finished integrating GoogleTest into my build and have run into the same issue described here. While I do not have 1,300 test suite executables, I do have a moderately sized project with multiple native components, all of which need to be tested. The build time jumped from just under 5 minutes to about 17 minutes (on average) simply with the addition of the extra test compile tasks, and I don't currently compile anything more than a few placeholder tests. Most of the increase is attributable to the recompilation of all the component sources under test.
Reading the thread, I do have a few comments based on my experience:
I think it would be most flexible to retain the dedicated install task. For example, one might wish to cross-compile for a mobile or embedded device; the install task lets you specify exactly how to install the tests on your target device, and the run task lets you do the same when it is time to actually run the tests. Alex, it seems that for your purposes you can simply disable the install tasks and update the run tasks to point at the executables with the proper load paths set, roughly as sketched below.
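Something along these lines should work, using the stock task types (the wrapper-script location is hypothetical, and I am assuming the run task exposes the executable property of AbstractExecTask):

// Take the install tasks out of the normal flow...
tasks.withType(InstallExecutable) { enabled = false }

// ...and point each run task at a pre-generated wrapper script instead.
tasks.withType(RunTestExecutable) {
    executable = file("$buildDir/test-wrappers/${name}.sh")
}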
The recompilation, beyond being a performance issue, also poses a build configuration problem. Because everything is compiled twice, you must pass the same arguments to the compiler twice, which manifests as repeated configuration in build scripts. For example, if I have a macro that should only be defined when compiling source for a library, it must also be defined when compiling the tests, even though it is unrelated to the test code; see the sketch below.
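For example, with a hypothetical library mylib under the 2.x model DSL, the duplication looks like this:

model {
    components {
        mylib(NativeLibrarySpec) {
            binaries.all {
                // Needed when compiling the library sources...
                cppCompiler.define "MYLIB_BUILDING"
            }
        }
    }
    testSuites {
        mylibTest(GoogleTestTestSuiteSpec) {
            testing $.components.mylib
            binaries.all {
                // ...and repeated here only because the library sources are
                // recompiled into the test binary.
                cppCompiler.define "MYLIB_BUILDING"
            }
        }
    }
}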
I know work has been done to allow you to specify for which libraries (and platforms?) test suites should be created (google-test-test-suites). But I haven't found a good way to only run tests on my host machine when executing gradle build (of course you can always select exactly the task you want to execute). Say I have enabled my project to compile for my host machine, among others: what should the default behavior be? Should Gradle only execute tests on the host machine, or should it try to run the tests for all platforms? There are solid arguments for both sides, and the native plugins are already slightly opinionated about how platform selection should work. It would be nice to imagine an API by which you could configure which tests run under any given environment. Obviously the ideal is to run all tests on all platforms during a build, but that is often the responsibility of an automated build environment; even then it is sometimes impractical, and simply executing the tests on the current (or one) platform is sufficient (barring any machine-instruction-level bugs). The way I accomplish this now involves a combination of disabling test tasks for platforms on which my build has not been taught to execute tests, and using the current platform to select compiler tooling (am I building on my OS X machine or my Ubuntu builder, i.e. Clang vs. GCC?); a rough version is sketched below.
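The blunt version of what I do now looks roughly like this; the host-OS name matching is deliberately crude, and reaching the run task through the binary's task collection is an assumption on my part:

model {
    binaries {
        withType(GoogleTestTestSuiteBinarySpec) { binary ->
            // Only run test binaries whose target OS matches the machine
            // performing the build; everything else still compiles.
            def hostOs = System.getProperty('os.name').toLowerCase()
            def targetOs = binary.targetPlatform.operatingSystem.name
            if (!hostOs.contains(targetOs)) {
                binary.tasks.withType(RunTestExecutable) { it.enabled = false }
            }
        }
    }
}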
I'll try to find the pull request, but I'm happy to help in any way I can. I'd love to help with any effort to polish up the native test plugins (=
Unfortunately I have been unable to port my working changes to master, so you will not find a pull request yet; the day job demands too much of my time to do the necessary work! I am fully committed to making this change, since I would rather see it out there than limited to our internal builds. Also, maintaining this and keeping it compatible with the fast-moving Gradle versions is not something I want to do long term!
I am already doing what you suggested in point #1. I leave the install task alone and redirect the RunTest task executable to my own wrapper script that does a few more things for Boost Test that I found desirable.
Completely agree with point #2. It works so much better if this strange recompile is avoided.
As for #3, there was a change in 2.12 that was discussed in another thread and resulted in a workaround. I found that for my needs using the binary.buildable attribute was sufficient, but I have not attempted cross-compilation. I believe we do need some mechanism to declare a binary's compatibility with the host platform and automatically skip running cross-compiled test binaries. Ideally we would have the option to package the workspace of the cross-compiled binaries and ship it to a dedicated slave that could then execute the unit tests; given Gradle's ability to precisely detect up-to-date binaries, it may well work out that the tests are the only work executed on the target host.
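For reference, my buildable check is essentially the same hook as the sketch above, just keyed off the attribute (again assuming the run task is reachable through the binary's tasks, and using the GoogleTest type for illustration since my suite type is plugin-specific):

model {
    binaries {
        withType(GoogleTestTestSuiteBinarySpec) { binary ->
            if (!binary.buildable) {
                // Cross-compiled or otherwise unbuildable here: skip running.
                binary.tasks.withType(RunTestExecutable) { it.enabled = false }
            }
        }
    }
}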
I think we were both referencing this thread: https://discuss.gradle.org/t/gradle-2-5-google-test-plugin-tries-to-compile-tests-for-incorrect-platform, right? That's definitely a step in the right direction, but not quite a universal workaround. I definitely agree about the ability to declare a binary's compatibility with a host build environment (although it's worth pointing out that the build environment is not always the same as the host platform: you might cross-compile and execute on an attached device). For example, my setup can actually build all binaries for all platforms, so they're all buildable (and indeed I wish to compile all test binaries, to reduce the chance that someone writes a test that doesn't work with a specific set of platform headers), but I simply wish to only execute the tests for "runnable" platforms. As I mentioned, I'll certainly want to configure the build to be able to execute all tests on all platforms all the time (a perfect thing for a CI machine to do), but in the individual dev workflow, executing the tests on the host machine is sufficient.
Shipping the entire workspace, build cache included, off to a different builder is a really cool idea! I hadn't thought of that. Interesting.
As I said, I'm happy to help. Perhaps we should start a discussion on the dev-list? We might also want to revise the current native testing design doc, or create a new one...
I am open to collaborating with you on this, David. It is a great opportunity to have a significant impact on the future of this tool, and I am 100% in. I like the idea of discussing this on the dev-list and getting more direct feedback from the core team, so we keep the implementation in line with the direction of the project.
Apologies for resurrecting an old thread, but I was wondering if there'd been any progress on anything similar to #2?
I was thinking of raising a GitHub issue, since there doesn't seem to be anything covering it, but I wasn't sure how best to define the outstanding work.