Currently we build an Android app which uses a bunch of modules. We have a
main-module (com.android.application) and multiple library modules (com.android.library), like
login. The main-module declares all of these library modules as dependencies. Furthermore, it may happen that library modules have other modules as dependencies as well.
Gradle introduced compileOnly quite some time ago. One use case for
compileOnly (according to their blog) is:

> Dependencies whose API is required at compile time but whose implementation is to be provided by a consuming library, application or runtime environment.
Back to the project.
Dependencies which are used in the
main-module and in a library (like RxJava) are declared as follows:
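A minimal sketch of such a declaration in a library module (the RxJava coordinates and version here are illustrative, not taken from the original project):

```kotlin
// library module build.gradle.kts (sketch; coordinates/versions are illustrative)
dependencies {
    // compile against RxJava, but expect the consuming main-module
    // to provide it at runtime
    compileOnly("io.reactivex.rxjava2:rxjava:2.1.9")
}
```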
We have decided to use
compileOnly here because of the statement from the blog. We use
compileOnly for all dependencies where we are sure that the
main-module needs them as well. Otherwise we use - of course -
api (depending on the dependency).
Anyway. While doing so we spotted some annoying behavior. For testing inside the libraries (like
login) we obviously have to declare all the
compileOnly dependencies again for the test configuration (
testImplementation), because
compileOnly dependencies aren’t available at runtime.
This leads to duplicated declarations, which looks really odd.
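A sketch of what those duplicated entries end up looking like (the coordinates are illustrative; the duplication is the point):

```kotlin
// library module build.gradle.kts (sketch)
dependencies {
    // needed at compile time; provided by main-module at runtime
    compileOnly("io.reactivex.rxjava2:rxjava:2.1.9")
    // the same dependency again, because compileOnly is invisible
    // to the test runtime classpath
    testImplementation("io.reactivex.rxjava2:rxjava:2.1.9")
}
```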
To my question:
Is it common/recommended (by Gradle?!) to use such a configuration, and do “we have to live” with the “duplicated entries”? Or should we simply “fix” that by using
implementation instead of
compileOnly in our library modules? If so, what about “performance” and the like? Shouldn’t it take Gradle some time to check all the
implementation dependencies and decide which one to use? Isn’t it faster to have
compileOnly here? And what about “correctness”? From my point of view,
compileOnly is exactly designed for such use cases and should be used…?
@eriwen We need to revisit that blog post.
compileOnly is only meant for exactly what it says: things you only need at compile time. If your libraries need that dependency to work at runtime too, it should be either
api or
implementation, depending on whether it is part of the library’s API.
compileOnly is vastly overused in my opinion. One of the few valid use cases I can think of is annotation libraries that are only needed for compile-time checks. It can also be used as a workaround for our current lack of “optional features”, which will soon be rectified.
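For instance, the annotation-library case might look like this (a sketch; the artifact is just one example of annotations used only for compile-time checks):

```kotlin
// build.gradle.kts (sketch)
dependencies {
    // annotations consumed only by the compiler / static analysis,
    // never referenced at runtime, so compileOnly is appropriate
    compileOnly("org.jetbrains:annotations:13.0")
}
```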
The “things provided by someone else” use case is modelling things the wrong way round. The library should not make assumptions about how its dependencies will be provided in the final deployment. That’s up to the consumer of the library. Ironically this is the use case that most people use it for, because that’s what you do in Maven-land with the
<provided> scope (which is an equally bad model). We need some best practices documentation there for people coming from Maven.
@StefMa to summarize: Use
implementation as applicable.
@eriwen I think we need to explain why we discourage making assumptions about “Who will provide a dependency at runtime”, because a lot of people come from Maven and have been encouraged by it to do that using the
<provided> scope and they look for an equivalent in Gradle.
We should also bring the blog post in line with our user guide, as it currently over-recommends
compileOnly for use cases that should just be normal
implementation dependencies.
Understood. Thank you @st_oehme.
Then I’ll change everything to
implementation.
I’m totally with you that it is a little bit confusing.
The reason why we did it that way (in other library projects) was that we didn’t want to depend on specific versions, which may lead to breaks.
A perfect example is
kotlin-stdlib-jdk8 together with
kotlin-reflect.
Imagine that a library uses both in version
1.2.30. An application uses this library but overrides only
kotlin-stdlib-jdk8 with version
1.2.40. Then only that dependency will be updated, but not the
kotlin-reflect artifact, which may lead to issues:
```
| +--- org.jetbrains.kotlin:kotlin-stdlib-jdk7:1.2.30 -> 1.2.40
| |    \--- org.jetbrains.kotlin:kotlin-stdlib:1.2.40
| |         \--- org.jetbrains:annotations:13.0
| \--- org.jetbrains.kotlin:kotlin-reflect:1.2.30
|      \--- org.jetbrains.kotlin:kotlin-stdlib:1.2.30 -> 1.2.40 (*)
```
To avoid depending on a version, we used
compileOnly and forced the application to define these dependencies (as
implementation). Of course this can break things on our side as well (because our code doesn’t reflect changes from updated versions). But normally such breaking changes are deprecated first, and we - as library developers - have some time to update.
Just to give you a deeper understanding of why I (or maybe some other library devs) use
compileOnly this way.
This would be better solved with an alignment rule, something the dependency management team is working on.
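Until such alignment rules land, one possible workaround on the consumer side is a resolution strategy that forces every artifact of the group onto one version (a sketch; the target version here is illustrative):

```kotlin
// consumer build.gradle.kts (sketch): keep all Kotlin artifacts aligned
configurations.all {
    resolutionStrategy.eachDependency {
        if (requested.group == "org.jetbrains.kotlin") {
            // pin every org.jetbrains.kotlin artifact to the same version
            useVersion("1.2.40") // illustrative target version
        }
    }
}
```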
I’d strongly discourage using
compileOnly for this.
A NoClassDefFoundError should not be the standard contract for library consumers.
It looks like
implementation is broken.
When a Maven project depends on an artifact built with Gradle, the final artifact will contain transitive dependencies even though they are clearly marked
implementation.
The only way I’ve found to exclude transitive dependencies from the final artifact (a fat jar in this case) is marking them
compileOnly.
If there’s an explanation for this I’d love to know why.
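If I understand the publishing model correctly (this mapping is my own understanding, not stated in the thread), that behavior follows from how the java-library plugin maps Gradle configurations to Maven scopes in the published POM:

```kotlin
// build.gradle.kts (sketch): rough mapping of configurations to published POM scopes
// api            -> compile scope  (compile + runtime classpath of Maven consumers)
// implementation -> runtime scope  (runtime classpath, so still bundled into fat jars)
// compileOnly    -> not published  (which is why it "excludes" the dependency)
plugins {
    `java-library`
}
dependencies {
    api("io.reactivex.rxjava2:rxjava:2.1.9")          // illustrative
    implementation("org.jetbrains:annotations:13.0")  // illustrative
}
```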
I guess the reason why it’s overly used as a replacement for the Maven
provided scope is that it was advocated as the answer to a 7-year-old issue requesting a
providedCompile configuration in the java plugin (as exists in the war plugin): https://issues.gradle.org/browse/GRADLE-784
While it may seem reasonable from an academic point of view not to make assumptions about the runtime, it has been an accepted and lived pattern for nearly 20 years in J2EE applications - and Gradle never explained their vision of how to solve that, or why it exists for
war projects and not for any others.