To be honest, this request makes absolutely no sense.
One of Gradle's biggest strengths is avoiding unnecessary work.
And if you compile exactly the same sources with the same Java version and the same compiler configuration, why should Gradle redo work it has already done instead of simply reusing the existing result?
By forcing it to do so, the only thing you achieve is that you needlessly lose time.
> When your project build has Gradle property like org.gradle.caching=true and the sources of subprojects are not touched you will get the build product from the cache.
That is not true.
The result will simply be UP-TO-DATE and not taken from cache.
If you change some sources and compile, then revert the change and compile again, the result will be taken from the cache: the inputs changed since the last execution, but a result for the reverted inputs is already present in the cache.
> Surprisingly (what is not a bug I suppose) even if you delete the project workspace and then clone it from remote git repo you will still get the build products(output) from cache.
That is not surprising, but exactly what the build cache is for.
If you did the same task with the same inputs before, you can use the result that is already present.
This works if you have different workspaces with the same sources, it works if you switch back and forth between branches, it works if you change files and change them back, …
There is even a remote build cache that you can enable, so that other machines can also reuse results that are already present; for example, a CI build can fill the remote cache, and every developer who needs to do the same task with the same inputs can simply use the result the CI already built and put into the remote build cache.
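As a sketch of how such a remote cache is wired up (the server URL and the push policy here are assumptions, not part of the question), the configuration in settings.gradle.kts could look like this:

```kotlin
// settings.gradle.kts -- a sketch; the URL is a hypothetical cache server
buildCache {
    remote<HttpBuildCache> {
        url = uri("https://build-cache.example.com/cache/")
        // only let CI push results; developers just pull them
        isPush = System.getenv("CI") != null
    }
}
```

The local cache stays enabled by default, so developers still benefit from their own previous builds in addition to what CI has published.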
So no, this is not a bug, but exactly the build cache feature.
> and does not bother what kind of git orchestration set them up
Of course it does not care; how the inputs came into existence is completely irrelevant. If you need to do the same piece of work with the same inputs, there is no reason to redo the work instead of simply using the already present result.
> we do not expect some subprojects’ classes to be taken from Gradle’s cache when new a repo has just been cloned
Really, this statement is complete nonsense.
It effectively says “I don’t want to use one of the biggest strengths of Gradle but instead waste time and resources needlessly”.
> Let’s assume that devOps do not want to edit Gradle scripts in any ways…
> they do not want to turn off org.gradle.caching=true
> or add outputs.cacheIf(false) to some compileJava tasks
> They do not also want to manually clean the Gradle cache.
Definitely, they shouldn’t.
At most, they should use an init script, which is the tool for customizing a build from the outside, but even that should not be used here.
> Is there a way not to get the outputs from cache for some subprojects … just by using a specific switch in gradle build command-line? (only for the first build they run with the newly cloned repo)
Not for “some subprojects”.
If you want it for “some subprojects”, you probably need to use an init script that does tasks.configureEach { outputs.cacheIf { false } } on those subprojects.
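A minimal sketch of such an init script, assuming the affected subprojects are :app and :lib (both names, and the file name below, are hypothetical):

```kotlin
// disable-cache.init.gradle.kts -- a sketch of an init script that
// switches off build-cache usage for selected subprojects only
allprojects {
    if (path in setOf(":app", ":lib")) { // hypothetical subproject paths
        tasks.configureEach {
            // cacheIf lives on TaskOutputs; the reason string shows up in build scans
            outputs.cacheIf("caching disabled by init script") { false }
        }
    }
}
```

It would then be applied with gradle --init-script disable-cache.init.gradle.kts <task>, keeping the build scripts themselves untouched.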
To do it for all projects in a given build, you can simply say --no-build-cache.
Alternatively, on a freshly cloned project, --rerun-tasks has the same effect.
The latter would also prevent any task from being UP-TO-DATE.
Another alternative would be to use an init script that disables the configured caches which then would have the same effect as --no-build-cache.
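Such an init script could look like this sketch, which disables whatever caches the build has configured:

```kotlin
// no-cache.init.gradle.kts -- a sketch; equivalent in effect to --no-build-cache
settingsEvaluated {
    buildCache {
        // disable the local directory cache
        local { isEnabled = false }
        // disable the remote cache too, if one is configured
        remote?.isEnabled = false
    }
}
```

Unlike --rerun-tasks, this only disables the build cache; tasks whose outputs are still present and whose inputs are unchanged remain UP-TO-DATE.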
But really, none of that should be used, as the only “benefit” you get from it is wasted time and resources.