Memory Leak when applying (transitive) settings?

I have a project that includes other projects in a flat layout.
Many of those subprojects transitively include further projects, but no more than about 100 projects in total should ever be involved in a single build.
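
To make the layout concrete, here is roughly how one of the settings files looks (a simplified sketch with placeholder module names, the real files list more projects): each settings.gradle includes its flat-layout siblings and then applies the settings.gradle of the projects it depends on, which is what I mean by "transitive" above.

// settings.gradle of one project (simplified, hypothetical names)
includeFlat 'moduleA', 'moduleB'

// pull in the settings of the projects this one depends on;
// each of those files does the same for its own dependencies
apply from: '../moduleA/settings.gradle'
apply from: '../moduleB/settings.gradle'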

Now I keep running into an issue where the build just hangs, eats all my memory, then eats all my CPU (on GC), and finally dies because it hits the GC overhead limit.

I have tested this by giving Gradle first 1g, then 2g, 4g and 10g of heap.
When I finally gave Gradle 20g it worked (after consuming my whole RAM and taking about 1000% more time than comparable builds), but I can't help thinking this is not intended behaviour, since it only happens with a handful of projects whose layouts are pretty much the same as the others'.
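
For reference, the heap sizes above were set roughly like this in gradle.properties (only the -Xmx value changed between runs):

# gradle.properties
org.gradle.jvmargs=-Xmx20g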

(I am on Windows 10.)
So far I have tried upgrading/reinstalling Gradle, restarting my computer, and running the build as admin.

I have also run the build with the --debug flag, and this looks like the likely culprit:

11:34:27.237 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationExecutor] Build operation 'Apply script settings.gradle to settings 'redacted'' started
11:34:27.239 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationExecutor] Build operation 'Apply script settings.gradle to settings 'redacted'' started
11:34:27.239 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationExecutor] Completing Build operation 'Apply script settings.gradle to settings 'redacted''
11:34:27.239 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationExecutor] Build operation 'Apply script settings.gradle to settings 'redacted'' completed

Gradle prints pages and pages of this when it tries to build.

Does anyone know a fix, or at least an explanation, for this behaviour?

I really don't think I have circular dependencies/references in my projects, since the Ant-based solution we've been using has been working just fine, and I would expect Gradle to correctly detect and report a cycle anyway (as it has in the past).