Multi-version / Multi-platform building for Visual Studio C++/Native chains

What I’ve been thinking about for a while is the ability to support compiling native code for multiple platforms/architectures without requiring the environment to be pre-configured for it, as Gradle currently requires for Visual Studio chains.

The highly repetitive nature of Visual Studio, as a build system, makes me really want to use something like Gradle to alleviate my frustrations.

Two different scenarios I’ve recently had to partake in at work are: 1) having to build Google’s protocol buffers library with 10 total VS version/platform mixtures, when Google only offered 1 mixture to start with; and 2) having to update usage of an internal library to a newer build in about 6 VS projects (mixed platforms) totaling about 30 configurations across them all, which came out to me having to update about 120-150 different locations across the Visual Studio projects (not fun!).

So, right now I usually find myself doing one of two things: 1) maintain a large number of Visual Studio projects/solutions of the same code for each of the different versions - highly repetitive, and it opens up large chances for human error; or 2) use alternative build mechanisms, such as (obscure?) combinations of batch and GNU make.

Though I’m not quite sure how extensive the problems I seem to encounter all the time at work are elsewhere…

With the enhancement, the question of “what mixtures should projects be built with” naturally arises, so the DSL would likely need some change to support specifying the versions:

    apply plugin: 'cpp'

    nativeChains = [
        new NativeChain(compiler: NativeCompiler.VisualStudio, version: VisualStudio.VS_2010, platform: Platform.AMD64),
        new NativeChain(compiler: 'msvs', version: '2013', platform: 'x86'),
        new NativeChain(compiler: 'gcc', version: '4.8', platform: Platform.NATIVE)
    ]

comes to mind as a possibility for how this could manifest (along with some possible conveniences). As for a default for this configuration, I’m leaning towards there not being one and forcing it to be specified, for simplicity’s sake.

As to how this could manifest in an implementation, it would likely involve a couple of top-level concepts.

  1. Detect the platforms that Visual Studio supports, and record the build environment for each platform (a rough sketch of this and the following item appears after the list).
     - Create a small temporary batch script that calls VC/vcvarsall.bat and outputs the resulting environment changes (such as with the ‘set’ command) for recording and later use/replication. These changes would include, but not be limited to, Path, LIBPATH, INCLUDE, and WindowsSdkDir.
     - The platforms I’m aware of VS supports/has supported are ‘x86’, ‘amd64’, ‘ia64’, and ‘arm’, manifesting either as native chains or as cross-prefixed (‘_’) forms such as ‘x86_amd64’, ‘x86_ia64’, and ‘x86_arm’.
     - These environment variables will be necessary for launching later processes such as cl, link, lib, rc, and eventually midl.

  2. With all sorts of version/platform mixes, it does become a potential mess, so a command/process factory of sorts is likely going to be useful in designing this. It would take:
     - a Visual Studio version
     - a platform
     - the process executable to start (I’m not quite sure how ‘ml’ is going to fit, since it changes names between platforms)

     and would return something to build the rest of the process arguments with (a ProcessBuilder or similar), so arguments can just be added as necessary, with the environment and the full path to the command already initialized.
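A rough Groovy sketch of both pieces, with every class/method name purely illustrative:

    // Illustrative only: capture the environment that VC/vcvarsall.bat produces for a
    // platform by chaining it with 'set' and parsing the NAME=VALUE output.
    Map<String, String> captureVsEnvironment(File vsRoot, String platform) {
        def vcvarsall = new File(vsRoot, 'VC/vcvarsall.bat')
        def proc = ['cmd', '/c', "\"${vcvarsall}\" ${platform} && set"].execute()
        def env = [:]
        proc.in.eachLine { line ->
            int eq = line.indexOf('=')
            if (eq > 0) env[line.substring(0, eq)] = line.substring(eq + 1)
        }
        proc.waitFor()
        return env
    }

    // The command/process factory: a ProcessBuilder pre-loaded with the captured
    // environment (Path, INCLUDE, LIB, LIBPATH, ...), ready for arguments to be appended.
    ProcessBuilder toolProcess(Map<String, String> env, String tool) {
        def pb = new ProcessBuilder(tool)   // e.g. 'cl', 'link', 'lib', 'rc'
        pb.environment().clear()
        pb.environment().putAll(env)
        return pb
    }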

These could theoretically be the base primitives for such support, though things like 1) different intermediate object & cache locations for each mixture and 2) different locations for final module/project outputs for each mixture would also need to be considered.
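For the locations part, even something as simple as keying directories on the mixture would keep everything separate (again, purely illustrative):

    // Illustrative layout: one directory per compiler/version/platform mixture,
    // e.g. build/binaries/msvs-2013/x86 or build/binaries/gcc-4.8/amd64.
    def variantDir(String compiler, String version, String platform) {
        "${buildDir}/binaries/${compiler}-${version}/${platform}"
    }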

This framework could, seemingly easily, later be extended to support C# compilation (which we also use at work) with its special ‘AnyCPU’ platform.

All in all, if this is considered, I would love to try to convince my boss to let me work on helping implement the idea, seeing how it would alleviate our build maintenance headaches by making Gradle the all-in-one solution for us.

The new ‘native binary’ support in Gradle is starting to address exactly these sorts of issues, so your thoughts are pretty much on track with where we’re heading.

The first thing I’d suggest is that you check out a recent nightly build of Gradle (http://gradle.org/nightly) or, even better, build from source. You can then check out some of the ‘native-binaries’ samples and see what functionality is already present in Gradle. Currently we support a number of tool chains (GCC, Clang, Visual Studio 2010/2012) targeting a defined set of target platforms (x86, x86_64, ia64). There’s a recent pull request to extend this support to include Visual Studio 2013 on Windows 8.1.
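To give a flavour, a minimal build script in those samples looks roughly like this (incubating syntax, so details will change between releases):

    apply plugin: 'cpp'

    // Sources are picked up from src/main/cpp by convention.
    executables {
        main {}
    }

    // Declare the platforms to target; Gradle builds a variant per platform.
    targetPlatforms {
        x86 {
            architecture 'x86'
        }
        x64 {
            architecture 'x86_64'
        }
    }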

The Visual C++ tool chain already configures the relevant Path, LIBPATH, INCLUDE, etc., and manages the different names/paths for the various tools. Recent changes mean we can now use rc.exe to build Windows resources as well. This is in addition to support for building debug/release variants, supplying custom compiler/linker args, and incremental compilation of C++ code.

It would be great if you could try out Gradle with your current project, and let us know any limitations you encounter. Note that this functionality is still a developer preview and is subject to change, but we’re planning to continue to make the C++ support in Gradle even better.

Hi Daz,

I have been watching the GitHub repository and did notice the pull request for the Visual Studio 2013 recognition, but it’s updating code to look for things like the Windows SDK directory and such, and I don’t really understand why it has to do that.

I did talk to my boss, asking for time to look into this, and he said I might be able to dig into it in a few days, so I haven’t had the ability to start chomping down on what’s there yet.

That pull request makes it seem like Gradle is still working from the assumption that the environment has to be pre-initialized before Gradle is started, and that it has to go do the arduous work of finding what’s what and where…

Hopefully I’ll get the time to start digging in soon and get some more accurate feedback, rather than make statements off of largely incomplete data.

The Windows SDK 8.1 detection part is a different thing (not bound to Visual Studio 2013), it just happens to be in the same pull request. It also has cross-compiler stuff which is not really related to Visual Studio 2013 either.

Anyway that pull request is far from complete, I’m adding more to it right now (such as actually keeping track of the Visual Studio version and capabilities, which might be what you’re thinking of).

One of the big benefits of using Gradle to build with Visual Studio is that you don’t need to configure your environment for the tool chain. With Gradle, you can build using Visual Studio simply by having it available on the PATH (cl.exe+rc.exe), installed into a standard location, or explicitly configured. It’s even possible to build with multiple versions of Visual Studio (2010/2012 and soon 2013) in the same invocation of Gradle. You should try it out!
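For example, explicitly configuring a tool chain looks roughly like this (again incubating syntax, and the install directory shown is just an example):

    // Point Gradle at a specific Visual Studio installation instead of relying
    // on the PATH (the directory below is only an example location).
    toolChains {
        visualCpp(VisualCpp) {
            installDir 'C:/Program Files (x86)/Microsoft Visual Studio 12.0'
        }
    }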

That’s precisely the functionality I’ve been looking forward to seeing!

The pull requests I’ve seen make me a bit worried, though. It seems like Gradle is manually building up the environment for compilation rather than utilizing the environment setup scripts that Visual Studio provides. I’m not sure why this is (or even whether this is an accurate interpretation), but it seems risky not to follow Visual Studio’s own behavior and to manually build up the compilation environment instead, as this allows gaps to arise which may be harder to discern later…

You’re right, we’re not utilising the batch scripts that come with Visual Studio or the Windows SDK. We want to understand and model the execution environment rather than relying on opaque configuration scripts: having that model will enable us to do more sophisticated and powerful things in the future.

I’m pretty confident that over time we can come up with a solution that works as well or better than the batch scripts, with all of their magic properties.

I’m sorry, but I’m not understanding the points you’re trying to make with “understand and model the execution environment” and “enable us to do more sophisticated and powerful things in the future”.

I would say that those environment scripts are an abstraction mechanism that Gradle is deliberately choosing to violate, but for what purpose exactly? The supplied reasoning is currently too vague for me to comprehend completely.

On the first point, the nearest similarity I find in Gradle is the wrapper. Would you want/expect others to go “no, don’t use the Gradle wrapper, since we want to completely figure out what it’s doing ourselves”? What’s the point of the wrapper then?

Similarly, in the GCC paradigm, I would say that the (i)sysroots are the closest concept. Does Gradle perform searches on the file system to find the (i)sysroots and then tell gcc and associated binaries where they are?

On the second point, I don’t see any benefit in not using the VS environment scripts. There’s nothing preventing Gradle from using them as a starting-point environment which can then be further extended for “more sophisticated and powerful things”. This may also help create a clear delineation between what “Visual Studio natively does/supports” and “cool functionality that Gradle can do on top of that”.

Primarily, I don’t understand why Gradle chose this path, and I would just like more clarification on the history of the current path that Gradle has chosen.

Through experience in other domains, we’ve found that there are a host of benefits when Gradle understands exactly the inputs to a particular build process.

One problem with the Visual Studio batch scripts is that they depend on a number of other environment-specific settings. By just using these settings, perhaps the build will work on my machine, but it’s difficult to guarantee that the build will work in exactly the same way on another developer’s machine, or on my CI server.

Another problem with the batch scripts is that they are opaque to Gradle. This means that we don’t know when the inputs have changed, so it’s hard to know when to rebuild. We also don’t know what the inputs are exactly, so it’s hard to know what meta-data should be published with the build outputs.

One of the goals in Gradle is to have less stuff implicit in the environment, and more things explicitly declared. But there’s a balance here. You’re right that one way to look at the Visual Studio tool chain interface is at the level of the batch scripts: we have instead chosen the command-line tools as the primary interface, with the environment explicitly configured. In the future it will be possible to implement a Visual Studio tool chain that uses the batch files if that proves to be a necessary alternative.

(note: I’m not part of Gradleware)

I can see advantages and disadvantages to both methods…

On one hand, even if you use the vcvars*.bat files to set up your environment, you still need to actually know where to look for those files. And then you need to know what properties/environment variables those files set, which might change between versions, so you’re really just moving the problem.

On the other hand, there’s more to getting the compilers working than just getting that information out of the .bat files. Simple example: if I just execute amd64\cl.exe (which is basically the “normal” compiler on my machine), everything works fine. Try to execute amd64_x86\cl.exe? It has a bunch of dependencies that are spread across multiple directories, and those need to be in your path… And that’s really annoying…

Basically, I don’t know. I’ll just do it the way I’m asked to do it :slight_smile:

@Daz > One problem with the Visual Studio batch scripts is that they depend on a number of other environment-specific settings. By just using these settings, perhaps the build will work on my machine, but it’s difficult to guarantee that the build will work in exactly the same way on another developer’s machine, or on my CI server.

Bringing this to the Gradle paradigm: because of an (unknown) fear that the Gradle wrapper might not work on all systems, a custom solution has to be maintained to do what it does instead, adding maintenance for little benefit. At work, we’ve been using those initialization scripts since before .NET, across all of our developers’ individual systems and all of our CI systems. It seems evident that Microsoft has kept the scripts stable across the possibilities.

> Another problem with the batch scripts is that they are opaque to Gradle. This means that we don’t know when the inputs have changed, so it’s hard to know when to rebuild. We also don’t know what the inputs are exactly, so it’s hard to know what meta-data should be published with the build outputs.

This is touching a couple points.

  1. An assumption being made here is that the Visual Studio environment changes regularly. True, it can change, but primarily only when a service pack of Visual Studio gets installed. And even then, it’s the libraries, headers, and command line utilities that change, not those scripts.

And we’ve been neglecting the GCC paradigm a lot in these discussions. GCC has “environment” and “metadata” that change between versions as well, and GCC releases much more often than Visual Studio. Does Gradle set up a build environment from scratch for GCC, keeping track of all the details to see when they change there, or does it assume that it’s set up correctly? (I’ve not looked, so this is not rhetorical.)

  2. “Metadata” for Visual Studio chains for publishing is going to be more affected by:
     - The version of Visual Studio
     - The base target platform of the command line utilities
     - The SP level of Visual Studio → often correlating to needing the SP runtime redistributables to run binaries built with that version
     - The options of the command line utilities, e.g.:
       - /MT(d) vs. /MD(d), determining how/which of the C/C++ runtimes are linked
       - /arch: (and what the compiler defaults it to, which changes over versions), determining the minimum instruction set the code runs on
     - The code itself (because you never know what’s really there), e.g.:
       - handwritten assembly using particular instructions
       - http://msdn.microsoft.com/en-us/library/7f0aews7(v=vs.80).aspx

GCC has similarities and differences to this as well…

The environment variables INCLUDE, LIB, and LIBPATH affect the compiler and linker search directories, but that’s pretty much the extent of the environment’s effect on the command line tools.

Visual Studio (as an IDE/build system) can also use environment variables (in addition to its macros) as fill-in variables for resolving include and library directories and the like.

@mputters > On one hand, even if you use the vcvars*.bat files to set up your environment, you still need to actually know where to look for those files.

Visual Studio pops its installation location into the registry, but avoiding that, you can use the system-level environment variables it creates. “${System.env[VS<version>COMNTOOLS]}../../” gets you to the root of a Visual Studio installation, where <version> is ‘80’, ‘90’, ‘100’, ‘110’, or ‘120’ (basically the major.minor version without the decimal; .NET 2002 is an exception to the standard on a couple of points). From there it’s VC/vcvarsall.bat <platform> to call the script, which forwards on to the platform-specific ones. Don’t go looking for them yourself. There’s a bit of a list to maintain for the different combinations of platforms, and then trying the cross-compilation versions if the native version isn’t there, but I would say that’s simpler than what’s going on now.
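In Groovy terms, that lookup might look something like this (the helper itself is just illustrative; the VS<version>COMNTOOLS variables are the real convention):

    // Illustrative helper: resolve a Visual Studio installation root from the
    // VS<version>COMNTOOLS environment variable, where version is '80'..'120'.
    File vsRoot(String version) {
        def comnTools = System.getenv("VS${version}COMNTOOLS")
        if (!comnTools) return null                  // that VS version isn't installed
        // COMNTOOLS points at Common7/Tools, two levels below the installation root.
        new File(comnTools, '../..').canonicalFile
    }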

> And then you need to know what properties/environment variables those files set, which might change between versions, so you’re really just moving the problem.

“Versions” as in different versions of Visual Studio, or different service packs of the same version? Different versions make absolute sense, since Visual Studio has specific versioned libraries and headers it needs to set up for. You can’t expect to set up a Visual Studio 2012 compiler against 2013 or 2010 libraries/headers and have it work.

> On the other hand, there’s more to getting the compilers working than just getting that information out of the .bat files. Simple example: if I just execute amd64\cl.exe (which is basically the “normal” compiler on my machine), everything works fine. Try to execute amd64_x86\cl.exe? It has a bunch of dependencies that are spread across multiple directories, and those need to be in your path… And that’s really annoying…

I’m not understanding the example being made here: the .bat files set up the environment for compiling for that platform chain; there’s no path mucking needed afterwards. If you want amd64 cross-compilers from x86 systems, then you initialize with the ‘x86_amd64’ platform. It’s understood that the environment can’t compile for platforms other than the one it was initialized for.

In the end, those environment setup scripts do some relatively complex things to be versatile in setting up the compilation environment (such as reading the registry to find the installation locations of all the pieces). So Gradle trying to do that work itself (and the current state of doing so in a much less versatile manner) is not something I see as worth the effort, at least not for the reasons listed so far.


Thanks for the input, @Steven. You obviously have a lot of experience with Visual Studio, and it may turn out that our current approach is inadequate or untenable in the long term.

The current implementation is certainly not deeply embedded in the design; there’s no reason we can’t have a Visual Studio tool chain implementation that uses the batch scripts for configuration.

@Steven

The versions part was more about the SDK than the compiler itself, like the 8.1 SDK splitting the includes into 3 separate directories (well 2 that matter + the RT one).

As for the “On the other hand” part, it was about the downside of not using the .bat, so yes, obviously it’d be simpler with the .bat, that’s the point I was trying to make.

@all

That being said, if we need to switch to this vcvars method + the extra info (/MT, etc), I’ll gladly - almost typed gradly heheh - work on that part as well.

But while we’re on the topic, here is an issue I’ll have at some point as well:
- I have a C++ project (well, really multiple static libs + an executable, so projectS, but a single Gradle build file)
- it builds the same application for Windows, Linux, OS X, iOS, Android and - eventually - Windows Phone
- it also builds small test projects (tiny executables) for Xbox 360, which requires VS2010

So what do you both think should be the nicest way to set this up?

Another parameter that might be worth discussing: how do we handle unicode/ansi (even though it’s a simple /D)? We can’t really just define it by default, but then it may not be very user-friendly to require people to know they need to define it when the IDE does it for them.

Hm, another one: exception handling. By default, _HAS_EXCEPTIONS is #defined as 1, which translates all _TRY_BEGIN and similar macros to actual try/catch. However, when that happens you get a warning, because /EHsc is not passed to the compiler by default. It might be nice to have some standard parameter to enable/disable exception handling for each toolchain.

Yes, exception handling semantics is definitely something we want to include in the Gradle model of your native components/binaries. You can easily add a plugin to do this for all binaries, but if Gradle knows about this then it can make smarter decisions about what variants to link against among other things.
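For example, a plugin could do something along these lines (a sketch against the current incubating DSL; the exact hooks may differ):

    // Sketch only: pass /EHsc to every Visual C++ binary so that the default
    // _HAS_EXCEPTIONS=1 code path compiles without the warning.
    binaries.all {
        if (toolChain in VisualCpp) {
            cppCompiler.args '/EHsc'
        }
    }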

The same may well apply to unicode/ansi: this is an attribute of the source set which we could model.

The unicode/multi-byte/ansi one (_UNICODE, _MBCS, or the lack of either, respectively) affects things that may or may not change the underlying APIs.

Primarily this revolves around the TCHAR type, which gets defined specifically as wchar_t (unicode) or char (multibyte/ansi). The _T/_TEXT macros and the _tcs<x> functions all get defined similarly, and Windows APIs resolve to their A- or W-suffixed versions, but these should all primarily be internal and should not affect externally facing APIs or limit their usage.

But since there is the chance it can affect things depending on the code, it should probably be modeled and tracked.

Similarly, /Zc:wchar_t(-) affects whether wchar_t is a native type or a typedef to an unsigned short. This discrepancy makes C++ name decorations incompatible, but C lacks name decoration, so it’s not an issue there…

There’s quite a large number of flags/options for Visual Studio that drive behavior which consuming code needs to be aware of. I’ve not looked at Gradle’s IDE models for Visual Studio yet, but it seems like these are probably starting to pop up there too.

Because there is quite a large discrepancy between Visual Studio (and derived) and GCC (and derived) in compiler options and “metadata that needs tracking”, I’m wondering if it would be more convenient to have DSL options/configurations that are specific to each of them…

But, given the drastic differences between Visual Studio and GCC, if you stick to VS’isms or GCC’isms like the things mentioned above in exposed APIs, there will usually be lots of issues when trying to (start) using the other compiler variety…

The way I see it, some of those settings would be common to every toolchain (and properly translated) when that makes sense, or specific to a particular toolchain when it doesn’t (as is already the case for the Windows SDK directory, for example).

The wchar_t example could handle /Zc:wchar_t(-) for VisualCpp and -fshort-wchar for Gcc (even though they don’t do exactly the same thing).

You could also imagine having some rather abstract optimization level (say, 0 to 5) and mapping it for each compiler, even though some might only have 2 or 3 levels available.

Debugging information is another one that would nicely map for all toolchains.

But another point that may actually be more important than the actual mapping is the defaults we set. Right now, Visual Studio defaults to having wchar_t be its own type and (afair) _UNICODE builds. It used to be the other way around, so which one should Gradle use as the default? (I’d rather have it use the latest defaults, personally, but then I haven’t done an ANSI build in 500 years)

Yes, once we’ve modelled a few more key concepts there will be a need to review the default behaviour of Gradle. And it would be great if we can (as much as possible) make the default behaviour consistent across tool chains.

One nice thing about Gradle is that you can already easily extend the model with your own custom attributes, configure these attributes in your build, and then process the attributes in a custom plugin. For example, without modifying Gradle core you could write a plugin that would allow:

    apply plugin: 'my-optimisation-levels'

    buildTypes {
        debug {
            optimisation 0
        }
        release {
            optimisation 3
        }
    }

Your plugin would add the ‘optimisation’ attribute (with a default value) and add a rule to inspect the ‘optimisation’ attribute of every binary and apply the correct compiler arguments (different for different tool chains). Adding the ‘optimisation’ attribute can be as simple as using “buildTypes.all { it.ext.optimisation = 2 }”, or better still, using the fact that these objects are ExtensionAware.
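Putting that together, the plugin body could look roughly like this (a sketch only; the buildType/toolChain hooks shown are approximate, not a fixed API):

    // Give every build type a default 'optimisation' level, then map that
    // abstract level to tool-chain-specific compiler arguments.
    buildTypes.all { it.ext.optimisation = 2 }

    binaries.all { binary ->
        def level = binary.buildType.optimisation
        if (binary.toolChain in VisualCpp) {
            binary.cppCompiler.args(['/Od', '/O1', '/O2'][Math.min(level, 2)])
        } else {
            binary.cppCompiler.args("-O${Math.min(level, 3)}")   // GCC/Clang style
        }
    }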

@mputters

Yes, for concepts that are shared between toolchains they should be common in the DSL. But as we’ve mentioned already, there are things that aren’t common.

One major example that hasn’t been mentioned yet is MSVS’s common language runtime (/clr[:…]) option, which completely changes the entire game by making the C++ code more similar in nature to C# (Managed C++), requiring the .NET framework runtime, being able to reference C# libraries, and so on. (Given the complexity of this one, it may be better to just not support it at all until Gradle supports C#.)

Since we just mentioned optimization levels (a more minor example, though):
- MSVS has a max of 2 in most versions (I’ve not used 2013 all that much yet, so I’m not sure whether they remained consistent there)
- Intel’s compiler (even on Windows) has a max of 3
- GCC goes up to 3, but usually also accepts arbitrarily larger values.

Actually, speaking of CLR, I don’t think it should be bound to the toolchain, but it’ll be simpler to evaluate that when there is support for C# or other .NET stuff.

A better example (I think) would be the Windows SDK: right now it’s part of VisualCppToolchain, so if I want to build something with MinGW or another toolchain, I have to pass the arguments manually.

So instead of this:

targetPlatforms {
    windows {
        operatingSystem 'windows'
        architecture 'x86_64'
    }
}

Why not:

targetPlatforms {
    windows(Win32) {
        sdk 'version or path to sdk'
        unicode true
    }
    android(Android) {
        ndk 'version or path to ndk'
        sdk level: 17, minimum: 11
        permission 'permission1', 'permission2', ...
    }
    ios(iOS) {
        sdk '7.1'
        frameworks 'framework1', 'framework2' // ok, frameworks could maybe be special dependencies
    }
}

(that’s just random properties to give a general idea)