Notes

(0035871) Ben Boeckel, 2014-05-13 11:38
This is because Ninja does its own scanning by relying on GCC's depfile output, while the Makefile generators use cmDependsC to scan for dependencies. Telling the compiler that the header is SYSTEM makes it omit the header from the depfile (if you change an actual system header, e.g., stdint.h, you won't get a recompile with Make either), and Ninja diligently honors that. The difference is that CMake's own dependency scanner sees that the header is under the source tree and adds it anyway.
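The setup under discussion can be sketched as a minimal CMakeLists.txt fragment; the target and directory names here are illustrative, not taken from this report:

```cmake
# Hypothetical project layout: an in-tree "vendor" directory whose headers
# are marked SYSTEM. All names are made up for illustration.
add_library(mylib main.c)
target_include_directories(mylib SYSTEM PRIVATE ${CMAKE_SOURCE_DIR}/vendor)

# With the Ninja generator, gcc's -MMD strips vendor/ headers from the
# depfile, so editing one does not trigger a recompile. The Makefile
# generator's own scanner still records them because they live in-tree.
```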
(0035877) Pawel Sikora, 2014-05-14 05:57
(0035879) Ben Boeckel, 2014-05-14 11:02
It is, in the sense that Ninja is not the same as Make, but the two are not 100% interchangeable anyway. System headers are stripped from the dependency list by the compiler under Ninja versus by a CMake-custom tool under Make. Either the compiler needs a way to say "force dependencies on files under /path/to/project", or CMake needs to use its own mechanism with Ninja, which will have issues since Ninja supports only a subset of the depfile format (only one dep rule per depfile, and it must match the output).
One workaround would be to add the dependency explicitly, but if it really is a system header, should it be changing all that often? Marking it as SYSTEM and still editing it regularly enough for this to be a problem seems contradictory…
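The explicit-dependency workaround mentioned above could look like this sketch using CMake's OBJECT_DEPENDS source property; the file names are hypothetical:

```cmake
# Force a recompile of main.c whenever the SYSTEM-included header changes,
# bypassing the stripped depfile. Paths are illustrative only.
set_source_files_properties(main.c PROPERTIES
  OBJECT_DEPENDS "${CMAKE_SOURCE_DIR}/vendor/special.h")
```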
(0035880) Pawel Sikora, 2014-05-14 11:24
System headers *can* change in various ways and should be included in deps:

1) You can upgrade the compiler (e.g. 4.8.x -> 4.8.x+1 or 4.9.y), and libstdc++ template/header changes should force a rebuild.
2) You can switch to another version of the cross-compiler by switching branches in the current working copy of the project; a rebuild may be required.
3) You can upgrade 3rd-party libs (e.g. the Boost version) in your project (included as SYSTEM), and this should force a rebuild everywhere they are used.

Project branch switching and 3rd-party libs can result in frequent system header changes. In all these scenarios the current Ninja deps tracking model may lead to wrong binary code (missed recompilation).

By the way: I've changed '-MMD' to '-MD' in Modules/Compiler/GNU.cmake as a local workaround in our project repo.
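Assuming the CMAKE_DEPFILE_FLAGS_<LANG> shape used by Modules/Compiler/GNU.cmake of that era, the local workaround described above might alternatively be applied from the project side; this is a sketch, not verified against every CMake version:

```cmake
# Override the depfile flags so the compiler also emits system headers
# (-MD instead of -MMD), rather than editing GNU.cmake in place.
foreach(lang C CXX)
  set(CMAKE_DEPFILE_FLAGS_${lang} "-MD -MT <OBJECT> -MF <DEPFILE>")
endforeach()
```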
(0035881) Ben Boeckel, 2014-05-14 11:29
So what you really want is for system headers to be included in dependencies. That is fine, but it is not a Ninja-specific problem (upgrading the compiler or the system Boost won't trigger recompiles with Make either). Maybe excluding them should be an option. Brad, what is the rationale for excluding system headers currently?
(0035882) Brad King, 2014-05-14 14:17
Re 0014914:0035881: I do not recall whether there was any explicit decision on whether system headers should be included in dependencies. Some people want them; some people do not.
The Makefile generator's dependency scanner does not do full preprocessing. It uses a heuristic approach that works well in most use cases and does not depend on the compiler having a feature like gcc -MM. It ignores dependencies on headers it cannot find in the list of include directories the scanner is given. The set of include directories is computed by cmLocalGenerator::GetIncludeDirectories:
http://cmake.org/gitweb?p=cmake.git;a=blob;f=Source/cmLocalGenerator.h;hb=v3.0.0-rc5#l220
Note that the stripImplicitInclDirs argument defaults to "true", and that the call site that hands directories to the dependency scanner:
http://cmake.org/gitweb?p=cmake.git;a=blob;f=Source/cmMakefileTargetGenerator.cxx;hb=v3.0.0-rc5#l1057
does not change it to "false". Therefore implicit directories known to the compiler are excluded from dependency scanning.
(0040668) Jussi Judin, 2016-03-13 08:18
(0040679) Brad King, 2016-03-14 11:48
Re 0014914:0040668: Thanks. I think we should certainly provide a way to use -MD instead of -MMD, at least optionally. However, while some people want to include the system dependencies for full build correctness, others want to exclude them for speed (and are willing to rebuild from scratch on system upgrades). We could switch the CMAKE_DEPFILE_FLAGS_ content based on a variable like CMAKE_NINJA_DEPEND_NO_SYSTEM, and then use -MD by default to ensure correctness by default.
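A sketch of the switch proposed above; note that neither CMAKE_NINJA_DEPEND_NO_SYSTEM nor this logic exists in CMake as of this note, and the flag strings merely mirror the shape used in Modules/Compiler/GNU.cmake:

```cmake
# Proposed behavior: -MD (system headers in deps) by default for
# correctness; opt out for speed with CMAKE_NINJA_DEPEND_NO_SYSTEM.
foreach(lang C CXX)
  if(CMAKE_NINJA_DEPEND_NO_SYSTEM)
    set(CMAKE_DEPFILE_FLAGS_${lang} "-MMD -MT <OBJECT> -MF <DEPFILE>")
  else()
    set(CMAKE_DEPFILE_FLAGS_${lang} "-MD -MT <OBJECT> -MF <DEPFILE>")
  endif()
endforeach()
```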
(0040684) Jussi Judin, 2016-03-14 15:19
In my benchmarks, speed is not a concern whether -MMD or -MD is in use. A full build of over 2000 Ninja targets, where each target easily has dozens of dependencies (many of them shared), showed no measurable difference whether the dependencies were generated with -MD or with -MMD. Build-time variation between clean builds due to other activity on the system was higher than anything I could observe (with 0.1-second accuracy) from adding the -MD flag. And there shouldn't be any measurable difference: in the end there are only maybe some dozens or hundreds of new files to check, since the system headers point to the same locations for many different files.
I have also done quite large builds with hundreds of thousands of targets and dependencies, and Ninja goes through about 80000-100000 of them per second on my laptop when everything is in memory (and when multiple targets share the same dependencies, everything is), in plain textual Ninja files. I haven't benchmarked dependency-tree generation from dependency files, but as they come from the Ninja build log and are in binary format, I would assume they have an even smaller impact.
(0040694) Brad King, 2016-03-15 10:35
(0040698) Brad King, 2016-03-15 14:02
(0040699) Jussi Judin, 2016-03-15 18:34
I wonder how I could respond to email discussions when I don't have the original email. But I can explain the issues in those two above-mentioned email threads that make Attila Krasznahorkay's builds slow: the Makefile generator with recursive make, plus an uncached(?) network file system (/afs/). It highly resembles a situation I encountered around 1.5 years ago, where a no-op build took 30 seconds just to check dependencies with CMake's Makefile generator. That no-op build time improved to 0.2 seconds by switching to CMake's Ninja generator (a 150x speedup).
With recursive make, every make invocation has to read and check dependencies, and there can be hundreds or thousands of those invocations, so make easily ends up doing hundreds of file system calls against the same file. Ninja only needs to do this once: it reads the generated build file and build log, forms the dependency graph from those, and can optimize file system access down to one call per path. Some benchmarks of recursive make and its issues when checking many dependencies are described here: http://gittup.org/tup/make_vs_tup.html
The other issue that leads to slow dependency checks and builds is the use of a networked file system with effectively non-existent caching (/afs/). This can also be seen where Attila talks about adding hundreds of include paths to GCC and build performance being hampered by file system access. That is not surprising: every include path you add to GCC must be scanned until the appropriately named header is found, and doing hundreds of stat() calls on a file system that does not cache itself fully in memory (and even if AFS does, in my experience it is still far slower than a native file system on a local machine) is really time consuming.
So there are really two parts to this equation. For the first (the build system, i.e., recursive make) CMake already has a solution that can give tremendous speedups (the Ninja generator); for the second (environments with slow network drives) CMake can't really do anything without badly compromising build integrity. And I would hope build integrity is the primary concern of any build system by default, over any performance issue.
(0041227) Kitware Robot, 2016-06-10 14:21
This issue tracker is no longer used. Further discussion of this issue may take place in the current CMake Issues page linked in the banner at the top of this page.