A possible alternative would be to associate one or more labels with your
tests; you could then run a specific subset of the test suite.

See http://www.cmake.org/cmake/help/ctest-2-8-docs.html#opt:-Lregex--label-regexregex
and http://www.cmake.org/cmake/help/cmake-2-8-docs.html#prop_tgt:LABELS
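For example, a minimal sketch (the test and label names here are
hypothetical):

    # Register the tests, then attach a label to them:
    add_test(NAME test_thingA COMMAND test_thingA)
    add_test(NAME test_thingB COMMAND test_thingB)
    set_tests_properties(test_thingA test_thingB
      PROPERTIES LABELS "libfoo")

    # Run only the tests carrying that label:
    #   ctest -L libfoo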
Hth
Jc

On Tue, Feb 21, 2012 at 2:51 PM, Robert Dailey <rcdailey@gmail.com> wrote:
<div class="gmail_quote"><div><div class="h5">On Tue, Feb 21, 2012 at 1:15 PM, David Cole <span dir="ltr"><<a href="mailto:david.cole@kitware.com" target="_blank">david.cole@kitware.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Tue, Feb 21, 2012 at 1:51 PM, Robert Dailey <rcdailey@gmail.com> wrote:
> On Tue, Feb 21, 2012 at 12:37 PM, David Cole <david.cole@kitware.com> wrote:
>>
>> On Tue, Feb 21, 2012 at 1:27 PM, Robert Dailey <rcdailey@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I'm using Visual Studio as my generator for my CMake projects. As of
>>> right now, I make my tests depend on the libraries they test. So for
>>> example, tests named:
>>>
>>> test_thingA
>>> test_thingB
>>>
>>> will all depend on library:
>>>
>>> libfoo.lib
>>>
>>> When I build target "libfoo" in Visual Studio, it would be nice to have
>>> all dependent tests build as well, and have them each execute.
>>>
>>> The goal for all of this is to make it as convenient as possible for
>>> developers on my team to RUN TESTS on their code before they submit to
>>> version control. I want to make it automated, so when they rebuild the
>>> library, the testing automatically happens. I'd also obviously create an
>>> option in the CMake cache to turn this automation off should it become
>>> too annoying.
>>>
>>> If this isn't a good idea, can someone recommend a good workflow for
>>> running tests locally prior to checking in source code?
>>>
>>> ---------
>>> Robert Dailey
>>>
>>
>> If you're using add_test in your CMakeLists files, then the perfect way
>> to prove that the tests all work on a developer's machine is for him or
>> her to run:
>>
>>   ctest -D Experimental
>>
>> after making local mods, and before pushing the changes to your source
>> control system.
>>
>> That will configure, build everything, run all the tests, and then submit
>> the results to your CDash server, so there is public evidence that the
>> developer actually did run the tests - and, hopefully, that they all
>> passed on his machine, at least.
>>
>> You can also restrict the set of tests that run using -R, -I, or -L on
>> the ctest command line, although you should strive to have your test
>> suite be brief enough that it's not painful for folks to run the full
>> test suite prior to checkin.
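For reference, a minimal sketch of those invocations (the regex and label
below are hypothetical):

    # Configure, build, run all tests, and submit the results to CDash:
    ctest -D Experimental

    # Run only the tests whose names match a regular expression:
    ctest -R "^test_thing"

    # Run only the tests carrying a given label:
    ctest -L libfoo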
>
> I think this is a reasonable idea for small projects, but in general I
> disagree with running all tests.
>
> There are hundreds of projects (probably 150) and many more tests
> (probably 10 tests per project). Under typical agile methodology, it only
> makes sense to unit test the components that have changed; unit testing a
> dependent component that did not have a source code change is neither
> needed nor beneficial.
>
> All of these tests can take hours to run, which isn't unreasonable,
> because it's a full test suite. Only the build server kicks off a build
> and runs the FULL test suite (thus running ctest -D Experimental as you
> have suggested). Developers just do an intermediate check by unit testing
> only the parts of the code base that have changed. This is essential for
> practices like continuous integration.
>
> Ideally, the pipeline goes like this:
>
> 1. Programmer makes a change to a certain number of libraries.
> 2. Programmer runs the relevant tests (or all of them) for each of the
>    libraries that were changed.
> 3. Once those tests have passed, the developer submits the source code to
>    version control.
> 4. The build server is then instructed to run a full build and test of
>    the entire code base for each checkin.
> 5. The build server can then run any integration tests that are
>    configured (not sure how these would be set up in CMake - probably
>    again as tests, but not specific to only a single project).
> 6. The build is considered "complete" at this point.
>
> It seems like there would be no choice but to run them individually in
> this case, since CMake really shines only in the steps after #3.
Incremental testing is something we've talked about over the years, but
there's no concept of "what's changed, what needs to run" when ctest runs
at the moment. Communicating that information from the build to ctest, or
making testing always part of the build, are the two approaches we've
considered. Nothing exists yet, though, so it's all still to come.
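In the meantime, one way to approximate making testing part of the build is
a custom target that builds the test executables (and, through them, the
library) and then runs the labeled tests. A minimal sketch, reusing the
hypothetical target, test, and label names from earlier in this thread:

    option(AUTO_RUN_TESTS "Run unit tests as part of the build" ON)
    if(AUTO_RUN_TESTS)
      # Building this target first builds test_thingA/test_thingB (and,
      # through them, libfoo), then runs every test labeled "libfoo".
      # CMAKE_CFG_INTDIR supplies the active configuration under Visual
      # Studio so that ctest -C picks up the right binaries.
      add_custom_target(run_libfoo_tests ALL
        COMMAND ${CMAKE_CTEST_COMMAND} -C ${CMAKE_CFG_INTDIR} -L libfoo
        WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
        COMMENT "Running libfoo's unit tests")
      add_dependencies(run_libfoo_tests test_thingA test_thingB)
    endif()

A failing test then fails the build, and the cache option gives developers
a way to turn the automation off if it becomes too annoying.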
Sorry to hear you disagree about running all the tests. I'll make one more
point and then shut up about it: the larger the project, the more you need
to run all the tests when changes are made. Unless the developers all have
a very good understanding of which parts need to be tested when they make
a change, they should run all the tests. If a system is very large, the
developers are more likely to have an imperfect understanding of it, and
when that's the case, if there is any doubt at all about what dependencies
exist, then all the tests should be run to verify a change.

Until incremental testing is available, I'd say your best bet is to run
all the tests.

I apologize if I sounded like your suggestion wasn't meaningful or useful.
I would much rather do it the way you suggest (running all tests), but
this leaves me with some concerns:
1. If the developer is running all unit tests on their local machine, what
   is the purpose of then running them on the server? If the server does
   it again in response to the commit, wouldn't that be redundant?

2. Let's assume running all the tests takes about 1 hour. Not only does
   this slow down productivity, it also makes practices like continuous
   integration impractical, since many people can commit work within that
   1-hour window, in which case you'd have to run the tests again after
   updating. It's a recursive issue.

How would you address the concerns I have noted above?

My tests are named in such a way that they are easy to spot and work with
in the Solution Explorer. For example:

projectA
projectA_test_iostreams
projectA_test_fileio
projectA_test_graphics
projectA_test_input

In my example above, the target named "projectA" has 4 unit tests. Each
test can be responsible for 1 or more translation units (there is no
strict rule here). If I change the way files are loaded by library
"projectA", then I would run the fileio test. However, in this case it's
really easy for the developer to spot the tests for that project and run
all of them if they are unsure.
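A sketch of how that structure could be wired up with the labels suggested
above, so "run everything for projectA" stays a one-liner (the source file
names are hypothetical):

    # One test executable per suite, all labeled with the parent project.
    foreach(suite iostreams fileio graphics input)
      add_executable(projectA_test_${suite} test_${suite}.cpp)
      target_link_libraries(projectA_test_${suite} projectA)
      add_test(NAME projectA_test_${suite} COMMAND projectA_test_${suite})
      set_tests_properties(projectA_test_${suite}
        PROPERTIES LABELS "projectA")
    endforeach()

    # Run every test for the project that changed:
    #   ctest -L projectA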
Would you also mind commenting on this structure? It seems to ease the
concern you mentioned about people not always knowing which tests to run.

Thanks for your feedback. Keep in mind that I'm not only asking about
general testing principles; I also want to know how best to apply them to
the tools (CMake, CTest). This is where your expertise becomes valuable to
me :)