<div class="gmail_quote">2010/1/10 Dmytro Ovdiienko <span dir="ltr"><<a href="mailto:dmitriy.ovdienko@gmail.com">dmitriy.ovdienko@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Gavin,<br><br>Good practice is to run as many tests as possible and to not stop on the first failed test. <br><br><div class="gmail_quote"></div></blockquote><div> </div><div>Hi Dmytro,</div><div><br></div><div>Thank you for your advice. For integration or overnight build testing I absolutely agree, and that is where I could use "make test" to run the full test suite for all components. Running tests by default is primarily meant to influence day-to-day development behaviour.</div>
<div><br></div><div>I try to practise test-driven development, and when I type make I'm usually only interested in the tests for the component I'm currently working on. Those tests have to complete quickly so they don't interrupt my concentration. Running just the test suite for the changed component when I type make works well for me.</div>
<div><br></div><div>When I say "a failing test causes a failing build", what I actually mean is that one or more failures from the test program I invoke cause the build to fail. This might be several individual unit test failures: Google Test runs every test in the suite before reporting a summary, and the test binary exits with a non-zero status if any test failed, which is what make sees.</div>
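<div><br></div><div>To make that concrete, here is a minimal sketch of the kind of per-component Makefile rule I mean (the names widget_tests and component.o are purely illustrative, not from a real project): the default target runs the component's test binary, so a non-zero exit status from the Google Test runner stops the build.</div><div><br></div><pre>
# Hypothetical component Makefile sketch: typing "make" builds this
# component's tests and runs them; a separate "make test" can still
# run the full suite. Recipe lines must begin with a tab character.
all: run-tests

widget_tests: widget_tests.o component.o
	$(CXX) -o $@ $^ -lgtest -lgtest_main

# A Google Test binary exits non-zero when any test fails, so make
# aborts here and the whole build is reported as failed.
run-tests: widget_tests
	./widget_tests
</pre><div><br></div><div>The point of making "all" depend on run-tests is that the fast, component-local tests run on every build, while the slower integration tests stay behind an explicit target.</div>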
<div><br></div><div>Thanks very much,</div><div><br></div><div>Gavin</div><div><br></div></div>