Yes, the title is meant to be a bit touchy and controversial. Quality means different things to different people, and their tolerance for deviation from an acceptable level varies. One measure of quality is how consistently your project's unit tests pass during builds. A quick run through the build status reports of projects on the release train shows that, among the projects that publish their unit test results, the Modeling projects consistently have good I-builds that pass all their tests. More than a handful of other projects consistently have tests that fail during I-builds. Frighteningly, more than a few have published milestone builds with failing unit tests. A milestone should not be declared unless all of its tests pass.
To me personally, consistently failing unit tests are your first warning sign that your quality is on shaky ground. Ignoring the tests, or deleting them just to get the build green, does not address the issue. A unit test fails for a reason. Maybe the test itself is bad, but until you address it, the quality of the product remains in question.
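To make that concrete, here is a minimal, self-contained sketch (the class, method, and values are all hypothetical, not from any real project) of why a failing test deserves investigation rather than deletion. The test documents expected behavior of the code under test; if someone "fixes" the build by removing the assertion instead of the bug, the regression ships silently.

```java
// Hypothetical example: a tiny test guarding a real behavior.
public class VersionParserTest {

    // Hypothetical method under test: extracts the major version from "3.4.0".
    static int majorVersion(String version) {
        return Integer.parseInt(version.split("\\.")[0]);
    }

    public static void main(String[] args) {
        // The assertion encodes the expected behavior. If it fails, either
        // the product regressed or the expectation is wrong -- both are
        // findings worth acting on, not noise to be silenced.
        if (majorVersion("3.4.0") != 3) {
            throw new AssertionError("majorVersion regressed");
        }
        System.out.println("PASS");
    }
}
```

Either way the failure resolves, the test run ends with information about the product, which is exactly what a green-at-any-cost build throws away.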
If you are on the release train, how do you determine an acceptable level of quality for your product? Would you want to consume something from another project that was consistently failing its tests? It could affect your product and how you consume that API. What about adopters? Shouldn't they have stable (passing) I-builds and milestones to consume and develop against?
Note: The way I checked and determined the status was to visit each project's download page and, for those that published their I-build statuses, review the results of their test suite runs. About one third of the projects provided only binaries for download, with no status information. Making information about the state of the builds available is an important part of working openly with the community: it helps adopters determine whether they want to consume a particular build or not.