#258 closed defect (wontfix)

noise from skip and todo tests makes it look like the test has failed

Reported by: zooko Owned by: zooko
Priority: major Milestone:
Component: dev-infrastructure Version: 0.7.0
Keywords: testing Cc: warner
Launchpad Bug:

Description

I suppressed the SKIP and TODO tests (one each) because the test output was disheartening to users, who didn't see the "PASSED" among all the warning tracebacks.

This ticket is for me to unsuppress those as soon as I've cut the 0.7.0 release.

Change History (7)

comment:1 Changed at 2008-01-05T05:13:36Z by warner

For reference, the two tests in question are in test_web.py:

--- old-tahoe/src/allmydata/test/test_web.py    2008-01-04 21:13:22.000000000 -0800
+++ new-tahoe/src/allmydata/test/test_web.py    2008-01-04 21:13:22.000000000 -0800
@@ -968,7 +968,6 @@
                                   400, "Bad Request", "random",
                                   self.PUT, url, "")
         return d
-    del test_PUT_NEWDIRURL_localdir_missing
 
     def test_POST_upload(self):
         d = self.POST(self.public_url + "/foo", t="upload",
@@ -995,7 +994,6 @@
         # you just uploaded.
         return d
     test_POST_upload_no_link_whendone.todo = "Not yet implemented."
-    del test_POST_upload_no_link_whendone
 
     def test_POST_upload_mutable(self):
         # this creates a mutable file

comment:2 Changed at 2008-01-15T03:13:49Z by zooko

  • Resolution set to fixed
  • Status changed from new to closed

fixed by e65967da49a8232c

comment:3 Changed at 2008-01-23T03:21:23Z by zooko

  • Milestone 0.7.1 deleted

comment:4 Changed at 2008-01-23T20:31:44Z by zooko

  • Resolution fixed deleted
  • Status changed from closed to reopened
  • Summary changed from put back skipped and todo tests to noise from skip and todo tests makes it look like the test has failed

I've observed three cases of people installing tahoe and then running the unit tests to see if it was installed correctly. In all three cases, the people saw the noise generated by the skip and todo tests (one of each) and concluded that tahoe was failing its own tests.

I think it is useful for people to be able to use "make test" to gain assurance that the version of tahoe they just got is a good one. Also, I don't like noise that I myself have to scan past when reading a tool's output.

Any ideas?

comment:5 Changed at 2008-01-23T21:11:38Z by warner

How about we say that official releases (i.e. 0.7.0) should be no-warnings clean (so 'make test' doesn't complain about TODOs or skips), but development code in-between releases does not have this requirement?

I'm more in favor of teaching new developers to look for the "PASSED" line (and teach them exactly what the skip/todo tests mean) than losing .skip/.todo as a development tool.

comment:6 Changed at 2008-01-23T21:34:07Z by terrell

I was the 'third case' mentioned above.

I saw the PASSED, but figured it was important to report the SKIP as well, not knowing if it was common for others to see the same.

I think warner's suggestion of official/development makes the most sense. I was pulling the most recent from darcs and so wanted to see the most information I could.

Thanks.

comment:7 Changed at 2008-01-23T21:45:53Z by zooko

  • Resolution set to wontfix
  • Status changed from reopened to closed

Good enough for me.
