#1191 closed defect (fixed)

unit test failure: failed to download file with 2 shares on one server and one share on another

Reported by: zooko
Owned by: Brian Warner <warner@…>
Priority: major
Milestone: 1.8.0
Component: code-peerselection
Version: 1.8β
Keywords: immutable download
Cc: warner

Description

For example in this run:

http://tahoe-lafs.org/buildbot/builders/hardy-amd64/builds/687/steps/test/logs/stdio

[ERROR]: allmydata.test.test_hung_server.HungServerDownloadTest.test_2_good_8_broken_copied_share

Traceback (most recent call last):
Failure: allmydata.interfaces.NotEnoughSharesError: ran out of shares: complete=sh0,sh2 pending= overdue= unused= need 3. Last failure: None

The download should have succeeded: server 0 should have had shares 0 and 2, and server 1 should have had share 1.

trunk/src/allmydata/test/test_hung_server.py@4661#L197

This test failure is nondeterministic. The first step is probably to understand why that is and make it fail every time.

Attachments (5)

test_2_good_8_broken_copied_share.txt (10.4 KB) - added by francois at 2010-09-04T23:22:27Z.
1191-fix.diff (644 bytes) - added by warner at 2010-09-06T18:34:10Z.
fix for the bug, but lacks a specific unit test
test.patch.txt (10.0 KB) - added by zooko at 2010-09-07T07:20:58Z.
a test of ShareFinder in which everything that is provided to the ShareFinder is a fake/mock thing. This doesn't quite succeed at exercising the bug because it doesn't arrange for ShareFinder to run its own .loop after the last request is retired and before the last share is added to the node's _shares.
mockingtests.dpatch.txt (14.6 KB) - added by zooko at 2010-09-08T14:03:35Z.
do-retire-and-got-share-in-same-tick.dpatch.txt (2.0 KB) - added by zooko at 2010-09-09T18:59:23Z.


Change History (26)

comment:1 Changed at 2010-09-03T05:45:57Z by zooko

This diff seems to make the test go green every time. I'll leave it running overnight to be sure:

 Zooko-Ofsimplegeos-MacBook-Pro:~/playground/tahoe-lafs/trunk$ time darcs diff -u
diff -rN -u old-trunk/src/allmydata/test/no_network.py new-trunk/src/allmydata/test/no_network.py
--- old-trunk/src/allmydata/test/no_network.py  2010-09-02 23:24:54.000000000 -0600
+++ new-trunk/src/allmydata/test/no_network.py  2010-09-02 23:24:59.000000000 -0600
@@ -238,7 +238,8 @@
         self.rebuild_serverlist()

     def rebuild_serverlist(self):
-        self.all_servers = frozenset(self.servers_by_id.items())
+        # self.all_servers = list(reversed(sorted(frozenset(self.servers_by_id.items()))))
+        self.all_servers = sorted(frozenset(self.servers_by_id.items()))
         for c in self.clients:
             c._servers = self.all_servers

comment:2 Changed at 2010-09-03T06:03:50Z by zooko

No, that wasn't it. Even with that diff the test occasionally goes red.

comment:3 Changed at 2010-09-03T06:40:34Z by zooko

Okay, this patch makes it go red every time:

HACL Zooko-Ofsimplegeos-MacBook-Pro:~/playground/tahoe-lafs/trunk$ time darcs diff -u
diff -rN -u old-trunk/src/allmydata/test/test_hung_server.py new-trunk/src/allmydata/test/test_hung_server.py
--- old-trunk/src/allmydata/test/test_hung_server.py    2010-09-03 00:21:06.000000000 -0600
+++ new-trunk/src/allmydata/test/test_hung_server.py    2010-09-03 00:21:09.000000000 -0600
@@ -101,7 +101,8 @@
 
         self.c0 = self.g.clients[0]
         nm = self.c0.nodemaker
-        self.servers = [(id, ss) for (id, ss) in nm.storage_broker.get_all_servers()]
+        self.servers = sorted([(id, ss) for (id, ss) in nm.storage_broker.get_all_servers()])
+        self.servers = self.servers[5:] + self.servers[:5]
 
         if mutable:
             d = nm.create_mutable_file(mutable_plaintext)

comment:4 Changed at 2010-09-04T22:25:24Z by zooko

When I was looking at the behavior of the test while it was still non-deterministic, I noticed that there are two ways this particular test could turn out: either the first server to respond claimed 2 shares and the second claimed 1 share, or the first server to respond claimed 1 share and the second claimed 2 shares. I observed that the download always failed in one case and always succeeded in the other. So I hypothesize that the download logic which is trying to achieve diversity isn't realizing that it can use the newly arrived DYHB response for more than one share, or else isn't realizing that it can use the previously arrived DYHB response for more than one share.

comment:5 Changed at 2010-09-04T22:53:37Z by francois

  • Owner set to francois
  • Status changed from new to assigned

comment:6 Changed at 2010-09-04T23:24:11Z by francois

As a first step, the failing test was run with verbose logging; see attachment:test_2_good_8_broken_copied_share.txt.

comment:7 Changed at 2010-09-05T00:51:43Z by francois

  • Owner francois deleted
  • Status changed from assigned to new

I don't really know why the following patch makes the test succeed (and breaks two others), but it might give some insight into the actual bug. Unfortunately, I probably won't find time to work on this during the next few days.

--- old-tahoe-upstream/src/allmydata/immutable/downloader/finder.py     2010-09-05 02:21:25.000000000 +0200
+++ new-tahoe-upstream/src/allmydata/immutable/downloader/finder.py     2010-09-05 02:21:25.000000000 +0200
@@ -220,7 +220,7 @@
         shares_s = ",".join([str(sh) for sh in shares])
         self.log(format="delivering shares: %s" % shares_s,
                  level=log.NOISY, umid="2n1qQw")
-        eventually(self.share_consumer.got_shares, shares)
+        self.share_consumer.got_shares(shares)
 
     def _got_error(self, f, peerid, req, d_ev, lp):
         d_ev.finished("error", now())

comment:8 Changed at 2010-09-06T04:41:54Z by zooko

Okay I think I understand the bug now. ShareFinder._deliver_shares() calls eventually(self.share_consumer.got_shares, shares) and then in the next tick—before got_shares() has been executed—ShareFinder.loop() runs and gives up and aborts the download because it isn't aware of the shares that were going to be delivered to it in a subsequent tick.
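
To make the race concrete, here is a toy model of the sequence described above (this is not Tahoe code; the class, names, and the eventual-send queue are simplified stand-ins). The request is retired immediately, the share delivery is deferred to a later tick, and a loop() that runs in between sees neither a pending request nor a delivered share, so it gives up prematurely.

from collections import deque

_eventual_queue = deque()

def eventually(f, *args):
    # FIFO queue of calls to run in later "ticks", standing in for an eventual-send
    _eventual_queue.append((f, args))

def run_ticks():
    while _eventual_queue:
        f, args = _eventual_queue.popleft()
        f(*args)

class ToyFinder:
    def __init__(self):
        self.pending_requests = set(["req1"])
        self.shares = []          # stands in for the node's _shares

    def got_response(self):
        self.pending_requests.discard("req1")      # request retired immediately
        eventually(self.shares.append, "sh9")      # share delivered in a later tick

    def loop(self):
        if not self.pending_requests and not self.shares:
            print("no more shares, ever")          # premature: sh9 is still in flight
        else:
            print("shares so far: %s" % self.shares)

f = ToyFinder()
f.got_response()   # retires the request, schedules the share delivery
f.loop()           # runs before the eventual delivery and wrongly gives up
run_ticks()        # only now does sh9 arrive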

comment:9 Changed at 2010-09-06T18:16:09Z by warner

Oh, yeah, that's a bug. I think the proper behavior would be for the call to self.share_consumer.no_more_shares() to use the same eventual-send that the call to self.share_consumer.got_shares() uses (to make sure the no_more_shares cannot race ahead of the got_shares call).

This bug probably appeared during the diversity-seeking patch: part of that patch was to deliver all shares as quickly as possible, instead of trickling them out one share at a time.

I'll think about what the unit test for this should look like... something that fails consistently without the patch, of course. Maybe just a way to consistentify test_2_good_8_broken_copied_share.

Changed at 2010-09-06T18:34:10Z by warner

fix for the bug, but lacks a specific unit test

comment:10 Changed at 2010-09-06T18:35:16Z by warner

That patch definitely fixes the problem (consistently, thanks to the recent patch to test_hung_server.py which makes the share order consistent). But I'd like to have a test case which specifically exercises this code path to go along with the fix.

comment:11 Changed at 2010-09-06T19:53:10Z by zooko

To believe that 1191-fix.diff fixes this issue, I think I would have to believe that eventually(A);eventually(B) where A does eventually(C) will always execute in order A;B;C and never A;C;B. Is it true that this fix requires that property?

It would be nice if the fix to this bug were more "local" and one could see by inspecting the source that the state was kept consistent. That is: isn't the problem here that a tick updates the ShareFinder's state to remove the last request from pending_requests but does not update the state to add the newly discovered share? And then loop runs between that tick and the later tick which is going to add the resulting share with got_shares? Wouldn't it be better if there was never a time in between ticks where one of those bits of state had been updated and the other had not?

If I understand correctly this patch leaves it the case that one tick updates pending_requests and a subsequent tick calls got_shares, but prevents the problem by ensuring that loop doesn't get run between those two ticks. This seems a bit fragile, but I guess as long as we have a robust test case for it then we'll know if it regresses. Maybe we could add some sort of internal consistency check assertion to the effect that, um, loop never gets called after _request_retired but before _got_shares ?
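
As for the ordering question above: with a FIFO eventual-send queue (which is how foolscap's eventually() is being understood here; the sketch below is a toy model, not foolscap itself), eventually(A); eventually(B) where A does eventually(C) does always run A;B;C, because C is appended to the queue behind the already-queued B.

from collections import deque

queue = deque()

def eventually(f):
    queue.append(f)

def run_queue():
    while queue:
        queue.popleft()()

order = []

def C(): order.append("C")
def A(): order.append("A"); eventually(C)   # C joins the queue behind B
def B(): order.append("B")

eventually(A)
eventually(B)
run_queue()
assert order == ["A", "B", "C"]   # never A, C, B with a FIFO queue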

comment:12 Changed at 2010-09-07T05:56:59Z by zooko

Okay here is a log created by instrumenting ShareFinder to log each method call and DownloadNode to log got_shares and no_more_shares. This is without 1191-fix.diff.

local#231 23:01:59.969: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>._request_retired(<allmydata.immutable.downloader.finder.RequestToken instance at 0x104ef9c68>)
local#232 23:01:59.969: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>._got_response({9: <allmydata.test.no_network.LocalWrapper instance at 0x1049dc3f8>}, {'http://allmydata.org/tahoe/protocols/storage/v1': {'maximum-immutable-share-size': 128704925696, 'tolerates-immutable-read-overrun': True, 'delete-mutable-shares-with-zero-length-writev': True}, 'application-version': 'allmydata-tahoe/1.8.0c3-r4715'}, <B9><A3>N<80>u<9C>_<F7><97>FSS<A7><BD>^B<F9>f$:      , <allmydata.immutable.downloader.finder.RequestToken instance at 0x104ef9c68>, <allmydata.immutable.downloader.status.DYHBEvent instance at 0x104ef9cb0>, 1283835719.96, 210)
local#233 23:01:59.969: got shnums [9] from [xgru5adv]
local#234 23:01:59.969: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>._create_share(9, <allmydata.test.no_network.LocalWrapper instance at 0x1049dc3f8>, {'http://allmydata.org/tahoe/protocols/storage/v1': {'maximum-immutable-share-size': 128704925696, 'tolerates-immutable-read-overrun': True, 'delete-mutable-shares-with-zero-length-writev': True}, 'application-version': 'allmydata-tahoe/1.8.0c3-r4715'}, <B9><A3>N<80>u<9C>_<F7><97>FSS<A7><BD>^B<F9>f$:        , 0.0133891105652)
local#235 23:01:59.970: Share(sh9-on-xgru5) created
local#236 23:01:59.970: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>._deliver_shares([Share(sh9-on-xgru5)])
local#237 23:01:59.970: delivering shares: Share(sh9-on-xgru5)
local#238 23:01:59.970: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.loop()
local#239 23:01:59.970: ShareFinder loop: running=True hungry=False, pending=
local#240 23:01:59.971: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.loop()
local#241 23:01:59.971: ShareFinder loop: running=True hungry=False, pending=
local#242 23:01:59.972: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.hungry()
local#243 23:01:59.972: ShareFinder[si=dglevpj4ueb7] hungry
local#244 23:01:59.972: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.start_finding_servers()
local#245 23:01:59.973: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.loop()
local#246 23:01:59.973: ShareFinder loop: running=True hungry=True, pending=
local#247 23:01:59.973: ShareFinder.loop: no_more_shares, ever
local#248 23:01:59.973: xxx ImmutableDownloadNode(dglevpj4ueb7).no_more_shares() ; _active_segment: <allmydata.immutable.downloader.fetcher.SegmentFetcher instance at 0x1049f6638>
local#249 23:01:59.975: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.loop()
local#250 23:01:59.975: ShareFinder loop: running=True hungry=True, pending=
local#251 23:01:59.975: ShareFinder.loop: no_more_shares, ever
local#252 23:01:59.975: xxx ImmutableDownloadNode(dglevpj4ueb7).no_more_shares() ; _active_segment: <allmydata.immutable.downloader.fetcher.SegmentFetcher instance at 0x1049f6638>
local#253 23:01:59.976: ran out of shares: complete=sh1,sh8 pending= overdue= unused= need 3. Last failure: None
local#254 23:01:59.976: SegmentFetcher(dglevpj4ueb7).stop
local#255 23:01:59.977: xxx ImmutableDownloadNode(dglevpj4ueb7).got_shares([Share(sh9-on-xgru5)])

(Sorry about the wide lines there.)

So at local#231 (23:01:59.969) the request is retired, but the resulting eventual got_shares doesn't happen until local#255 (23:01:59.977), which is shortly after the loop at local#247 (23:01:59.973) that said "no_more_shares, ever". That loop set a flag named _no_more_shares in the SegmentFetcher, so the next time SegmentFetcher._do_loop ran it gave up and said "ran out of shares" at local#253 (23:01:59.976).

Now 1191-fix.diff makes it so that when loop decides "no_more_shares, ever" it schedules an eventual task to set the _no_more_shares flag in SegmentFetcher instead of setting it immediately. Is this guaranteed to always prevent this bug? I guess it is, because _request_retired (local#231) is done immediately, and during that same tick got_shares (local#255) is put on the eventual queue, so when the setting of _no_more_shares is later put on the eventual queue it will always take effect after got_shares does.

Okay.

But this still feels fragile to me. For example, after we apply 1191-fix.diff, if someone were to change ShareFinder._got_response so that it invoked _deliver_shares eventually instead of immediately, or change _deliver_shares so that it invoked DownloadNode.got_shares eventually instead of immediately, or change DownloadNode.got_shares so that it updated its _shares data structure eventually instead of immediately, then that would reintroduce this bug.

It would feel nicer to me if we could update both the ShareFinder.pending_requests data structure and the DownloadNode._shares data structure in the same immediate call so that there is no tick that begins when those two data structures are in a mutually inconsistent state (with the request removed from the former but the share not yet added to the latter).
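
As a toy illustration of that "same immediate call" idea (again simplified stand-ins, not the real ShareFinder or DownloadNode), retiring the request and recording the share in one synchronous step means no tick can begin with the two pieces of state disagreeing:

from collections import deque

_eventual_queue = deque()

def eventually(f, *args):
    _eventual_queue.append((f, args))

def run_ticks():
    while _eventual_queue:
        f, args = _eventual_queue.popleft()
        f(*args)

class ToyFinder:
    def __init__(self):
        self.pending_requests = set(["req1"])
        self.shares = []

    def got_response(self):
        # Retire the request and deliver the share in the same immediate call,
        # so both pieces of state change within a single tick.
        self.pending_requests.discard("req1")
        self.shares.append("sh9")
        eventually(self.loop)

    def loop(self):
        if not self.pending_requests and not self.shares:
            print("no more shares, ever")
        else:
            print("shares so far: %s" % self.shares)

f = ToyFinder()
f.got_response()
f.loop()       # even an ill-timed loop() now sees sh9
run_ticks()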

Okay now I'll try to make a narrow test case of this issue.

Version 2, edited at 2010-09-07T05:58:42Z by zooko

comment:13 Changed at 2010-09-07T07:19:30Z by zooko

Okay I've written a test of ShareFinder in which everything that is provided to the ShareFinder is a fake/mock thing. I think that's pretty cool and it helped me learn the code better. I have to go to sleep now, and currently the test doesn't exercise the bug because it doesn't arrange for the ShareFinder to execute its own loop in between the final request being retired and the final share being added to the node's _shares. However, I'll attach the patch in case someone wants to play with it.

Changed at 2010-09-07T07:20:58Z by zooko

a test of ShareFinder in which everything that is provided to the ShareFinder is a fake/mock thing. This doesn't quite succeed at exercising the bug because it doesn't arrange for ShareFinder to run its own .loop after the last request is retired and before the last share is added to the node's _shares.

comment:14 Changed at 2010-09-08T05:05:01Z by zooko

<zooko> Okay I need to talk out loud. [22:25]
<zooko> finder tells node "no more shares"
<zooko> when finder knows that it will never find more shares during this download, *and* node has said "I'm hungry--feed me more shares."
<zooko> This is fine, except that recently we (Brian) extended it so that the node will be hungry for more shares even though it has enough shares, because it wants better-distributed shares.
<zooko> So now this is an alternate (better)
<zooko> explanation for #1191.
<zooko> That finder then tells node "WAH! Give up in despair!" when finder runs out of new shares. [22:27]
<zooko> But actually node should be content with the unevenly distributed set that it already has.
<zooko> Okay, cool. So now can I write a unit test of *that*...
<zooko> Well, for one thing this shows that it isn't really finder's fault.
<zooko> Oh no wait these are really *both* bugs. [22:28]
<zooko> I've just written a unit test for the bug that finder says "No more shares, ever!" immediately followed by "here's another share".
<zooko> That's not right. :-)
<zooko> But then we also need *another* test which is that node, when told "no more shares ever" goes ahead and makes do with what it has if it can.

comment:15 Changed at 2010-09-08T14:03:07Z by zooko

Okay I wrote a test for the first part -- it goes red if finder says "no more shares ever" and then delivers another share. I started writing a test for the second part -- it should go red if fetcher, given two shares on one server and one share on another server, gives up in despair. By inspection, it looks like fetcher doesn't have that bug: trunk/src/allmydata/immutable/downloader/fetcher.py@4707#L128. Also maybe that test is redundant with bigger downloader tests that we already have... I'll attach the patch as it exists in my sandbox at this moment.

Changed at 2010-09-08T14:03:35Z by zooko

comment:16 Changed at 2010-09-09T06:54:51Z by zooko

Okay here is a narrowly focussed unit test which gives the finder a red mark because the finder announced "no more shares, ever" and then subsequently announced a new share: 56a3258ff7560af3. (Note: it may be appropriate to write a more lenient unit test which allows the finder to do that and get away with it. However, the documentation for "no more shares" should then be updated, and also I haven't yet been able to think of a unit test which is looser than this one and which still narrowly focusses on this bug.)

comment:17 Changed at 2010-09-09T18:58:44Z by zooko

I thought that the following patch (that I will attach to this ticket after I post this comment) would surely fix the bug by making the retire-request and add-new-share events happen in the same tick. Alas, it causes some other tests to fail in ways that I don't understand, namely allmydata.test.test_cli.Errors.test_get.

comment:18 Changed at 2010-09-09T19:02:15Z by zooko

Hm, how could do-retire-and-got-share-in-same-tick.dpatch.txt cause a failure of allmydata.test.test_cli.Errors.test_get? This means there is some other dependency between state-updating events related to _request_retired() and _got_response, right?

I'm giving up for now. Help!

comment:19 Changed at 2010-09-10T04:34:57Z by zooko

Okay I applied Brian's patch 1191-fix.diff and it made my new test go green, so I guess I'll commit and stop worrying about why do-retire-and-got-share-in-same-tick.dpatch.txt made allmydata.test.test_cli.Errors.test_get hang. I consider 1191-fix.diff combined with my new test to be a sufficient fix for this issue.

comment:20 Changed at 2010-09-10T05:00:43Z by Brian Warner <warner@…>

  • Owner set to Brian Warner <warner@…>
  • Resolution set to fixed
  • Status changed from new to closed

In 0475bd8e27ae8307:

immutable download: have the finder inform its share consumer "no more shares" in a subsequent tick, thus avoiding accidentally telling it "no more shares" now and then telling it "here's another share" in a subsequent tick
fixes #1191
Patch by Brian. This patch description was actually written by Zooko, but I forged Brian's name on the "author" field so that he would get credit for this patch in revision control history.

comment:21 Changed at 2010-09-10T19:54:29Z by zooko

By the way, there is a generalization I could make about do-retire-and-got-share-in-same-tick.dpatch.txt compared to 1191-fix.diff. We use state machines to handle asynchronous events, so when we want some behavior which spans multiple network events or timeouts and which has state, we have to store the state somewhere and check it in the next step of the state machine. But we should avoid the state-machine paradigm whenever possible: if we have some behavior which spans multiple ticks within one Twisted reactor, then we should not store the state somewhere and check it in the function that we eventually run; we should instead pass that state as an argument to the eventually-run function. That's just my opinion, man.

This is not to criticize 1191-fix.diff. It is the smallest patch that fixes all the known problems, and it is what we're actually deploying in 1.8.0, but I wanted to explain why I spent time attempting (unsuccessfully) to write do-retire-and-got-share-in-same-tick.dpatch.txt instead.

A related idea that I have is: suppose you have a loop() method (we have several) which contains the core of the state machine. Suppose you have some new information/some new event to communicate to the state machine. Then do not write down the new event and trigger the state machine, like self._no_more_shares = True; eventually(self.loop), but instead extend the interface of loop() to accept this event, like eventually(self.loop, no_more_shares=True). That way people will not have to think about what happens if the state changes in other ways before that eventually(self.loop) happens, such as if there is already a different call to loop on the eventual queue. If we hadn't needed to consider such potential complications, it would have been easier for us to diagnose the issue in this ticket.
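
A toy contrast of the two styles (the names are illustrative, not the real ShareFinder API): with the flag style, an unrelated loop() call that was already on the queue observes the flag; with the argument style, only the call that was meant to carry the event sees it.

from collections import deque

_q = deque()

def eventually(f, *args, **kwargs):
    _q.append((f, args, kwargs))

def run_ticks():
    while _q:
        f, args, kwargs = _q.popleft()
        f(*args, **kwargs)

class FlagStyle:
    _no_more_shares = False
    def signal(self):
        self._no_more_shares = True        # shared state, visible to every later loop()
        eventually(self.loop)
    def loop(self):
        print("flag style: no_more_shares=%s" % self._no_more_shares)

class ArgumentStyle:
    def signal(self):
        eventually(self.loop, no_more_shares=True)   # the event rides along with the call
    def loop(self, no_more_shares=False):
        print("argument style: no_more_shares=%s" % no_more_shares)

flag, arg = FlagStyle(), ArgumentStyle()
eventually(flag.loop)      # a loop() already queued for some other reason
eventually(arg.loop)
flag.signal()
arg.signal()
run_ticks()
# flag style: the earlier, unrelated loop() already reports no_more_shares=True;
# argument style: only the signal()-ed call reports no_more_shares=True.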
