Opened at 2020-11-30T18:53:07Z
Last modified at 2020-11-30T21:05:16Z
#3540 new defect
allmydata.mutable.publish.Publish.publish has unreliably covered bad shares handling code
| Reported by: | exarkun | Owned by: | |
|---|---|---|---|
| Priority: | normal | Milestone: | undecided |
| Component: | unknown | Version: | n/a |
| Keywords: | | Cc: | |
| Launchpad Bug: | | | |
Description
From https://www.tahoe-lafs.org/trac/tahoe-lafs/ticket/2891 and https://app.codecov.io/gh/tahoe-lafs/tahoe-lafs/compare/896/changes, these lines are non-deterministically covered:
```python
for key, old_checkstring in list(self._servermap.get_bad_shares().items()):
    (server, shnum) = key
    self.goal.add( (server,shnum) )
    self.bad_share_checkstrings[(server,shnum)] = old_checkstring
```
Add some deterministic coverage for them.
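One way to get deterministic coverage is to exercise the loop's bookkeeping directly with stub objects instead of relying on non-deterministic server behavior. The sketch below is hypothetical: `FakeServerMap`, `FakePublish`, and `_absorb_bad_shares` are simplified stand-ins, not the real Tahoe-LAFS classes, and only the loop quoted above is reproduced faithfully.

```python
# Hypothetical stand-ins for the real Publish/ServerMap objects; the real
# Tahoe-LAFS classes carry far more state. Only the quoted loop is real.

class FakeServerMap:
    def __init__(self, bad_shares):
        # bad_shares maps (server, shnum) -> checkstring, matching the
        # shape implied by get_bad_shares() in the quoted loop.
        self._bad_shares = bad_shares

    def get_bad_shares(self):
        return self._bad_shares


class FakePublish:
    def __init__(self, servermap):
        self._servermap = servermap
        self.goal = set()
        self.bad_share_checkstrings = {}

    def _absorb_bad_shares(self):
        # The loop from Publish.publish that the ticket wants covered.
        for key, old_checkstring in list(self._servermap.get_bad_shares().items()):
            (server, shnum) = key
            self.goal.add((server, shnum))
            self.bad_share_checkstrings[(server, shnum)] = old_checkstring


def test_bad_shares_added_to_goal():
    smap = FakeServerMap({("server-a", 3): b"checkstring-1"})
    pub = FakePublish(smap)
    pub._absorb_bad_shares()
    assert ("server-a", 3) in pub.goal
    assert pub.bad_share_checkstrings[("server-a", 3)] == b"checkstring-1"


test_bad_shares_added_to_goal()
```

A real test would instead arrange for `ServerMap.get_bad_shares()` to return a non-empty dict before calling `Publish.publish`, so the actual lines are hit every run.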
It's hard to tell what the point of this loop is. Nothing in the test suite fails if I just delete it.
The self.update_goal() call that follows immediately afterwards discovers that the bad shares are homeless and adds them to self.goal itself, so this loop does not seem to be necessary to ensure bad shares are re-uploaded before the publish operation is considered successful.
The bad_share_checkstrings dict might be the purpose: if values are found there later, the writer is told about the checkstrings. Perhaps this avoids uncoordinated repairs?
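For context on why passing a checkstring to the writer could matter: a checkstring can enable test-and-set semantics, where a write only succeeds if the share is still in the state the writer observed. The sketch below is purely illustrative (the `Share` class and `conditional_write` function are hypothetical, not the Tahoe-LAFS storage protocol):

```python
# Hypothetical sketch of test-and-set semantics enabled by a checkstring.
# This is NOT the real Tahoe-LAFS storage API, just the general idea.

class Share:
    def __init__(self, checkstring, data):
        self.checkstring = checkstring
        self.data = data


def conditional_write(share, expected_checkstring, new_checkstring, new_data):
    """Overwrite the share only if it still matches what the writer saw."""
    if share.checkstring != expected_checkstring:
        # Someone else modified the share since we read it: refuse, so
        # two uncoordinated writers cannot silently clobber each other.
        return False
    share.checkstring = new_checkstring
    share.data = new_data
    return True
```

Under that reading, remembering the old checkstring of a bad share would let the publisher overwrite exactly the bad version it saw, and fail safely if another repairer got there first.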
So ... maybe?