Ticket #1363: 1363-patch2.dpatch

File 1363-patch2.dpatch, 164.0 KB (added by warner at 2011-02-27T01:15:03Z)

bundle of refactoring patches

20 patches for repository /Users/warner2/stuff/tahoe/trunk:

Sat Feb 26 17:10:51 PST 2011  warner@lothar.com
  * test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code

Sat Feb 26 17:10:56 PST 2011  warner@lothar.com
  * storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers

Sat Feb 26 17:11:00 PST 2011  warner@lothar.com
  * refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload

  No behavioral changes, just updating variable/method names and log messages.
  The effects outside these three files should be minimal: some exception
  messages changed (to say "server" instead of "peer"), and some internal class
  names were changed. A few things still use "peer" to minimize external
  changes, like UploadResults.timings["peer_selection"] and
  happinessutil.merge_peers, which can be changed later.

Sat Feb 26 17:11:03 PST 2011  warner@lothar.com
  * upload.py: fix var names to avoid confusion between 'trackers' and 'servers'

Sat Feb 26 17:11:07 PST 2011  warner@lothar.com
  * upload.py: more tracker-vs-server cleanup

Sat Feb 26 17:11:11 PST 2011  warner@lothar.com
  * happinessutil.py: server-vs-tracker cleanup

Sat Feb 26 17:11:15 PST 2011  warner@lothar.com
  * test_upload.py: server-vs-tracker cleanup

Sat Feb 26 17:11:20 PST 2011  warner@lothar.com
  * test_upload.py: factor out FakeServerTracker

Sat Feb 26 17:11:24 PST 2011  warner@lothar.com
  * happinessutil.py: finally rename merge_peers to merge_servers

Sat Feb 26 17:11:28 PST 2011  warner@lothar.com
  * upload.py: rearrange _make_trackers a bit, no behavior changes

Sat Feb 26 17:11:32 PST 2011  warner@lothar.com
  * add remaining get_* methods to storage_client.Server, NoNetworkServer, and
  MockIServer stubs

Sat Feb 26 17:11:34 PST 2011  warner@lothar.com
  * immutable/checker.py: remove some uses of s.get_serverid(), not all

Sat Feb 26 17:11:38 PST 2011  warner@lothar.com
  * immutable/upload.py: reduce use of get_serverid()

Sat Feb 26 17:11:42 PST 2011  warner@lothar.com
  * immutable/offloaded.py: reduce use of get_serverid() a bit more

Sat Feb 26 17:11:46 PST 2011  warner@lothar.com
  * immutable/downloader/finder.py: reduce use of get_serverid(), one left

Sat Feb 26 17:11:50 PST 2011  warner@lothar.com
  * immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps

  test_download.py: create+check MyShare instances better, make sure they share
  Server objects, now that finder.py cares

Sat Feb 26 17:11:53 PST 2011  warner@lothar.com
  * immutable/downloader/fetcher.py: fix diversity bug in server-response handling

  When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
  _shares_from_server dict was being popped incorrectly (using shnum as the
  index instead of serverid). I'm still thinking through the consequences of
  this bug. It was probably benign and really hard to detect. I think it would
  cause us to incorrectly believe that we're pulling too many shares from a
  server, and thus prefer a different server rather than asking for a second
  share from the first server. The diversity code is intended to spread out the
  number of shares simultaneously being requested from each server, but with
  this bug, it might be spreading out the total number of shares requested at
  all, not just simultaneously. (note that SegmentFetcher is scoped to a single
  segment, so the effect doesn't last very long).
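
  A minimal sketch of the fix described above (hypothetical names, not the
  actual SegmentFetcher attributes): the dict tracks in-flight shares per
  server, so entries must be removed by serverid rather than by shnum.

      # hypothetical sketch, not the real fetcher.py code
      shares_from_server = {}   # serverid -> set of shnums currently being fetched

      def share_terminated(serverid, shnum):
          # buggy version popped by share number, so the wrong entry (or none)
          # was removed and the per-server in-flight count stayed inflated:
          #     shares_from_server.pop(shnum, None)
          # keying on the server that was providing the share keeps the count honest:
          in_flight = shares_from_server.get(serverid)
          if in_flight is not None:
              in_flight.discard(shnum)
              if not in_flight:
                  del shares_from_server[serverid]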

Sat Feb 26 17:11:56 PST 2011  warner@lothar.com
  * immutable/downloader/fetcher.py: remove all get_serverid() calls

Sat Feb 26 17:11:59 PST 2011  warner@lothar.com
  * web: remove some uses of s.get_serverid(), not all

Sat Feb 26 17:12:03 PST 2011  warner@lothar.com
  * control.py: remove all uses of s.get_serverid()

87New patches:
88
89[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
90warner@lothar.com**20110227011051
91 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
92] {
93hunk ./src/allmydata/immutable/upload.py 31
94 from cStringIO import StringIO
95 
96 
97-KiB=1024
98-MiB=1024*KiB
99-GiB=1024*MiB
100-TiB=1024*GiB
101-PiB=1024*TiB
102-
103-class HaveAllPeersError(Exception):
104-    # we use this to jump out of the loop
105-    pass
106-
107 # this wants to live in storage, not here
108 class TooFullError(Exception):
109     pass
110hunk ./src/allmydata/test/test_client.py 10
111 import allmydata
112 from allmydata import client
113 from allmydata.storage_client import StorageFarmBroker
114-from allmydata.introducer.client import IntroducerClient
115 from allmydata.util import base32, fileutil
116 from allmydata.interfaces import IFilesystemNode, IFileNode, \
117      IImmutableFileNode, IMutableFileNode, IDirectoryNode
118hunk ./src/allmydata/test/test_client.py 16
119 from foolscap.api import flushEventualQueue
120 import allmydata.test.common_util as testutil
121 
122-class FakeIntroducerClient(IntroducerClient):
123-    def __init__(self):
124-        self._connections = set()
125-    def add_peer(self, nodeid):
126-        entry = (nodeid, "storage", "rref")
127-        self._connections.add(entry)
128-    def remove_all_peers(self):
129-        self._connections.clear()
130-
131 BASECONFIG = ("[client]\n"
132               "introducer.furl = \n"
133               )
134}
135[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
136warner@lothar.com**20110227011056
137 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
138] {
139hunk ./src/allmydata/storage_client.py 74
140         # own Reconnector, and will give us a RemoteReference when we ask
141         # them for it.
142         self.servers = {}
143-        # self.test_servers are statically configured from unit tests
144-        self.test_servers = {} # serverid -> rref
145         self.introducer_client = None
146 
147     # these two are used in unit tests
148hunk ./src/allmydata/storage_client.py 77
149-    def test_add_server(self, serverid, rref):
150-        self.test_servers[serverid] = rref
151-    def test_add_descriptor(self, serverid, dsc):
152-        self.servers[serverid] = dsc
153+    def test_add_rref(self, serverid, rref):
154+        s = NativeStorageServer(serverid, {})
155+        s.rref = rref
156+        self.servers[serverid] = s
157+
158+    def test_add_server(self, serverid, s):
159+        self.servers[serverid] = s
160 
161     def use_introducer(self, introducer_client):
162         self.introducer_client = ic = introducer_client
163hunk ./src/allmydata/storage_client.py 128
164 
165     def get_all_serverids(self):
166         serverids = set()
167-        serverids.update(self.test_servers.keys())
168         serverids.update(self.servers.keys())
169         return frozenset(serverids)
170 
171hunk ./src/allmydata/storage_client.py 136
172                           if s.get_rref()])
173 
174     def get_known_servers(self):
175-        servers = []
176-        for serverid,rref in self.test_servers.items():
177-            s = NativeStorageServer(serverid, {})
178-            s.rref = rref
179-            servers.append(s)
180-        servers.extend(self.servers.values())
181-        return sorted(servers, key=lambda s: s.get_serverid())
182+        return sorted(self.servers.values(), key=lambda s: s.get_serverid())
183 
184     def get_nickname_for_serverid(self, serverid):
185         if serverid in self.servers:
186hunk ./src/allmydata/test/test_checker.py 31
187                       "my-version": "ver",
188                       "oldest-supported": "oldest",
189                       }
190-            dsc = NativeStorageServer(peerid, ann_d)
191-            sb.test_add_descriptor(peerid, dsc)
192+            s = NativeStorageServer(peerid, ann_d)
193+            sb.test_add_server(peerid, s)
194         c = FakeClient()
195         c.storage_broker = sb
196         return c
197hunk ./src/allmydata/test/test_client.py 132
198     def test_permute(self):
199         sb = StorageFarmBroker(None, True)
200         for k in ["%d" % i for i in range(5)]:
201-            sb.test_add_server(k, "rref")
202+            sb.test_add_rref(k, "rref")
203 
204         self.failUnlessReallyEqual(self._permute(sb, "one"), ['3','1','0','4','2'])
205         self.failUnlessReallyEqual(self._permute(sb, "two"), ['0','4','2','1','3'])
206hunk ./src/allmydata/test/test_client.py 136
207-        sb.test_servers.clear()
208+        sb.servers.clear()
209         self.failUnlessReallyEqual(self._permute(sb, "one"), [])
210 
211     def test_versions(self):
212hunk ./src/allmydata/test/test_mutable.py 191
213     storage_broker = StorageFarmBroker(None, True)
214     for peerid in peerids:
215         fss = FakeStorageServer(peerid, s)
216-        storage_broker.test_add_server(peerid, fss)
217+        storage_broker.test_add_rref(peerid, fss)
218     return storage_broker
219 
220 def make_nodemaker(s=None, num_peers=10):
221hunk ./src/allmydata/test/test_upload.py 199
222         peers = [ ("%20d"%fakeid, FakeStorageServer(mode[fakeid]))
223                   for fakeid in range(self.num_servers) ]
224         self.storage_broker = StorageFarmBroker(None, permute_peers=True)
225-        for (serverid, server) in peers:
226-            self.storage_broker.test_add_server(serverid, server)
227+        for (serverid, rref) in peers:
228+            self.storage_broker.test_add_rref(serverid, rref)
229         self.last_peers = [p[1] for p in peers]
230 
231     def log(self, *args, **kwargs):
232}
233[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
234warner@lothar.com**20110227011100
235 Ignore-this: 7ea858755cbe5896ac212a925840fe68
236 
237 No behavioral changes, just updating variable/method names and log messages.
238 The effects outside these three files should be minimal: some exception
239 messages changed (to say "server" instead of "peer"), and some internal class
240 names were changed. A few things still use "peer" to minimize external
241 changes, like UploadResults.timings["peer_selection"] and
242 happinessutil.merge_peers, which can be changed later.
243] {
244hunk ./src/allmydata/immutable/upload.py 71
245 def pretty_print_shnum_to_servers(s):
246     return ', '.join([ "sh%s: %s" % (k, '+'.join([idlib.shortnodeid_b2a(x) for x in v])) for k, v in s.iteritems() ])
247 
248-class PeerTracker:
249-    def __init__(self, peerid, storage_server,
250+class ServerTracker:
251+    def __init__(self, serverid, storage_server,
252                  sharesize, blocksize, num_segments, num_share_hashes,
253                  storage_index,
254                  bucket_renewal_secret, bucket_cancel_secret):
255hunk ./src/allmydata/immutable/upload.py 76
256-        precondition(isinstance(peerid, str), peerid)
257-        precondition(len(peerid) == 20, peerid)
258-        self.peerid = peerid
259+        precondition(isinstance(serverid, str), serverid)
260+        precondition(len(serverid) == 20, serverid)
261+        self.serverid = serverid
262         self._storageserver = storage_server # to an RIStorageServer
263         self.buckets = {} # k: shareid, v: IRemoteBucketWriter
264         self.sharesize = sharesize
265hunk ./src/allmydata/immutable/upload.py 86
266         wbp = layout.make_write_bucket_proxy(None, sharesize,
267                                              blocksize, num_segments,
268                                              num_share_hashes,
269-                                             EXTENSION_SIZE, peerid)
270+                                             EXTENSION_SIZE, serverid)
271         self.wbp_class = wbp.__class__ # to create more of them
272         self.allocated_size = wbp.get_allocated_size()
273         self.blocksize = blocksize
274hunk ./src/allmydata/immutable/upload.py 98
275         self.cancel_secret = bucket_cancel_secret
276 
277     def __repr__(self):
278-        return ("<PeerTracker for peer %s and SI %s>"
279-                % (idlib.shortnodeid_b2a(self.peerid),
280+        return ("<ServerTracker for server %s and SI %s>"
281+                % (idlib.shortnodeid_b2a(self.serverid),
282                    si_b2a(self.storage_index)[:5]))
283 
284     def query(self, sharenums):
285hunk ./src/allmydata/immutable/upload.py 126
286                                 self.num_segments,
287                                 self.num_share_hashes,
288                                 EXTENSION_SIZE,
289-                                self.peerid)
290+                                self.serverid)
291             b[sharenum] = bp
292         self.buckets.update(b)
293         return (alreadygot, set(b.keys()))
294hunk ./src/allmydata/immutable/upload.py 152
295 def str_shareloc(shnum, bucketwriter):
296     return "%s: %s" % (shnum, idlib.shortnodeid_b2a(bucketwriter._nodeid),)
297 
298-class Tahoe2PeerSelector(log.PrefixingLogMixin):
299+class Tahoe2ServerSelector(log.PrefixingLogMixin):
300 
301     def __init__(self, upload_id, logparent=None, upload_status=None):
302         self.upload_id = upload_id
303hunk ./src/allmydata/immutable/upload.py 157
304         self.query_count, self.good_query_count, self.bad_query_count = 0,0,0
305-        # Peers that are working normally, but full.
306+        # Servers that are working normally, but full.
307         self.full_count = 0
308         self.error_count = 0
309hunk ./src/allmydata/immutable/upload.py 160
310-        self.num_peers_contacted = 0
311+        self.num_servers_contacted = 0
312         self.last_failure_msg = None
313         self._status = IUploadStatus(upload_status)
314         log.PrefixingLogMixin.__init__(self, 'tahoe.immutable.upload', logparent, prefix=upload_id)
315hunk ./src/allmydata/immutable/upload.py 167
316         self.log("starting", level=log.OPERATIONAL)
317 
318     def __repr__(self):
319-        return "<Tahoe2PeerSelector for upload %s>" % self.upload_id
320+        return "<Tahoe2ServerSelector for upload %s>" % self.upload_id
321 
322     def get_shareholders(self, storage_broker, secret_holder,
323                          storage_index, share_size, block_size,
324hunk ./src/allmydata/immutable/upload.py 174
325                          num_segments, total_shares, needed_shares,
326                          servers_of_happiness):
327         """
328-        @return: (upload_servers, already_peers), where upload_servers is a set of
329-                 PeerTracker instances that have agreed to hold some shares
330-                 for us (the shareids are stashed inside the PeerTracker),
331-                 and already_peers is a dict mapping shnum to a set of peers
332-                 which claim to already have the share.
333+        @return: (upload_servers, already_servers), where upload_servers is
334+                 a set of ServerTracker instances that have agreed to hold
335+                 some shares for us (the shareids are stashed inside the
336+                 ServerTracker), and already_servers is a dict mapping shnum
337+                 to a set of servers which claim to already have the share.
338         """
339 
340         if self._status:
341hunk ./src/allmydata/immutable/upload.py 182
342-            self._status.set_status("Contacting Peers..")
343+            self._status.set_status("Contacting Servers..")
344 
345         self.total_shares = total_shares
346         self.servers_of_happiness = servers_of_happiness
347hunk ./src/allmydata/immutable/upload.py 189
348         self.needed_shares = needed_shares
349 
350         self.homeless_shares = set(range(total_shares))
351-        self.contacted_peers = [] # peers worth asking again
352-        self.contacted_peers2 = [] # peers that we have asked again
353+        self.contacted_servers = [] # servers worth asking again
354+        self.contacted_servers2 = [] # servers that we have asked again
355         self._started_second_pass = False
356hunk ./src/allmydata/immutable/upload.py 192
357-        self.use_peers = set() # PeerTrackers that have shares assigned to them
358-        self.preexisting_shares = {} # shareid => set(peerids) holding shareid
359+        self.use_servers = set() # ServerTrackers that have shares assigned
360+                                 # to them
361+        self.preexisting_shares = {} # shareid => set(serverids) holding shareid
362         # We don't try to allocate shares to these servers, since they've said
363         # that they're incapable of storing shares of the size that we'd want
364         # to store. We keep them around because they may have existing shares
365hunk ./src/allmydata/immutable/upload.py 201
366         # for this storage index, which we want to know about for accurate
367         # servers_of_happiness accounting
368         # (this is eventually a list, but it is initialized later)
369-        self.readonly_peers = None
370-        # These peers have shares -- any shares -- for our SI. We keep
371+        self.readonly_servers = None
372+        # These servers have shares -- any shares -- for our SI. We keep
373         # track of these to write an error message with them later.
374hunk ./src/allmydata/immutable/upload.py 204
375-        self.peers_with_shares = set()
376+        self.servers_with_shares = set()
377 
378         # this needed_hashes computation should mirror
379         # Encoder.send_all_share_hash_trees. We use an IncompleteHashTree
380hunk ./src/allmydata/immutable/upload.py 218
381                                              num_share_hashes, EXTENSION_SIZE,
382                                              None)
383         allocated_size = wbp.get_allocated_size()
384-        all_peers = [(s.get_serverid(), s.get_rref())
385-                     for s in storage_broker.get_servers_for_psi(storage_index)]
386-        if not all_peers:
387-            raise NoServersError("client gave us zero peers")
388+        all_servers = [(s.get_serverid(), s.get_rref())
389+                       for s in storage_broker.get_servers_for_psi(storage_index)]
390+        if not all_servers:
391+            raise NoServersError("client gave us zero servers")
392 
393hunk ./src/allmydata/immutable/upload.py 223
394-        # filter the list of peers according to which ones can accomodate
395-        # this request. This excludes older peers (which used a 4-byte size
396+        # filter the list of servers according to which ones can accomodate
397+        # this request. This excludes older servers (which used a 4-byte size
398         # field) from getting large shares (for files larger than about
399         # 12GiB). See #439 for details.
400hunk ./src/allmydata/immutable/upload.py 227
401-        def _get_maxsize(peer):
402-            (peerid, conn) = peer
403+        def _get_maxsize(server):
404+            (serverid, conn) = server
405             v1 = conn.version["http://allmydata.org/tahoe/protocols/storage/v1"]
406             return v1["maximum-immutable-share-size"]
407hunk ./src/allmydata/immutable/upload.py 231
408-        writable_peers = [peer for peer in all_peers
409-                          if _get_maxsize(peer) >= allocated_size]
410-        readonly_peers = set(all_peers[:2*total_shares]) - set(writable_peers)
411+        writable_servers = [server for server in all_servers
412+                            if _get_maxsize(server) >= allocated_size]
413+        readonly_servers = set(all_servers[:2*total_shares]) - set(writable_servers)
414 
415         # decide upon the renewal/cancel secrets, to include them in the
416         # allocate_buckets query.
417hunk ./src/allmydata/immutable/upload.py 244
418                                                        storage_index)
419         file_cancel_secret = file_cancel_secret_hash(client_cancel_secret,
420                                                      storage_index)
421-        def _make_trackers(peers):
422-           return [PeerTracker(peerid, conn,
423-                               share_size, block_size,
424-                               num_segments, num_share_hashes,
425-                               storage_index,
426-                               bucket_renewal_secret_hash(file_renewal_secret,
427-                                                          peerid),
428-                               bucket_cancel_secret_hash(file_cancel_secret,
429-                                                         peerid))
430-                    for (peerid, conn) in peers]
431-        self.uncontacted_peers = _make_trackers(writable_peers)
432-        self.readonly_peers = _make_trackers(readonly_peers)
433-        # We now ask peers that can't hold any new shares about existing
434+        def _make_trackers(servers):
435+           return [ServerTracker(serverid, conn,
436+                                 share_size, block_size,
437+                                 num_segments, num_share_hashes,
438+                                 storage_index,
439+                                 bucket_renewal_secret_hash(file_renewal_secret,
440+                                                            serverid),
441+                                 bucket_cancel_secret_hash(file_cancel_secret,
442+                                                           serverid))
443+                   for (serverid, conn) in servers]
444+        self.uncontacted_servers = _make_trackers(writable_servers)
445+        self.readonly_servers = _make_trackers(readonly_servers)
446+        # We now ask servers that can't hold any new shares about existing
447         # shares that they might have for our SI. Once this is done, we
448         # start placing the shares that we haven't already accounted
449         # for.
450hunk ./src/allmydata/immutable/upload.py 261
451         ds = []
452-        if self._status and self.readonly_peers:
453-            self._status.set_status("Contacting readonly peers to find "
454+        if self._status and self.readonly_servers:
455+            self._status.set_status("Contacting readonly servers to find "
456                                     "any existing shares")
457hunk ./src/allmydata/immutable/upload.py 264
458-        for peer in self.readonly_peers:
459-            assert isinstance(peer, PeerTracker)
460-            d = peer.ask_about_existing_shares()
461-            d.addBoth(self._handle_existing_response, peer.peerid)
462+        for server in self.readonly_servers:
463+            assert isinstance(server, ServerTracker)
464+            d = server.ask_about_existing_shares()
465+            d.addBoth(self._handle_existing_response, server.serverid)
466             ds.append(d)
467hunk ./src/allmydata/immutable/upload.py 269
468-            self.num_peers_contacted += 1
469+            self.num_servers_contacted += 1
470             self.query_count += 1
471hunk ./src/allmydata/immutable/upload.py 271
472-            self.log("asking peer %s for any existing shares" %
473-                     (idlib.shortnodeid_b2a(peer.peerid),),
474+            self.log("asking server %s for any existing shares" %
475+                     (idlib.shortnodeid_b2a(server.serverid),),
476                     level=log.NOISY)
477         dl = defer.DeferredList(ds)
478         dl.addCallback(lambda ign: self._loop())
479hunk ./src/allmydata/immutable/upload.py 279
480         return dl
481 
482 
483-    def _handle_existing_response(self, res, peer):
484+    def _handle_existing_response(self, res, server):
485         """
486         I handle responses to the queries sent by
487hunk ./src/allmydata/immutable/upload.py 282
488-        Tahoe2PeerSelector._existing_shares.
489+        Tahoe2ServerSelector._existing_shares.
490         """
491         if isinstance(res, failure.Failure):
492             self.log("%s got error during existing shares check: %s"
493hunk ./src/allmydata/immutable/upload.py 286
494-                    % (idlib.shortnodeid_b2a(peer), res),
495+                    % (idlib.shortnodeid_b2a(server), res),
496                     level=log.UNUSUAL)
497             self.error_count += 1
498             self.bad_query_count += 1
499hunk ./src/allmydata/immutable/upload.py 293
500         else:
501             buckets = res
502             if buckets:
503-                self.peers_with_shares.add(peer)
504-            self.log("response to get_buckets() from peer %s: alreadygot=%s"
505-                    % (idlib.shortnodeid_b2a(peer), tuple(sorted(buckets))),
506+                self.servers_with_shares.add(server)
507+            self.log("response to get_buckets() from server %s: alreadygot=%s"
508+                    % (idlib.shortnodeid_b2a(server), tuple(sorted(buckets))),
509                     level=log.NOISY)
510             for bucket in buckets:
511hunk ./src/allmydata/immutable/upload.py 298
512-                self.preexisting_shares.setdefault(bucket, set()).add(peer)
513+                self.preexisting_shares.setdefault(bucket, set()).add(server)
514                 self.homeless_shares.discard(bucket)
515             self.full_count += 1
516             self.bad_query_count += 1
517hunk ./src/allmydata/immutable/upload.py 314
518                     len(self.homeless_shares)))
519         return (msg + "want to place shares on at least %d servers such that "
520                       "any %d of them have enough shares to recover the file, "
521-                      "sent %d queries to %d peers, "
522+                      "sent %d queries to %d servers, "
523                       "%d queries placed some shares, %d placed none "
524                       "(of which %d placed none due to the server being"
525                       " full and %d placed none due to an error)" %
526hunk ./src/allmydata/immutable/upload.py 319
527                         (self.servers_of_happiness, self.needed_shares,
528-                         self.query_count, self.num_peers_contacted,
529+                         self.query_count, self.num_servers_contacted,
530                          self.good_query_count, self.bad_query_count,
531                          self.full_count, self.error_count))
532 
533hunk ./src/allmydata/immutable/upload.py 326
534 
535     def _loop(self):
536         if not self.homeless_shares:
537-            merged = merge_peers(self.preexisting_shares, self.use_peers)
538+            merged = merge_peers(self.preexisting_shares, self.use_servers)
539             effective_happiness = servers_of_happiness(merged)
540             if self.servers_of_happiness <= effective_happiness:
541                 msg = ("server selection successful for %s: %s: pretty_print_merged: %s, "
542hunk ./src/allmydata/immutable/upload.py 330
543-                    "self.use_peers: %s, self.preexisting_shares: %s") \
544-                        % (self, self._get_progress_message(),
545-                        pretty_print_shnum_to_servers(merged),
546-                        [', '.join([str_shareloc(k,v) for k,v in p.buckets.iteritems()])
547-                            for p in self.use_peers],
548-                        pretty_print_shnum_to_servers(self.preexisting_shares))
549+                       "self.use_servers: %s, self.preexisting_shares: %s") \
550+                       % (self, self._get_progress_message(),
551+                          pretty_print_shnum_to_servers(merged),
552+                          [', '.join([str_shareloc(k,v)
553+                                      for k,v in s.buckets.iteritems()])
554+                           for s in self.use_servers],
555+                          pretty_print_shnum_to_servers(self.preexisting_shares))
556                 self.log(msg, level=log.OPERATIONAL)
557hunk ./src/allmydata/immutable/upload.py 338
558-                return (self.use_peers, self.preexisting_shares)
559+                return (self.use_servers, self.preexisting_shares)
560             else:
561                 # We're not okay right now, but maybe we can fix it by
562                 # redistributing some shares. In cases where one or two
563hunk ./src/allmydata/immutable/upload.py 344
564                 # servers has, before the upload, all or most of the
565                 # shares for a given SI, this can work by allowing _loop
566-                # a chance to spread those out over the other peers,
567+                # a chance to spread those out over the other servers,
568                 delta = self.servers_of_happiness - effective_happiness
569                 shares = shares_by_server(self.preexisting_shares)
570                 # Each server in shares maps to a set of shares stored on it.
571hunk ./src/allmydata/immutable/upload.py 355
572                 shares_to_spread = sum([len(list(sharelist)) - 1
573                                         for (server, sharelist)
574                                         in shares.items()])
575-                if delta <= len(self.uncontacted_peers) and \
576+                if delta <= len(self.uncontacted_servers) and \
577                    shares_to_spread >= delta:
578                     items = shares.items()
579                     while len(self.homeless_shares) < delta:
580hunk ./src/allmydata/immutable/upload.py 371
581                             if not self.preexisting_shares[share]:
582                                 del self.preexisting_shares[share]
583                             items.append((server, sharelist))
584-                        for writer in self.use_peers:
585+                        for writer in self.use_servers:
586                             writer.abort_some_buckets(self.homeless_shares)
587                     return self._loop()
588                 else:
589hunk ./src/allmydata/immutable/upload.py 376
590                     # Redistribution won't help us; fail.
591-                    peer_count = len(self.peers_with_shares)
592-                    failmsg = failure_message(peer_count,
593-                                          self.needed_shares,
594-                                          self.servers_of_happiness,
595-                                          effective_happiness)
596+                    server_count = len(self.servers_with_shares)
597+                    failmsg = failure_message(server_count,
598+                                              self.needed_shares,
599+                                              self.servers_of_happiness,
600+                                              effective_happiness)
601                     servmsgtempl = "server selection unsuccessful for %r: %s (%s), merged=%s"
602                     servmsg = servmsgtempl % (
603                         self,
604hunk ./src/allmydata/immutable/upload.py 391
605                     self.log(servmsg, level=log.INFREQUENT)
606                     return self._failed("%s (%s)" % (failmsg, self._get_progress_message()))
607 
608-        if self.uncontacted_peers:
609-            peer = self.uncontacted_peers.pop(0)
610-            # TODO: don't pre-convert all peerids to PeerTrackers
611-            assert isinstance(peer, PeerTracker)
612+        if self.uncontacted_servers:
613+            server = self.uncontacted_servers.pop(0)
614+            # TODO: don't pre-convert all serverids to ServerTrackers
615+            assert isinstance(server, ServerTracker)
616 
617             shares_to_ask = set(sorted(self.homeless_shares)[:1])
618             self.homeless_shares -= shares_to_ask
619hunk ./src/allmydata/immutable/upload.py 399
620             self.query_count += 1
621-            self.num_peers_contacted += 1
622+            self.num_servers_contacted += 1
623             if self._status:
624hunk ./src/allmydata/immutable/upload.py 401
625-                self._status.set_status("Contacting Peers [%s] (first query),"
626+                self._status.set_status("Contacting Servers [%s] (first query),"
627                                         " %d shares left.."
628hunk ./src/allmydata/immutable/upload.py 403
629-                                        % (idlib.shortnodeid_b2a(peer.peerid),
630+                                        % (idlib.shortnodeid_b2a(server.serverid),
631                                            len(self.homeless_shares)))
632hunk ./src/allmydata/immutable/upload.py 405
633-            d = peer.query(shares_to_ask)
634-            d.addBoth(self._got_response, peer, shares_to_ask,
635-                      self.contacted_peers)
636+            d = server.query(shares_to_ask)
637+            d.addBoth(self._got_response, server, shares_to_ask,
638+                      self.contacted_servers)
639             return d
640hunk ./src/allmydata/immutable/upload.py 409
641-        elif self.contacted_peers:
642-            # ask a peer that we've already asked.
643+        elif self.contacted_servers:
644+            # ask a server that we've already asked.
645             if not self._started_second_pass:
646                 self.log("starting second pass",
647                         level=log.NOISY)
648hunk ./src/allmydata/immutable/upload.py 416
649                 self._started_second_pass = True
650             num_shares = mathutil.div_ceil(len(self.homeless_shares),
651-                                           len(self.contacted_peers))
652-            peer = self.contacted_peers.pop(0)
653+                                           len(self.contacted_servers))
654+            server = self.contacted_servers.pop(0)
655             shares_to_ask = set(sorted(self.homeless_shares)[:num_shares])
656             self.homeless_shares -= shares_to_ask
657             self.query_count += 1
658hunk ./src/allmydata/immutable/upload.py 422
659             if self._status:
660-                self._status.set_status("Contacting Peers [%s] (second query),"
661+                self._status.set_status("Contacting Servers [%s] (second query),"
662                                         " %d shares left.."
663hunk ./src/allmydata/immutable/upload.py 424
664-                                        % (idlib.shortnodeid_b2a(peer.peerid),
665+                                        % (idlib.shortnodeid_b2a(server.serverid),
666                                            len(self.homeless_shares)))
667hunk ./src/allmydata/immutable/upload.py 426
668-            d = peer.query(shares_to_ask)
669-            d.addBoth(self._got_response, peer, shares_to_ask,
670-                      self.contacted_peers2)
671+            d = server.query(shares_to_ask)
672+            d.addBoth(self._got_response, server, shares_to_ask,
673+                      self.contacted_servers2)
674             return d
675hunk ./src/allmydata/immutable/upload.py 430
676-        elif self.contacted_peers2:
677+        elif self.contacted_servers2:
678             # we've finished the second-or-later pass. Move all the remaining
679hunk ./src/allmydata/immutable/upload.py 432
680-            # peers back into self.contacted_peers for the next pass.
681-            self.contacted_peers.extend(self.contacted_peers2)
682-            self.contacted_peers2[:] = []
683+            # servers back into self.contacted_servers for the next pass.
684+            self.contacted_servers.extend(self.contacted_servers2)
685+            self.contacted_servers2[:] = []
686             return self._loop()
687         else:
688hunk ./src/allmydata/immutable/upload.py 437
689-            # no more peers. If we haven't placed enough shares, we fail.
690-            merged = merge_peers(self.preexisting_shares, self.use_peers)
691+            # no more servers. If we haven't placed enough shares, we fail.
692+            merged = merge_peers(self.preexisting_shares, self.use_servers)
693             effective_happiness = servers_of_happiness(merged)
694             if effective_happiness < self.servers_of_happiness:
695hunk ./src/allmydata/immutable/upload.py 441
696-                msg = failure_message(len(self.peers_with_shares),
697+                msg = failure_message(len(self.servers_with_shares),
698                                       self.needed_shares,
699                                       self.servers_of_happiness,
700                                       effective_happiness)
701hunk ./src/allmydata/immutable/upload.py 445
702-                msg = ("peer selection failed for %s: %s (%s)" % (self,
703-                                msg,
704-                                self._get_progress_message()))
705+                msg = ("server selection failed for %s: %s (%s)" %
706+                       (self, msg, self._get_progress_message()))
707                 if self.last_failure_msg:
708                     msg += " (%s)" % (self.last_failure_msg,)
709                 self.log(msg, level=log.UNUSUAL)
710hunk ./src/allmydata/immutable/upload.py 458
711                 msg = ("server selection successful (no more servers) for %s: %s: %s" % (self,
712                             self._get_progress_message(), pretty_print_shnum_to_servers(merged)))
713                 self.log(msg, level=log.OPERATIONAL)
714-                return (self.use_peers, self.preexisting_shares)
715+                return (self.use_servers, self.preexisting_shares)
716 
717hunk ./src/allmydata/immutable/upload.py 460
718-    def _got_response(self, res, peer, shares_to_ask, put_peer_here):
719+    def _got_response(self, res, server, shares_to_ask, put_server_here):
720         if isinstance(res, failure.Failure):
721             # This is unusual, and probably indicates a bug or a network
722             # problem.
723hunk ./src/allmydata/immutable/upload.py 464
724-            self.log("%s got error during peer selection: %s" % (peer, res),
725+            self.log("%s got error during server selection: %s" % (server, res),
726                     level=log.UNUSUAL)
727             self.error_count += 1
728             self.bad_query_count += 1
729hunk ./src/allmydata/immutable/upload.py 469
730             self.homeless_shares |= shares_to_ask
731-            if (self.uncontacted_peers
732-                or self.contacted_peers
733-                or self.contacted_peers2):
734+            if (self.uncontacted_servers
735+                or self.contacted_servers
736+                or self.contacted_servers2):
737                 # there is still hope, so just loop
738                 pass
739             else:
740hunk ./src/allmydata/immutable/upload.py 475
741-                # No more peers, so this upload might fail (it depends upon
742+                # No more servers, so this upload might fail (it depends upon
743                 # whether we've hit servers_of_happiness or not). Log the last
744hunk ./src/allmydata/immutable/upload.py 477
745-                # failure we got: if a coding error causes all peers to fail
746+                # failure we got: if a coding error causes all servers to fail
747                 # in the same way, this allows the common failure to be seen
748                 # by the uploader and should help with debugging
749hunk ./src/allmydata/immutable/upload.py 480
750-                msg = ("last failure (from %s) was: %s" % (peer, res))
751+                msg = ("last failure (from %s) was: %s" % (server, res))
752                 self.last_failure_msg = msg
753         else:
754             (alreadygot, allocated) = res
755hunk ./src/allmydata/immutable/upload.py 484
756-            self.log("response to allocate_buckets() from peer %s: alreadygot=%s, allocated=%s"
757-                    % (idlib.shortnodeid_b2a(peer.peerid),
758+            self.log("response to allocate_buckets() from server %s: alreadygot=%s, allocated=%s"
759+                    % (idlib.shortnodeid_b2a(server.serverid),
760                        tuple(sorted(alreadygot)), tuple(sorted(allocated))),
761                     level=log.NOISY)
762             progress = False
763hunk ./src/allmydata/immutable/upload.py 490
764             for s in alreadygot:
765-                self.preexisting_shares.setdefault(s, set()).add(peer.peerid)
766+                self.preexisting_shares.setdefault(s, set()).add(server.serverid)
767                 if s in self.homeless_shares:
768                     self.homeless_shares.remove(s)
769                     progress = True
770hunk ./src/allmydata/immutable/upload.py 497
771                 elif s in shares_to_ask:
772                     progress = True
773 
774-            # the PeerTracker will remember which shares were allocated on
775+            # the ServerTracker will remember which shares were allocated on
776             # that peer. We just have to remember to use them.
777             if allocated:
778hunk ./src/allmydata/immutable/upload.py 500
779-                self.use_peers.add(peer)
780+                self.use_servers.add(server)
781                 progress = True
782 
783             if allocated or alreadygot:
784hunk ./src/allmydata/immutable/upload.py 504
785-                self.peers_with_shares.add(peer.peerid)
786+                self.servers_with_shares.add(server.serverid)
787 
788             not_yet_present = set(shares_to_ask) - set(alreadygot)
789             still_homeless = not_yet_present - set(allocated)
790hunk ./src/allmydata/immutable/upload.py 521
791 
792             if still_homeless:
793                 # In networks with lots of space, this is very unusual and
794-                # probably indicates an error. In networks with peers that
795+                # probably indicates an error. In networks with servers that
796                 # are full, it is merely unusual. In networks that are very
797                 # full, it is common, and many uploads will fail. In most
798                 # cases, this is obviously not fatal, and we'll just use some
799hunk ./src/allmydata/immutable/upload.py 525
800-                # other peers.
801+                # other servers.
802 
803                 # some shares are still homeless, keep trying to find them a
804                 # home. The ones that were rejected get first priority.
805hunk ./src/allmydata/immutable/upload.py 535
806             else:
807                 # if they *were* able to accept everything, they might be
808                 # willing to accept even more.
809-                put_peer_here.append(peer)
810+                put_server_here.append(server)
811 
812         # now loop
813         return self._loop()
814hunk ./src/allmydata/immutable/upload.py 543
815 
816     def _failed(self, msg):
817         """
818-        I am called when peer selection fails. I first abort all of the
819+        I am called when server selection fails. I first abort all of the
820         remote buckets that I allocated during my unsuccessful attempt to
821         place shares for this file. I then raise an
822         UploadUnhappinessError with my msg argument.
823hunk ./src/allmydata/immutable/upload.py 548
824         """
825-        for peer in self.use_peers:
826-            assert isinstance(peer, PeerTracker)
827+        for server in self.use_servers:
828+            assert isinstance(server, ServerTracker)
829 
830hunk ./src/allmydata/immutable/upload.py 551
831-            peer.abort()
832+            server.abort()
833 
834         raise UploadUnhappinessError(msg)
835 
836hunk ./src/allmydata/immutable/upload.py 829
837         self.results = value
838 
839 class CHKUploader:
840-    peer_selector_class = Tahoe2PeerSelector
841+    server_selector_class = Tahoe2ServerSelector
842 
843     def __init__(self, storage_broker, secret_holder):
844hunk ./src/allmydata/immutable/upload.py 832
845-        # peer_selector needs storage_broker and secret_holder
846+        # server_selector needs storage_broker and secret_holder
847         self._storage_broker = storage_broker
848         self._secret_holder = secret_holder
849         self._log_number = self.log("CHKUploader starting", parent=None)
850hunk ./src/allmydata/immutable/upload.py 845
851         self._upload_status.set_results(self._results)
852 
853         # locate_all_shareholders() will create the following attribute:
854-        # self._peer_trackers = {} # k: shnum, v: instance of PeerTracker
855+        # self._server_trackers = {} # k: shnum, v: instance of ServerTracker
856 
857     def log(self, *args, **kwargs):
858         if "parent" not in kwargs:
859hunk ./src/allmydata/immutable/upload.py 896
860         return d
861 
862     def locate_all_shareholders(self, encoder, started):
863-        peer_selection_started = now = time.time()
864+        server_selection_started = now = time.time()
865         self._storage_index_elapsed = now - started
866         storage_broker = self._storage_broker
867         secret_holder = self._secret_holder
868hunk ./src/allmydata/immutable/upload.py 904
869         self._storage_index = storage_index
870         upload_id = si_b2a(storage_index)[:5]
871         self.log("using storage index %s" % upload_id)
872-        peer_selector = self.peer_selector_class(upload_id, self._log_number,
873-                                                 self._upload_status)
874+        server_selector = self.server_selector_class(upload_id,
875+                                                     self._log_number,
876+                                                     self._upload_status)
877 
878         share_size = encoder.get_param("share_size")
879         block_size = encoder.get_param("block_size")
880hunk ./src/allmydata/immutable/upload.py 913
881         num_segments = encoder.get_param("num_segments")
882         k,desired,n = encoder.get_param("share_counts")
883 
884-        self._peer_selection_started = time.time()
885-        d = peer_selector.get_shareholders(storage_broker, secret_holder,
886-                                           storage_index,
887-                                           share_size, block_size,
888-                                           num_segments, n, k, desired)
889+        self._server_selection_started = time.time()
890+        d = server_selector.get_shareholders(storage_broker, secret_holder,
891+                                             storage_index,
892+                                             share_size, block_size,
893+                                             num_segments, n, k, desired)
894         def _done(res):
895hunk ./src/allmydata/immutable/upload.py 919
896-            self._peer_selection_elapsed = time.time() - peer_selection_started
897+            self._server_selection_elapsed = time.time() - server_selection_started
898             return res
899         d.addCallback(_done)
900         return d
901hunk ./src/allmydata/immutable/upload.py 924
902 
903-    def set_shareholders(self, (upload_servers, already_peers), encoder):
904+    def set_shareholders(self, (upload_servers, already_servers), encoder):
905         """
906hunk ./src/allmydata/immutable/upload.py 926
907-        @param upload_servers: a sequence of PeerTracker objects that have agreed to hold some
908-            shares for us (the shareids are stashed inside the PeerTracker)
909-        @paran already_peers: a dict mapping sharenum to a set of peerids
910-                              that claim to already have this share
911+        @param upload_servers: a sequence of ServerTracker objects that
912+                               have agreed to hold some shares for us (the
913+                               shareids are stashed inside the ServerTracker)
914+        @paran already_servers: a dict mapping sharenum to a set of serverids
915+                                that claim to already have this share
916         """
917hunk ./src/allmydata/immutable/upload.py 932
918-        msgtempl = "set_shareholders; upload_servers is %s, already_peers is %s"
919-        values = ([', '.join([str_shareloc(k,v) for k,v in p.buckets.iteritems()])
920-            for p in upload_servers], already_peers)
921+        msgtempl = "set_shareholders; upload_servers is %s, already_servers is %s"
922+        values = ([', '.join([str_shareloc(k,v) for k,v in s.buckets.iteritems()])
923+            for s in upload_servers], already_servers)
924         self.log(msgtempl % values, level=log.OPERATIONAL)
925         # record already-present shares in self._results
926hunk ./src/allmydata/immutable/upload.py 937
927-        self._results.preexisting_shares = len(already_peers)
928+        self._results.preexisting_shares = len(already_servers)
929 
930hunk ./src/allmydata/immutable/upload.py 939
931-        self._peer_trackers = {} # k: shnum, v: instance of PeerTracker
932-        for peer in upload_servers:
933-            assert isinstance(peer, PeerTracker)
934+        self._server_trackers = {} # k: shnum, v: instance of ServerTracker
935+        for server in upload_servers:
936+            assert isinstance(server, ServerTracker)
937         buckets = {}
938hunk ./src/allmydata/immutable/upload.py 943
939-        servermap = already_peers.copy()
940-        for peer in upload_servers:
941-            buckets.update(peer.buckets)
942-            for shnum in peer.buckets:
943-                self._peer_trackers[shnum] = peer
944-                servermap.setdefault(shnum, set()).add(peer.peerid)
945-        assert len(buckets) == sum([len(peer.buckets) for peer in upload_servers]), \
946+        servermap = already_servers.copy()
947+        for server in upload_servers:
948+            buckets.update(server.buckets)
949+            for shnum in server.buckets:
950+                self._server_trackers[shnum] = server
951+                servermap.setdefault(shnum, set()).add(server.serverid)
952+        assert len(buckets) == sum([len(server.buckets)
953+                                    for server in upload_servers]), \
954             "%s (%s) != %s (%s)" % (
955                 len(buckets),
956                 buckets,
957hunk ./src/allmydata/immutable/upload.py 954
958-                sum([len(peer.buckets) for peer in upload_servers]),
959-                [(p.buckets, p.peerid) for p in upload_servers]
960+                sum([len(server.buckets) for server in upload_servers]),
961+                [(s.buckets, s.serverid) for s in upload_servers]
962                 )
963         encoder.set_shareholders(buckets, servermap)
964 
965hunk ./src/allmydata/immutable/upload.py 963
966         """ Returns a Deferred that will fire with the UploadResults instance. """
967         r = self._results
968         for shnum in self._encoder.get_shares_placed():
969-            peer_tracker = self._peer_trackers[shnum]
970-            peerid = peer_tracker.peerid
971-            r.sharemap.add(shnum, peerid)
972-            r.servermap.add(peerid, shnum)
973+            server_tracker = self._server_trackers[shnum]
974+            serverid = server_tracker.serverid
975+            r.sharemap.add(shnum, serverid)
976+            r.servermap.add(serverid, shnum)
977         r.pushed_shares = len(self._encoder.get_shares_placed())
978         now = time.time()
979         r.file_size = self._encoder.file_size
980hunk ./src/allmydata/immutable/upload.py 972
981         r.timings["total"] = now - self._started
982         r.timings["storage_index"] = self._storage_index_elapsed
983-        r.timings["peer_selection"] = self._peer_selection_elapsed
984+        r.timings["peer_selection"] = self._server_selection_elapsed
985         r.timings.update(self._encoder.get_times())
986         r.uri_extension_data = self._encoder.get_uri_extension_data()
987         r.verifycapstr = verifycap.to_string()
988hunk ./src/allmydata/test/test_upload.py 196
989         self.num_servers = num_servers
990         if type(mode) is str:
991             mode = dict([i,mode] for i in range(num_servers))
992-        peers = [ ("%20d"%fakeid, FakeStorageServer(mode[fakeid]))
993-                  for fakeid in range(self.num_servers) ]
994+        servers = [ ("%20d"%fakeid, FakeStorageServer(mode[fakeid]))
995+                    for fakeid in range(self.num_servers) ]
996         self.storage_broker = StorageFarmBroker(None, permute_peers=True)
997hunk ./src/allmydata/test/test_upload.py 199
998-        for (serverid, rref) in peers:
999+        for (serverid, rref) in servers:
1000             self.storage_broker.test_add_rref(serverid, rref)
1001hunk ./src/allmydata/test/test_upload.py 201
1002-        self.last_peers = [p[1] for p in peers]
1003+        self.last_servers = [s[1] for s in servers]
1004 
1005     def log(self, *args, **kwargs):
1006         pass
1007hunk ./src/allmydata/test/test_upload.py 414
1008     def test_first_error_all(self):
1009         self.make_node("first-fail")
1010         d = self.shouldFail(UploadUnhappinessError, "first_error_all",
1011-                            "peer selection failed",
1012+                            "server selection failed",
1013                             upload_data, self.u, DATA)
1014         def _check((f,)):
1015             self.failUnlessIn("placed 0 shares out of 100 total", str(f.value))
1016hunk ./src/allmydata/test/test_upload.py 446
1017     def test_second_error_all(self):
1018         self.make_node("second-fail")
1019         d = self.shouldFail(UploadUnhappinessError, "second_error_all",
1020-                            "peer selection failed",
1021+                            "server selection failed",
1022                             upload_data, self.u, DATA)
1023         def _check((f,)):
1024             self.failUnlessIn("placed 10 shares out of 100 total", str(f.value))
1025hunk ./src/allmydata/test/test_upload.py 471
1026         d.addBoth(self._should_fail)
1027         return d
1028 
1029-class PeerSelection(unittest.TestCase):
1030+class ServerSelection(unittest.TestCase):
1031 
1032     def make_client(self, num_servers=50):
1033         self.node = FakeClient(mode="good", num_servers=num_servers)
1034hunk ./src/allmydata/test/test_upload.py 500
1035         self.node.DEFAULT_ENCODING_PARAMETERS = p
1036 
1037     def test_one_each(self):
1038-        # if we have 50 shares, and there are 50 peers, and they all accept a
1039-        # share, we should get exactly one share per peer
1040+        # if we have 50 shares, and there are 50 servers, and they all accept
1041+        # a share, we should get exactly one share per server
1042 
1043         self.make_client()
1044         data = self.get_data(SIZE_LARGE)
1045hunk ./src/allmydata/test/test_upload.py 510
1046         d.addCallback(extract_uri)
1047         d.addCallback(self._check_large, SIZE_LARGE)
1048         def _check(res):
1049-            for p in self.node.last_peers:
1050-                allocated = p.allocated
1051+            for s in self.node.last_servers:
1052+                allocated = s.allocated
1053                 self.failUnlessEqual(len(allocated), 1)
1054hunk ./src/allmydata/test/test_upload.py 513
1055-                self.failUnlessEqual(p.queries, 1)
1056+                self.failUnlessEqual(s.queries, 1)
1057         d.addCallback(_check)
1058         return d
1059 
1060hunk ./src/allmydata/test/test_upload.py 518
1061     def test_two_each(self):
1062-        # if we have 100 shares, and there are 50 peers, and they all accept
1063-        # all shares, we should get exactly two shares per peer
1064+        # if we have 100 shares, and there are 50 servers, and they all
1065+        # accept all shares, we should get exactly two shares per server
1066 
1067         self.make_client()
1068         data = self.get_data(SIZE_LARGE)
1069hunk ./src/allmydata/test/test_upload.py 523
1070-        # if there are 50 peers, then happy needs to be <= 50
1071+        # if there are 50 servers, then happy needs to be <= 50
1072         self.set_encoding_parameters(50, 50, 100)
1073         d = upload_data(self.u, data)
1074         d.addCallback(extract_uri)
1075hunk ./src/allmydata/test/test_upload.py 529
1076         d.addCallback(self._check_large, SIZE_LARGE)
1077         def _check(res):
1078-            for p in self.node.last_peers:
1079-                allocated = p.allocated
1080+            for s in self.node.last_servers:
1081+                allocated = s.allocated
1082                 self.failUnlessEqual(len(allocated), 2)
1083hunk ./src/allmydata/test/test_upload.py 532
1084-                self.failUnlessEqual(p.queries, 2)
1085+                self.failUnlessEqual(s.queries, 2)
1086         d.addCallback(_check)
1087         return d
1088 
1089hunk ./src/allmydata/test/test_upload.py 537
1090     def test_one_each_plus_one_extra(self):
1091-        # if we have 51 shares, and there are 50 peers, then one peer gets
1092-        # two shares and the rest get just one
1093+        # if we have 51 shares, and there are 50 servers, then one server
1094+        # gets two shares and the rest get just one
1095 
1096         self.make_client()
1097         data = self.get_data(SIZE_LARGE)
1098hunk ./src/allmydata/test/test_upload.py 549
1099         def _check(res):
1100             got_one = []
1101             got_two = []
1102-            for p in self.node.last_peers:
1103-                allocated = p.allocated
1104+            for s in self.node.last_servers:
1105+                allocated = s.allocated
1106                 self.failUnless(len(allocated) in (1,2), len(allocated))
1107                 if len(allocated) == 1:
1108hunk ./src/allmydata/test/test_upload.py 553
1109-                    self.failUnlessEqual(p.queries, 1)
1110-                    got_one.append(p)
1111+                    self.failUnlessEqual(s.queries, 1)
1112+                    got_one.append(s)
1113                 else:
1114hunk ./src/allmydata/test/test_upload.py 556
1115-                    self.failUnlessEqual(p.queries, 2)
1116-                    got_two.append(p)
1117+                    self.failUnlessEqual(s.queries, 2)
1118+                    got_two.append(s)
1119             self.failUnlessEqual(len(got_one), 49)
1120             self.failUnlessEqual(len(got_two), 1)
1121         d.addCallback(_check)
1122hunk ./src/allmydata/test/test_upload.py 564
1123         return d
1124 
1125     def test_four_each(self):
1126-        # if we have 200 shares, and there are 50 peers, then each peer gets
1127-        # 4 shares. The design goal is to accomplish this with only two
1128-        # queries per peer.
1129+        # if we have 200 shares, and there are 50 servers, then each server
1130+        # gets 4 shares. The design goal is to accomplish this with only two
1131+        # queries per server.
1132 
1133         self.make_client()
1134         data = self.get_data(SIZE_LARGE)
1135hunk ./src/allmydata/test/test_upload.py 570
1136-        # if there are 50 peers, then happy should be no more than 50 if
1137-        # we want this to work.
1138+        # if there are 50 servers, then happy should be no more than 50 if we
1139+        # want this to work.
1140         self.set_encoding_parameters(100, 50, 200)
1141         d = upload_data(self.u, data)
1142         d.addCallback(extract_uri)
1143hunk ./src/allmydata/test/test_upload.py 577
1144         d.addCallback(self._check_large, SIZE_LARGE)
1145         def _check(res):
1146-            for p in self.node.last_peers:
1147-                allocated = p.allocated
1148+            for s in self.node.last_servers:
1149+                allocated = s.allocated
1150                 self.failUnlessEqual(len(allocated), 4)
1151hunk ./src/allmydata/test/test_upload.py 580
1152-                self.failUnlessEqual(p.queries, 2)
1153+                self.failUnlessEqual(s.queries, 2)
1154         d.addCallback(_check)
1155         return d
1156 
1157hunk ./src/allmydata/test/test_upload.py 596
1158         d.addCallback(self._check_large, SIZE_LARGE)
1159         def _check(res):
1160             counts = {}
1161-            for p in self.node.last_peers:
1162-                allocated = p.allocated
1163+            for s in self.node.last_servers:
1164+                allocated = s.allocated
1165                 counts[len(allocated)] = counts.get(len(allocated), 0) + 1
1166             histogram = [counts.get(i, 0) for i in range(5)]
1167             self.failUnlessEqual(histogram, [0,0,0,2,1])
1168hunk ./src/allmydata/test/test_upload.py 619
1169         d.addCallback(extract_uri)
1170         d.addCallback(self._check_large, SIZE_LARGE)
1171         def _check(res):
1172-            # we should have put one share each on the big peers, and zero
1173-            # shares on the small peers
1174+            # we should have put one share each on the big servers, and zero
1175+            # shares on the small servers
1176             total_allocated = 0
1177hunk ./src/allmydata/test/test_upload.py 622
1178-            for p in self.node.last_peers:
1179+            for p in self.node.last_servers:
1180                 if p.mode == "good":
1181                     self.failUnlessEqual(len(p.allocated), 1)
1182                 elif p.mode == "small":
1183hunk ./src/allmydata/test/test_upload.py 753
1184     def _do_upload_with_broken_servers(self, servers_to_break):
1185         """
1186         I act like a normal upload, but before I send the results of
1187-        Tahoe2PeerSelector to the Encoder, I break the first servers_to_break
1188-        PeerTrackers in the upload_servers part of the return result.
1189+        Tahoe2ServerSelector to the Encoder, I break the first
1190+        servers_to_break ServerTrackers in the upload_servers part of the
1191+        return result.
1192         """
1193         assert self.g, "I tried to find a grid at self.g, but failed"
1194         broker = self.g.clients[0].storage_broker
1195hunk ./src/allmydata/test/test_upload.py 768
1196         encoder = encode.Encoder()
1197         encoder.set_encrypted_uploadable(uploadable)
1198         status = upload.UploadStatus()
1199-        selector = upload.Tahoe2PeerSelector("dglev", "test", status)
1200+        selector = upload.Tahoe2ServerSelector("dglev", "test", status)
1201         storage_index = encoder.get_param("storage_index")
1202         share_size = encoder.get_param("share_size")
1203         block_size = encoder.get_param("block_size")
1204hunk ./src/allmydata/test/test_upload.py 776
1205         d = selector.get_shareholders(broker, sh, storage_index,
1206                                       share_size, block_size, num_segments,
1207                                       10, 3, 4)
1208-        def _have_shareholders((upload_servers, already_peers)):
1209+        def _have_shareholders((upload_servers, already_servers)):
1210             assert servers_to_break <= len(upload_servers)
1211             for index in xrange(servers_to_break):
1212                 server = list(upload_servers)[index]
1213hunk ./src/allmydata/test/test_upload.py 783
1214                 for share in server.buckets.keys():
1215                     server.buckets[share].abort()
1216             buckets = {}
1217-            servermap = already_peers.copy()
1218-            for peer in upload_servers:
1219-                buckets.update(peer.buckets)
1220-                for bucket in peer.buckets:
1221-                    servermap.setdefault(bucket, set()).add(peer.peerid)
1222+            servermap = already_servers.copy()
1223+            for server in upload_servers:
1224+                buckets.update(server.buckets)
1225+                for bucket in server.buckets:
1226+                    servermap.setdefault(bucket, set()).add(server.serverid)
1227             encoder.set_shareholders(buckets, servermap)
1228             d = encoder.start()
1229             return d
1230hunk ./src/allmydata/test/test_upload.py 1058
1231         # one share from our initial upload to each of these.
1232         # The counterintuitive ordering of the share numbers is to deal with
1233         # the permuting of these servers -- distributing the shares this
1234-        # way ensures that the Tahoe2PeerSelector sees them in the order
1235+        # way ensures that the Tahoe2ServerSelector sees them in the order
1236         # described below.
1237         d = self._setup_and_upload()
1238         d.addCallback(lambda ign:
1239hunk ./src/allmydata/test/test_upload.py 1073
1240         # server 2: share 0
1241         # server 3: share 1
1242         # We change the 'happy' parameter in the client to 4.
1243-        # The Tahoe2PeerSelector will see the peers permuted as:
1244+        # The Tahoe2ServerSelector will see the servers permuted as:
1245         # 2, 3, 1, 0
1246         # Ideally, a reupload of our original data should work.
1247         def _reset_encoding_parameters(ign, happy=4):
1248hunk ./src/allmydata/test/test_upload.py 1088
1249 
1250 
1251         # This scenario is basically comment:53, but changed so that the
1252-        # Tahoe2PeerSelector sees the server with all of the shares before
1253+        # Tahoe2ServerSelector sees the server with all of the shares before
1254         # any of the other servers.
1255         # The layout is:
1256         # server 2: shares 0 - 9
1257hunk ./src/allmydata/test/test_upload.py 1095
1258         # server 3: share 0
1259         # server 1: share 1
1260         # server 4: share 2
1261-        # The Tahoe2PeerSelector sees the peers permuted as:
1262+        # The Tahoe2ServerSelector sees the servers permuted as:
1263         # 2, 3, 1, 4
1264         # Note that server 0 has been replaced by server 4; this makes it
1265hunk ./src/allmydata/test/test_upload.py 1098
1266-        # easier to ensure that the last server seen by Tahoe2PeerSelector
1267+        # easier to ensure that the last server seen by Tahoe2ServerSelector
1268         # has only one share.
1269         d.addCallback(_change_basedir)
1270         d.addCallback(lambda ign:
1271hunk ./src/allmydata/test/test_upload.py 1128
1272 
1273 
1274         # Try the same thing, but with empty servers after the first one
1275-        # We want to make sure that Tahoe2PeerSelector will redistribute
1276+        # We want to make sure that Tahoe2ServerSelector will redistribute
1277         # shares as necessary, not simply discover an existing layout.
1278         # The layout is:
1279         # server 2: shares 0 - 9
1280hunk ./src/allmydata/test/test_upload.py 1188
1281         return d
1282     test_problem_layout_ticket_1124.todo = "Fix this after 1.7.1 release."
1283 
1284-    def test_happiness_with_some_readonly_peers(self):
1285+    def test_happiness_with_some_readonly_servers(self):
1286         # Try the following layout
1287         # server 2: shares 0-9
1288         # server 4: share 0, read-only
1289hunk ./src/allmydata/test/test_upload.py 1227
1290         return d
1291 
1292 
1293-    def test_happiness_with_all_readonly_peers(self):
1294+    def test_happiness_with_all_readonly_servers(self):
1295         # server 3: share 1, read-only
1296         # server 1: share 2, read-only
1297         # server 2: shares 0-9, read-only
1298hunk ./src/allmydata/test/test_upload.py 1233
1299         # server 4: share 0, read-only
1300         # The idea with this test is to make sure that the survey of
1301-        # read-only peers doesn't undercount servers of happiness
1302+        # read-only servers doesn't undercount servers of happiness
1303         self.basedir = self.mktemp()
1304         d = self._setup_and_upload()
1305         d.addCallback(lambda ign:
1306hunk ./src/allmydata/test/test_upload.py 1272
1307         # the layout presented to it satisfies "servers_of_happiness"
1308         # until a failure occurs)
1309         #
1310-        # This test simulates an upload where servers break after peer
1311+        # This test simulates an upload where servers break after server
1312         # selection, but before they are written to.
1313         def _set_basedir(ign=None):
1314             self.basedir = self.mktemp()
1315hunk ./src/allmydata/test/test_upload.py 1287
1316             self._add_server(server_number=5)
1317         d.addCallback(_do_server_setup)
1318         # remove the original server
1319-        # (necessary to ensure that the Tahoe2PeerSelector will distribute
1320+        # (necessary to ensure that the Tahoe2ServerSelector will distribute
1321         #  all the shares)
1322         def _remove_server(ign):
1323             server = self.g.servers_by_number[0]
1324hunk ./src/allmydata/test/test_upload.py 1347
1325 
1326     def test_merge_peers(self):
1327         # merge_peers merges a list of upload_servers and a dict of
1328-        # shareid -> peerid mappings.
1329+        # shareid -> serverid mappings.
1330         shares = {
1331                     1 : set(["server1"]),
1332                     2 : set(["server2"]),
1333hunk ./src/allmydata/test/test_upload.py 1358
1334         # if not provided with a upload_servers argument, it should just
1335         # return the first argument unchanged.
1336         self.failUnlessEqual(shares, merge_peers(shares, set([])))
1337-        class FakePeerTracker:
1338+        class FakeServerTracker:
1339             pass
1340         trackers = []
1341         for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
1342hunk ./src/allmydata/test/test_upload.py 1362
1343-            t = FakePeerTracker()
1344-            t.peerid = server
1345+            t = FakeServerTracker()
1346+            t.serverid = server
1347             t.buckets = [i]
1348             trackers.append(t)
1349         expected = {
1350hunk ./src/allmydata/test/test_upload.py 1390
1351         expected = {}
1352         for (i, server) in [(i, "server%d" % i) for i in xrange(10)]:
1353             shares3[i] = set([server])
1354-            t = FakePeerTracker()
1355-            t.peerid = server
1356+            t = FakeServerTracker()
1357+            t.serverid = server
1358             t.buckets = [i]
1359             trackers.append(t)
1360             expected[i] = set([server])
1361hunk ./src/allmydata/test/test_upload.py 1407
1362         # value for given inputs.
1363 
1364         # servers_of_happiness expects a dict of
1365-        # shnum => set(peerids) as a preexisting shares argument.
1366+        # shnum => set(serverids) as a preexisting shares argument.
1367         test1 = {
1368                  1 : set(["server1"]),
1369                  2 : set(["server2"]),
1370hunk ./src/allmydata/test/test_upload.py 1421
1371         # should be 3 instead of 4.
1372         happy = servers_of_happiness(test1)
1373         self.failUnlessEqual(3, happy)
1374-        # The second argument of merge_peers should be a set of
1375-        # objects with peerid and buckets as attributes. In actual use,
1376-        # these will be PeerTracker instances, but for testing it is fine
1377-        # to make a FakePeerTracker whose job is to hold those instance
1378-        # variables to test that part.
1379-        class FakePeerTracker:
1380+        # The second argument of merge_peers should be a set of objects with
1381+        # serverid and buckets as attributes. In actual use, these will be
1382+        # ServerTracker instances, but for testing it is fine to make a
1383+        # FakeServerTracker whose job is to hold those instance variables to
1384+        # test that part.
1385+        class FakeServerTracker:
1386             pass
1387         trackers = []
1388         for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
1389hunk ./src/allmydata/test/test_upload.py 1430
1390-            t = FakePeerTracker()
1391-            t.peerid = server
1392+            t = FakeServerTracker()
1393+            t.serverid = server
1394             t.buckets = [i]
1395             trackers.append(t)
1396         # Recall that test1 is a server layout with servers_of_happiness
1397hunk ./src/allmydata/test/test_upload.py 1436
1398         # = 3.  Since there isn't any overlap between the shnum ->
1399-        # set([peerid]) correspondences in test1 and those in trackers,
1400+        # set([serverid]) correspondences in test1 and those in trackers,
1401         # the result here should be 7.
1402         test2 = merge_peers(test1, set(trackers))
1403         happy = servers_of_happiness(test2)
1404hunk ./src/allmydata/test/test_upload.py 1444
1405         # Now add an overlapping server to trackers. This is redundant,
1406         # so it should not cause the previously reported happiness value
1407         # to change.
1408-        t = FakePeerTracker()
1409-        t.peerid = "server1"
1410+        t = FakeServerTracker()
1411+        t.serverid = "server1"
1412         t.buckets = [1]
1413         trackers.append(t)
1414         test2 = merge_peers(test1, set(trackers))
1415hunk ./src/allmydata/test/test_upload.py 1463
1416             4 : set(['server4']),
1417         }
1418         trackers = []
1419-        t = FakePeerTracker()
1420-        t.peerid = 'server5'
1421+        t = FakeServerTracker()
1422+        t.serverid = 'server5'
1423         t.buckets = [4]
1424         trackers.append(t)
1425hunk ./src/allmydata/test/test_upload.py 1467
1426-        t = FakePeerTracker()
1427-        t.peerid = 'server6'
1428+        t = FakeServerTracker()
1429+        t.serverid = 'server6'
1430         t.buckets = [3, 5]
1431         trackers.append(t)
1432         # The value returned by servers_of_happiness is the size
1433hunk ./src/allmydata/test/test_upload.py 1473
1434         # of a maximum matching in the bipartite graph that
1435-        # servers_of_happiness() makes between peerids and share
1436+        # servers_of_happiness() makes between serverids and share
1437         # numbers. It should find something like this:
1438         # (server 1, share 1)
1439         # (server 2, share 2)
1440hunk ./src/allmydata/test/test_upload.py 1531
1441         sbs = shares_by_server(test1)
1442         self.failUnlessEqual(set([1, 2, 3]), sbs["server1"])
1443         self.failUnlessEqual(set([4, 5]), sbs["server2"])
1444-        # This should fail unless the peerid part of the mapping is a set
1445+        # This should fail unless the serverid part of the mapping is a set
1446         test2 = {1: "server1"}
1447         self.shouldFail(AssertionError,
1448                        "test_shares_by_server",
1449hunk ./src/allmydata/test/test_upload.py 1547
1450         # server 2: empty
1451         # server 3: empty
1452         # server 4: empty
1453-        # The purpose of this test is to make sure that the peer selector
1454+        # The purpose of this test is to make sure that the server selector
1455         # knows about the shares on server 1, even though it is read-only.
1456         # It used to simply filter these out, which would cause the test
1457         # to fail when servers_of_happiness = 4.
1458hunk ./src/allmydata/test/test_upload.py 1578
1459 
1460 
1461     def test_query_counting(self):
1462-        # If peer selection fails, Tahoe2PeerSelector prints out a lot
1463+        # If server selection fails, Tahoe2ServerSelector prints out a lot
1464         # of helpful diagnostic information, including query stats.
1465         # This test helps make sure that that information is accurate.
1466         self.basedir = self.mktemp()
1467hunk ./src/allmydata/test/test_upload.py 1601
1468                             c.upload, upload.Data("data" * 10000,
1469                                                   convergence="")))
1470         # Now try with some readonly servers. We want to make sure that
1471-        # the readonly peer share discovery phase is counted correctly.
1472+        # the readonly server share discovery phase is counted correctly.
1473         def _reset(ign):
1474             self.basedir = self.mktemp()
1475             self.g = None
1476hunk ./src/allmydata/test/test_upload.py 1672
1477         d.addCallback(lambda client:
1478             self.shouldFail(UploadUnhappinessError,
1479                             "test_upper_limit_on_readonly_queries",
1480-                            "sent 8 queries to 8 peers",
1481+                            "sent 8 queries to 8 servers",
1482                             client.upload,
1483                             upload.Data('data' * 10000, convergence="")))
1484         return d
1485hunk ./src/allmydata/test/test_upload.py 1678
1486 
1487 
1488-    def test_exception_messages_during_peer_selection(self):
1489+    def test_exception_messages_during_server_selection(self):
1490         # server 1: read-only, no shares
1491         # server 2: read-only, no shares
1492         # server 3: read-only, no shares
1493hunk ./src/allmydata/test/test_upload.py 1711
1494                             "total (10 homeless), want to place shares on at "
1495                             "least 4 servers such that any 3 of them have "
1496                             "enough shares to recover the file, "
1497-                            "sent 5 queries to 5 peers, 0 queries placed "
1498+                            "sent 5 queries to 5 servers, 0 queries placed "
1499                             "some shares, 5 placed none "
1500                             "(of which 5 placed none due to the server being "
1501                             "full and 0 placed none due to an error)",
1502hunk ./src/allmydata/test/test_upload.py 1752
1503                             "total (10 homeless), want to place shares on at "
1504                             "least 4 servers such that any 3 of them have "
1505                             "enough shares to recover the file, "
1506-                            "sent 5 queries to 5 peers, 0 queries placed "
1507+                            "sent 5 queries to 5 servers, 0 queries placed "
1508                             "some shares, 5 placed none "
1509                             "(of which 4 placed none due to the server being "
1510                             "full and 1 placed none due to an error)",
1511hunk ./src/allmydata/test/test_upload.py 2013
1512         return d
1513 
1514 
1515-    def test_peer_selector_bucket_abort(self):
1516-        # If peer selection for an upload fails due to an unhappy
1517-        # layout, the peer selection process should abort the buckets it
1518+    def test_server_selector_bucket_abort(self):
1519+        # If server selection for an upload fails due to an unhappy
1520+        # layout, the server selection process should abort the buckets it
1521         # allocates before failing, so that the space can be re-used.
1522         self.basedir = self.mktemp()
1523         self.set_up_grid(num_servers=5)
1524hunk ./src/allmydata/test/test_upload.py 2028
1525         d = defer.succeed(None)
1526         d.addCallback(lambda ignored:
1527             self.shouldFail(UploadUnhappinessError,
1528-                            "test_peer_selection_bucket_abort",
1529+                            "test_server_selection_bucket_abort",
1530                             "",
1531                             client.upload, upload.Data("data" * 10000,
1532                                                        convergence="")))
1533hunk ./src/allmydata/test/test_upload.py 2083
1534         return None
1535 
1536 # TODO:
1537-#  upload with exactly 75 peers (shares_of_happiness)
1538+#  upload with exactly 75 servers (shares_of_happiness)
1539 #  have a download fail
1540 #  cancel a download (need to implement more cancel stuff)
1541 
1542hunk ./src/allmydata/util/happinessutil.py 77
1543 
1544     for peer in upload_servers:
1545         for shnum in peer.buckets:
1546-            servermap.setdefault(shnum, set()).add(peer.peerid)
1547+            servermap.setdefault(shnum, set()).add(peer.serverid)
1548     return servermap
1549 
1550 def servers_of_happiness(sharemap):
1551}
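Earlier in this patch the test comments describe servers_of_happiness() as the size of a maximum matching in the bipartite graph between serverids and share numbers. The following is a minimal standalone sketch of that computation, not Tahoe's implementation; the function name and the example layout are hypothetical.

    def happiness(sharemap):
        # sharemap: shnum -> set(serverid), the same shape the tests pass around
        matched = {}  # serverid -> shnum currently matched to that server

        def augment(shnum, visited):
            # try to give 'shnum' a server, reassigning earlier matches if needed
            for serverid in sharemap.get(shnum, set()):
                if serverid in visited:
                    continue
                visited.add(serverid)
                if serverid not in matched or augment(matched[serverid], visited):
                    matched[serverid] = shnum
                    return True
            return False

        return sum(1 for shnum in sharemap if augment(shnum, set()))

    # hypothetical layout: server1 holds shares 1 and 2, server2 holds share 3;
    # only two distinct servers can be matched to distinct shares, so happiness is 2
    layout = {1: set(["server1"]), 2: set(["server1"]), 3: set(["server2"])}
    assert happiness(layout) == 2

The matching size, rather than a raw server count, is what keeps redundant placements (the same share on many servers) from inflating the happiness value.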
1552[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
1553warner@lothar.com**20110227011103
1554 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
1555] {
1556hunk ./src/allmydata/immutable/upload.py 189
1557         self.needed_shares = needed_shares
1558 
1559         self.homeless_shares = set(range(total_shares))
1560-        self.contacted_servers = [] # servers worth asking again
1561-        self.contacted_servers2 = [] # servers that we have asked again
1562+        self.contacted_trackers = [] # servers worth asking again
1563+        self.contacted_trackers2 = [] # servers that we have asked again
1564         self._started_second_pass = False
1565hunk ./src/allmydata/immutable/upload.py 192
1566-        self.use_servers = set() # ServerTrackers that have shares assigned
1567-                                 # to them
1568+        self.use_trackers = set() # ServerTrackers that have shares assigned
1569+                                  # to them
1570         self.preexisting_shares = {} # shareid => set(serverids) holding shareid
1571hunk ./src/allmydata/immutable/upload.py 195
1572-        # We don't try to allocate shares to these servers, since they've said
1573-        # that they're incapable of storing shares of the size that we'd want
1574-        # to store. We keep them around because they may have existing shares
1575-        # for this storage index, which we want to know about for accurate
1576-        # servers_of_happiness accounting
1577-        # (this is eventually a list, but it is initialized later)
1578-        self.readonly_servers = None
1579+
1580         # These servers have shares -- any shares -- for our SI. We keep
1581         # track of these to write an error message with them later.
1582         self.servers_with_shares = set()
1583hunk ./src/allmydata/immutable/upload.py 248
1584                                  bucket_cancel_secret_hash(file_cancel_secret,
1585                                                            serverid))
1586                    for (serverid, conn) in servers]
1587-        self.uncontacted_servers = _make_trackers(writable_servers)
1588-        self.readonly_servers = _make_trackers(readonly_servers)
1589+        self.uncontacted_trackers = _make_trackers(writable_servers)
1590+
1591+        # We don't try to allocate shares to these servers, since they've
1592+        # said that they're incapable of storing shares of the size that we'd
1593+        # want to store. We ask them about existing shares for this storage
1594+        # index, which we want to know about for accurate
1595+        # servers_of_happiness accounting, then we forget about them.
1596+        readonly_trackers = _make_trackers(readonly_servers)
1597+
1598         # We now ask servers that can't hold any new shares about existing
1599         # shares that they might have for our SI. Once this is done, we
1600         # start placing the shares that we haven't already accounted
1601hunk ./src/allmydata/immutable/upload.py 262
1602         # for.
1603         ds = []
1604-        if self._status and self.readonly_servers:
1605+        if self._status and readonly_trackers:
1606             self._status.set_status("Contacting readonly servers to find "
1607                                     "any existing shares")
1608hunk ./src/allmydata/immutable/upload.py 265
1609-        for server in self.readonly_servers:
1610-            assert isinstance(server, ServerTracker)
1611-            d = server.ask_about_existing_shares()
1612-            d.addBoth(self._handle_existing_response, server.serverid)
1613+        for tracker in readonly_trackers:
1614+            assert isinstance(tracker, ServerTracker)
1615+            d = tracker.ask_about_existing_shares()
1616+            d.addBoth(self._handle_existing_response, tracker.serverid)
1617             ds.append(d)
1618             self.num_servers_contacted += 1
1619             self.query_count += 1
1620hunk ./src/allmydata/immutable/upload.py 273
1621             self.log("asking server %s for any existing shares" %
1622-                     (idlib.shortnodeid_b2a(server.serverid),),
1623+                     (idlib.shortnodeid_b2a(tracker.serverid),),
1624                     level=log.NOISY)
1625         dl = defer.DeferredList(ds)
1626         dl.addCallback(lambda ign: self._loop())
1627hunk ./src/allmydata/immutable/upload.py 327
1628 
1629     def _loop(self):
1630         if not self.homeless_shares:
1631-            merged = merge_peers(self.preexisting_shares, self.use_servers)
1632+            merged = merge_peers(self.preexisting_shares, self.use_trackers)
1633             effective_happiness = servers_of_happiness(merged)
1634             if self.servers_of_happiness <= effective_happiness:
1635                 msg = ("server selection successful for %s: %s: pretty_print_merged: %s, "
1636hunk ./src/allmydata/immutable/upload.py 331
1637-                       "self.use_servers: %s, self.preexisting_shares: %s") \
1638+                       "self.use_trackers: %s, self.preexisting_shares: %s") \
1639                        % (self, self._get_progress_message(),
1640                           pretty_print_shnum_to_servers(merged),
1641                           [', '.join([str_shareloc(k,v)
1642hunk ./src/allmydata/immutable/upload.py 335
1643-                                      for k,v in s.buckets.iteritems()])
1644-                           for s in self.use_servers],
1645+                                      for k,v in st.buckets.iteritems()])
1646+                           for st in self.use_trackers],
1647                           pretty_print_shnum_to_servers(self.preexisting_shares))
1648                 self.log(msg, level=log.OPERATIONAL)
1649hunk ./src/allmydata/immutable/upload.py 339
1650-                return (self.use_servers, self.preexisting_shares)
1651+                return (self.use_trackers, self.preexisting_shares)
1652             else:
1653                 # We're not okay right now, but maybe we can fix it by
1654                 # redistributing some shares. In cases where one or two
1655hunk ./src/allmydata/immutable/upload.py 356
1656                 shares_to_spread = sum([len(list(sharelist)) - 1
1657                                         for (server, sharelist)
1658                                         in shares.items()])
1659-                if delta <= len(self.uncontacted_servers) and \
1660+                if delta <= len(self.uncontacted_trackers) and \
1661                    shares_to_spread >= delta:
1662                     items = shares.items()
1663                     while len(self.homeless_shares) < delta:
1664hunk ./src/allmydata/immutable/upload.py 372
1665                             if not self.preexisting_shares[share]:
1666                                 del self.preexisting_shares[share]
1667                             items.append((server, sharelist))
1668-                        for writer in self.use_servers:
1669+                        for writer in self.use_trackers:
1670                             writer.abort_some_buckets(self.homeless_shares)
1671                     return self._loop()
1672                 else:
1673hunk ./src/allmydata/immutable/upload.py 392
1674                     self.log(servmsg, level=log.INFREQUENT)
1675                     return self._failed("%s (%s)" % (failmsg, self._get_progress_message()))
1676 
1677-        if self.uncontacted_servers:
1678-            server = self.uncontacted_servers.pop(0)
1679+        if self.uncontacted_trackers:
1680+            tracker = self.uncontacted_trackers.pop(0)
1681             # TODO: don't pre-convert all serverids to ServerTrackers
1682hunk ./src/allmydata/immutable/upload.py 395
1683-            assert isinstance(server, ServerTracker)
1684+            assert isinstance(tracker, ServerTracker)
1685 
1686             shares_to_ask = set(sorted(self.homeless_shares)[:1])
1687             self.homeless_shares -= shares_to_ask
1688hunk ./src/allmydata/immutable/upload.py 404
1689             if self._status:
1690                 self._status.set_status("Contacting Servers [%s] (first query),"
1691                                         " %d shares left.."
1692-                                        % (idlib.shortnodeid_b2a(server.serverid),
1693+                                        % (idlib.shortnodeid_b2a(tracker.serverid),
1694                                            len(self.homeless_shares)))
1695hunk ./src/allmydata/immutable/upload.py 406
1696-            d = server.query(shares_to_ask)
1697-            d.addBoth(self._got_response, server, shares_to_ask,
1698-                      self.contacted_servers)
1699+            d = tracker.query(shares_to_ask)
1700+            d.addBoth(self._got_response, tracker, shares_to_ask,
1701+                      self.contacted_trackers)
1702             return d
1703hunk ./src/allmydata/immutable/upload.py 410
1704-        elif self.contacted_servers:
1705+        elif self.contacted_trackers:
1706             # ask a server that we've already asked.
1707             if not self._started_second_pass:
1708                 self.log("starting second pass",
1709hunk ./src/allmydata/immutable/upload.py 417
1710                         level=log.NOISY)
1711                 self._started_second_pass = True
1712             num_shares = mathutil.div_ceil(len(self.homeless_shares),
1713-                                           len(self.contacted_servers))
1714-            server = self.contacted_servers.pop(0)
1715+                                           len(self.contacted_trackers))
1716+            tracker = self.contacted_trackers.pop(0)
1717             shares_to_ask = set(sorted(self.homeless_shares)[:num_shares])
1718             self.homeless_shares -= shares_to_ask
1719             self.query_count += 1
1720hunk ./src/allmydata/immutable/upload.py 425
1721             if self._status:
1722                 self._status.set_status("Contacting Servers [%s] (second query),"
1723                                         " %d shares left.."
1724-                                        % (idlib.shortnodeid_b2a(server.serverid),
1725+                                        % (idlib.shortnodeid_b2a(tracker.serverid),
1726                                            len(self.homeless_shares)))
1727hunk ./src/allmydata/immutable/upload.py 427
1728-            d = server.query(shares_to_ask)
1729-            d.addBoth(self._got_response, server, shares_to_ask,
1730-                      self.contacted_servers2)
1731+            d = tracker.query(shares_to_ask)
1732+            d.addBoth(self._got_response, tracker, shares_to_ask,
1733+                      self.contacted_trackers2)
1734             return d
1735hunk ./src/allmydata/immutable/upload.py 431
1736-        elif self.contacted_servers2:
1737+        elif self.contacted_trackers2:
1738             # we've finished the second-or-later pass. Move all the remaining
1739hunk ./src/allmydata/immutable/upload.py 433
1740-            # servers back into self.contacted_servers for the next pass.
1741-            self.contacted_servers.extend(self.contacted_servers2)
1742-            self.contacted_servers2[:] = []
1743+            # servers back into self.contacted_trackers for the next pass.
1744+            self.contacted_trackers.extend(self.contacted_trackers2)
1745+            self.contacted_trackers2[:] = []
1746             return self._loop()
1747         else:
1748             # no more servers. If we haven't placed enough shares, we fail.
1749hunk ./src/allmydata/immutable/upload.py 439
1750-            merged = merge_peers(self.preexisting_shares, self.use_servers)
1751+            merged = merge_peers(self.preexisting_shares, self.use_trackers)
1752             effective_happiness = servers_of_happiness(merged)
1753             if effective_happiness < self.servers_of_happiness:
1754                 msg = failure_message(len(self.servers_with_shares),
1755hunk ./src/allmydata/immutable/upload.py 459
1756                 msg = ("server selection successful (no more servers) for %s: %s: %s" % (self,
1757                             self._get_progress_message(), pretty_print_shnum_to_servers(merged)))
1758                 self.log(msg, level=log.OPERATIONAL)
1759-                return (self.use_servers, self.preexisting_shares)
1760+                return (self.use_trackers, self.preexisting_shares)
1761 
1762hunk ./src/allmydata/immutable/upload.py 461
1763-    def _got_response(self, res, server, shares_to_ask, put_server_here):
1764+    def _got_response(self, res, tracker, shares_to_ask, put_tracker_here):
1765         if isinstance(res, failure.Failure):
1766             # This is unusual, and probably indicates a bug or a network
1767             # problem.
1768hunk ./src/allmydata/immutable/upload.py 465
1769-            self.log("%s got error during server selection: %s" % (server, res),
1770+            self.log("%s got error during server selection: %s" % (tracker, res),
1771                     level=log.UNUSUAL)
1772             self.error_count += 1
1773             self.bad_query_count += 1
1774hunk ./src/allmydata/immutable/upload.py 470
1775             self.homeless_shares |= shares_to_ask
1776-            if (self.uncontacted_servers
1777-                or self.contacted_servers
1778-                or self.contacted_servers2):
1779+            if (self.uncontacted_trackers
1780+                or self.contacted_trackers
1781+                or self.contacted_trackers2):
1782                 # there is still hope, so just loop
1783                 pass
1784             else:
1785hunk ./src/allmydata/immutable/upload.py 481
1786                 # failure we got: if a coding error causes all servers to fail
1787                 # in the same way, this allows the common failure to be seen
1788                 # by the uploader and should help with debugging
1789-                msg = ("last failure (from %s) was: %s" % (server, res))
1790+                msg = ("last failure (from %s) was: %s" % (tracker, res))
1791                 self.last_failure_msg = msg
1792         else:
1793             (alreadygot, allocated) = res
1794hunk ./src/allmydata/immutable/upload.py 486
1795             self.log("response to allocate_buckets() from server %s: alreadygot=%s, allocated=%s"
1796-                    % (idlib.shortnodeid_b2a(server.serverid),
1797+                    % (idlib.shortnodeid_b2a(tracker.serverid),
1798                        tuple(sorted(alreadygot)), tuple(sorted(allocated))),
1799                     level=log.NOISY)
1800             progress = False
1801hunk ./src/allmydata/immutable/upload.py 491
1802             for s in alreadygot:
1803-                self.preexisting_shares.setdefault(s, set()).add(server.serverid)
1804+                self.preexisting_shares.setdefault(s, set()).add(tracker.serverid)
1805                 if s in self.homeless_shares:
1806                     self.homeless_shares.remove(s)
1807                     progress = True
1808hunk ./src/allmydata/immutable/upload.py 501
1809             # the ServerTracker will remember which shares were allocated on
1810             # that peer. We just have to remember to use them.
1811             if allocated:
1812-                self.use_servers.add(server)
1813+                self.use_trackers.add(tracker)
1814                 progress = True
1815 
1816             if allocated or alreadygot:
1817hunk ./src/allmydata/immutable/upload.py 505
1818-                self.servers_with_shares.add(server.serverid)
1819+                self.servers_with_shares.add(tracker.serverid)
1820 
1821             not_yet_present = set(shares_to_ask) - set(alreadygot)
1822             still_homeless = not_yet_present - set(allocated)
1823hunk ./src/allmydata/immutable/upload.py 536
1824             else:
1825                 # if they *were* able to accept everything, they might be
1826                 # willing to accept even more.
1827-                put_server_here.append(server)
1828+                put_tracker_here.append(tracker)
1829 
1830         # now loop
1831         return self._loop()
1832hunk ./src/allmydata/immutable/upload.py 549
1833         place shares for this file. I then raise an
1834         UploadUnhappinessError with my msg argument.
1835         """
1836-        for server in self.use_servers:
1837-            assert isinstance(server, ServerTracker)
1838-
1839-            server.abort()
1840-
1841+        for tracker in self.use_trackers:
1842+            assert isinstance(tracker, ServerTracker)
1843+            tracker.abort()
1844         raise UploadUnhappinessError(msg)
1845 
1846 
1847}
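The renames above make the selector's control flow easier to follow: trackers move from uncontacted_trackers into contacted_trackers after a first query, into contacted_trackers2 after a second-pass query, and are recycled for further passes until every share has a home or the trackers are exhausted. Below is a simplified, synchronous sketch of that rotation only; the real selector is Deferred-based and also tracks pre-existing shares and servers-of-happiness, and all names here are placeholders.

    def rotate_trackers(homeless_shares, uncontacted, ask):
        # ask(tracker, shares) -> set of shares the tracker accepted
        contacted = []    # trackers worth asking again
        contacted2 = []   # trackers that we have asked again this pass
        while homeless_shares:
            if uncontacted:
                tracker = uncontacted.pop(0)
                to_ask = set(sorted(homeless_shares)[:1])
                queue = contacted
            elif contacted:
                # ask for an even slice of what is left (ceiling division)
                num = -(-len(homeless_shares) // len(contacted))
                tracker = contacted.pop(0)
                to_ask = set(sorted(homeless_shares)[:num])
                queue = contacted2
            elif contacted2:
                # second-or-later pass finished: recycle the trackers
                contacted.extend(contacted2)
                contacted2[:] = []
                continue
            else:
                break  # no more trackers; the caller checks happiness and may fail
            homeless_shares -= to_ask
            accepted = ask(tracker, to_ask)
            homeless_shares |= (to_ask - accepted)
            if accepted == to_ask:
                queue.append(tracker)  # took everything asked; may take more later
        return homeless_shares

Each iteration either shrinks the homeless set or removes a tracker from the rotation, so the loop terminates even when servers keep refusing shares.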
1848[upload.py: more tracker-vs-server cleanup
1849warner@lothar.com**20110227011107
1850 Ignore-this: bb75ed2afef55e47c085b35def2de315
1851] {
1852hunk ./src/allmydata/immutable/upload.py 174
1853                          num_segments, total_shares, needed_shares,
1854                          servers_of_happiness):
1855         """
1856-        @return: (upload_servers, already_servers), where upload_servers is
1857+        @return: (upload_trackers, already_servers), where upload_trackers is
1858                  a set of ServerTracker instances that have agreed to hold
1859                  some shares for us (the shareids are stashed inside the
1860                  ServerTracker), and already_servers is a dict mapping shnum
1861hunk ./src/allmydata/immutable/upload.py 178
1862-                 to a set of servers which claim to already have the share.
1863+                 to a set of serverids which claim to already have the share.
1864         """
1865 
1866         if self._status:
1867hunk ./src/allmydata/immutable/upload.py 198
1868 
1869         # These servers have shares -- any shares -- for our SI. We keep
1870         # track of these to write an error message with them later.
1871-        self.servers_with_shares = set()
1872+        self.serverids_with_shares = set()
1873 
1874         # this needed_hashes computation should mirror
1875         # Encoder.send_all_share_hash_trees. We use an IncompleteHashTree
1876hunk ./src/allmydata/immutable/upload.py 280
1877         return dl
1878 
1879 
1880-    def _handle_existing_response(self, res, server):
1881+    def _handle_existing_response(self, res, serverid):
1882         """
1883         I handle responses to the queries sent by
1884         Tahoe2ServerSelector._existing_shares.
1885hunk ./src/allmydata/immutable/upload.py 287
1886         """
1887         if isinstance(res, failure.Failure):
1888             self.log("%s got error during existing shares check: %s"
1889-                    % (idlib.shortnodeid_b2a(server), res),
1890+                    % (idlib.shortnodeid_b2a(serverid), res),
1891                     level=log.UNUSUAL)
1892             self.error_count += 1
1893             self.bad_query_count += 1
1894hunk ./src/allmydata/immutable/upload.py 294
1895         else:
1896             buckets = res
1897             if buckets:
1898-                self.servers_with_shares.add(server)
1899+                self.serverids_with_shares.add(serverid)
1900             self.log("response to get_buckets() from server %s: alreadygot=%s"
1901hunk ./src/allmydata/immutable/upload.py 296
1902-                    % (idlib.shortnodeid_b2a(server), tuple(sorted(buckets))),
1903+                    % (idlib.shortnodeid_b2a(serverid), tuple(sorted(buckets))),
1904                     level=log.NOISY)
1905             for bucket in buckets:
1906hunk ./src/allmydata/immutable/upload.py 299
1907-                self.preexisting_shares.setdefault(bucket, set()).add(server)
1908+                self.preexisting_shares.setdefault(bucket, set()).add(serverid)
1909                 self.homeless_shares.discard(bucket)
1910             self.full_count += 1
1911             self.bad_query_count += 1
1912hunk ./src/allmydata/immutable/upload.py 377
1913                     return self._loop()
1914                 else:
1915                     # Redistribution won't help us; fail.
1916-                    server_count = len(self.servers_with_shares)
1917+                    server_count = len(self.serverids_with_shares)
1918                     failmsg = failure_message(server_count,
1919                                               self.needed_shares,
1920                                               self.servers_of_happiness,
1921hunk ./src/allmydata/immutable/upload.py 442
1922             merged = merge_peers(self.preexisting_shares, self.use_trackers)
1923             effective_happiness = servers_of_happiness(merged)
1924             if effective_happiness < self.servers_of_happiness:
1925-                msg = failure_message(len(self.servers_with_shares),
1926+                msg = failure_message(len(self.serverids_with_shares),
1927                                       self.needed_shares,
1928                                       self.servers_of_happiness,
1929                                       effective_happiness)
1930hunk ./src/allmydata/immutable/upload.py 505
1931                 progress = True
1932 
1933             if allocated or alreadygot:
1934-                self.servers_with_shares.add(tracker.serverid)
1935+                self.serverids_with_shares.add(tracker.serverid)
1936 
1937             not_yet_present = set(shares_to_ask) - set(alreadygot)
1938             still_homeless = not_yet_present - set(allocated)
1939hunk ./src/allmydata/immutable/upload.py 923
1940         d.addCallback(_done)
1941         return d
1942 
1943-    def set_shareholders(self, (upload_servers, already_servers), encoder):
1944+    def set_shareholders(self, (upload_trackers, already_servers), encoder):
1945         """
1946hunk ./src/allmydata/immutable/upload.py 925
1947-        @param upload_servers: a sequence of ServerTracker objects that
1948-                               have agreed to hold some shares for us (the
1949-                               shareids are stashed inside the ServerTracker)
1950+        @param upload_trackers: a sequence of ServerTracker objects that
1951+                                have agreed to hold some shares for us (the
1952+                                shareids are stashed inside the ServerTracker)
1953         @param already_servers: a dict mapping sharenum to a set of serverids
1954                                 that claim to already have this share
1955         """
1956hunk ./src/allmydata/immutable/upload.py 931
1957-        msgtempl = "set_shareholders; upload_servers is %s, already_servers is %s"
1958-        values = ([', '.join([str_shareloc(k,v) for k,v in s.buckets.iteritems()])
1959-            for s in upload_servers], already_servers)
1960+        msgtempl = "set_shareholders; upload_trackers is %s, already_servers is %s"
1961+        values = ([', '.join([str_shareloc(k,v)
1962+                              for k,v in st.buckets.iteritems()])
1963+                   for st in upload_trackers], already_servers)
1964         self.log(msgtempl % values, level=log.OPERATIONAL)
1965         # record already-present shares in self._results
1966         self._results.preexisting_shares = len(already_servers)
1967hunk ./src/allmydata/immutable/upload.py 940
1968 
1969         self._server_trackers = {} # k: shnum, v: instance of ServerTracker
1970-        for server in upload_servers:
1971-            assert isinstance(server, ServerTracker)
1972+        for tracker in upload_trackers:
1973+            assert isinstance(tracker, ServerTracker)
1974         buckets = {}
1975         servermap = already_servers.copy()
1976hunk ./src/allmydata/immutable/upload.py 944
1977-        for server in upload_servers:
1978-            buckets.update(server.buckets)
1979-            for shnum in server.buckets:
1980-                self._server_trackers[shnum] = server
1981-                servermap.setdefault(shnum, set()).add(server.serverid)
1982-        assert len(buckets) == sum([len(server.buckets)
1983-                                    for server in upload_servers]), \
1984+        for tracker in upload_trackers:
1985+            buckets.update(tracker.buckets)
1986+            for shnum in tracker.buckets:
1987+                self._server_trackers[shnum] = tracker
1988+                servermap.setdefault(shnum, set()).add(tracker.serverid)
1989+        assert len(buckets) == sum([len(tracker.buckets)
1990+                                    for tracker in upload_trackers]), \
1991             "%s (%s) != %s (%s)" % (
1992                 len(buckets),
1993                 buckets,
1994hunk ./src/allmydata/immutable/upload.py 954
1995-                sum([len(server.buckets) for server in upload_servers]),
1996-                [(s.buckets, s.serverid) for s in upload_servers]
1997+                sum([len(tracker.buckets) for tracker in upload_trackers]),
1998+                [(t.buckets, t.serverid) for t in upload_trackers]
1999                 )
2000         encoder.set_shareholders(buckets, servermap)
2001 
2002}
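The docstrings above spell out the shapes that flow from Tahoe2ServerSelector into the Encoder: get_shareholders() yields (upload_trackers, already_servers), and set_shareholders() folds both into a buckets dict plus a shnum -> set(serverid) servermap. Below is a self-contained sketch of that fold; the Tracker class, build_shareholders name, and server ids are hypothetical stand-ins, not the patched code.

    class Tracker(object):
        def __init__(self, serverid, buckets):
            self.serverid = serverid
            self.buckets = buckets   # shnum -> bucket writer (placeholder here)

    def build_shareholders(upload_trackers, already_servers):
        buckets = {}
        servermap = dict((shnum, set(ids))
                         for (shnum, ids) in already_servers.items())
        for tracker in upload_trackers:
            buckets.update(tracker.buckets)
            for shnum in tracker.buckets:
                servermap.setdefault(shnum, set()).add(tracker.serverid)
        return buckets, servermap

    trackers = [Tracker("serverA", {0: "bw0", 1: "bw1"}),
                Tracker("serverB", {2: "bw2"})]
    already = {3: set(["serverC"])}
    buckets, servermap = build_shareholders(trackers, already)
    assert sorted(buckets) == [0, 1, 2]
    assert servermap == {0: set(["serverA"]), 1: set(["serverA"]),
                         2: set(["serverB"]), 3: set(["serverC"])}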
2003[happinessutil.py: server-vs-tracker cleanup
2004warner@lothar.com**20110227011111
2005 Ignore-this: b856c84033562d7d718cae7cb01085a9
2006] {
2007hunk ./src/allmydata/util/happinessutil.py 57
2008             ret.setdefault(peerid, set()).add(shareid)
2009     return ret
2010 
2011-def merge_peers(servermap, upload_servers=None):
2012+def merge_peers(servermap, upload_trackers=None):
2013     """
2014     I accept a dict of shareid -> set(peerid) mappings, and optionally a
2015     set of PeerTrackers. If no set of PeerTrackers is provided, I return
2016hunk ./src/allmydata/util/happinessutil.py 69
2017     # context where it is okay to do that, make a copy of servermap and
2018     # work with it.
2019     servermap = deepcopy(servermap)
2020-    if not upload_servers:
2021+    if not upload_trackers:
2022         return servermap
2023 
2024     assert(isinstance(servermap, dict))
2025hunk ./src/allmydata/util/happinessutil.py 73
2026-    assert(isinstance(upload_servers, set))
2027+    assert(isinstance(upload_trackers, set))
2028 
2029hunk ./src/allmydata/util/happinessutil.py 75
2030-    for peer in upload_servers:
2031-        for shnum in peer.buckets:
2032-            servermap.setdefault(shnum, set()).add(peer.serverid)
2033+    for tracker in upload_trackers:
2034+        for shnum in tracker.buckets:
2035+            servermap.setdefault(shnum, set()).add(tracker.serverid)
2036     return servermap
2037 
2038 def servers_of_happiness(sharemap):
2039}
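The merge loop just above folds each tracker's buckets into a copy of the shnum -> set(serverid) servermap, so the caller's mapping is never mutated; happiness is then computed over the merged picture. A small usage sketch matching the shapes used in the tests follows; the FakeTracker class and server names are hypothetical, and the function is written under the name it gets later in this bundle.

    def merge_servers(servermap, upload_trackers=None):
        # copy first, as the patched helper does, so the input map is untouched
        merged = dict((shnum, set(ids)) for (shnum, ids) in servermap.items())
        for tracker in (upload_trackers or set()):
            for shnum in tracker.buckets:
                merged.setdefault(shnum, set()).add(tracker.serverid)
        return merged

    class FakeTracker(object):
        def __init__(self, serverid, buckets):
            self.serverid = serverid
            self.buckets = buckets

    existing = {1: set(["server1"]), 2: set(["server2"])}
    trackers = set([FakeTracker("server5", [4]), FakeTracker("server6", [3, 5])])
    merged = merge_servers(existing, trackers)
    assert merged[4] == set(["server5"]) and merged[5] == set(["server6"])
    assert existing == {1: set(["server1"]), 2: set(["server2"])}  # unchanged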
2040[test_upload.py: server-vs-tracker cleanup
2041warner@lothar.com**20110227011115
2042 Ignore-this: 2915133be1a3ba456e8603885437e03
2043] {
2044hunk ./src/allmydata/test/test_upload.py 776
2045         d = selector.get_shareholders(broker, sh, storage_index,
2046                                       share_size, block_size, num_segments,
2047                                       10, 3, 4)
2048-        def _have_shareholders((upload_servers, already_servers)):
2049-            assert servers_to_break <= len(upload_servers)
2050+        def _have_shareholders((upload_trackers, already_servers)):
2051+            assert servers_to_break <= len(upload_trackers)
2052             for index in xrange(servers_to_break):
2053hunk ./src/allmydata/test/test_upload.py 779
2054-                server = list(upload_servers)[index]
2055-                for share in server.buckets.keys():
2056-                    server.buckets[share].abort()
2057+                tracker = list(upload_trackers)[index]
2058+                for share in tracker.buckets.keys():
2059+                    tracker.buckets[share].abort()
2060             buckets = {}
2061             servermap = already_servers.copy()
2062hunk ./src/allmydata/test/test_upload.py 784
2063-            for server in upload_servers:
2064-                buckets.update(server.buckets)
2065-                for bucket in server.buckets:
2066-                    servermap.setdefault(bucket, set()).add(server.serverid)
2067+            for tracker in upload_trackers:
2068+                buckets.update(tracker.buckets)
2069+                for bucket in tracker.buckets:
2070+                    servermap.setdefault(bucket, set()).add(tracker.serverid)
2071             encoder.set_shareholders(buckets, servermap)
2072             d = encoder.start()
2073             return d
2074}
2075[test_upload.py: factor out FakeServerTracker
2076warner@lothar.com**20110227011120
2077 Ignore-this: 6c182cba90e908221099472cc159325b
2078] {
2079hunk ./src/allmydata/test/test_upload.py 728
2080     # print "HAAPP{Y"
2081     return True
2082 
2083+class FakeServerTracker:
2084+    def __init__(self, serverid, buckets):
2085+        self.serverid = serverid
2086+        self.buckets = buckets
2087+
2088 class EncodingParameters(GridTestMixin, unittest.TestCase, SetDEPMixin,
2089     ShouldFailMixin):
2090     def find_all_shares(self, unused=None):
2091hunk ./src/allmydata/test/test_upload.py 1363
2092         # if not provided with a upload_servers argument, it should just
2093         # return the first argument unchanged.
2094         self.failUnlessEqual(shares, merge_peers(shares, set([])))
2095-        class FakeServerTracker:
2096-            pass
2097         trackers = []
2098         for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
2099hunk ./src/allmydata/test/test_upload.py 1365
2100-            t = FakeServerTracker()
2101-            t.serverid = server
2102-            t.buckets = [i]
2103+            t = FakeServerTracker(server, [i])
2104             trackers.append(t)
2105         expected = {
2106                     1 : set(["server1"]),
2107hunk ./src/allmydata/test/test_upload.py 1391
2108         expected = {}
2109         for (i, server) in [(i, "server%d" % i) for i in xrange(10)]:
2110             shares3[i] = set([server])
2111-            t = FakeServerTracker()
2112-            t.serverid = server
2113-            t.buckets = [i]
2114+            t = FakeServerTracker(server, [i])
2115             trackers.append(t)
2116             expected[i] = set([server])
2117         self.failUnlessEqual(expected, merge_peers(shares3, set(trackers)))
2118hunk ./src/allmydata/test/test_upload.py 1425
2119         # ServerTracker instances, but for testing it is fine to make a
2120         # FakeServerTracker whose job is to hold those instance variables to
2121         # test that part.
2122-        class FakeServerTracker:
2123-            pass
2124         trackers = []
2125         for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
2126hunk ./src/allmydata/test/test_upload.py 1427
2127-            t = FakeServerTracker()
2128-            t.serverid = server
2129-            t.buckets = [i]
2130+            t = FakeServerTracker(server, [i])
2131             trackers.append(t)
2132         # Recall that test1 is a server layout with servers_of_happiness
2133         # = 3.  Since there isn't any overlap between the shnum ->
2134hunk ./src/allmydata/test/test_upload.py 1439
2135         # Now add an overlapping server to trackers. This is redundant,
2136         # so it should not cause the previously reported happiness value
2137         # to change.
2138-        t = FakeServerTracker()
2139-        t.serverid = "server1"
2140-        t.buckets = [1]
2141+        t = FakeServerTracker("server1", [1])
2142         trackers.append(t)
2143         test2 = merge_peers(test1, set(trackers))
2144         happy = servers_of_happiness(test2)
2145hunk ./src/allmydata/test/test_upload.py 1456
2146             4 : set(['server4']),
2147         }
2148         trackers = []
2149-        t = FakeServerTracker()
2150-        t.serverid = 'server5'
2151-        t.buckets = [4]
2152+        t = FakeServerTracker('server5', [4])
2153         trackers.append(t)
2154hunk ./src/allmydata/test/test_upload.py 1458
2155-        t = FakeServerTracker()
2156-        t.serverid = 'server6'
2157-        t.buckets = [3, 5]
2158+        t = FakeServerTracker('server6', [3, 5])
2159         trackers.append(t)
2160         # The value returned by servers_of_happiness is the size
2161         # of a maximum matching in the bipartite graph that
2162}
2163[happinessutil.py: finally rename merge_peers to merge_servers
2164warner@lothar.com**20110227011124
2165 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
2166] {
2167hunk ./src/allmydata/immutable/upload.py 17
2168 from allmydata.immutable import encode
2169 from allmydata.util import base32, dictutil, idlib, log, mathutil
2170 from allmydata.util.happinessutil import servers_of_happiness, \
2171-                                         shares_by_server, merge_peers, \
2172+                                         shares_by_server, merge_servers, \
2173                                          failure_message
2174 from allmydata.util.assertutil import precondition
2175 from allmydata.util.rrefutil import add_version_to_remote_reference
2176hunk ./src/allmydata/immutable/upload.py 327
2177 
2178     def _loop(self):
2179         if not self.homeless_shares:
2180-            merged = merge_peers(self.preexisting_shares, self.use_trackers)
2181+            merged = merge_servers(self.preexisting_shares, self.use_trackers)
2182             effective_happiness = servers_of_happiness(merged)
2183             if self.servers_of_happiness <= effective_happiness:
2184                 msg = ("server selection successful for %s: %s: pretty_print_merged: %s, "
2185hunk ./src/allmydata/immutable/upload.py 439
2186             return self._loop()
2187         else:
2188             # no more servers. If we haven't placed enough shares, we fail.
2189-            merged = merge_peers(self.preexisting_shares, self.use_trackers)
2190+            merged = merge_servers(self.preexisting_shares, self.use_trackers)
2191             effective_happiness = servers_of_happiness(merged)
2192             if effective_happiness < self.servers_of_happiness:
2193                 msg = failure_message(len(self.serverids_with_shares),
2194hunk ./src/allmydata/test/test_upload.py 20
2195 from allmydata.test.no_network import GridTestMixin
2196 from allmydata.test.common_util import ShouldFailMixin
2197 from allmydata.util.happinessutil import servers_of_happiness, \
2198-                                         shares_by_server, merge_peers
2199+                                         shares_by_server, merge_servers
2200 from allmydata.storage_client import StorageFarmBroker
2201 from allmydata.storage.server import storage_index_to_dir
2202 
2203hunk ./src/allmydata/test/test_upload.py 1350
2204         return d
2205 
2206 
2207-    def test_merge_peers(self):
2208-        # merge_peers merges a list of upload_servers and a dict of
2209+    def test_merge_servers(self):
2210+        # merge_servers merges a list of upload_servers and a dict of
2211         # shareid -> serverid mappings.
2212         shares = {
2213                     1 : set(["server1"]),
2214hunk ./src/allmydata/test/test_upload.py 1362
2215                  }
2216         # if not provided with an upload_servers argument, it should just
2217         # return the first argument unchanged.
2218-        self.failUnlessEqual(shares, merge_peers(shares, set([])))
2219+        self.failUnlessEqual(shares, merge_servers(shares, set([])))
2220         trackers = []
2221         for (i, server) in [(i, "server%d" % i) for i in xrange(5, 9)]:
2222             t = FakeServerTracker(server, [i])
2223hunk ./src/allmydata/test/test_upload.py 1377
2224                     7 : set(["server7"]),
2225                     8 : set(["server8"]),
2226                    }
2227-        self.failUnlessEqual(expected, merge_peers(shares, set(trackers)))
2228+        self.failUnlessEqual(expected, merge_servers(shares, set(trackers)))
2229         shares2 = {}
2230         expected = {
2231                     5 : set(["server5"]),
2232hunk ./src/allmydata/test/test_upload.py 1385
2233                     7 : set(["server7"]),
2234                     8 : set(["server8"]),
2235                    }
2236-        self.failUnlessEqual(expected, merge_peers(shares2, set(trackers)))
2237+        self.failUnlessEqual(expected, merge_servers(shares2, set(trackers)))
2238         shares3 = {}
2239         trackers = []
2240         expected = {}
2241hunk ./src/allmydata/test/test_upload.py 1394
2242             t = FakeServerTracker(server, [i])
2243             trackers.append(t)
2244             expected[i] = set([server])
2245-        self.failUnlessEqual(expected, merge_peers(shares3, set(trackers)))
2246+        self.failUnlessEqual(expected, merge_servers(shares3, set(trackers)))
2247 
2248 
2249     def test_servers_of_happiness_utility_function(self):
2250hunk ./src/allmydata/test/test_upload.py 1420
2251         # should be 3 instead of 4.
2252         happy = servers_of_happiness(test1)
2253         self.failUnlessEqual(3, happy)
2254-        # The second argument of merge_peers should be a set of objects with
2255+        # The second argument of merge_servers should be a set of objects with
2256         # serverid and buckets as attributes. In actual use, these will be
2257         # ServerTracker instances, but for testing it is fine to make a
2258         # FakeServerTracker whose job is to hold those instance variables to
2259hunk ./src/allmydata/test/test_upload.py 1433
2260         # = 3.  Since there isn't any overlap between the shnum ->
2261         # set([serverid]) correspondences in test1 and those in trackers,
2262         # the result here should be 7.
2263-        test2 = merge_peers(test1, set(trackers))
2264+        test2 = merge_servers(test1, set(trackers))
2265         happy = servers_of_happiness(test2)
2266         self.failUnlessEqual(7, happy)
2267         # Now add an overlapping server to trackers. This is redundant,
2268hunk ./src/allmydata/test/test_upload.py 1441
2269         # to change.
2270         t = FakeServerTracker("server1", [1])
2271         trackers.append(t)
2272-        test2 = merge_peers(test1, set(trackers))
2273+        test2 = merge_servers(test1, set(trackers))
2274         happy = servers_of_happiness(test2)
2275         self.failUnlessEqual(7, happy)
2276         test = {}
2277hunk ./src/allmydata/test/test_upload.py 1472
2278         #
2279         # and, since there are 5 edges in this matching, it should
2280         # return 5.
2281-        test2 = merge_peers(test, set(trackers))
2282+        test2 = merge_servers(test, set(trackers))
2283         happy = servers_of_happiness(test2)
2284         self.failUnlessEqual(5, happy)
2285         # Zooko's first puzzle:
2286hunk ./src/allmydata/util/happinessutil.py 57
2287             ret.setdefault(peerid, set()).add(shareid)
2288     return ret
2289 
2290-def merge_peers(servermap, upload_trackers=None):
2291+def merge_servers(servermap, upload_trackers=None):
2292     """
2293hunk ./src/allmydata/util/happinessutil.py 59
2294-    I accept a dict of shareid -> set(peerid) mappings, and optionally a
2295-    set of PeerTrackers. If no set of PeerTrackers is provided, I return
2296+    I accept a dict of shareid -> set(serverid) mappings, and optionally a
2297+    set of ServerTrackers. If no set of ServerTrackers is provided, I return
2298     my first argument unmodified. Otherwise, I update a copy of my first
2299hunk ./src/allmydata/util/happinessutil.py 62
2300-    argument to include the shareid -> peerid mappings implied in the
2301-    set of PeerTrackers, returning the resulting dict.
2302+    argument to include the shareid -> serverid mappings implied in the
2303+    set of ServerTrackers, returning the resulting dict.
2304     """
2305     # Since we mutate servermap, and are called outside of a
2306     # context where it is okay to do that, make a copy of servermap and
2307}
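
For review purposes, a minimal stand-alone sketch of the contract merge_servers keeps after this rename (not part of the patch; FakeTracker is a hypothetical stand-in exposing the serverid/buckets attributes the docstring above asks for):

    def merge_servers(servermap, upload_trackers=None):
        # servermap maps shnum -> set(serverid); upload_trackers is a set of
        # objects with .serverid and .buckets (ServerTracker in real use)
        if not upload_trackers:
            return servermap
        # copy deeply enough that the caller's sets are never mutated
        servermap = dict((shnum, set(ids)) for (shnum, ids) in servermap.items())
        for tracker in upload_trackers:
            for shnum in tracker.buckets:
                servermap.setdefault(shnum, set()).add(tracker.serverid)
        return servermap

    class FakeTracker:
        def __init__(self, serverid, buckets):
            self.serverid = serverid
            self.buckets = buckets

    existing = {1: set(["server1"])}
    merged = merge_servers(existing, set([FakeTracker("server5", [1, 4])]))
    # merged == {1: set(["server1", "server5"]), 4: set(["server5"])}
    # 'existing' is left untouched, matching the docstring's promise
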
2308[upload.py: rearrange _make_trackers a bit, no behavior changes
2309warner@lothar.com**20110227011128
2310 Ignore-this: 296d4819e2af452b107177aef6ebb40f
2311] hunk ./src/allmydata/immutable/upload.py 239
2312         file_cancel_secret = file_cancel_secret_hash(client_cancel_secret,
2313                                                      storage_index)
2314         def _make_trackers(servers):
2315-           return [ServerTracker(serverid, conn,
2316-                                 share_size, block_size,
2317-                                 num_segments, num_share_hashes,
2318-                                 storage_index,
2319-                                 bucket_renewal_secret_hash(file_renewal_secret,
2320-                                                            serverid),
2321-                                 bucket_cancel_secret_hash(file_cancel_secret,
2322-                                                           serverid))
2323-                   for (serverid, conn) in servers]
2324+            trackers = []
2325+            for (serverid, conn) in servers:
2326+                seed = serverid
2327+                renew = bucket_renewal_secret_hash(file_renewal_secret, seed)
2328+                cancel = bucket_cancel_secret_hash(file_cancel_secret, seed)
2329+                st = ServerTracker(serverid, conn,
2330+                                   share_size, block_size,
2331+                                   num_segments, num_share_hashes,
2332+                                   storage_index,
2333+                                   renew, cancel)
2334+                trackers.append(st)
2335+            return trackers
2336         self.uncontacted_trackers = _make_trackers(writable_servers)
2337 
2338         # We don't try to allocate shares to these servers, since they've
2339[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
2340warner@lothar.com**20110227011132
2341 Ignore-this: 6078279ddf42b179996a4b53bee8c421
2342 MockIServer stubs
2343] {
2344hunk ./src/allmydata/storage_client.py 182
2345 
2346     def __init__(self, serverid, ann_d, min_shares=1):
2347         self.serverid = serverid
2348+        self._tubid = serverid
2349         self.announcement = ann_d
2350         self.min_shares = min_shares
2351 
2352hunk ./src/allmydata/storage_client.py 195
2353         self._reconnector = None
2354         self._trigger_cb = None
2355 
2356+    def __repr__(self):
2357+        return "<NativeStorageServer for %s>" % self.name()
2358     def get_serverid(self):
2359hunk ./src/allmydata/storage_client.py 198
2360-        return self.serverid
2361+        return self._tubid
2362     def get_permutation_seed(self):
2363hunk ./src/allmydata/storage_client.py 200
2364-        return self.serverid
2365+        return self._tubid
2366+    def get_version(self):
2367+        if self.rref:
2368+            return self.rref.version
2369+        return None
2370+    def name(self): # keep methodname short
2371+        return self.serverid_s
2372+    def longname(self):
2373+        return idlib.nodeid_b2a(self._tubid)
2374+    def get_lease_seed(self):
2375+        return self._tubid
2376+    def get_foolscap_write_enabler_seed(self):
2377+        return self._tubid
2378 
2379     def get_nickname(self):
2380         return self.announcement["nickname"].decode("utf-8")
2381hunk ./src/allmydata/storage_client.py 233
2382         self._reconnector = tub.connectTo(furl, self._got_connection)
2383 
2384     def _got_connection(self, rref):
2385-        lp = log.msg(format="got connection to %(serverid)s, getting versions",
2386-                     serverid=self.serverid_s,
2387+        lp = log.msg(format="got connection to %(name)s, getting versions",
2388+                     name=self.name(),
2389                      facility="tahoe.storage_broker", umid="coUECQ")
2390         if self._trigger_cb:
2391             eventually(self._trigger_cb)
2392hunk ./src/allmydata/storage_client.py 242
2393         d = add_version_to_remote_reference(rref, default)
2394         d.addCallback(self._got_versioned_service, lp)
2395         d.addErrback(log.err, format="storageclient._got_connection",
2396-                     serverid=self.serverid_s, umid="Sdq3pg")
2397+                     name=self.name(), umid="Sdq3pg")
2398 
2399     def _got_versioned_service(self, rref, lp):
2400hunk ./src/allmydata/storage_client.py 245
2401-        log.msg(format="%(serverid)s provided version info %(version)s",
2402-                serverid=self.serverid_s, version=rref.version,
2403+        log.msg(format="%(name)s provided version info %(version)s",
2404+                name=self.name(), version=rref.version,
2405                 facility="tahoe.storage_broker", umid="SWmJYg",
2406                 level=log.NOISY, parent=lp)
2407 
2408hunk ./src/allmydata/storage_client.py 259
2409         return self.rref
2410 
2411     def _lost(self):
2412-        log.msg(format="lost connection to %(serverid)s",
2413-                serverid=self.serverid_s,
2414+        log.msg(format="lost connection to %(name)s", name=self.name(),
2415                 facility="tahoe.storage_broker", umid="zbRllw")
2416         self.last_loss_time = time.time()
2417         self.rref = None
2418hunk ./src/allmydata/test/no_network.py 124
2419     def __init__(self, serverid, rref):
2420         self.serverid = serverid
2421         self.rref = rref
2422+    def __repr__(self):
2423+        return "<NoNetworkServer for %s>" % self.name()
2424     def get_serverid(self):
2425         return self.serverid
2426     def get_permutation_seed(self):
2427hunk ./src/allmydata/test/no_network.py 130
2428         return self.serverid
2429+    def get_lease_seed(self):
2430+        return self.serverid
2431+    def name(self):
2432+        return idlib.shortnodeid_b2a(self.serverid)
2433+    def longname(self):
2434+        return idlib.nodeid_b2a(self.serverid)
2435+    def get_nickname(self):
2436+        return "nickname"
2437     def get_rref(self):
2438         return self.rref
2439hunk ./src/allmydata/test/no_network.py 140
2440+    def get_version(self):
2441+        return self.rref.version
2442 
2443 class NoNetworkStorageBroker:
2444     implements(IStorageBroker)
2445hunk ./src/allmydata/test/test_immutable.py 106
2446                 return self.serverid
2447             def get_rref(self):
2448                 return self.rref
2449+            def name(self):
2450+                return "name-%s" % self.serverid
2451+            def get_version(self):
2452+                return self.rref.version
2453 
2454         mockserver1 = MockServer({1: mock.Mock(), 2: mock.Mock()})
2455         mockserver2 = MockServer({})
2456}
2457[immutable/checker.py: remove some uses of s.get_serverid(), not all
2458warner@lothar.com**20110227011134
2459 Ignore-this: e480a37efa9e94e8016d826c492f626e
2460] {
2461hunk ./src/allmydata/immutable/checker.py 10
2462 from allmydata.check_results import CheckResults
2463 from allmydata.uri import CHKFileVerifierURI
2464 from allmydata.util.assertutil import precondition
2465-from allmydata.util import base32, idlib, deferredutil, dictutil, log, mathutil
2466+from allmydata.util import base32, deferredutil, dictutil, log, mathutil
2467 from allmydata.util.hashutil import file_renewal_secret_hash, \
2468      file_cancel_secret_hash, bucket_renewal_secret_hash, \
2469      bucket_cancel_secret_hash, uri_extension_hash, CRYPTO_VAL_SIZE, \
2470hunk ./src/allmydata/immutable/checker.py 484
2471                                       self._verifycap.get_storage_index())
2472         self.file_cancel_secret = fcs
2473 
2474-    def _get_renewal_secret(self, peerid):
2475-        return bucket_renewal_secret_hash(self.file_renewal_secret, peerid)
2476-    def _get_cancel_secret(self, peerid):
2477-        return bucket_cancel_secret_hash(self.file_cancel_secret, peerid)
2478+    def _get_renewal_secret(self, seed):
2479+        return bucket_renewal_secret_hash(self.file_renewal_secret, seed)
2480+    def _get_cancel_secret(self, seed):
2481+        return bucket_cancel_secret_hash(self.file_cancel_secret, seed)
2482 
2483     def _get_buckets(self, s, storageindex):
2484         """Return a deferred that eventually fires with ({sharenum: bucket},
2485hunk ./src/allmydata/immutable/checker.py 499
2486         responded.)"""
2487 
2488         rref = s.get_rref()
2489+        lease_seed = s.get_lease_seed()
2490         serverid = s.get_serverid()
2491         if self._add_lease:
2492hunk ./src/allmydata/immutable/checker.py 502
2493-            renew_secret = self._get_renewal_secret(serverid)
2494-            cancel_secret = self._get_cancel_secret(serverid)
2495+            renew_secret = self._get_renewal_secret(lease_seed)
2496+            cancel_secret = self._get_cancel_secret(lease_seed)
2497             d2 = rref.callRemote("add_lease", storageindex,
2498                                  renew_secret, cancel_secret)
2499hunk ./src/allmydata/immutable/checker.py 506
2500-            d2.addErrback(self._add_lease_failed, serverid, storageindex)
2501+            d2.addErrback(self._add_lease_failed, s.name(), storageindex)
2502 
2503         d = rref.callRemote("get_buckets", storageindex)
2504         def _wrap_results(res):
2505hunk ./src/allmydata/immutable/checker.py 524
2506         d.addCallbacks(_wrap_results, _trap_errs)
2507         return d
2508 
2509-    def _add_lease_failed(self, f, peerid, storage_index):
2510+    def _add_lease_failed(self, f, server_name, storage_index):
2511         # Older versions of Tahoe didn't handle the add-lease message very
2512         # well: <=1.1.0 throws a NameError because it doesn't implement
2513         # remote_add_lease(), 1.2.0/1.3.0 throw IndexError on unknown buckets
2514hunk ./src/allmydata/immutable/checker.py 544
2515                 # this may ignore a bit too much, but that only hurts us
2516                 # during debugging
2517                 return
2518-            self.log(format="error in add_lease from [%(peerid)s]: %(f_value)s",
2519-                     peerid=idlib.shortnodeid_b2a(peerid),
2520+            self.log(format="error in add_lease from [%(name)s]: %(f_value)s",
2521+                     name=server_name,
2522                      f_value=str(f.value),
2523                      failure=f,
2524                      level=log.WEIRD, umid="atbAxw")
2525hunk ./src/allmydata/immutable/checker.py 552
2526             return
2527         # local errors are cause for alarm
2528         log.err(f,
2529-                format="local error in add_lease to [%(peerid)s]: %(f_value)s",
2530-                peerid=idlib.shortnodeid_b2a(peerid),
2531+                format="local error in add_lease to [%(name)s]: %(f_value)s",
2532+                name=server_name,
2533                 f_value=str(f.value),
2534                 level=log.WEIRD, umid="hEGuQg")
2535 
2536}
2537[immutable/upload.py: reduce use of get_serverid()
2538warner@lothar.com**20110227011138
2539 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
2540] {
2541hunk ./src/allmydata/immutable/upload.py 72
2542     return ', '.join([ "sh%s: %s" % (k, '+'.join([idlib.shortnodeid_b2a(x) for x in v])) for k, v in s.iteritems() ])
2543 
2544 class ServerTracker:
2545-    def __init__(self, serverid, storage_server,
2546+    def __init__(self, server,
2547                  sharesize, blocksize, num_segments, num_share_hashes,
2548                  storage_index,
2549                  bucket_renewal_secret, bucket_cancel_secret):
2550hunk ./src/allmydata/immutable/upload.py 76
2551-        precondition(isinstance(serverid, str), serverid)
2552-        precondition(len(serverid) == 20, serverid)
2553-        self.serverid = serverid
2554-        self._storageserver = storage_server # to an RIStorageServer
2555+        self._server = server
2556         self.buckets = {} # k: shareid, v: IRemoteBucketWriter
2557         self.sharesize = sharesize
2558 
2559hunk ./src/allmydata/immutable/upload.py 83
2560         wbp = layout.make_write_bucket_proxy(None, sharesize,
2561                                              blocksize, num_segments,
2562                                              num_share_hashes,
2563-                                             EXTENSION_SIZE, serverid)
2564+                                             EXTENSION_SIZE, server.get_serverid())
2565         self.wbp_class = wbp.__class__ # to create more of them
2566         self.allocated_size = wbp.get_allocated_size()
2567         self.blocksize = blocksize
2568hunk ./src/allmydata/immutable/upload.py 96
2569 
2570     def __repr__(self):
2571         return ("<ServerTracker for server %s and SI %s>"
2572-                % (idlib.shortnodeid_b2a(self.serverid),
2573-                   si_b2a(self.storage_index)[:5]))
2574+                % (self._server.name(), si_b2a(self.storage_index)[:5]))
2575+
2576+    def get_serverid(self):
2577+        return self._server.get_serverid()
2578+    def name(self):
2579+        return self._server.name()
2580 
2581     def query(self, sharenums):
2582hunk ./src/allmydata/immutable/upload.py 104
2583-        d = self._storageserver.callRemote("allocate_buckets",
2584-                                           self.storage_index,
2585-                                           self.renew_secret,
2586-                                           self.cancel_secret,
2587-                                           sharenums,
2588-                                           self.allocated_size,
2589-                                           canary=Referenceable())
2590+        rref = self._server.get_rref()
2591+        d = rref.callRemote("allocate_buckets",
2592+                            self.storage_index,
2593+                            self.renew_secret,
2594+                            self.cancel_secret,
2595+                            sharenums,
2596+                            self.allocated_size,
2597+                            canary=Referenceable())
2598         d.addCallback(self._got_reply)
2599         return d
2600 
2601hunk ./src/allmydata/immutable/upload.py 116
2602     def ask_about_existing_shares(self):
2603-        return self._storageserver.callRemote("get_buckets",
2604-                                              self.storage_index)
2605+        rref = self._server.get_rref()
2606+        return rref.callRemote("get_buckets", self.storage_index)
2607 
2608     def _got_reply(self, (alreadygot, buckets)):
2609         #log.msg("%s._got_reply(%s)" % (self, (alreadygot, buckets)))
2610hunk ./src/allmydata/immutable/upload.py 128
2611                                 self.num_segments,
2612                                 self.num_share_hashes,
2613                                 EXTENSION_SIZE,
2614-                                self.serverid)
2615+                                self._server.get_serverid())
2616             b[sharenum] = bp
2617         self.buckets.update(b)
2618         return (alreadygot, set(b.keys()))
2619hunk ./src/allmydata/immutable/upload.py 214
2620                                              num_share_hashes, EXTENSION_SIZE,
2621                                              None)
2622         allocated_size = wbp.get_allocated_size()
2623-        all_servers = [(s.get_serverid(), s.get_rref())
2624-                       for s in storage_broker.get_servers_for_psi(storage_index)]
2625+        all_servers = storage_broker.get_servers_for_psi(storage_index)
2626         if not all_servers:
2627             raise NoServersError("client gave us zero servers")
2628 
2629hunk ./src/allmydata/immutable/upload.py 223
2630         # field) from getting large shares (for files larger than about
2631         # 12GiB). See #439 for details.
2632         def _get_maxsize(server):
2633-            (serverid, conn) = server
2634-            v1 = conn.version["http://allmydata.org/tahoe/protocols/storage/v1"]
2635+            v0 = server.get_rref().version
2636+            v1 = v0["http://allmydata.org/tahoe/protocols/storage/v1"]
2637             return v1["maximum-immutable-share-size"]
2638         writable_servers = [server for server in all_servers
2639                             if _get_maxsize(server) >= allocated_size]
2640hunk ./src/allmydata/immutable/upload.py 241
2641                                                      storage_index)
2642         def _make_trackers(servers):
2643             trackers = []
2644-            for (serverid, conn) in servers:
2645-                seed = serverid
2646+            for s in servers:
2647+                seed = s.get_lease_seed()
2648                 renew = bucket_renewal_secret_hash(file_renewal_secret, seed)
2649                 cancel = bucket_cancel_secret_hash(file_cancel_secret, seed)
2650hunk ./src/allmydata/immutable/upload.py 245
2651-                st = ServerTracker(serverid, conn,
2652+                st = ServerTracker(s,
2653                                    share_size, block_size,
2654                                    num_segments, num_share_hashes,
2655                                    storage_index,
2656hunk ./src/allmydata/immutable/upload.py 272
2657         for tracker in readonly_trackers:
2658             assert isinstance(tracker, ServerTracker)
2659             d = tracker.ask_about_existing_shares()
2660-            d.addBoth(self._handle_existing_response, tracker.serverid)
2661+            d.addBoth(self._handle_existing_response, tracker)
2662             ds.append(d)
2663             self.num_servers_contacted += 1
2664             self.query_count += 1
2665hunk ./src/allmydata/immutable/upload.py 277
2666             self.log("asking server %s for any existing shares" %
2667-                     (idlib.shortnodeid_b2a(tracker.serverid),),
2668-                    level=log.NOISY)
2669+                     (tracker.name(),), level=log.NOISY)
2670         dl = defer.DeferredList(ds)
2671         dl.addCallback(lambda ign: self._loop())
2672         return dl
2673hunk ./src/allmydata/immutable/upload.py 283
2674 
2675 
2676-    def _handle_existing_response(self, res, serverid):
2677+    def _handle_existing_response(self, res, tracker):
2678         """
2679         I handle responses to the queries sent by
2680         Tahoe2ServerSelector._existing_shares.
2681hunk ./src/allmydata/immutable/upload.py 288
2682         """
2683+        serverid = tracker.get_serverid()
2684         if isinstance(res, failure.Failure):
2685             self.log("%s got error during existing shares check: %s"
2686hunk ./src/allmydata/immutable/upload.py 291
2687-                    % (idlib.shortnodeid_b2a(serverid), res),
2688-                    level=log.UNUSUAL)
2689+                    % (tracker.name(), res), level=log.UNUSUAL)
2690             self.error_count += 1
2691             self.bad_query_count += 1
2692         else:
2693hunk ./src/allmydata/immutable/upload.py 299
2694             if buckets:
2695                 self.serverids_with_shares.add(serverid)
2696             self.log("response to get_buckets() from server %s: alreadygot=%s"
2697-                    % (idlib.shortnodeid_b2a(serverid), tuple(sorted(buckets))),
2698+                    % (tracker.name(), tuple(sorted(buckets))),
2699                     level=log.NOISY)
2700             for bucket in buckets:
2701                 self.preexisting_shares.setdefault(bucket, set()).add(serverid)
2702hunk ./src/allmydata/immutable/upload.py 407
2703             if self._status:
2704                 self._status.set_status("Contacting Servers [%s] (first query),"
2705                                         " %d shares left.."
2706-                                        % (idlib.shortnodeid_b2a(tracker.serverid),
2707+                                        % (tracker.name(),
2708                                            len(self.homeless_shares)))
2709             d = tracker.query(shares_to_ask)
2710             d.addBoth(self._got_response, tracker, shares_to_ask,
2711hunk ./src/allmydata/immutable/upload.py 428
2712             if self._status:
2713                 self._status.set_status("Contacting Servers [%s] (second query),"
2714                                         " %d shares left.."
2715-                                        % (idlib.shortnodeid_b2a(tracker.serverid),
2716+                                        % (tracker.name(),
2717                                            len(self.homeless_shares)))
2718             d = tracker.query(shares_to_ask)
2719             d.addBoth(self._got_response, tracker, shares_to_ask,
2720hunk ./src/allmydata/immutable/upload.py 489
2721         else:
2722             (alreadygot, allocated) = res
2723             self.log("response to allocate_buckets() from server %s: alreadygot=%s, allocated=%s"
2724-                    % (idlib.shortnodeid_b2a(tracker.serverid),
2725+                    % (tracker.name(),
2726                        tuple(sorted(alreadygot)), tuple(sorted(allocated))),
2727                     level=log.NOISY)
2728             progress = False
2729hunk ./src/allmydata/immutable/upload.py 494
2730             for s in alreadygot:
2731-                self.preexisting_shares.setdefault(s, set()).add(tracker.serverid)
2732+                self.preexisting_shares.setdefault(s, set()).add(tracker.get_serverid())
2733                 if s in self.homeless_shares:
2734                     self.homeless_shares.remove(s)
2735                     progress = True
2736hunk ./src/allmydata/immutable/upload.py 508
2737                 progress = True
2738 
2739             if allocated or alreadygot:
2740-                self.serverids_with_shares.add(tracker.serverid)
2741+                self.serverids_with_shares.add(tracker.get_serverid())
2742 
2743             not_yet_present = set(shares_to_ask) - set(alreadygot)
2744             still_homeless = not_yet_present - set(allocated)
2745hunk ./src/allmydata/immutable/upload.py 951
2746             buckets.update(tracker.buckets)
2747             for shnum in tracker.buckets:
2748                 self._server_trackers[shnum] = tracker
2749-                servermap.setdefault(shnum, set()).add(tracker.serverid)
2750+                servermap.setdefault(shnum, set()).add(tracker.get_serverid())
2751         assert len(buckets) == sum([len(tracker.buckets)
2752                                     for tracker in upload_trackers]), \
2753             "%s (%s) != %s (%s)" % (
2754hunk ./src/allmydata/immutable/upload.py 958
2755                 len(buckets),
2756                 buckets,
2757                 sum([len(tracker.buckets) for tracker in upload_trackers]),
2758-                [(t.buckets, t.serverid) for t in upload_trackers]
2759+                [(t.buckets, t.get_serverid()) for t in upload_trackers]
2760                 )
2761         encoder.set_shareholders(buckets, servermap)
2762 
2763hunk ./src/allmydata/immutable/upload.py 967
2764         r = self._results
2765         for shnum in self._encoder.get_shares_placed():
2766             server_tracker = self._server_trackers[shnum]
2767-            serverid = server_tracker.serverid
2768+            serverid = server_tracker.get_serverid()
2769             r.sharemap.add(shnum, serverid)
2770             r.servermap.add(serverid, shnum)
2771         r.pushed_shares = len(self._encoder.get_shares_placed())
2772hunk ./src/allmydata/test/test_upload.py 730
2773 
2774 class FakeServerTracker:
2775     def __init__(self, serverid, buckets):
2776-        self.serverid = serverid
2777+        self._serverid = serverid
2778         self.buckets = buckets
2779hunk ./src/allmydata/test/test_upload.py 732
2780+    def get_serverid(self):
2781+        return self._serverid
2782 
2783 class EncodingParameters(GridTestMixin, unittest.TestCase, SetDEPMixin,
2784     ShouldFailMixin):
2785hunk ./src/allmydata/test/test_upload.py 794
2786             for tracker in upload_trackers:
2787                 buckets.update(tracker.buckets)
2788                 for bucket in tracker.buckets:
2789-                    servermap.setdefault(bucket, set()).add(tracker.serverid)
2790+                    servermap.setdefault(bucket, set()).add(tracker.get_serverid())
2791             encoder.set_shareholders(buckets, servermap)
2792             d = encoder.start()
2793             return d
2794hunk ./src/allmydata/util/happinessutil.py 77
2795 
2796     for tracker in upload_trackers:
2797         for shnum in tracker.buckets:
2798-            servermap.setdefault(shnum, set()).add(tracker.serverid)
2799+            servermap.setdefault(shnum, set()).add(tracker.get_serverid())
2800     return servermap
2801 
2802 def servers_of_happiness(sharemap):
2803}
2804[immutable/offloaded.py: reduce use of get_serverid() a bit more
2805warner@lothar.com**20110227011142
2806 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
2807] {
2808hunk ./src/allmydata/immutable/offloaded.py 12
2809 from allmydata.immutable import upload
2810 from allmydata.immutable.layout import ReadBucketProxy
2811 from allmydata.util.assertutil import precondition
2812-from allmydata.util import idlib, log, observer, fileutil, hashutil, dictutil
2813+from allmydata.util import log, observer, fileutil, hashutil, dictutil
2814 
2815 
2816 class NotEnoughWritersError(Exception):
2817hunk ./src/allmydata/immutable/offloaded.py 59
2818         for s in self._peer_getter(storage_index):
2819             d = s.get_rref().callRemote("get_buckets", storage_index)
2820             d.addCallbacks(self._got_response, self._got_error,
2821-                           callbackArgs=(s.get_serverid(),))
2822+                           callbackArgs=(s,))
2823             dl.append(d)
2824         return defer.DeferredList(dl)
2825 
2826hunk ./src/allmydata/immutable/offloaded.py 63
2827-    def _got_response(self, buckets, peerid):
2828+    def _got_response(self, buckets, server):
2829         # buckets is a dict: maps shnum to an rref of the server that holds it
2830         shnums_s = ",".join([str(shnum) for shnum in buckets])
2831         self.log("got_response: [%s] has %d shares (%s)" %
2832hunk ./src/allmydata/immutable/offloaded.py 67
2833-                 (idlib.shortnodeid_b2a(peerid), len(buckets), shnums_s),
2834+                 (server.name(), len(buckets), shnums_s),
2835                  level=log.NOISY)
2836         self._found_shares.update(buckets.keys())
2837         for k in buckets:
2838hunk ./src/allmydata/immutable/offloaded.py 71
2839-            self._sharemap.add(k, peerid)
2840-        self._readers.update( [ (bucket, peerid)
2841+            self._sharemap.add(k, server.get_serverid())
2842+        self._readers.update( [ (bucket, server)
2843                                 for bucket in buckets.values() ] )
2844 
2845     def _got_error(self, f):
2846hunk ./src/allmydata/immutable/offloaded.py 87
2847         if not self._readers:
2848             self.log("no readers, so no UEB", level=log.NOISY)
2849             return
2850-        b,peerid = self._readers.pop()
2851-        rbp = ReadBucketProxy(b, peerid, si_b2a(self._storage_index))
2852+        b,server = self._readers.pop()
2853+        rbp = ReadBucketProxy(b, server.get_serverid(), si_b2a(self._storage_index))
2854         d = rbp.get_uri_extension()
2855         d.addCallback(self._got_uri_extension)
2856         d.addErrback(self._ueb_error)
2857}
2858[immutable/downloader/finder.py: reduce use of get_serverid(), one left
2859warner@lothar.com**20110227011146
2860 Ignore-this: 5785be173b491ae8a78faf5142892020
2861] {
2862hunk ./src/allmydata/immutable/downloader/finder.py 5
2863 import time
2864 now = time.time
2865 from foolscap.api import eventually
2866-from allmydata.util import base32, log, idlib
2867+from allmydata.util import base32, log
2868 from twisted.internet import reactor
2869 
2870 from share import Share, CommonShare
2871hunk ./src/allmydata/immutable/downloader/finder.py 24
2872     return res
2873 
2874 class RequestToken:
2875-    def __init__(self, peerid):
2876-        self.peerid = peerid
2877+    def __init__(self, server):
2878+        self.server = server
2879 
2880 class ShareFinder:
2881     OVERDUE_TIMEOUT = 10.0
2882hunk ./src/allmydata/immutable/downloader/finder.py 65
2883         # test_dirnode, which creates us with storage_broker=None
2884         if not self._started:
2885             si = self.verifycap.storage_index
2886-            servers = [(s.get_serverid(), s.get_rref())
2887-                       for s in self._storage_broker.get_servers_for_psi(si)]
2888+            servers = self._storage_broker.get_servers_for_psi(si)
2889             self._servers = iter(servers)
2890             self._started = True
2891 
2892hunk ./src/allmydata/immutable/downloader/finder.py 90
2893 
2894     # internal methods
2895     def loop(self):
2896-        pending_s = ",".join([idlib.shortnodeid_b2a(rt.peerid)
2897+        pending_s = ",".join([rt.server.name()
2898                               for rt in self.pending_requests]) # sort?
2899         self.log(format="ShareFinder loop: running=%(running)s"
2900                  " hungry=%(hungry)s, pending=%(pending)s",
2901hunk ./src/allmydata/immutable/downloader/finder.py 133
2902         eventually(self.share_consumer.no_more_shares)
2903 
2904     def send_request(self, server):
2905-        peerid, rref = server
2906-        req = RequestToken(peerid)
2907+        req = RequestToken(server)
2908         self.pending_requests.add(req)
2909hunk ./src/allmydata/immutable/downloader/finder.py 135
2910-        lp = self.log(format="sending DYHB to [%(peerid)s]",
2911-                      peerid=idlib.shortnodeid_b2a(peerid),
2912+        lp = self.log(format="sending DYHB to [%(name)s]", name=server.name(),
2913                       level=log.NOISY, umid="Io7pyg")
2914         time_sent = now()
2915hunk ./src/allmydata/immutable/downloader/finder.py 138
2916-        d_ev = self._download_status.add_dyhb_sent(peerid, time_sent)
2917+        d_ev = self._download_status.add_dyhb_sent(server.get_serverid(),
2918+                                                   time_sent)
2919         # TODO: get the timer from a Server object, it knows best
2920         self.overdue_timers[req] = reactor.callLater(self.OVERDUE_TIMEOUT,
2921                                                      self.overdue, req)
2922hunk ./src/allmydata/immutable/downloader/finder.py 143
2923-        d = rref.callRemote("get_buckets", self._storage_index)
2924+        d = server.get_rref().callRemote("get_buckets", self._storage_index)
2925         d.addBoth(incidentally, self._request_retired, req)
2926         d.addCallbacks(self._got_response, self._got_error,
2927hunk ./src/allmydata/immutable/downloader/finder.py 146
2928-                       callbackArgs=(rref.version, peerid, req, d_ev,
2929-                                     time_sent, lp),
2930-                       errbackArgs=(peerid, req, d_ev, lp))
2931+                       callbackArgs=(server, req, d_ev, time_sent, lp),
2932+                       errbackArgs=(server, req, d_ev, lp))
2933         d.addErrback(log.err, format="error in send_request",
2934                      level=log.WEIRD, parent=lp, umid="rpdV0w")
2935         d.addCallback(incidentally, eventually, self.loop)
2936hunk ./src/allmydata/immutable/downloader/finder.py 165
2937         self.overdue_requests.add(req)
2938         eventually(self.loop)
2939 
2940-    def _got_response(self, buckets, server_version, peerid, req, d_ev,
2941-                      time_sent, lp):
2942+    def _got_response(self, buckets, server, req, d_ev, time_sent, lp):
2943         shnums = sorted([shnum for shnum in buckets])
2944         time_received = now()
2945         d_ev.finished(shnums, time_received)
2946hunk ./src/allmydata/immutable/downloader/finder.py 171
2947         dyhb_rtt = time_received - time_sent
2948         if not buckets:
2949-            self.log(format="no shares from [%(peerid)s]",
2950-                     peerid=idlib.shortnodeid_b2a(peerid),
2951+            self.log(format="no shares from [%(name)s]", name=server.name(),
2952                      level=log.NOISY, parent=lp, umid="U7d4JA")
2953             return
2954         shnums_s = ",".join([str(shnum) for shnum in shnums])
2955hunk ./src/allmydata/immutable/downloader/finder.py 175
2956-        self.log(format="got shnums [%(shnums)s] from [%(peerid)s]",
2957-                 shnums=shnums_s, peerid=idlib.shortnodeid_b2a(peerid),
2958+        self.log(format="got shnums [%(shnums)s] from [%(name)s]",
2959+                 shnums=shnums_s, name=server.name(),
2960                  level=log.NOISY, parent=lp, umid="0fcEZw")
2961         shares = []
2962         for shnum, bucket in buckets.iteritems():
2963hunk ./src/allmydata/immutable/downloader/finder.py 180
2964-            s = self._create_share(shnum, bucket, server_version, peerid,
2965-                                   dyhb_rtt)
2966+            s = self._create_share(shnum, bucket, server, dyhb_rtt)
2967             shares.append(s)
2968         self._deliver_shares(shares)
2969 
2970hunk ./src/allmydata/immutable/downloader/finder.py 184
2971-    def _create_share(self, shnum, bucket, server_version, peerid, dyhb_rtt):
2972+    def _create_share(self, shnum, bucket, server, dyhb_rtt):
2973         if shnum in self._commonshares:
2974             cs = self._commonshares[shnum]
2975         else:
2976hunk ./src/allmydata/immutable/downloader/finder.py 207
2977             #  2: break _get_satisfaction into Deferred-attached pieces.
2978             #     Yuck.
2979             self._commonshares[shnum] = cs
2980-        s = Share(bucket, server_version, self.verifycap, cs, self.node,
2981-                  self._download_status, peerid, shnum, dyhb_rtt,
2982+        s = Share(bucket, server.get_version(), self.verifycap, cs, self.node,
2983+                  self._download_status, server.get_serverid(), shnum, dyhb_rtt,
2984                   self._node_logparent)
2985         return s
2986 
2987hunk ./src/allmydata/immutable/downloader/finder.py 220
2988                  level=log.NOISY, umid="2n1qQw")
2989         eventually(self.share_consumer.got_shares, shares)
2990 
2991-    def _got_error(self, f, peerid, req, d_ev, lp):
2992+    def _got_error(self, f, server, req, d_ev, lp):
2993         d_ev.finished("error", now())
2994hunk ./src/allmydata/immutable/downloader/finder.py 222
2995-        self.log(format="got error from [%(peerid)s]",
2996-                 peerid=idlib.shortnodeid_b2a(peerid), failure=f,
2997+        self.log(format="got error from [%(name)s]",
2998+                 name=server.name(), failure=f,
2999                  level=log.UNUSUAL, parent=lp, umid="zUKdCw")
3000 
3001 
3002}
3003[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
3004warner@lothar.com**20110227011150
3005 Ignore-this: d8d56dd8e7b280792b40105e13664554
3006 
3007 test_download.py: create+check MyShare instances better, make sure they share
3008 Server objects, now that finder.py cares
3009] {
3010hunk ./src/allmydata/immutable/downloader/fetcher.py 192
3011         sent_something = False
3012         want_more_diversity = False
3013         for sh in self._shares: # find one good share to fetch
3014-            shnum = sh._shnum ; serverid = sh._peerid
3015+            shnum = sh._shnum ; serverid = sh._server.get_serverid()
3016             if shnum in self._blocks:
3017                 continue # don't request data we already have
3018             if shnum in self._active_share_map:
3019hunk ./src/allmydata/immutable/downloader/fetcher.py 232
3020         # called by Shares, in response to our s.send_request() calls.
3021         if not self._running:
3022             return
3023-        log.msg("SegmentFetcher(%s)._block_request_activity:"
3024-                " Share(sh%d-on-%s) -> %s" %
3025-                (self._node._si_prefix, shnum, share._peerid_s, state),
3026+        log.msg("SegmentFetcher(%s)._block_request_activity: %s -> %s" %
3027+                (self._node._si_prefix, repr(share), state),
3028                 level=log.NOISY, parent=self._lp, umid="vilNWA")
3029         # COMPLETE, CORRUPT, DEAD, BADSEGNUM are terminal. Remove the share
3030         # from all our tracking lists.
3031hunk ./src/allmydata/immutable/downloader/finder.py 207
3032             #  2: break _get_satisfaction into Deferred-attached pieces.
3033             #     Yuck.
3034             self._commonshares[shnum] = cs
3035-        s = Share(bucket, server.get_version(), self.verifycap, cs, self.node,
3036-                  self._download_status, server.get_serverid(), shnum, dyhb_rtt,
3037+        s = Share(bucket, server, self.verifycap, cs, self.node,
3038+                  self._download_status, shnum, dyhb_rtt,
3039                   self._node_logparent)
3040         return s
3041 
3042hunk ./src/allmydata/immutable/downloader/share.py 35
3043     # this is a specific implementation of IShare for tahoe's native storage
3044     # servers. A different backend would use a different class.
3045 
3046-    def __init__(self, rref, server_version, verifycap, commonshare, node,
3047-                 download_status, peerid, shnum, dyhb_rtt, logparent):
3048+    def __init__(self, rref, server, verifycap, commonshare, node,
3049+                 download_status, shnum, dyhb_rtt, logparent):
3050         self._rref = rref
3051hunk ./src/allmydata/immutable/downloader/share.py 38
3052-        self._server_version = server_version
3053+        self._server = server
3054         self._node = node # holds share_hash_tree and UEB
3055         self.actual_segment_size = node.segment_size # might still be None
3056         # XXX change node.guessed_segment_size to
3057hunk ./src/allmydata/immutable/downloader/share.py 49
3058         self._UEB_length = None
3059         self._commonshare = commonshare # holds block_hash_tree
3060         self._download_status = download_status
3061-        self._peerid = peerid
3062-        self._peerid_s = base32.b2a(peerid)[:5]
3063         self._storage_index = verifycap.storage_index
3064         self._si_prefix = base32.b2a(verifycap.storage_index)[:8]
3065         self._shnum = shnum
3066hunk ./src/allmydata/immutable/downloader/share.py 83
3067         # download can re-fetch it.
3068 
3069         self._requested_blocks = [] # (segnum, set(observer2..))
3070-        ver = server_version["http://allmydata.org/tahoe/protocols/storage/v1"]
3071+        v = server.get_version()
3072+        ver = v["http://allmydata.org/tahoe/protocols/storage/v1"]
3073         self._overrun_ok = ver["tolerates-immutable-read-overrun"]
3074         # If _overrun_ok and we guess the offsets correctly, we can get
3075         # everything in one RTT. If _overrun_ok and we guess wrong, we might
3076hunk ./src/allmydata/immutable/downloader/share.py 96
3077         self.had_corruption = False # for unit tests
3078 
3079     def __repr__(self):
3080-        return "Share(sh%d-on-%s)" % (self._shnum, self._peerid_s)
3081+        return "Share(sh%d-on-%s)" % (self._shnum, self._server.name())
3082 
3083     def is_alive(self):
3084         # XXX: reconsider. If the share sees a single error, should it remain
3085hunk ./src/allmydata/immutable/downloader/share.py 729
3086                          share=repr(self),
3087                          start=start, length=length,
3088                          level=log.NOISY, parent=self._lp, umid="sgVAyA")
3089-            req_ev = ds.add_request_sent(self._peerid, self._shnum,
3090+            req_ev = ds.add_request_sent(self._server.get_serverid(),
3091+                                         self._shnum,
3092                                          start, length, now())
3093             d = self._send_request(start, length)
3094             d.addCallback(self._got_data, start, length, req_ev, lp)
3095hunk ./src/allmydata/immutable/downloader/share.py 792
3096         log.msg(format="error requesting %(start)d+%(length)d"
3097                 " from %(server)s for si %(si)s",
3098                 start=start, length=length,
3099-                server=self._peerid_s, si=self._si_prefix,
3100+                server=self._server.name(), si=self._si_prefix,
3101                 failure=f, parent=lp, level=log.UNUSUAL, umid="BZgAJw")
3102         # retire our observers, assuming we won't be able to make any
3103         # further progress
3104hunk ./src/allmydata/test/test_cli.py 2415
3105         # enough shares. The one remaining share might be in either the
3106         # COMPLETE or the PENDING state.
3107         in_complete_msg = "ran out of shares: complete=sh0 pending= overdue= unused= need 3"
3108-        in_pending_msg = "ran out of shares: complete= pending=Share(sh0-on-fob7v) overdue= unused= need 3"
3109+        in_pending_msg = "ran out of shares: complete= pending=Share(sh0-on-fob7vqgd) overdue= unused= need 3"
3110 
3111         d.addCallback(lambda ign: self.do_cli("get", self.uri_1share))
3112         def _check1((rc, out, err)):
3113hunk ./src/allmydata/test/test_download.py 11
3114 from twisted.internet import defer, reactor
3115 from allmydata import uri
3116 from allmydata.storage.server import storage_index_to_dir
3117-from allmydata.util import base32, fileutil, spans, log
3118+from allmydata.util import base32, fileutil, spans, log, hashutil
3119 from allmydata.util.consumer import download_to_data, MemoryConsumer
3120 from allmydata.immutable import upload, layout
3121hunk ./src/allmydata/test/test_download.py 14
3122-from allmydata.test.no_network import GridTestMixin
3123+from allmydata.test.no_network import GridTestMixin, NoNetworkServer
3124 from allmydata.test.common import ShouldFailMixin
3125 from allmydata.interfaces import NotEnoughSharesError, NoSharesError
3126 from allmydata.immutable.downloader.common import BadSegmentNumberError, \
3127hunk ./src/allmydata/test/test_download.py 1270
3128         e2.finished(now+3)
3129         self.failUnlessEqual(ds.get_active(), False)
3130 
3131+def make_server(clientid):
3132+    tubid = hashutil.tagged_hash("clientid", clientid)[:20]
3133+    return NoNetworkServer(tubid, None)
3134+def make_servers(clientids):
3135+    servers = {}
3136+    for clientid in clientids:
3137+        servers[clientid] = make_server(clientid)
3138+    return servers
3139+
3140 class MyShare:
3141hunk ./src/allmydata/test/test_download.py 1280
3142-    def __init__(self, shnum, peerid, rtt):
3143+    def __init__(self, shnum, server, rtt):
3144         self._shnum = shnum
3145hunk ./src/allmydata/test/test_download.py 1282
3146-        self._peerid = peerid
3147-        self._peerid_s = peerid
3148+        self._server = server
3149         self._dyhb_rtt = rtt
3150     def __repr__(self):
3151hunk ./src/allmydata/test/test_download.py 1285
3152-        return "sh%d-on-%s" % (self._shnum, self._peerid)
3153+        return "sh%d-on-%s" % (self._shnum, self._server.name())
3154 
3155 class MySegmentFetcher(SegmentFetcher):
3156     def __init__(self, *args, **kwargs):
3157hunk ./src/allmydata/test/test_download.py 1330
3158     def test_only_one_share(self):
3159         node = FakeNode()
3160         sf = MySegmentFetcher(node, 0, 3, None)
3161-        shares = [MyShare(0, "peer-A", 0.0)]
3162+        serverA = make_server("peer-A")
3163+        shares = [MyShare(0, serverA, 0.0)]
3164         sf.add_shares(shares)
3165         d = flushEventualQueue()
3166         def _check1(ign):
3167hunk ./src/allmydata/test/test_download.py 1343
3168         def _check2(ign):
3169             self.failUnless(node.failed)
3170             self.failUnless(node.failed.check(NotEnoughSharesError))
3171-            self.failUnlessIn("complete= pending=sh0-on-peer-A overdue= unused=",
3172+            sname = serverA.name()
3173+            self.failUnlessIn("complete= pending=sh0-on-%s overdue= unused="  % sname,
3174                               str(node.failed))
3175         d.addCallback(_check2)
3176         return d
3177hunk ./src/allmydata/test/test_download.py 1352
3178     def test_good_diversity_early(self):
3179         node = FakeNode()
3180         sf = MySegmentFetcher(node, 0, 3, None)
3181-        shares = [MyShare(i, "peer-%d" % i, i) for i in range(10)]
3182+        shares = [MyShare(i, make_server("peer-%d" % i), i) for i in range(10)]
3183         sf.add_shares(shares)
3184         d = flushEventualQueue()
3185         def _check1(ign):
3186hunk ./src/allmydata/test/test_download.py 1374
3187     def test_good_diversity_late(self):
3188         node = FakeNode()
3189         sf = MySegmentFetcher(node, 0, 3, None)
3190-        shares = [MyShare(i, "peer-%d" % i, i) for i in range(10)]
3191+        shares = [MyShare(i, make_server("peer-%d" % i), i) for i in range(10)]
3192         sf.add_shares([])
3193         d = flushEventualQueue()
3194         def _check1(ign):
3195hunk ./src/allmydata/test/test_download.py 1403
3196         # we could satisfy the read entirely from the first server, but we'd
3197         # prefer not to. Instead, we expect to only pull one share from the
3198         # first server
3199-        shares = [MyShare(0, "peer-A", 0.0),
3200-                  MyShare(1, "peer-A", 0.0),
3201-                  MyShare(2, "peer-A", 0.0),
3202-                  MyShare(3, "peer-B", 1.0),
3203-                  MyShare(4, "peer-C", 2.0),
3204+        servers = make_servers(["peer-A", "peer-B", "peer-C"])
3205+        shares = [MyShare(0, servers["peer-A"], 0.0),
3206+                  MyShare(1, servers["peer-A"], 0.0),
3207+                  MyShare(2, servers["peer-A"], 0.0),
3208+                  MyShare(3, servers["peer-B"], 1.0),
3209+                  MyShare(4, servers["peer-C"], 2.0),
3210                   ]
3211         sf.add_shares([])
3212         d = flushEventualQueue()
3213hunk ./src/allmydata/test/test_download.py 1438
3214         sf = MySegmentFetcher(node, 0, 3, None)
3215         # we satisfy the read entirely from the first server because we don't
3216         # have any other choice.
3217-        shares = [MyShare(0, "peer-A", 0.0),
3218-                  MyShare(1, "peer-A", 0.0),
3219-                  MyShare(2, "peer-A", 0.0),
3220-                  MyShare(3, "peer-A", 0.0),
3221-                  MyShare(4, "peer-A", 0.0),
3222+        serverA = make_server("peer-A")
3223+        shares = [MyShare(0, serverA, 0.0),
3224+                  MyShare(1, serverA, 0.0),
3225+                  MyShare(2, serverA, 0.0),
3226+                  MyShare(3, serverA, 0.0),
3227+                  MyShare(4, serverA, 0.0),
3228                   ]
3229         sf.add_shares([])
3230         d = flushEventualQueue()
3231hunk ./src/allmydata/test/test_download.py 1474
3232         sf = MySegmentFetcher(node, 0, 3, None)
3233         # we satisfy the read entirely from the first server because we don't
3234         # have any other choice.
3235-        shares = [MyShare(0, "peer-A", 0.0),
3236-                  MyShare(1, "peer-A", 0.0),
3237-                  MyShare(2, "peer-A", 0.0),
3238-                  MyShare(3, "peer-A", 0.0),
3239-                  MyShare(4, "peer-A", 0.0),
3240+        serverA = make_server("peer-A")
3241+        shares = [MyShare(0, serverA, 0.0),
3242+                  MyShare(1, serverA, 0.0),
3243+                  MyShare(2, serverA, 0.0),
3244+                  MyShare(3, serverA, 0.0),
3245+                  MyShare(4, serverA, 0.0),
3246                   ]
3247         sf.add_shares(shares)
3248         d = flushEventualQueue()
3249hunk ./src/allmydata/test/test_download.py 1503
3250     def test_overdue(self):
3251         node = FakeNode()
3252         sf = MySegmentFetcher(node, 0, 3, None)
3253-        shares = [MyShare(i, "peer-%d" % i, i) for i in range(10)]
3254+        shares = [MyShare(i, make_server("peer-%d" % i), i) for i in range(10)]
3255         sf.add_shares(shares)
3256         d = flushEventualQueue()
3257         def _check1(ign):
3258hunk ./src/allmydata/test/test_download.py 1531
3259     def test_overdue_fails(self):
3260         node = FakeNode()
3261         sf = MySegmentFetcher(node, 0, 3, None)
3262-        shares = [MyShare(i, "peer-%d" % i, i) for i in range(6)]
3263+        servers = make_servers(["peer-%d" % i for i in range(6)])
3264+        shares = [MyShare(i, servers["peer-%d" % i], i) for i in range(6)]
3265         sf.add_shares(shares)
3266         sf.no_more_shares()
3267         d = flushEventualQueue()
3268hunk ./src/allmydata/test/test_download.py 1565
3269         def _check4(ign):
3270             self.failUnless(node.failed)
3271             self.failUnless(node.failed.check(NotEnoughSharesError))
3272-            self.failUnlessIn("complete=sh0 pending= overdue=sh2-on-peer-2 unused=",
3273+            sname = servers["peer-2"].name()
3274+            self.failUnlessIn("complete=sh0 pending= overdue=sh2-on-%s unused=" % sname,
3275                               str(node.failed))
3276         d.addCallback(_check4)
3277         return d
3278hunk ./src/allmydata/test/test_download.py 1577
3279         # we could satisfy the read entirely from the first server, but we'd
3280         # prefer not to. Instead, we expect to only pull one share from the
3281         # first server
3282-        shares = [MyShare(0, "peer-A", 0.0),
3283-                  MyShare(1, "peer-B", 1.0),
3284-                  MyShare(0, "peer-C", 2.0), # this will be skipped
3285-                  MyShare(1, "peer-D", 3.0),
3286-                  MyShare(2, "peer-E", 4.0),
3287+        servers = make_servers(["peer-A", "peer-B", "peer-C", "peer-D",
3288+                                "peer-E"])
3289+        shares = [MyShare(0, servers["peer-A"], 0.0),
3290+                  MyShare(1, servers["peer-B"], 1.0),
3291+                  MyShare(0, servers["peer-C"], 2.0), # this will be skipped
3292+                  MyShare(1, servers["peer-D"], 3.0),
3293+                  MyShare(2, servers["peer-E"], 4.0),
3294                   ]
3295         sf.add_shares(shares[:3])
3296         d = flushEventualQueue()
3297}
3298[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
3299warner@lothar.com**20110227011153
3300 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
3301 
3302 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
3303 _shares_from_server dict was being popped incorrectly (using shnum as the
3304 index instead of serverid). I'm still thinking through the consequences of
3305 this bug. It was probably benign and really hard to detect. I think it would
3306 cause us to incorrectly believe that we're pulling too many shares from a
3307 server, and thus prefer a different server rather than asking for a second
3308 share from the first server. The diversity code is intended to spread out the
3309 number of shares simultaneously being requested from each server, but with
3310 this bug, it might have been spreading out the total number of shares
3311 requested overall, not just the number requested simultaneously. (Note that
3312 SegmentFetcher is scoped to a single segment, so the effect doesn't last long.)
3313] hunk ./src/allmydata/immutable/downloader/fetcher.py 239
3314         # from all our tracking lists.
3315         if state in (COMPLETE, CORRUPT, DEAD, BADSEGNUM):
3316             self._share_observers.pop(share, None)
3317-            self._shares_from_server.discard(shnum, share)
3318+            self._shares_from_server.discard(share._server.get_serverid(), share)
3319             if self._active_share_map.get(shnum) is share:
3320                 del self._active_share_map[shnum]
3321             self._overdue_share_map.discard(shnum, share)
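A minimal sketch of the failure mode described in the patch message above, using a simplified stand-in for DictOfSets (the real class lives in allmydata.util.dictutil); the keys and values here are illustrative strings rather than real serverids and Share objects:

    class DictOfSets(dict):
        # simplified stand-in: maps a key to a set of values
        def add(self, key, value):
            self.setdefault(key, set()).add(value)
        def discard(self, key, value):
            if key in self:
                self[key].discard(value)
                if not self[key]:
                    del self[key]

    shares_from_server = DictOfSets()
    shares_from_server.add("serverid-A", "sh0")  # keyed by serverid when the request is sent

    # buggy cleanup: shnum (0) is never a key in this dict, so nothing is removed
    shares_from_server.discard(0, "sh0")
    assert len(shares_from_server.get("serverid-A", set())) == 1  # stale entry survives

    # fixed cleanup: keyed by the serverid the share came from
    shares_from_server.discard("serverid-A", "sh0")
    assert len(shares_from_server.get("serverid-A", set())) == 0  # server no longer looks busy

While the stale entry survives, the per-server limit test (len(sfs.get(serverid,set())) >= self._max_shares_per_server) keeps treating that server as loaded, which is the diversity distortion described above.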
3322[immutable/downloader/fetcher.py: remove all get_serverid() calls
3323warner@lothar.com**20110227011156
3324 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
3325] {
3326hunk ./src/allmydata/immutable/downloader/fetcher.py 34
3327                           # responses arrive, or (for later segments) at
3328                           # startup. We remove shares from it when we call
3329                           # sh.get_block() on them.
3330-        self._shares_from_server = DictOfSets() # maps serverid to set of
3331+        self._shares_from_server = DictOfSets() # maps server to set of
3332                                                 # Shares on that server for
3333                                                 # which we have outstanding
3334                                                 # get_block() calls.
3335hunk ./src/allmydata/immutable/downloader/fetcher.py 192
3336         sent_something = False
3337         want_more_diversity = False
3338         for sh in self._shares: # find one good share to fetch
3339-            shnum = sh._shnum ; serverid = sh._server.get_serverid()
3340+            shnum = sh._shnum ; server = sh._server # XXX
3341             if shnum in self._blocks:
3342                 continue # don't request data we already have
3343             if shnum in self._active_share_map:
3344hunk ./src/allmydata/immutable/downloader/fetcher.py 200
3345                 # and added to _overdue_share_map instead.
3346                 continue # don't send redundant requests
3347             sfs = self._shares_from_server
3348-            if len(sfs.get(serverid,set())) >= self._max_shares_per_server:
3349+            if len(sfs.get(server,set())) >= self._max_shares_per_server:
3350                 # don't pull too much from a single server
3351                 want_more_diversity = True
3352                 continue
3353hunk ./src/allmydata/immutable/downloader/fetcher.py 207
3354             # ok, we can use this share
3355             self._shares.remove(sh)
3356             self._active_share_map[shnum] = sh
3357-            self._shares_from_server.add(serverid, sh)
3358+            self._shares_from_server.add(server, sh)
3359             self._start_share(sh, shnum)
3360             sent_something = True
3361             break
3362hunk ./src/allmydata/immutable/downloader/fetcher.py 239
3363         # from all our tracking lists.
3364         if state in (COMPLETE, CORRUPT, DEAD, BADSEGNUM):
3365             self._share_observers.pop(share, None)
3366-            self._shares_from_server.discard(share._server.get_serverid(), share)
3367+            server = share._server # XXX
3368+            self._shares_from_server.discard(server, share)
3369             if self._active_share_map.get(shnum) is share:
3370                 del self._active_share_map[shnum]
3371             self._overdue_share_map.discard(shnum, share)
3372}
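A rough sketch of the keying change in this patch, using hypothetical stand-in classes rather than the real Share and Server types: once _shares_from_server is keyed by the server object itself, this local bookkeeping never needs get_serverid() at all.

    class FakeServer(object):   # hypothetical stand-in for an IServer
        pass

    class FakeShare(object):    # hypothetical stand-in for a Share
        def __init__(self, shnum, server):
            self._shnum = shnum
            self._server = server

    serverA = FakeServer()
    sh = FakeShare(0, serverA)
    shares_from_server = {}  # maps server -> set of Shares with outstanding get_block()

    shares_from_server.setdefault(sh._server, set()).add(sh)    # when a request goes out
    busy = len(shares_from_server.get(sh._server, set())) >= 3  # per-server diversity limit (3 is a placeholder)
    shares_from_server.get(sh._server, set()).discard(sh)       # when the block terminates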
3373[web: remove some uses of s.get_serverid(), not all
3374warner@lothar.com**20110227011159
3375 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
3376] {
3377hunk ./src/allmydata/web/check_results.py 142
3378 
3379         # this table is sorted by permuted order
3380         sb = c.get_storage_broker()
3381-        permuted_peer_ids = [s.get_serverid()
3382-                             for s
3383-                             in sb.get_servers_for_psi(cr.get_storage_index())]
3384+        permuted_servers = [s
3385+                            for s
3386+                            in sb.get_servers_for_psi(cr.get_storage_index())]
3387 
3388         num_shares_left = sum([len(shares) for shares in servers.values()])
3389         servermap = []
3390hunk ./src/allmydata/web/check_results.py 148
3391-        for serverid in permuted_peer_ids:
3392-            nickname = sb.get_nickname_for_serverid(serverid)
3393-            shareids = servers.get(serverid, [])
3394+        for s in permuted_servers:
3395+            nickname = s.get_nickname()
3396+            shareids = servers.get(s.get_serverid(), [])
3397             shareids.reverse()
3398             shareids_s = [ T.tt[shareid, " "] for shareid in sorted(shareids) ]
3399             servermap.append(T.tr[T.td[T.div(class_="nickname")[nickname],
3400hunk ./src/allmydata/web/check_results.py 154
3401-                                       T.div(class_="nodeid")[T.tt[base32.b2a(serverid)]]],
3402+                                       T.div(class_="nodeid")[T.tt[s.name()]]],
3403                                   T.td[shareids_s],
3404                                   ])
3405             num_shares_left -= len(shareids)
3406hunk ./src/allmydata/web/root.py 259
3407     def render_service_row(self, ctx, server):
3408         nodeid = server.get_serverid()
3409 
3410-        ctx.fillSlots("peerid", idlib.nodeid_b2a(nodeid))
3411+        ctx.fillSlots("peerid", server.longname())
3412         ctx.fillSlots("nickname", server.get_nickname())
3413         rhost = server.get_remote_host()
3414         if rhost:
3415}
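A condensed sketch of the pattern the check_results.py hunk moves toward; the function name and the shares_by_serverid argument are illustrative (shares_by_serverid plays the role of the 'servers' dict above), while the server methods are the ones used in the hunk. The renderer asks each IServer for its own nickname and display name instead of looking them up on the broker by serverid:

    def permuted_server_rows(storage_broker, storage_index, shares_by_serverid):
        # one row per server, in permuted order, naming the server directly
        rows = []
        for s in storage_broker.get_servers_for_psi(storage_index):
            shareids = sorted(shares_by_serverid.get(s.get_serverid(), []))
            rows.append((s.get_nickname(), s.name(), shareids))
        return rows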
3416[control.py: remove all uses of s.get_serverid()
3417warner@lothar.com**20110227011203
3418 Ignore-this: f80a787953bd7fa3d40e828bde00e855
3419] {
3420hunk ./src/allmydata/control.py 103
3421         if not everyone_left:
3422             return results
3423         server = everyone_left.pop(0)
3424-        peerid = server.get_serverid()
3425+        server_name = server.longname()
3426         connection = server.get_rref()
3427         start = time.time()
3428         d = connection.callRemote("get_buckets", "\x00"*16)
3429hunk ./src/allmydata/control.py 110
3430         def _done(ignored):
3431             stop = time.time()
3432             elapsed = stop - start
3433-            if peerid in results:
3434-                results[peerid].append(elapsed)
3435+            if server_name in results:
3436+                results[server_name].append(elapsed)
3437             else:
3438hunk ./src/allmydata/control.py 113
3439-                results[peerid] = [elapsed]
3440+                results[server_name] = [elapsed]
3441         d.addCallback(_done)
3442         d.addCallback(self._do_one_ping, everyone_left, results)
3443         def _average(res):
3444hunk ./src/allmydata/control.py 118
3445             averaged = {}
3446-            for peerid,times in results.iteritems():
3447-                averaged[peerid] = sum(times) / len(times)
3448+            for server_name,times in results.iteritems():
3449+                averaged[server_name] = sum(times) / len(times)
3450             return averaged
3451         d.addCallback(_average)
3452         return d
3453hunk ./src/allmydata/interfaces.py 2324
3454         @return: a dictionary mapping peerid to a float (RTT time in seconds)
3455         """
3456 
3457-        return DictOf(Nodeid, float)
3458+        return DictOf(str, float)
3459 
3460 UploadResults = Any() #DictOf(str, str)
3461 
3462}
3463
3464Context:
3465
3466[docs/configuration.rst: add a "Frontend Configuration" section
3467Brian Warner <warner@lothar.com>**20110222014323
3468 Ignore-this: 657018aa501fe4f0efef9851628444ca
3469 
3470 this points to docs/frontends/*.rst, which were previously underlinked
3471] 
3472[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
3473"Brian Warner <warner@lothar.com>"**20110221061544
3474 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
3475] 
3476[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
3477david-sarah@jacaranda.org**20110221015817
3478 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
3479] 
3480[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
3481david-sarah@jacaranda.org**20110221020125
3482 Ignore-this: b0744ed58f161bf188e037bad077fc48
3483] 
3484[Refactor StorageFarmBroker handling of servers
3485Brian Warner <warner@lothar.com>**20110221015804
3486 Ignore-this: 842144ed92f5717699b8f580eab32a51
3487 
3488 Pass around IServer instance instead of (peerid, rref) tuple. Replace
3489 "descriptor" with "server". Other replacements:
3490 
3491  get_all_servers -> get_connected_servers/get_known_servers
3492  get_servers_for_index -> get_servers_for_psi (now returns IServers)
3493 
3494 This change still needs to be pushed further down: lots of code is now
3495 getting the IServer and then distributing (peerid, rref) internally.
3496 Instead, it ought to distribute the IServer internally and delay
3497 extracting a serverid or rref until the last moment.
3498 
3499 no_network.py was updated to retain parallelism.
3500] 
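A small sketch of the direction this context patch describes, with illustrative function names: internal code hands around the IServer object and extracts a serverid or rref only at the point of use.

    # old style: every internal API threads (peerid, rref) tuples through
    def contact_each_old(peers):           # peers: [(peerid, rref), ...]
        for (peerid, rref) in peers:
            pass  # both halves must be passed to every callee

    # new style: internal APIs pass IServer objects; serverid/rref are
    # pulled out only where actually needed
    def contact_each_new(servers):         # servers: [IServer, ...]
        for server in servers:
            rref = server.get_rref()          # only where a remote call is made
            nickname = server.get_nickname()  # only where something is displayed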
3501[TAG allmydata-tahoe-1.8.2
3502warner@lothar.com**20110131020101] 
3503Patch bundle hash:
350416ec08c4c679b07a1836ca90086a01711f1cc6ae