Ticket #999: checkpoint12.darcs.patch

File checkpoint12.darcs.patch, 166.8 KB (added by arch_o_median at 2011-07-11T19:08:47Z)

no longer trying to mock FS in TestServerFSBackend

1Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
2  * storage: new mocking tests of storage server read and write
3  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
4
5Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
6  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
7  sloppy not for production
8
9Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
10  * a temp patch used as a snapshot
11
12Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
13  * snapshot of progress on backend implementation (not suitable for trunk)
14
15Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
16  * checkpoint patch
17
18Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
19  * checkpoint4
20
21Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
22  * checkpoint5
23
24Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
25  * checkpoint 6
26
27Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
28  * checkpoint 7
29
30Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
31  * checkpoint8
32    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
33
34Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
35  * checkpoint 9
36
37Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
38  * checkpoint10
39
40Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
41  * jacp 11
42
43Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
44  * checkpoint12 testing correct behavior with regard to incoming and final
45
46Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
47  * fix inconsistent naming of storage_index vs storageindex in storage/server.py
48
49Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
50  * adding comments to clarify what I'm about to do.
51
52Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
53  * branching back, no longer attempting to mock inside TestServerFSBackend
54
55New patches:
56
57[storage: new mocking tests of storage server read and write
58wilcoxjg@gmail.com**20110325203514
59 Ignore-this: df65c3c4f061dd1516f88662023fdb41
60 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
61] {
62addfile ./src/allmydata/test/test_server.py
63hunk ./src/allmydata/test/test_server.py 1
64+from twisted.trial import unittest
65+
66+from StringIO import StringIO
67+
68+from allmydata.test.common_util import ReallyEqualMixin
69+
70+import mock
71+
72+# This is the code that we're going to be testing.
73+from allmydata.storage.server import StorageServer
74+
75+# The following share file contents were generated with
76+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
77+# with share data == 'a'.
78+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
79+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
80+
81+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
82+
83+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
84+    @mock.patch('__builtin__.open')
85+    def test_create_server(self, mockopen):
86+        """ This tests whether a server instance can be constructed. """
87+
88+        def call_open(fname, mode):
89+            if fname == 'testdir/bucket_counter.state':
90+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
91+            elif fname == 'testdir/lease_checker.state':
92+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
93+            elif fname == 'testdir/lease_checker.history':
94+                return StringIO()
95+        mockopen.side_effect = call_open
96+
97+        # Now begin the test.
98+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
99+
100+        # You passed!
101+
102+class TestServer(unittest.TestCase, ReallyEqualMixin):
103+    @mock.patch('__builtin__.open')
104+    def setUp(self, mockopen):
105+        def call_open(fname, mode):
106+            if fname == 'testdir/bucket_counter.state':
107+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
108+            elif fname == 'testdir/lease_checker.state':
109+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
110+            elif fname == 'testdir/lease_checker.history':
111+                return StringIO()
112+        mockopen.side_effect = call_open
113+
114+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
115+
116+
117+    @mock.patch('time.time')
118+    @mock.patch('os.mkdir')
119+    @mock.patch('__builtin__.open')
120+    @mock.patch('os.listdir')
121+    @mock.patch('os.path.isdir')
122+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
123+        """Handle a report of corruption."""
124+
125+        def call_listdir(dirname):
126+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
127+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
128+
129+        mocklistdir.side_effect = call_listdir
130+
131+        class MockFile:
132+            def __init__(self):
133+                self.buffer = ''
134+                self.pos = 0
135+            def write(self, instring):
136+                begin = self.pos
137+                padlen = begin - len(self.buffer)
138+                if padlen > 0:
139+                    self.buffer += '\x00' * padlen
140+                end = self.pos + len(instring)
141+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
142+                self.pos = end
143+            def close(self):
144+                pass
145+            def seek(self, pos):
146+                self.pos = pos
147+            def read(self, numberbytes):
148+                return self.buffer[self.pos:self.pos+numberbytes]
149+            def tell(self):
150+                return self.pos
151+
152+        mocktime.return_value = 0
153+
154+        sharefile = MockFile()
155+        def call_open(fname, mode):
156+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
157+            return sharefile
158+
159+        mockopen.side_effect = call_open
160+        # Now begin the test.
161+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
162+        print bs
163+        bs[0].remote_write(0, 'a')
164+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
165+
166+
167+    @mock.patch('os.path.exists')
168+    @mock.patch('os.path.getsize')
169+    @mock.patch('__builtin__.open')
170+    @mock.patch('os.listdir')
171+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
172+        """ This tests whether the code correctly finds and reads
173+        shares written out by old (Tahoe-LAFS <= v1.8.2)
174+        servers. There is a similar test in test_download, but that one
175+        is from the perspective of the client and exercises a deeper
176+        stack of code. This one is for exercising just the
177+        StorageServer object. """
178+
179+        def call_listdir(dirname):
180+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
181+            return ['0']
182+
183+        mocklistdir.side_effect = call_listdir
184+
185+        def call_open(fname, mode):
186+            self.failUnlessReallyEqual(fname, sharefname)
187+            self.failUnless('r' in mode, mode)
188+            self.failUnless('b' in mode, mode)
189+
190+            return StringIO(share_file_data)
191+        mockopen.side_effect = call_open
192+
193+        datalen = len(share_file_data)
194+        def call_getsize(fname):
195+            self.failUnlessReallyEqual(fname, sharefname)
196+            return datalen
197+        mockgetsize.side_effect = call_getsize
198+
199+        def call_exists(fname):
200+            self.failUnlessReallyEqual(fname, sharefname)
201+            return True
202+        mockexists.side_effect = call_exists
203+
204+        # Now begin the test.
205+        bs = self.s.remote_get_buckets('teststorage_index')
206+
207+        self.failUnlessEqual(len(bs), 1)
208+        b = bs[0]
209+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
210+        # If you try to read past the end you get as much data as is there.
211+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
212+        # If you start reading past the end of the file you get the empty string.
213+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
214}
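
The share_file_data constant at the top of this test follows the v1 ShareFile layout documented later in this bundle: a 12-byte header of three big-endian 4-byte fields (version, truncated share data length, lease count), then the share data, then the lease records. Below is a minimal decoding sketch, assuming the Python 2 byte strings used throughout the patch; the 'x'*32 and 'y'*32 literals stand in for the renew and cancel secrets passed to remote_allocate_buckets.

    import struct

    share_data = 'a' + '\x00\x00\x00\x00' + 'x'*32 + 'y'*32 + '\x00(\xde\x80'
    share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data

    # Header fields: version, share data length (saturated at 2**32-1), lease count.
    version, datalen, num_leases = struct.unpack(">LLL", share_file_data[:0xc])
    assert (version, datalen, num_leases) == (1, 1, 1)

    # The share data starts at offset 0xc; here it is the single byte 'a'.
    assert share_file_data[0xc:0xc + datalen] == 'a'

    # The one lease record follows: owner number, renew secret, cancel secret,
    # and a big-endian expiration time (0x0028de80 == 2678400 s == 31 days).
    owner, renew, cancel, expiration = struct.unpack(">L32s32sL",
                                                     share_file_data[0xc + datalen:])
    assert owner == 0 and expiration == 31 * 24 * 60 * 60
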
215[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
216wilcoxjg@gmail.com**20110624202850
217 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
218 sloppy not for production
219] {
220move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
221hunk ./src/allmydata/storage/crawler.py 13
222     pass
223 
224 class ShareCrawler(service.MultiService):
225-    """A ShareCrawler subclass is attached to a StorageServer, and
226+    """A subclass of ShareCrawler is attached to a StorageServer, and
227     periodically walks all of its shares, processing each one in some
228     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
229     since large servers can easily have a terabyte of shares, in several
230hunk ./src/allmydata/storage/crawler.py 31
231     We assume that the normal upload/download/get_buckets traffic of a tahoe
232     grid will cause the prefixdir contents to be mostly cached in the kernel,
233     or that the number of buckets in each prefixdir will be small enough to
234-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
235+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
236     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
237     prefix. On this server, each prefixdir took 130ms-200ms to list the first
238     time, and 17ms to list the second time.
239hunk ./src/allmydata/storage/crawler.py 68
240     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
241     minimum_cycle_time = 300 # don't run a cycle faster than this
242 
243-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
244+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
245         service.MultiService.__init__(self)
246         if allowed_cpu_percentage is not None:
247             self.allowed_cpu_percentage = allowed_cpu_percentage
248hunk ./src/allmydata/storage/crawler.py 72
249-        self.server = server
250-        self.sharedir = server.sharedir
251-        self.statefile = statefile
252+        self.backend = backend
253         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
254                          for i in range(2**10)]
255         self.prefixes.sort()
256hunk ./src/allmydata/storage/crawler.py 446
257 
258     minimum_cycle_time = 60*60 # we don't need this more than once an hour
259 
260-    def __init__(self, server, statefile, num_sample_prefixes=1):
261-        ShareCrawler.__init__(self, server, statefile)
262+    def __init__(self, statefile, num_sample_prefixes=1):
263+        ShareCrawler.__init__(self, statefile)
264         self.num_sample_prefixes = num_sample_prefixes
265 
266     def add_initial_state(self):
267hunk ./src/allmydata/storage/expirer.py 15
268     removed.
269 
270     I collect statistics on the leases and make these available to a web
271-    status page, including::
272+    status page, including:
273 
274     Space recovered during this cycle-so-far:
275      actual (only if expiration_enabled=True):
276hunk ./src/allmydata/storage/expirer.py 51
277     slow_start = 360 # wait 6 minutes after startup
278     minimum_cycle_time = 12*60*60 # not more than twice per day
279 
280-    def __init__(self, server, statefile, historyfile,
281+    def __init__(self, statefile, historyfile,
282                  expiration_enabled, mode,
283                  override_lease_duration, # used if expiration_mode=="age"
284                  cutoff_date, # used if expiration_mode=="cutoff-date"
285hunk ./src/allmydata/storage/expirer.py 71
286         else:
287             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
288         self.sharetypes_to_expire = sharetypes
289-        ShareCrawler.__init__(self, server, statefile)
290+        ShareCrawler.__init__(self, statefile)
291 
292     def add_initial_state(self):
293         # we fill ["cycle-to-date"] here (even though they will be reset in
294hunk ./src/allmydata/storage/immutable.py 44
295     sharetype = "immutable"
296 
297     def __init__(self, filename, max_size=None, create=False):
298-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
299+        """ If max_size is not None then I won't allow more than
300+        max_size to be written to me. If create=True then max_size
301+        must not be None. """
302         precondition((max_size is not None) or (not create), max_size, create)
303         self.home = filename
304         self._max_size = max_size
305hunk ./src/allmydata/storage/immutable.py 87
306 
307     def read_share_data(self, offset, length):
308         precondition(offset >= 0)
309-        # reads beyond the end of the data are truncated. Reads that start
310-        # beyond the end of the data return an empty string. I wonder why
311-        # Python doesn't do the following computation for me?
312+        # Reads beyond the end of the data are truncated. Reads that start
313+        # beyond the end of the data return an empty string.
314         seekpos = self._data_offset+offset
315         fsize = os.path.getsize(self.home)
316         actuallength = max(0, min(length, fsize-seekpos))
317hunk ./src/allmydata/storage/immutable.py 198
318             space_freed += os.stat(self.home)[stat.ST_SIZE]
319             self.unlink()
320         return space_freed
321+class NullBucketWriter(Referenceable):
322+    implements(RIBucketWriter)
323 
324hunk ./src/allmydata/storage/immutable.py 201
325+    def remote_write(self, offset, data):
326+        return
327 
328 class BucketWriter(Referenceable):
329     implements(RIBucketWriter)
330hunk ./src/allmydata/storage/server.py 7
331 from twisted.application import service
332 
333 from zope.interface import implements
334-from allmydata.interfaces import RIStorageServer, IStatsProducer
335+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
336 from allmydata.util import fileutil, idlib, log, time_format
337 import allmydata # for __full_version__
338 
339hunk ./src/allmydata/storage/server.py 16
340 from allmydata.storage.lease import LeaseInfo
341 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
342      create_mutable_sharefile
343-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
344+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
345 from allmydata.storage.crawler import BucketCountingCrawler
346 from allmydata.storage.expirer import LeaseCheckingCrawler
347 
348hunk ./src/allmydata/storage/server.py 20
349+from zope.interface import implements
350+
351+# A Backend is a MultiService so that its crawlers (if it has any) can
352+# be started and stopped.
353+class Backend(service.MultiService):
354+    implements(IStatsProducer)
355+    def __init__(self):
356+        service.MultiService.__init__(self)
357+
358+    def get_bucket_shares(self):
359+        """XXX"""
360+        raise NotImplementedError
361+
362+    def get_share(self):
363+        """XXX"""
364+        raise NotImplementedError
365+
366+    def make_bucket_writer(self):
367+        """XXX"""
368+        raise NotImplementedError
369+
370+class NullBackend(Backend):
371+    def __init__(self):
372+        Backend.__init__(self)
373+
374+    def get_available_space(self):
375+        return None
376+
377+    def get_bucket_shares(self, storage_index):
378+        return set()
379+
380+    def get_share(self, storage_index, sharenum):
381+        return None
382+
383+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
384+        return NullBucketWriter()
385+
386+class FSBackend(Backend):
387+    def __init__(self, storedir, readonly=False, reserved_space=0):
388+        Backend.__init__(self)
389+
390+        self._setup_storage(storedir, readonly, reserved_space)
391+        self._setup_corruption_advisory()
392+        self._setup_bucket_counter()
393+        self._setup_lease_checkerf()
394+
395+    def _setup_storage(self, storedir, readonly, reserved_space):
396+        self.storedir = storedir
397+        self.readonly = readonly
398+        self.reserved_space = int(reserved_space)
399+        if self.reserved_space:
400+            if self.get_available_space() is None:
401+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
402+                        umid="0wZ27w", level=log.UNUSUAL)
403+
404+        self.sharedir = os.path.join(self.storedir, "shares")
405+        fileutil.make_dirs(self.sharedir)
406+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
407+        self._clean_incomplete()
408+
409+    def _clean_incomplete(self):
410+        fileutil.rm_dir(self.incomingdir)
411+        fileutil.make_dirs(self.incomingdir)
412+
413+    def _setup_corruption_advisory(self):
414+        # we don't actually create the corruption-advisory dir until necessary
415+        self.corruption_advisory_dir = os.path.join(self.storedir,
416+                                                    "corruption-advisories")
417+
418+    def _setup_bucket_counter(self):
419+        statefile = os.path.join(self.storedir, "bucket_counter.state")
420+        self.bucket_counter = BucketCountingCrawler(statefile)
421+        self.bucket_counter.setServiceParent(self)
422+
423+    def _setup_lease_checkerf(self):
424+        statefile = os.path.join(self.storedir, "lease_checker.state")
425+        historyfile = os.path.join(self.storedir, "lease_checker.history")
426+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
427+                                   expiration_enabled, expiration_mode,
428+                                   expiration_override_lease_duration,
429+                                   expiration_cutoff_date,
430+                                   expiration_sharetypes)
431+        self.lease_checker.setServiceParent(self)
432+
433+    def get_available_space(self):
434+        if self.readonly:
435+            return 0
436+        return fileutil.get_available_space(self.storedir, self.reserved_space)
437+
438+    def get_bucket_shares(self, storage_index):
439+        """Return a list of (shnum, pathname) tuples for files that hold
440+        shares for this storage_index. In each tuple, 'shnum' will always be
441+        the integer form of the last component of 'pathname'."""
442+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
443+        try:
444+            for f in os.listdir(storagedir):
445+                if NUM_RE.match(f):
446+                    filename = os.path.join(storagedir, f)
447+                    yield (int(f), filename)
448+        except OSError:
449+            # Commonly caused by there being no buckets at all.
450+            pass
451+
452 # storage/
453 # storage/shares/incoming
454 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
455hunk ./src/allmydata/storage/server.py 143
456     name = 'storage'
457     LeaseCheckerClass = LeaseCheckingCrawler
458 
459-    def __init__(self, storedir, nodeid, reserved_space=0,
460-                 discard_storage=False, readonly_storage=False,
461+    def __init__(self, nodeid, backend, reserved_space=0,
462+                 readonly_storage=False,
463                  stats_provider=None,
464                  expiration_enabled=False,
465                  expiration_mode="age",
466hunk ./src/allmydata/storage/server.py 155
467         assert isinstance(nodeid, str)
468         assert len(nodeid) == 20
469         self.my_nodeid = nodeid
470-        self.storedir = storedir
471-        sharedir = os.path.join(storedir, "shares")
472-        fileutil.make_dirs(sharedir)
473-        self.sharedir = sharedir
474-        # we don't actually create the corruption-advisory dir until necessary
475-        self.corruption_advisory_dir = os.path.join(storedir,
476-                                                    "corruption-advisories")
477-        self.reserved_space = int(reserved_space)
478-        self.no_storage = discard_storage
479-        self.readonly_storage = readonly_storage
480         self.stats_provider = stats_provider
481         if self.stats_provider:
482             self.stats_provider.register_producer(self)
483hunk ./src/allmydata/storage/server.py 158
484-        self.incomingdir = os.path.join(sharedir, 'incoming')
485-        self._clean_incomplete()
486-        fileutil.make_dirs(self.incomingdir)
487         self._active_writers = weakref.WeakKeyDictionary()
488hunk ./src/allmydata/storage/server.py 159
489+        self.backend = backend
490+        self.backend.setServiceParent(self)
491         log.msg("StorageServer created", facility="tahoe.storage")
492 
493hunk ./src/allmydata/storage/server.py 163
494-        if reserved_space:
495-            if self.get_available_space() is None:
496-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
497-                        umin="0wZ27w", level=log.UNUSUAL)
498-
499         self.latencies = {"allocate": [], # immutable
500                           "write": [],
501                           "close": [],
502hunk ./src/allmydata/storage/server.py 174
503                           "renew": [],
504                           "cancel": [],
505                           }
506-        self.add_bucket_counter()
507-
508-        statefile = os.path.join(self.storedir, "lease_checker.state")
509-        historyfile = os.path.join(self.storedir, "lease_checker.history")
510-        klass = self.LeaseCheckerClass
511-        self.lease_checker = klass(self, statefile, historyfile,
512-                                   expiration_enabled, expiration_mode,
513-                                   expiration_override_lease_duration,
514-                                   expiration_cutoff_date,
515-                                   expiration_sharetypes)
516-        self.lease_checker.setServiceParent(self)
517 
518     def __repr__(self):
519         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
520hunk ./src/allmydata/storage/server.py 178
521 
522-    def add_bucket_counter(self):
523-        statefile = os.path.join(self.storedir, "bucket_counter.state")
524-        self.bucket_counter = BucketCountingCrawler(self, statefile)
525-        self.bucket_counter.setServiceParent(self)
526-
527     def count(self, name, delta=1):
528         if self.stats_provider:
529             self.stats_provider.count("storage_server." + name, delta)
530hunk ./src/allmydata/storage/server.py 233
531             kwargs["facility"] = "tahoe.storage"
532         return log.msg(*args, **kwargs)
533 
534-    def _clean_incomplete(self):
535-        fileutil.rm_dir(self.incomingdir)
536-
537     def get_stats(self):
538         # remember: RIStatsProvider requires that our return dict
539         # contains numeric values.
540hunk ./src/allmydata/storage/server.py 269
541             stats['storage_server.total_bucket_count'] = bucket_count
542         return stats
543 
544-    def get_available_space(self):
545-        """Returns available space for share storage in bytes, or None if no
546-        API to get this information is available."""
547-
548-        if self.readonly_storage:
549-            return 0
550-        return fileutil.get_available_space(self.storedir, self.reserved_space)
551-
552     def allocated_size(self):
553         space = 0
554         for bw in self._active_writers:
555hunk ./src/allmydata/storage/server.py 276
556         return space
557 
558     def remote_get_version(self):
559-        remaining_space = self.get_available_space()
560+        remaining_space = self.backend.get_available_space()
561         if remaining_space is None:
562             # We're on a platform that has no API to get disk stats.
563             remaining_space = 2**64
564hunk ./src/allmydata/storage/server.py 301
565         self.count("allocate")
566         alreadygot = set()
567         bucketwriters = {} # k: shnum, v: BucketWriter
568-        si_dir = storage_index_to_dir(storage_index)
569-        si_s = si_b2a(storage_index)
570 
571hunk ./src/allmydata/storage/server.py 302
572+        si_s = si_b2a(storage_index)
573         log.msg("storage: allocate_buckets %s" % si_s)
574 
575         # in this implementation, the lease information (including secrets)
576hunk ./src/allmydata/storage/server.py 316
577 
578         max_space_per_bucket = allocated_size
579 
580-        remaining_space = self.get_available_space()
581+        remaining_space = self.backend.get_available_space()
582         limited = remaining_space is not None
583         if limited:
584             # this is a bit conservative, since some of this allocated_size()
585hunk ./src/allmydata/storage/server.py 329
586         # they asked about: this will save them a lot of work. Add or update
587         # leases for all of them: if they want us to hold shares for this
588         # file, they'll want us to hold leases for this file.
589-        for (shnum, fn) in self._get_bucket_shares(storage_index):
590+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
591             alreadygot.add(shnum)
592             sf = ShareFile(fn)
593             sf.add_or_renew_lease(lease_info)
594hunk ./src/allmydata/storage/server.py 335
595 
596         for shnum in sharenums:
597-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
598-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
599-            if os.path.exists(finalhome):
600+            share = self.backend.get_share(storage_index, shnum)
601+
602+            if not share:
603+                if (not limited) or (remaining_space >= max_space_per_bucket):
604+                    # ok! we need to create the new share file.
605+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
606+                                      max_space_per_bucket, lease_info, canary)
607+                    bucketwriters[shnum] = bw
608+                    self._active_writers[bw] = 1
609+                    if limited:
610+                        remaining_space -= max_space_per_bucket
611+                else:
612+                    # bummer! not enough space to accept this bucket
613+                    pass
614+
615+            elif share.is_complete():
616                 # great! we already have it. easy.
617                 pass
618hunk ./src/allmydata/storage/server.py 353
619-            elif os.path.exists(incominghome):
620+            elif not share.is_complete():
621                 # Note that we don't create BucketWriters for shnums that
622                 # have a partial share (in incoming/), so if a second upload
623                 # occurs while the first is still in progress, the second
624hunk ./src/allmydata/storage/server.py 359
625                 # uploader will use different storage servers.
626                 pass
627-            elif (not limited) or (remaining_space >= max_space_per_bucket):
628-                # ok! we need to create the new share file.
629-                bw = BucketWriter(self, incominghome, finalhome,
630-                                  max_space_per_bucket, lease_info, canary)
631-                if self.no_storage:
632-                    bw.throw_out_all_data = True
633-                bucketwriters[shnum] = bw
634-                self._active_writers[bw] = 1
635-                if limited:
636-                    remaining_space -= max_space_per_bucket
637-            else:
638-                # bummer! not enough space to accept this bucket
639-                pass
640-
641-        if bucketwriters:
642-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
643 
644         self.add_latency("allocate", time.time() - start)
645         return alreadygot, bucketwriters
646hunk ./src/allmydata/storage/server.py 437
647             self.stats_provider.count('storage_server.bytes_added', consumed_size)
648         del self._active_writers[bw]
649 
650-    def _get_bucket_shares(self, storage_index):
651-        """Return a list of (shnum, pathname) tuples for files that hold
652-        shares for this storage_index. In each tuple, 'shnum' will always be
653-        the integer form of the last component of 'pathname'."""
654-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
655-        try:
656-            for f in os.listdir(storagedir):
657-                if NUM_RE.match(f):
658-                    filename = os.path.join(storagedir, f)
659-                    yield (int(f), filename)
660-        except OSError:
661-            # Commonly caused by there being no buckets at all.
662-            pass
663 
664     def remote_get_buckets(self, storage_index):
665         start = time.time()
666hunk ./src/allmydata/storage/server.py 444
667         si_s = si_b2a(storage_index)
668         log.msg("storage: get_buckets %s" % si_s)
669         bucketreaders = {} # k: sharenum, v: BucketReader
670-        for shnum, filename in self._get_bucket_shares(storage_index):
671+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
672             bucketreaders[shnum] = BucketReader(self, filename,
673                                                 storage_index, shnum)
674         self.add_latency("get", time.time() - start)
675hunk ./src/allmydata/test/test_backends.py 10
676 import mock
677 
678 # This is the code that we're going to be testing.
679-from allmydata.storage.server import StorageServer
680+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
681 
682 # The following share file contents were generated with
683 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
684hunk ./src/allmydata/test/test_backends.py 21
685 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
686 
687 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
688+    @mock.patch('time.time')
689+    @mock.patch('os.mkdir')
690+    @mock.patch('__builtin__.open')
691+    @mock.patch('os.listdir')
692+    @mock.patch('os.path.isdir')
693+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
694+        """ This tests whether a server instance can be constructed
695+        with a null backend. The server instance fails the test if it
696+        tries to read or write to the file system. """
697+
698+        # Now begin the test.
699+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
700+
701+        self.failIf(mockisdir.called)
702+        self.failIf(mocklistdir.called)
703+        self.failIf(mockopen.called)
704+        self.failIf(mockmkdir.called)
705+
706+        # You passed!
707+
708+    @mock.patch('time.time')
709+    @mock.patch('os.mkdir')
710     @mock.patch('__builtin__.open')
711hunk ./src/allmydata/test/test_backends.py 44
712-    def test_create_server(self, mockopen):
713-        """ This tests whether a server instance can be constructed. """
714+    @mock.patch('os.listdir')
715+    @mock.patch('os.path.isdir')
716+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
717+        """ This tests whether a server instance can be constructed
718+        with a filesystem backend. To pass the test, it has to use the
719+        filesystem in only the prescribed ways. """
720 
721         def call_open(fname, mode):
722             if fname == 'testdir/bucket_counter.state':
723hunk ./src/allmydata/test/test_backends.py 58
724                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
725             elif fname == 'testdir/lease_checker.history':
726                 return StringIO()
727+            else:
728+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
729         mockopen.side_effect = call_open
730 
731         # Now begin the test.
732hunk ./src/allmydata/test/test_backends.py 63
733-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
734+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
735+
736+        self.failIf(mockisdir.called)
737+        self.failIf(mocklistdir.called)
738+        self.failIf(mockopen.called)
739+        self.failIf(mockmkdir.called)
740+        self.failIf(mocktime.called)
741 
742         # You passed!
743 
744hunk ./src/allmydata/test/test_backends.py 73
745-class TestServer(unittest.TestCase, ReallyEqualMixin):
746+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
747+    def setUp(self):
748+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
749+
750+    @mock.patch('os.mkdir')
751+    @mock.patch('__builtin__.open')
752+    @mock.patch('os.listdir')
753+    @mock.patch('os.path.isdir')
754+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
755+        """ Write a new share. """
756+
757+        # Now begin the test.
758+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
759+        bs[0].remote_write(0, 'a')
760+        self.failIf(mockisdir.called)
761+        self.failIf(mocklistdir.called)
762+        self.failIf(mockopen.called)
763+        self.failIf(mockmkdir.called)
764+
765+    @mock.patch('os.path.exists')
766+    @mock.patch('os.path.getsize')
767+    @mock.patch('__builtin__.open')
768+    @mock.patch('os.listdir')
769+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
770+        """ This tests whether the code correctly finds and reads
771+        shares written out by old (Tahoe-LAFS <= v1.8.2)
772+        servers. There is a similar test in test_download, but that one
773+        is from the perspective of the client and exercises a deeper
774+        stack of code. This one is for exercising just the
775+        StorageServer object. """
776+
777+        # Now begin the test.
778+        bs = self.s.remote_get_buckets('teststorage_index')
779+
780+        self.failUnlessEqual(len(bs), 0)
781+        self.failIf(mocklistdir.called)
782+        self.failIf(mockopen.called)
783+        self.failIf(mockgetsize.called)
784+        self.failIf(mockexists.called)
785+
786+
787+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
788     @mock.patch('__builtin__.open')
789     def setUp(self, mockopen):
790         def call_open(fname, mode):
791hunk ./src/allmydata/test/test_backends.py 126
792                 return StringIO()
793         mockopen.side_effect = call_open
794 
795-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
796-
797+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
798 
799     @mock.patch('time.time')
800     @mock.patch('os.mkdir')
801hunk ./src/allmydata/test/test_backends.py 134
802     @mock.patch('os.listdir')
803     @mock.patch('os.path.isdir')
804     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
805-        """Handle a report of corruption."""
806+        """ Write a new share. """
807 
808         def call_listdir(dirname):
809             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
810hunk ./src/allmydata/test/test_backends.py 173
811         mockopen.side_effect = call_open
812         # Now begin the test.
813         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
814-        print bs
815         bs[0].remote_write(0, 'a')
816         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
817 
818hunk ./src/allmydata/test/test_backends.py 176
819-
820     @mock.patch('os.path.exists')
821     @mock.patch('os.path.getsize')
822     @mock.patch('__builtin__.open')
823hunk ./src/allmydata/test/test_backends.py 218
824 
825         self.failUnlessEqual(len(bs), 1)
826         b = bs[0]
827+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
828         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
829         # If you try to read past the end you get the as much data as is there.
830         # If you try to read past the end you get as much data as is there.
831hunk ./src/allmydata/test/test_backends.py 224
832         # If you start reading past the end of the file you get the empty string.
833         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
834+
835+
836}
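
The net effect of this patch is that StorageServer no longer owns a share directory: it is constructed from a node id plus a backend object, and it delegates get_available_space, get_bucket_shares, get_share and make_bucket_writer to that backend. A minimal usage sketch, mirroring TestServerNullBackend above (NullBackend stores nothing, reports unlimited space, and never touches the filesystem):

    from allmydata.storage.server import StorageServer, NullBackend

    # The constructor asserts a 20-byte node id, as in the tests.
    nodeid = 'testnodeidxxxxxxxxxx'

    # With the null backend every write is discarded, so no os.* or open()
    # calls are made; this is what TestServerNullBackend relies on.
    server = StorageServer(nodeid, backend=NullBackend())

    # remote_get_buckets consults backend.get_bucket_shares(), which is empty
    # for the null backend, so no BucketReaders come back.
    assert len(server.remote_get_buckets('teststorage_index')) == 0
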
837[a temp patch used as a snapshot
838wilcoxjg@gmail.com**20110626052732
839 Ignore-this: 95f05e314eaec870afa04c76d979aa44
840] {
841hunk ./docs/configuration.rst 637
842   [storage]
843   enabled = True
844   readonly = True
845-  sizelimit = 10000000000
846 
847 
848   [helper]
849hunk ./docs/garbage-collection.rst 16
850 
851 When a file or directory in the virtual filesystem is no longer referenced,
852 the space that its shares occupied on each storage server can be freed,
853-making room for other shares. Tahoe currently uses a garbage collection
854+making room for other shares. Tahoe uses a garbage collection
855 ("GC") mechanism to implement this space-reclamation process. Each share has
856 one or more "leases", which are managed by clients who want the
857 file/directory to be retained. The storage server accepts each share for a
858hunk ./docs/garbage-collection.rst 34
859 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
860 If lease renewal occurs quickly and with 100% reliability, than any renewal
861 time that is shorter than the lease duration will suffice, but a larger ratio
862-of duration-over-renewal-time will be more robust in the face of occasional
863+of lease duration to renewal time will be more robust in the face of occasional
864 delays or failures.
865 
866 The current recommended values for a small Tahoe grid are to renew the leases
867replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
868hunk ./src/allmydata/client.py 260
869             sharetypes.append("mutable")
870         expiration_sharetypes = tuple(sharetypes)
871 
872+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
873+            xyz
874+        xyz
875         ss = StorageServer(storedir, self.nodeid,
876                            reserved_space=reserved,
877                            discard_storage=discard,
878hunk ./src/allmydata/storage/crawler.py 234
879         f = open(tmpfile, "wb")
880         pickle.dump(self.state, f)
881         f.close()
882-        fileutil.move_into_place(tmpfile, self.statefile)
883+        fileutil.move_into_place(tmpfile, self.statefname)
884 
885     def startService(self):
886         # arrange things to look like we were just sleeping, so
887}
888[snapshot of progress on backend implementation (not suitable for trunk)
889wilcoxjg@gmail.com**20110626053244
890 Ignore-this: 50c764af791c2b99ada8289546806a0a
891] {
892adddir ./src/allmydata/storage/backends
893adddir ./src/allmydata/storage/backends/das
894move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
895adddir ./src/allmydata/storage/backends/null
896hunk ./src/allmydata/interfaces.py 270
897         store that on disk.
898         """
899 
900+class IStorageBackend(Interface):
901+    """
902+    Objects of this kind live on the server side and are used by the
903+    storage server object.
904+    """
905+    def get_available_space(self, reserved_space):
906+        """ Returns available space for share storage in bytes, or
907+        None if this information is not available or if the available
908+        space is unlimited.
909+
910+        If the backend is configured for read-only mode then this will
911+        return 0.
912+
913+        reserved_space is how many bytes to subtract from the answer, so
914+        you can pass how many bytes you would like to leave unused on this
915+        filesystem as reserved_space. """
916+
917+    def get_bucket_shares(self):
918+        """XXX"""
919+
920+    def get_share(self):
921+    This object may contain as much as all of the share data.  It is intended
922+
923+    def make_bucket_writer(self):
924+        """XXX"""
925+
926+class IStorageBackendShare(Interface):
927+    """
928+    This object contains as much as all of the share data.  It is intended
929+    for lazy evaluation such that in many use cases substantially less than
930+    all of the share data will be accessed.
931+    """
932+    def is_complete(self):
933+        """
934+        Returns the share state, or None if the share does not exist.
935+        """
936+
937 class IStorageBucketWriter(Interface):
938     """
939     Objects of this kind live on the client side.
940hunk ./src/allmydata/interfaces.py 2492
941 
942 class EmptyPathnameComponentError(Exception):
943     """The webapi disallows empty pathname components."""
944+
945+class IShareStore(Interface):
946+    pass
947+
948addfile ./src/allmydata/storage/backends/__init__.py
949addfile ./src/allmydata/storage/backends/das/__init__.py
950addfile ./src/allmydata/storage/backends/das/core.py
951hunk ./src/allmydata/storage/backends/das/core.py 1
952+from allmydata.interfaces import IStorageBackend
953+from allmydata.storage.backends.base import Backend
954+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
955+from allmydata.util.assertutil import precondition
956+
957+import os, re, weakref, struct, time
958+
959+from foolscap.api import Referenceable
960+from twisted.application import service
961+
962+from zope.interface import implements
963+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
964+from allmydata.util import fileutil, idlib, log, time_format
965+import allmydata # for __full_version__
966+
967+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
968+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
969+from allmydata.storage.lease import LeaseInfo
970+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
971+     create_mutable_sharefile
972+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
973+from allmydata.storage.crawler import FSBucketCountingCrawler
974+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
975+
976+from zope.interface import implements
977+
978+class DASCore(Backend):
979+    implements(IStorageBackend)
980+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
981+        Backend.__init__(self)
982+
983+        self._setup_storage(storedir, readonly, reserved_space)
984+        self._setup_corruption_advisory()
985+        self._setup_bucket_counter()
986+        self._setup_lease_checkerf(expiration_policy)
987+
988+    def _setup_storage(self, storedir, readonly, reserved_space):
989+        self.storedir = storedir
990+        self.readonly = readonly
991+        self.reserved_space = int(reserved_space)
992+        if self.reserved_space:
993+            if self.get_available_space() is None:
994+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
995+                        umid="0wZ27w", level=log.UNUSUAL)
996+
997+        self.sharedir = os.path.join(self.storedir, "shares")
998+        fileutil.make_dirs(self.sharedir)
999+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1000+        self._clean_incomplete()
1001+
1002+    def _clean_incomplete(self):
1003+        fileutil.rm_dir(self.incomingdir)
1004+        fileutil.make_dirs(self.incomingdir)
1005+
1006+    def _setup_corruption_advisory(self):
1007+        # we don't actually create the corruption-advisory dir until necessary
1008+        self.corruption_advisory_dir = os.path.join(self.storedir,
1009+                                                    "corruption-advisories")
1010+
1011+    def _setup_bucket_counter(self):
1012+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1013+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1014+        self.bucket_counter.setServiceParent(self)
1015+
1016+    def _setup_lease_checkerf(self, expiration_policy):
1017+        statefile = os.path.join(self.storedir, "lease_checker.state")
1018+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1019+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1020+        self.lease_checker.setServiceParent(self)
1021+
1022+    def get_available_space(self):
1023+        if self.readonly:
1024+            return 0
1025+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1026+
1027+    def get_shares(self, storage_index):
1028+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1029+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1030+        try:
1031+            for f in os.listdir(finalstoragedir):
1032+                if NUM_RE.match(f):
1033+                    filename = os.path.join(finalstoragedir, f)
1034+                    yield FSBShare(filename, int(f))
1035+        except OSError:
1036+            # Commonly caused by there being no buckets at all.
1037+            pass
1038+       
1039+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1040+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1041+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1042+        return bw
1043+       
1044+
1045+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1046+# and share data. The share data is accessed by RIBucketWriter.write and
1047+# RIBucketReader.read . The lease information is not accessible through these
1048+# interfaces.
1049+
1050+# The share file has the following layout:
1051+#  0x00: share file version number, four bytes, current version is 1
1052+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1053+#  0x08: number of leases, four bytes big-endian
1054+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1055+#  A+0x0c = B: first lease. Lease format is:
1056+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1057+#   B+0x04: renew secret, 32 bytes (SHA256)
1058+#   B+0x24: cancel secret, 32 bytes (SHA256)
1059+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1060+#   B+0x48: next lease, or end of record
1061+
1062+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1063+# but it is still filled in by storage servers in case the storage server
1064+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1065+# share file is moved from one storage server to another. The value stored in
1066+# this field is truncated, so if the actual share data length is >= 2**32,
1067+# then the value stored in this field will be the actual share data length
1068+# modulo 2**32.
1069+
1070+class ImmutableShare:
1071+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1072+    sharetype = "immutable"
1073+
1074+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1075+        """ If max_size is not None then I won't allow more than
1076+        max_size to be written to me. If create=True then max_size
1077+        must not be None. """
1078+        precondition((max_size is not None) or (not create), max_size, create)
1079+        self.shnum = shnum
1080+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1081+        self._max_size = max_size
1082+        if create:
1083+            # touch the file, so later callers will see that we're working on
1084+            # it. Also construct the metadata.
1085+            assert not os.path.exists(self.fname)
1086+            fileutil.make_dirs(os.path.dirname(self.fname))
1087+            f = open(self.fname, 'wb')
1088+            # The second field -- the four-byte share data length -- is no
1089+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1090+            # there in case someone downgrades a storage server from >=
1091+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1092+            # server to another, etc. We do saturation -- a share data length
1093+            # larger than 2**32-1 (what can fit into the field) is marked as
1094+            # the largest length that can fit into the field. That way, even
1095+            # if this does happen, the old < v1.3.0 server will still allow
1096+            # clients to read the first part of the share.
1097+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1098+            f.close()
1099+            self._lease_offset = max_size + 0x0c
1100+            self._num_leases = 0
1101+        else:
1102+            f = open(self.fname, 'rb')
1103+            filesize = os.path.getsize(self.fname)
1104+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1105+            f.close()
1106+            if version != 1:
1107+                msg = "sharefile %s had version %d but we wanted 1" % \
1108+                      (self.fname, version)
1109+                raise UnknownImmutableContainerVersionError(msg)
1110+            self._num_leases = num_leases
1111+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1112+        self._data_offset = 0xc
1113+
1114+    def unlink(self):
1115+        os.unlink(self.fname)
1116+
1117+    def read_share_data(self, offset, length):
1118+        precondition(offset >= 0)
1119+        # Reads beyond the end of the data are truncated. Reads that start
1120+        # beyond the end of the data return an empty string.
1121+        seekpos = self._data_offset+offset
1122+        fsize = os.path.getsize(self.fname)
1123+        actuallength = max(0, min(length, fsize-seekpos))
1124+        if actuallength == 0:
1125+            return ""
1126+        f = open(self.fname, 'rb')
1127+        f.seek(seekpos)
1128+        return f.read(actuallength)
1129+
1130+    def write_share_data(self, offset, data):
1131+        length = len(data)
1132+        precondition(offset >= 0, offset)
1133+        if self._max_size is not None and offset+length > self._max_size:
1134+            raise DataTooLargeError(self._max_size, offset, length)
1135+        f = open(self.fname, 'rb+')
1136+        real_offset = self._data_offset+offset
1137+        f.seek(real_offset)
1138+        assert f.tell() == real_offset
1139+        f.write(data)
1140+        f.close()
1141+
1142+    def _write_lease_record(self, f, lease_number, lease_info):
1143+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1144+        f.seek(offset)
1145+        assert f.tell() == offset
1146+        f.write(lease_info.to_immutable_data())
1147+
1148+    def _read_num_leases(self, f):
1149+        f.seek(0x08)
1150+        (num_leases,) = struct.unpack(">L", f.read(4))
1151+        return num_leases
1152+
1153+    def _write_num_leases(self, f, num_leases):
1154+        f.seek(0x08)
1155+        f.write(struct.pack(">L", num_leases))
1156+
1157+    def _truncate_leases(self, f, num_leases):
1158+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1159+
1160+    def get_leases(self):
1161+        """Yields a LeaseInfo instance for all leases."""
1162+        f = open(self.fname, 'rb')
1163+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1164+        f.seek(self._lease_offset)
1165+        for i in range(num_leases):
1166+            data = f.read(self.LEASE_SIZE)
1167+            if data:
1168+                yield LeaseInfo().from_immutable_data(data)
1169+
1170+    def add_lease(self, lease_info):
1171+        f = open(self.fname, 'rb+')
1172+        num_leases = self._read_num_leases(f)
1173+        self._write_lease_record(f, num_leases, lease_info)
1174+        self._write_num_leases(f, num_leases+1)
1175+        f.close()
1176+
1177+    def renew_lease(self, renew_secret, new_expire_time):
1178+        for i,lease in enumerate(self.get_leases()):
1179+            if constant_time_compare(lease.renew_secret, renew_secret):
1180+                # yup. See if we need to update the owner time.
1181+                if new_expire_time > lease.expiration_time:
1182+                    # yes
1183+                    lease.expiration_time = new_expire_time
1184+                    f = open(self.fname, 'rb+')
1185+                    self._write_lease_record(f, i, lease)
1186+                    f.close()
1187+                return
1188+        raise IndexError("unable to renew non-existent lease")
1189+
1190+    def add_or_renew_lease(self, lease_info):
1191+        try:
1192+            self.renew_lease(lease_info.renew_secret,
1193+                             lease_info.expiration_time)
1194+        except IndexError:
1195+            self.add_lease(lease_info)
1196+
1197+
1198+    def cancel_lease(self, cancel_secret):
1199+        """Remove a lease with the given cancel_secret. If the last lease is
1200+        cancelled, the file will be removed. Return the number of bytes that
1201+        were freed (by truncating the list of leases, and possibly by
1202+        deleting the file). Raise IndexError if there was no lease with the
1203+        given cancel_secret.
1204+        """
1205+
1206+        leases = list(self.get_leases())
1207+        num_leases_removed = 0
1208+        for i,lease in enumerate(leases):
1209+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1210+                leases[i] = None
1211+                num_leases_removed += 1
1212+        if not num_leases_removed:
1213+            raise IndexError("unable to find matching lease to cancel")
1214+        if num_leases_removed:
1215+            # pack and write out the remaining leases. We write these out in
1216+            # the same order as they were added, so that if we crash while
1217+            # doing this, we won't lose any non-cancelled leases.
1218+            leases = [l for l in leases if l] # remove the cancelled leases
1219+            f = open(self.fname, 'rb+')
1220+            for i,lease in enumerate(leases):
1221+                self._write_lease_record(f, i, lease)
1222+            self._write_num_leases(f, len(leases))
1223+            self._truncate_leases(f, len(leases))
1224+            f.close()
1225+        space_freed = self.LEASE_SIZE * num_leases_removed
1226+        if not len(leases):
1227+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1228+            self.unlink()
1229+        return space_freed
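
The 0x0c offsets in the layout comment above make ImmutableShare's lease arithmetic easy to check against the constants from the mocking tests. A small worked sketch, assuming the one-byte share (max_size == 1) with a single lease that those tests allocate:

    import struct

    LEASE_SIZE = struct.calcsize(">L32s32sL")   # 4 + 32 + 32 + 4 == 72 bytes
    max_size = 1                                # the tests allocate a one-byte share
    data_offset = 0x0c                          # end of the three-field header
    lease_offset = max_size + 0x0c              # == 13, where the first lease lands
    file_size = lease_offset + 1 * LEASE_SIZE   # == 85 == len(share_file_data)

    # Reading the same file back, ImmutableShare recovers the lease offset from
    # the other direction: filesize - num_leases * LEASE_SIZE == 85 - 72 == 13.
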
1230hunk ./src/allmydata/storage/backends/das/expirer.py 2
1231 import time, os, pickle, struct
1232-from allmydata.storage.crawler import ShareCrawler
1233-from allmydata.storage.shares import get_share_file
1234+from allmydata.storage.crawler import FSShareCrawler
1235 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1236      UnknownImmutableContainerVersionError
1237 from twisted.python import log as twlog
1238hunk ./src/allmydata/storage/backends/das/expirer.py 7
1239 
1240-class LeaseCheckingCrawler(ShareCrawler):
1241+class FSLeaseCheckingCrawler(FSShareCrawler):
1242     """I examine the leases on all shares, determining which are still valid
1243     and which have expired. I can remove the expired leases (if so
1244     configured), and the share will be deleted when the last lease is
1245hunk ./src/allmydata/storage/backends/das/expirer.py 50
1246     slow_start = 360 # wait 6 minutes after startup
1247     minimum_cycle_time = 12*60*60 # not more than twice per day
1248 
1249-    def __init__(self, statefile, historyfile,
1250-                 expiration_enabled, mode,
1251-                 override_lease_duration, # used if expiration_mode=="age"
1252-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1253-                 sharetypes):
1254+    def __init__(self, statefile, historyfile, expiration_policy):
1255         self.historyfile = historyfile
1256hunk ./src/allmydata/storage/backends/das/expirer.py 52
1257-        self.expiration_enabled = expiration_enabled
1258-        self.mode = mode
1259+        self.expiration_enabled = expiration_policy['enabled']
1260+        self.mode = expiration_policy['mode']
1261         self.override_lease_duration = None
1262         self.cutoff_date = None
1263         if self.mode == "age":
1264hunk ./src/allmydata/storage/backends/das/expirer.py 57
1265-            assert isinstance(override_lease_duration, (int, type(None)))
1266-            self.override_lease_duration = override_lease_duration # seconds
1267+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1268+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1269         elif self.mode == "cutoff-date":
1270hunk ./src/allmydata/storage/backends/das/expirer.py 60
1271-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1272+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1273             assert cutoff_date is not None
1274hunk ./src/allmydata/storage/backends/das/expirer.py 62
1275-            self.cutoff_date = cutoff_date
1276+            self.cutoff_date = expiration_policy['cutoff_date']
1277         else:
1278hunk ./src/allmydata/storage/backends/das/expirer.py 64
1279-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1280-        self.sharetypes_to_expire = sharetypes
1281-        ShareCrawler.__init__(self, statefile)
1282+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1283+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1284+        FSShareCrawler.__init__(self, statefile)
1285 
1286     def add_initial_state(self):
1287         # we fill ["cycle-to-date"] here (even though they will be reset in
1288hunk ./src/allmydata/storage/backends/das/expirer.py 156
1289 
1290     def process_share(self, sharefilename):
1291         # first, find out what kind of a share it is
1292-        sf = get_share_file(sharefilename)
1293+        f = open(sharefilename, "rb")
1294+        prefix = f.read(32)
1295+        f.close()
1296+        if prefix == MutableShareFile.MAGIC:
1297+            sf = MutableShareFile(sharefilename)
1298+        else:
1299+            # otherwise assume it's immutable
1300+            sf = FSBShare(sharefilename)
1301         sharetype = sf.sharetype
1302         now = time.time()
1303         s = self.stat(sharefilename)
1304addfile ./src/allmydata/storage/backends/null/__init__.py
1305addfile ./src/allmydata/storage/backends/null/core.py
1306hunk ./src/allmydata/storage/backends/null/core.py 1
1307+from allmydata.storage.backends.base import Backend
1308+
1309+class NullCore(Backend):
1310+    def __init__(self):
1311+        Backend.__init__(self)
1312+
1313+    def get_available_space(self):
1314+        return None
1315+
1316+    def get_shares(self, storage_index):
1317+        return set()
1318+
1319+    def get_share(self, storage_index, sharenum):
1320+        return None
1321+
1322+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1323+        return NullBucketWriter()
1324hunk ./src/allmydata/storage/crawler.py 12
1325 class TimeSliceExceeded(Exception):
1326     pass
1327 
1328-class ShareCrawler(service.MultiService):
1329+class FSShareCrawler(service.MultiService):
1330     """A subclass of ShareCrawler is attached to a StorageServer, and
1331     periodically walks all of its shares, processing each one in some
1332     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1333hunk ./src/allmydata/storage/crawler.py 68
1334     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1335     minimum_cycle_time = 300 # don't run a cycle faster than this
1336 
1337-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1338+    def __init__(self, statefname, allowed_cpu_percentage=None):
1339         service.MultiService.__init__(self)
1340         if allowed_cpu_percentage is not None:
1341             self.allowed_cpu_percentage = allowed_cpu_percentage
1342hunk ./src/allmydata/storage/crawler.py 72
1343-        self.backend = backend
1344+        self.statefname = statefname
1345         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1346                          for i in range(2**10)]
1347         self.prefixes.sort()
1348hunk ./src/allmydata/storage/crawler.py 192
1349         #                            of the last bucket to be processed, or
1350         #                            None if we are sleeping between cycles
1351         try:
1352-            f = open(self.statefile, "rb")
1353+            f = open(self.statefname, "rb")
1354             state = pickle.load(f)
1355             f.close()
1356         except EnvironmentError:
1357hunk ./src/allmydata/storage/crawler.py 230
1358         else:
1359             last_complete_prefix = self.prefixes[lcpi]
1360         self.state["last-complete-prefix"] = last_complete_prefix
1361-        tmpfile = self.statefile + ".tmp"
1362+        tmpfile = self.statefname + ".tmp"
1363         f = open(tmpfile, "wb")
1364         pickle.dump(self.state, f)
1365         f.close()
1366hunk ./src/allmydata/storage/crawler.py 433
1367         pass
1368 
1369 
1370-class BucketCountingCrawler(ShareCrawler):
1371+class FSBucketCountingCrawler(FSShareCrawler):
1372     """I keep track of how many buckets are being managed by this server.
1373     This is equivalent to the number of distributed files and directories for
1374     which I am providing storage. The actual number of files+directories in
1375hunk ./src/allmydata/storage/crawler.py 446
1376 
1377     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1378 
1379-    def __init__(self, statefile, num_sample_prefixes=1):
1380-        ShareCrawler.__init__(self, statefile)
1381+    def __init__(self, statefname, num_sample_prefixes=1):
1382+        FSShareCrawler.__init__(self, statefname)
1383         self.num_sample_prefixes = num_sample_prefixes
1384 
1385     def add_initial_state(self):
1386hunk ./src/allmydata/storage/immutable.py 14
1387 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1388      DataTooLargeError
1389 
1390-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1391-# and share data. The share data is accessed by RIBucketWriter.write and
1392-# RIBucketReader.read . The lease information is not accessible through these
1393-# interfaces.
1394-
1395-# The share file has the following layout:
1396-#  0x00: share file version number, four bytes, current version is 1
1397-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1398-#  0x08: number of leases, four bytes big-endian
1399-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1400-#  A+0x0c = B: first lease. Lease format is:
1401-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1402-#   B+0x04: renew secret, 32 bytes (SHA256)
1403-#   B+0x24: cancel secret, 32 bytes (SHA256)
1404-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1405-#   B+0x48: next lease, or end of record
1406-
1407-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1408-# but it is still filled in by storage servers in case the storage server
1409-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1410-# share file is moved from one storage server to another. The value stored in
1411-# this field is truncated, so if the actual share data length is >= 2**32,
1412-# then the value stored in this field will be the actual share data length
1413-# modulo 2**32.
1414-
1415-class ShareFile:
1416-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1417-    sharetype = "immutable"
1418-
1419-    def __init__(self, filename, max_size=None, create=False):
1420-        """ If max_size is not None then I won't allow more than
1421-        max_size to be written to me. If create=True then max_size
1422-        must not be None. """
1423-        precondition((max_size is not None) or (not create), max_size, create)
1424-        self.home = filename
1425-        self._max_size = max_size
1426-        if create:
1427-            # touch the file, so later callers will see that we're working on
1428-            # it. Also construct the metadata.
1429-            assert not os.path.exists(self.home)
1430-            fileutil.make_dirs(os.path.dirname(self.home))
1431-            f = open(self.home, 'wb')
1432-            # The second field -- the four-byte share data length -- is no
1433-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1434-            # there in case someone downgrades a storage server from >=
1435-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1436-            # server to another, etc. We do saturation -- a share data length
1437-            # larger than 2**32-1 (what can fit into the field) is marked as
1438-            # the largest length that can fit into the field. That way, even
1439-            # if this does happen, the old < v1.3.0 server will still allow
1440-            # clients to read the first part of the share.
1441-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1442-            f.close()
1443-            self._lease_offset = max_size + 0x0c
1444-            self._num_leases = 0
1445-        else:
1446-            f = open(self.home, 'rb')
1447-            filesize = os.path.getsize(self.home)
1448-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1449-            f.close()
1450-            if version != 1:
1451-                msg = "sharefile %s had version %d but we wanted 1" % \
1452-                      (filename, version)
1453-                raise UnknownImmutableContainerVersionError(msg)
1454-            self._num_leases = num_leases
1455-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1456-        self._data_offset = 0xc
1457-
1458-    def unlink(self):
1459-        os.unlink(self.home)
1460-
1461-    def read_share_data(self, offset, length):
1462-        precondition(offset >= 0)
1463-        # Reads beyond the end of the data are truncated. Reads that start
1464-        # beyond the end of the data return an empty string.
1465-        seekpos = self._data_offset+offset
1466-        fsize = os.path.getsize(self.home)
1467-        actuallength = max(0, min(length, fsize-seekpos))
1468-        if actuallength == 0:
1469-            return ""
1470-        f = open(self.home, 'rb')
1471-        f.seek(seekpos)
1472-        return f.read(actuallength)
1473-
1474-    def write_share_data(self, offset, data):
1475-        length = len(data)
1476-        precondition(offset >= 0, offset)
1477-        if self._max_size is not None and offset+length > self._max_size:
1478-            raise DataTooLargeError(self._max_size, offset, length)
1479-        f = open(self.home, 'rb+')
1480-        real_offset = self._data_offset+offset
1481-        f.seek(real_offset)
1482-        assert f.tell() == real_offset
1483-        f.write(data)
1484-        f.close()
1485-
1486-    def _write_lease_record(self, f, lease_number, lease_info):
1487-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1488-        f.seek(offset)
1489-        assert f.tell() == offset
1490-        f.write(lease_info.to_immutable_data())
1491-
1492-    def _read_num_leases(self, f):
1493-        f.seek(0x08)
1494-        (num_leases,) = struct.unpack(">L", f.read(4))
1495-        return num_leases
1496-
1497-    def _write_num_leases(self, f, num_leases):
1498-        f.seek(0x08)
1499-        f.write(struct.pack(">L", num_leases))
1500-
1501-    def _truncate_leases(self, f, num_leases):
1502-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1503-
1504-    def get_leases(self):
1505-        """Yields a LeaseInfo instance for all leases."""
1506-        f = open(self.home, 'rb')
1507-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1508-        f.seek(self._lease_offset)
1509-        for i in range(num_leases):
1510-            data = f.read(self.LEASE_SIZE)
1511-            if data:
1512-                yield LeaseInfo().from_immutable_data(data)
1513-
1514-    def add_lease(self, lease_info):
1515-        f = open(self.home, 'rb+')
1516-        num_leases = self._read_num_leases(f)
1517-        self._write_lease_record(f, num_leases, lease_info)
1518-        self._write_num_leases(f, num_leases+1)
1519-        f.close()
1520-
1521-    def renew_lease(self, renew_secret, new_expire_time):
1522-        for i,lease in enumerate(self.get_leases()):
1523-            if constant_time_compare(lease.renew_secret, renew_secret):
1524-                # yup. See if we need to update the owner time.
1525-                if new_expire_time > lease.expiration_time:
1526-                    # yes
1527-                    lease.expiration_time = new_expire_time
1528-                    f = open(self.home, 'rb+')
1529-                    self._write_lease_record(f, i, lease)
1530-                    f.close()
1531-                return
1532-        raise IndexError("unable to renew non-existent lease")
1533-
1534-    def add_or_renew_lease(self, lease_info):
1535-        try:
1536-            self.renew_lease(lease_info.renew_secret,
1537-                             lease_info.expiration_time)
1538-        except IndexError:
1539-            self.add_lease(lease_info)
1540-
1541-
1542-    def cancel_lease(self, cancel_secret):
1543-        """Remove a lease with the given cancel_secret. If the last lease is
1544-        cancelled, the file will be removed. Return the number of bytes that
1545-        were freed (by truncating the list of leases, and possibly by
1546-        deleting the file. Raise IndexError if there was no lease with the
1547-        given cancel_secret.
1548-        """
1549-
1550-        leases = list(self.get_leases())
1551-        num_leases_removed = 0
1552-        for i,lease in enumerate(leases):
1553-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1554-                leases[i] = None
1555-                num_leases_removed += 1
1556-        if not num_leases_removed:
1557-            raise IndexError("unable to find matching lease to cancel")
1558-        if num_leases_removed:
1559-            # pack and write out the remaining leases. We write these out in
1560-            # the same order as they were added, so that if we crash while
1561-            # doing this, we won't lose any non-cancelled leases.
1562-            leases = [l for l in leases if l] # remove the cancelled leases
1563-            f = open(self.home, 'rb+')
1564-            for i,lease in enumerate(leases):
1565-                self._write_lease_record(f, i, lease)
1566-            self._write_num_leases(f, len(leases))
1567-            self._truncate_leases(f, len(leases))
1568-            f.close()
1569-        space_freed = self.LEASE_SIZE * num_leases_removed
1570-        if not len(leases):
1571-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1572-            self.unlink()
1573-        return space_freed
1574-class NullBucketWriter(Referenceable):
1575-    implements(RIBucketWriter)
1576-
1577-    def remote_write(self, offset, data):
1578-        return
1579-
1580 class BucketWriter(Referenceable):
1581     implements(RIBucketWriter)
1582 
1583hunk ./src/allmydata/storage/immutable.py 17
1584-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1585+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1586         self.ss = ss
1587hunk ./src/allmydata/storage/immutable.py 19
1588-        self.incominghome = incominghome
1589-        self.finalhome = finalhome
1590         self._max_size = max_size # don't allow the client to write more than this
1591         self._canary = canary
1592         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1593hunk ./src/allmydata/storage/immutable.py 24
1594         self.closed = False
1595         self.throw_out_all_data = False
1596-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1597+        self._sharefile = immutableshare
1598         # also, add our lease to the file now, so that other ones can be
1599         # added by simultaneous uploaders
1600         self._sharefile.add_lease(lease_info)
1601hunk ./src/allmydata/storage/server.py 16
1602 from allmydata.storage.lease import LeaseInfo
1603 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1604      create_mutable_sharefile
1605-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1606-from allmydata.storage.crawler import BucketCountingCrawler
1607-from allmydata.storage.expirer import LeaseCheckingCrawler
1608 
1609 from zope.interface import implements
1610 
1611hunk ./src/allmydata/storage/server.py 19
1612-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1613-# be started and stopped.
1614-class Backend(service.MultiService):
1615-    implements(IStatsProducer)
1616-    def __init__(self):
1617-        service.MultiService.__init__(self)
1618-
1619-    def get_bucket_shares(self):
1620-        """XXX"""
1621-        raise NotImplementedError
1622-
1623-    def get_share(self):
1624-        """XXX"""
1625-        raise NotImplementedError
1626-
1627-    def make_bucket_writer(self):
1628-        """XXX"""
1629-        raise NotImplementedError
1630-
1631-class NullBackend(Backend):
1632-    def __init__(self):
1633-        Backend.__init__(self)
1634-
1635-    def get_available_space(self):
1636-        return None
1637-
1638-    def get_bucket_shares(self, storage_index):
1639-        return set()
1640-
1641-    def get_share(self, storage_index, sharenum):
1642-        return None
1643-
1644-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1645-        return NullBucketWriter()
1646-
1647-class FSBackend(Backend):
1648-    def __init__(self, storedir, readonly=False, reserved_space=0):
1649-        Backend.__init__(self)
1650-
1651-        self._setup_storage(storedir, readonly, reserved_space)
1652-        self._setup_corruption_advisory()
1653-        self._setup_bucket_counter()
1654-        self._setup_lease_checkerf()
1655-
1656-    def _setup_storage(self, storedir, readonly, reserved_space):
1657-        self.storedir = storedir
1658-        self.readonly = readonly
1659-        self.reserved_space = int(reserved_space)
1660-        if self.reserved_space:
1661-            if self.get_available_space() is None:
1662-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1663-                        umid="0wZ27w", level=log.UNUSUAL)
1664-
1665-        self.sharedir = os.path.join(self.storedir, "shares")
1666-        fileutil.make_dirs(self.sharedir)
1667-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1668-        self._clean_incomplete()
1669-
1670-    def _clean_incomplete(self):
1671-        fileutil.rm_dir(self.incomingdir)
1672-        fileutil.make_dirs(self.incomingdir)
1673-
1674-    def _setup_corruption_advisory(self):
1675-        # we don't actually create the corruption-advisory dir until necessary
1676-        self.corruption_advisory_dir = os.path.join(self.storedir,
1677-                                                    "corruption-advisories")
1678-
1679-    def _setup_bucket_counter(self):
1680-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1681-        self.bucket_counter = BucketCountingCrawler(statefile)
1682-        self.bucket_counter.setServiceParent(self)
1683-
1684-    def _setup_lease_checkerf(self):
1685-        statefile = os.path.join(self.storedir, "lease_checker.state")
1686-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1687-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1688-                                   expiration_enabled, expiration_mode,
1689-                                   expiration_override_lease_duration,
1690-                                   expiration_cutoff_date,
1691-                                   expiration_sharetypes)
1692-        self.lease_checker.setServiceParent(self)
1693-
1694-    def get_available_space(self):
1695-        if self.readonly:
1696-            return 0
1697-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1698-
1699-    def get_bucket_shares(self, storage_index):
1700-        """Return a list of (shnum, pathname) tuples for files that hold
1701-        shares for this storage_index. In each tuple, 'shnum' will always be
1702-        the integer form of the last component of 'pathname'."""
1703-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1704-        try:
1705-            for f in os.listdir(storagedir):
1706-                if NUM_RE.match(f):
1707-                    filename = os.path.join(storagedir, f)
1708-                    yield (int(f), filename)
1709-        except OSError:
1710-            # Commonly caused by there being no buckets at all.
1711-            pass
1712-
1713 # storage/
1714 # storage/shares/incoming
1715 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1716hunk ./src/allmydata/storage/server.py 32
1717 # $SHARENUM matches this regex:
1718 NUM_RE=re.compile("^[0-9]+$")
1719 
1720-
1721-
1722 class StorageServer(service.MultiService, Referenceable):
1723     implements(RIStorageServer, IStatsProducer)
1724     name = 'storage'
1725hunk ./src/allmydata/storage/server.py 35
1726-    LeaseCheckerClass = LeaseCheckingCrawler
1727 
1728     def __init__(self, nodeid, backend, reserved_space=0,
1729                  readonly_storage=False,
1730hunk ./src/allmydata/storage/server.py 38
1731-                 stats_provider=None,
1732-                 expiration_enabled=False,
1733-                 expiration_mode="age",
1734-                 expiration_override_lease_duration=None,
1735-                 expiration_cutoff_date=None,
1736-                 expiration_sharetypes=("mutable", "immutable")):
1737+                 stats_provider=None):
1738         service.MultiService.__init__(self)
1739         assert isinstance(nodeid, str)
1740         assert len(nodeid) == 20
1741hunk ./src/allmydata/storage/server.py 217
1742         # they asked about: this will save them a lot of work. Add or update
1743         # leases for all of them: if they want us to hold shares for this
1744         # file, they'll want us to hold leases for this file.
1745-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1746-            alreadygot.add(shnum)
1747-            sf = ShareFile(fn)
1748-            sf.add_or_renew_lease(lease_info)
1749-
1750-        for shnum in sharenums:
1751-            share = self.backend.get_share(storage_index, shnum)
1752+        for share in self.backend.get_shares(storage_index):
1753+            alreadygot.add(share.shnum)
1754+            share.add_or_renew_lease(lease_info)
1755 
1756hunk ./src/allmydata/storage/server.py 221
1757-            if not share:
1758-                if (not limited) or (remaining_space >= max_space_per_bucket):
1759-                    # ok! we need to create the new share file.
1760-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1761-                                      max_space_per_bucket, lease_info, canary)
1762-                    bucketwriters[shnum] = bw
1763-                    self._active_writers[bw] = 1
1764-                    if limited:
1765-                        remaining_space -= max_space_per_bucket
1766-                else:
1767-                    # bummer! not enough space to accept this bucket
1768-                    pass
1769+        for shnum in (sharenums - alreadygot):
1770+            if (not limited) or (remaining_space >= max_space_per_bucket):
1771+                #XXX or should the following line occur in the storage server constructor? ok! we need to create the new share file.
1772+                self.backend.set_storage_server(self)
1773+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1774+                                                     max_space_per_bucket, lease_info, canary)
1775+                bucketwriters[shnum] = bw
1776+                self._active_writers[bw] = 1
1777+                if limited:
1778+                    remaining_space -= max_space_per_bucket
1779 
1780hunk ./src/allmydata/storage/server.py 232
1781-            elif share.is_complete():
1782-                # great! we already have it. easy.
1783-                pass
1784-            elif not share.is_complete():
1785-                # Note that we don't create BucketWriters for shnums that
1786-                # have a partial share (in incoming/), so if a second upload
1787-                # occurs while the first is still in progress, the second
1788-                # uploader will use different storage servers.
1789-                pass
1790+        #XXX We should document later how shares that already exist (complete or still in incoming/) are handled.
1791 
1792         self.add_latency("allocate", time.time() - start)
1793         return alreadygot, bucketwriters
1794hunk ./src/allmydata/storage/server.py 238
1795 
1796     def _iter_share_files(self, storage_index):
1797-        for shnum, filename in self._get_bucket_shares(storage_index):
1798+        for shnum, filename in self._get_shares(storage_index):
1799             f = open(filename, 'rb')
1800             header = f.read(32)
1801             f.close()
1802hunk ./src/allmydata/storage/server.py 318
1803         si_s = si_b2a(storage_index)
1804         log.msg("storage: get_buckets %s" % si_s)
1805         bucketreaders = {} # k: sharenum, v: BucketReader
1806-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1807+        for shnum, filename in self.backend.get_shares(storage_index):
1808             bucketreaders[shnum] = BucketReader(self, filename,
1809                                                 storage_index, shnum)
1810         self.add_latency("get", time.time() - start)
1811hunk ./src/allmydata/storage/server.py 334
1812         # since all shares get the same lease data, we just grab the leases
1813         # from the first share
1814         try:
1815-            shnum, filename = self._get_bucket_shares(storage_index).next()
1816+            shnum, filename = self._get_shares(storage_index).next()
1817             sf = ShareFile(filename)
1818             return sf.get_leases()
1819         except StopIteration:
1820hunk ./src/allmydata/storage/shares.py 1
1821-#! /usr/bin/python
1822-
1823-from allmydata.storage.mutable import MutableShareFile
1824-from allmydata.storage.immutable import ShareFile
1825-
1826-def get_share_file(filename):
1827-    f = open(filename, "rb")
1828-    prefix = f.read(32)
1829-    f.close()
1830-    if prefix == MutableShareFile.MAGIC:
1831-        return MutableShareFile(filename)
1832-    # otherwise assume it's immutable
1833-    return ShareFile(filename)
1834-
1835rmfile ./src/allmydata/storage/shares.py
1836hunk ./src/allmydata/test/common_util.py 20
1837 
1838 def flip_one_bit(s, offset=0, size=None):
1839     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1840-    than offset+size. """
1841+    than offset+size. Return the new string. """
1842     if size is None:
1843         size=len(s)-offset
1844     i = randrange(offset, offset+size)
1845hunk ./src/allmydata/test/test_backends.py 7
1846 
1847 from allmydata.test.common_util import ReallyEqualMixin
1848 
1849-import mock
1850+import mock, os
1851 
1852 # This is the code that we're going to be testing.
1853hunk ./src/allmydata/test/test_backends.py 10
1854-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1855+from allmydata.storage.server import StorageServer
1856+
1857+from allmydata.storage.backends.das.core import DASCore
1858+from allmydata.storage.backends.null.core import NullCore
1859+
1860 
1861 # The following share file contents were generated with
1862 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1863hunk ./src/allmydata/test/test_backends.py 22
1864 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1865 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1866 
1867-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1868+tempdir = 'teststoredir'
1869+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1870+sharefname = os.path.join(sharedirname, '0')
1871 
1872 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1873     @mock.patch('time.time')
1874hunk ./src/allmydata/test/test_backends.py 58
1875         filesystem in only the prescribed ways. """
1876 
1877         def call_open(fname, mode):
1878-            if fname == 'testdir/bucket_counter.state':
1879-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1880-            elif fname == 'testdir/lease_checker.state':
1881-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1882-            elif fname == 'testdir/lease_checker.history':
1883+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1884+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1885+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1886+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1887+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1888                 return StringIO()
1889             else:
1890                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1891hunk ./src/allmydata/test/test_backends.py 124
1892     @mock.patch('__builtin__.open')
1893     def setUp(self, mockopen):
1894         def call_open(fname, mode):
1895-            if fname == 'testdir/bucket_counter.state':
1896-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1897-            elif fname == 'testdir/lease_checker.state':
1898-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1899-            elif fname == 'testdir/lease_checker.history':
1900+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1901+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1902+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1903+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1904+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1905                 return StringIO()
1906         mockopen.side_effect = call_open
1907hunk ./src/allmydata/test/test_backends.py 131
1908-
1909-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1910+        expiration_policy = {'enabled' : False,
1911+                             'mode' : 'age',
1912+                             'override_lease_duration' : None,
1913+                             'cutoff_date' : None,
1914+                             'sharetypes' : None}
1915+        testbackend = DASCore(tempdir, expiration_policy)
1916+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1917 
1918     @mock.patch('time.time')
1919     @mock.patch('os.mkdir')
1920hunk ./src/allmydata/test/test_backends.py 148
1921         """ Write a new share. """
1922 
1923         def call_listdir(dirname):
1924-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1925-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1926+            self.failUnlessReallyEqual(dirname, sharedirname)
1927+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1928 
1929         mocklistdir.side_effect = call_listdir
1930 
1931hunk ./src/allmydata/test/test_backends.py 178
1932 
1933         sharefile = MockFile()
1934         def call_open(fname, mode):
1935-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1936+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1937             return sharefile
1938 
1939         mockopen.side_effect = call_open
1940hunk ./src/allmydata/test/test_backends.py 200
1941         StorageServer object. """
1942 
1943         def call_listdir(dirname):
1944-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1945+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1946             return ['0']
1947 
1948         mocklistdir.side_effect = call_listdir
1949}
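
A note on the expiration_policy refactoring above: FSLeaseCheckingCrawler now takes a single expiration_policy dict (keys 'enabled', 'mode', 'override_lease_duration', 'cutoff_date', 'sharetypes') in place of the old keyword arguments. The following is a minimal, self-contained sketch of that dict shape and the same sanity checks the new __init__ performs; the validate_policy helper is illustrative and is not part of the patch.

    def validate_policy(expiration_policy):
        # Mirrors the checks in FSLeaseCheckingCrawler.__init__ (expirer.py hunks above).
        mode = expiration_policy['mode']
        if mode == "age":
            # seconds, or None to honor each lease's own duration
            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
        elif mode == "cutoff-date":
            # seconds-since-epoch
            assert isinstance(expiration_policy['cutoff_date'], int)
        else:
            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
        return expiration_policy

    # The same policy shape the tests in these patches construct:
    policy = validate_policy({'enabled': False,
                              'mode': 'age',
                              'override_lease_duration': None,
                              'cutoff_date': None,
                              'sharetypes': None})
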
1950[checkpoint patch
1951wilcoxjg@gmail.com**20110626165715
1952 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1953] {
1954hunk ./src/allmydata/storage/backends/das/core.py 21
1955 from allmydata.storage.lease import LeaseInfo
1956 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1957      create_mutable_sharefile
1958-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1959+from allmydata.storage.immutable import BucketWriter, BucketReader
1960 from allmydata.storage.crawler import FSBucketCountingCrawler
1961 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1962 
1963hunk ./src/allmydata/storage/backends/das/core.py 27
1964 from zope.interface import implements
1965 
1966+# $SHARENUM matches this regex:
1967+NUM_RE=re.compile("^[0-9]+$")
1968+
1969 class DASCore(Backend):
1970     implements(IStorageBackend)
1971     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1972hunk ./src/allmydata/storage/backends/das/core.py 80
1973         return fileutil.get_available_space(self.storedir, self.reserved_space)
1974 
1975     def get_shares(self, storage_index):
1976-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1977+        """Yield the ImmutableShare objects that correspond to the passed storage_index."""
1978         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1979         try:
1980             for f in os.listdir(finalstoragedir):
1981hunk ./src/allmydata/storage/backends/das/core.py 86
1982                 if NUM_RE.match(f):
1983                     filename = os.path.join(finalstoragedir, f)
1984-                    yield FSBShare(filename, int(f))
1985+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1986         except OSError:
1987             # Commonly caused by there being no buckets at all.
1988             pass
1989hunk ./src/allmydata/storage/backends/das/core.py 95
1990         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1991         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1992         return bw
1993+
1994+    def set_storage_server(self, ss):
1995+        self.ss = ss
1996         
1997 
1998 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1999hunk ./src/allmydata/storage/server.py 29
2000 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2001 # base-32 chars).
2002 
2003-# $SHARENUM matches this regex:
2004-NUM_RE=re.compile("^[0-9]+$")
2005 
2006 class StorageServer(service.MultiService, Referenceable):
2007     implements(RIStorageServer, IStatsProducer)
2008}
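
With this checkpoint, DASCore.get_shares yields ImmutableShare objects, and the remote_allocate_buckets loop reworked in the previous patch only creates writers for share numbers the backend does not already hold. A self-contained sketch of that control flow, using hypothetical stand-ins for the backend and share objects:

    class FakeShare(object):
        # Stand-in for ImmutableShare: only the attributes the server loop uses.
        def __init__(self, shnum):
            self.shnum = shnum
        def add_or_renew_lease(self, lease_info):
            pass  # the real object appends or rewrites a lease record in the share file

    def fake_get_shares(storage_index):
        # Stand-in for backend.get_shares(storage_index): yields shares already on disk.
        for shnum in (1, 3):
            yield FakeShare(shnum)

    sharenums = set([0, 1, 2, 3])          # shares the client asked to allocate
    alreadygot = set()
    for share in fake_get_shares("fakeindex"):
        alreadygot.add(share.shnum)
        share.add_or_renew_lease(lease_info=None)

    for shnum in (sharenums - alreadygot):
        pass  # here server.py calls backend.make_bucket_writer(storage_index, shnum, ...)

    assert alreadygot == set([1, 3])
    assert sharenums - alreadygot == set([0, 2])
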
2009[checkpoint4
2010wilcoxjg@gmail.com**20110628202202
2011 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2012] {
2013hunk ./src/allmydata/storage/backends/das/core.py 96
2014         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2015         return bw
2016 
2017+    def make_bucket_reader(self, share):
2018+        return BucketReader(self.ss, share)
2019+
2020     def set_storage_server(self, ss):
2021         self.ss = ss
2022         
2023hunk ./src/allmydata/storage/backends/das/core.py 138
2024         must not be None. """
2025         precondition((max_size is not None) or (not create), max_size, create)
2026         self.shnum = shnum
2027+        self.storage_index = storageindex
2028         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2029         self._max_size = max_size
2030         if create:
2031hunk ./src/allmydata/storage/backends/das/core.py 173
2032             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2033         self._data_offset = 0xc
2034 
2035+    def get_shnum(self):
2036+        return self.shnum
2037+
2038     def unlink(self):
2039         os.unlink(self.fname)
2040 
2041hunk ./src/allmydata/storage/backends/null/core.py 2
2042 from allmydata.storage.backends.base import Backend
2043+from allmydata.storage.immutable import BucketWriter, BucketReader
2044 
2045 class NullCore(Backend):
2046     def __init__(self):
2047hunk ./src/allmydata/storage/backends/null/core.py 17
2048     def get_share(self, storage_index, sharenum):
2049         return None
2050 
2051-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2052-        return NullBucketWriter()
2053+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2054+       
2055+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2056+
2057+    def set_storage_server(self, ss):
2058+        self.ss = ss
2059+
2060+class ImmutableShare:
2061+    sharetype = "immutable"
2062+
2063+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2064+        """ If max_size is not None then I won't allow more than
2065+        max_size to be written to me. If create=True then max_size
2066+        must not be None. """
2067+        precondition((max_size is not None) or (not create), max_size, create)
2068+        self.shnum = shnum
2069+        self.storage_index = storageindex
2070+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2071+        self._max_size = max_size
2072+        if create:
2073+            # touch the file, so later callers will see that we're working on
2074+            # it. Also construct the metadata.
2075+            assert not os.path.exists(self.fname)
2076+            fileutil.make_dirs(os.path.dirname(self.fname))
2077+            f = open(self.fname, 'wb')
2078+            # The second field -- the four-byte share data length -- is no
2079+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2080+            # there in case someone downgrades a storage server from >=
2081+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2082+            # server to another, etc. We do saturation -- a share data length
2083+            # larger than 2**32-1 (what can fit into the field) is marked as
2084+            # the largest length that can fit into the field. That way, even
2085+            # if this does happen, the old < v1.3.0 server will still allow
2086+            # clients to read the first part of the share.
2087+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2088+            f.close()
2089+            self._lease_offset = max_size + 0x0c
2090+            self._num_leases = 0
2091+        else:
2092+            f = open(self.fname, 'rb')
2093+            filesize = os.path.getsize(self.fname)
2094+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2095+            f.close()
2096+            if version != 1:
2097+                msg = "sharefile %s had version %d but we wanted 1" % \
2098+                      (self.fname, version)
2099+                raise UnknownImmutableContainerVersionError(msg)
2100+            self._num_leases = num_leases
2101+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2102+        self._data_offset = 0xc
2103+
2104+    def get_shnum(self):
2105+        return self.shnum
2106+
2107+    def unlink(self):
2108+        os.unlink(self.fname)
2109+
2110+    def read_share_data(self, offset, length):
2111+        precondition(offset >= 0)
2112+        # Reads beyond the end of the data are truncated. Reads that start
2113+        # beyond the end of the data return an empty string.
2114+        seekpos = self._data_offset+offset
2115+        fsize = os.path.getsize(self.fname)
2116+        actuallength = max(0, min(length, fsize-seekpos))
2117+        if actuallength == 0:
2118+            return ""
2119+        f = open(self.fname, 'rb')
2120+        f.seek(seekpos)
2121+        return f.read(actuallength)
2122+
2123+    def write_share_data(self, offset, data):
2124+        length = len(data)
2125+        precondition(offset >= 0, offset)
2126+        if self._max_size is not None and offset+length > self._max_size:
2127+            raise DataTooLargeError(self._max_size, offset, length)
2128+        f = open(self.fname, 'rb+')
2129+        real_offset = self._data_offset+offset
2130+        f.seek(real_offset)
2131+        assert f.tell() == real_offset
2132+        f.write(data)
2133+        f.close()
2134+
2135+    def _write_lease_record(self, f, lease_number, lease_info):
2136+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2137+        f.seek(offset)
2138+        assert f.tell() == offset
2139+        f.write(lease_info.to_immutable_data())
2140+
2141+    def _read_num_leases(self, f):
2142+        f.seek(0x08)
2143+        (num_leases,) = struct.unpack(">L", f.read(4))
2144+        return num_leases
2145+
2146+    def _write_num_leases(self, f, num_leases):
2147+        f.seek(0x08)
2148+        f.write(struct.pack(">L", num_leases))
2149+
2150+    def _truncate_leases(self, f, num_leases):
2151+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2152+
2153+    def get_leases(self):
2154+        """Yields a LeaseInfo instance for each lease."""
2155+        f = open(self.fname, 'rb')
2156+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2157+        f.seek(self._lease_offset)
2158+        for i in range(num_leases):
2159+            data = f.read(self.LEASE_SIZE)
2160+            if data:
2161+                yield LeaseInfo().from_immutable_data(data)
2162+
2163+    def add_lease(self, lease_info):
2164+        f = open(self.fname, 'rb+')
2165+        num_leases = self._read_num_leases(f)
2166+        self._write_lease_record(f, num_leases, lease_info)
2167+        self._write_num_leases(f, num_leases+1)
2168+        f.close()
2169+
2170+    def renew_lease(self, renew_secret, new_expire_time):
2171+        for i,lease in enumerate(self.get_leases()):
2172+            if constant_time_compare(lease.renew_secret, renew_secret):
2173+                # yup. See if we need to update the owner time.
2174+                if new_expire_time > lease.expiration_time:
2175+                    # yes
2176+                    lease.expiration_time = new_expire_time
2177+                    f = open(self.fname, 'rb+')
2178+                    self._write_lease_record(f, i, lease)
2179+                    f.close()
2180+                return
2181+        raise IndexError("unable to renew non-existent lease")
2182+
2183+    def add_or_renew_lease(self, lease_info):
2184+        try:
2185+            self.renew_lease(lease_info.renew_secret,
2186+                             lease_info.expiration_time)
2187+        except IndexError:
2188+            self.add_lease(lease_info)
2189+
2190+
2191+    def cancel_lease(self, cancel_secret):
2192+        """Remove a lease with the given cancel_secret. If the last lease is
2193+        cancelled, the file will be removed. Return the number of bytes that
2194+        were freed (by truncating the list of leases, and possibly by
2195+        deleting the file). Raise IndexError if there was no lease with the
2196+        given cancel_secret.
2197+        """
2198+
2199+        leases = list(self.get_leases())
2200+        num_leases_removed = 0
2201+        for i,lease in enumerate(leases):
2202+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2203+                leases[i] = None
2204+                num_leases_removed += 1
2205+        if not num_leases_removed:
2206+            raise IndexError("unable to find matching lease to cancel")
2207+        if num_leases_removed:
2208+            # pack and write out the remaining leases. We write these out in
2209+            # the same order as they were added, so that if we crash while
2210+            # doing this, we won't lose any non-cancelled leases.
2211+            leases = [l for l in leases if l] # remove the cancelled leases
2212+            f = open(self.fname, 'rb+')
2213+            for i,lease in enumerate(leases):
2214+                self._write_lease_record(f, i, lease)
2215+            self._write_num_leases(f, len(leases))
2216+            self._truncate_leases(f, len(leases))
2217+            f.close()
2218+        space_freed = self.LEASE_SIZE * num_leases_removed
2219+        if not len(leases):
2220+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2221+            self.unlink()
2222+        return space_freed
2223hunk ./src/allmydata/storage/immutable.py 114
2224 class BucketReader(Referenceable):
2225     implements(RIBucketReader)
2226 
2227-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2228+    def __init__(self, ss, share):
2229         self.ss = ss
2230hunk ./src/allmydata/storage/immutable.py 116
2231-        self._share_file = ShareFile(sharefname)
2232-        self.storage_index = storage_index
2233-        self.shnum = shnum
2234+        self._share_file = share
2235+        self.storage_index = share.storage_index
2236+        self.shnum = share.shnum
2237 
2238     def __repr__(self):
2239         return "<%s %s %s>" % (self.__class__.__name__,
2240hunk ./src/allmydata/storage/server.py 316
2241         si_s = si_b2a(storage_index)
2242         log.msg("storage: get_buckets %s" % si_s)
2243         bucketreaders = {} # k: sharenum, v: BucketReader
2244-        for shnum, filename in self.backend.get_shares(storage_index):
2245-            bucketreaders[shnum] = BucketReader(self, filename,
2246-                                                storage_index, shnum)
2247+        self.backend.set_storage_server(self)
2248+        for share in self.backend.get_shares(storage_index):
2249+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2250         self.add_latency("get", time.time() - start)
2251         return bucketreaders
2252 
2253hunk ./src/allmydata/test/test_backends.py 25
2254 tempdir = 'teststoredir'
2255 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2256 sharefname = os.path.join(sharedirname, '0')
2257+expiration_policy = {'enabled' : False,
2258+                     'mode' : 'age',
2259+                     'override_lease_duration' : None,
2260+                     'cutoff_date' : None,
2261+                     'sharetypes' : None}
2262 
2263 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2264     @mock.patch('time.time')
2265hunk ./src/allmydata/test/test_backends.py 43
2266         tries to read or write to the file system. """
2267 
2268         # Now begin the test.
2269-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2270+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2271 
2272         self.failIf(mockisdir.called)
2273         self.failIf(mocklistdir.called)
2274hunk ./src/allmydata/test/test_backends.py 74
2275         mockopen.side_effect = call_open
2276 
2277         # Now begin the test.
2278-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2279+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2280 
2281         self.failIf(mockisdir.called)
2282         self.failIf(mocklistdir.called)
2283hunk ./src/allmydata/test/test_backends.py 86
2284 
2285 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2286     def setUp(self):
2287-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2288+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2289 
2290     @mock.patch('os.mkdir')
2291     @mock.patch('__builtin__.open')
2292hunk ./src/allmydata/test/test_backends.py 136
2293             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2294                 return StringIO()
2295         mockopen.side_effect = call_open
2296-        expiration_policy = {'enabled' : False,
2297-                             'mode' : 'age',
2298-                             'override_lease_duration' : None,
2299-                             'cutoff_date' : None,
2300-                             'sharetypes' : None}
2301         testbackend = DASCore(tempdir, expiration_policy)
2302         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2303 
2304}
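
checkpoint4 copies the full ImmutableShare container logic into backends/null/core.py (checkpoint5, below, strips it back out). For reference, a self-contained sketch of the on-disk header and lease-record sizes that this code reads and writes, derived from the struct formats that appear in these patches; the max_size value is illustrative, not patch code.

    import struct

    LEASE_SIZE = struct.calcsize(">L32s32sL")  # owner number + renew secret + cancel secret + expiration time
    max_size = 1000                            # hypothetical share data size

    # Header written on create: version 1, saturated share data length, zero leases.
    header = struct.pack(">LLL", 1, min(2**32-1, max_size), 0)
    assert len(header) == 0xc                  # share data starts at _data_offset = 0x0c
    lease_offset = max_size + 0xc              # lease records are appended after the share data
    assert LEASE_SIZE == 72                    # 4 + 32 + 32 + 4 bytes per lease
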
2305[checkpoint5
2306wilcoxjg@gmail.com**20110705034626
2307 Ignore-this: 255780bd58299b0aa33c027e9d008262
2308] {
2309addfile ./src/allmydata/storage/backends/base.py
2310hunk ./src/allmydata/storage/backends/base.py 1
2311+from twisted.application import service
2312+
2313+class Backend(service.MultiService):
2314+    def __init__(self):
2315+        service.MultiService.__init__(self)
2316hunk ./src/allmydata/storage/backends/null/core.py 19
2317 
2318     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2319         
2320+        immutableshare = ImmutableShare()
2321         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2322 
2323     def set_storage_server(self, ss):
2324hunk ./src/allmydata/storage/backends/null/core.py 28
2325 class ImmutableShare:
2326     sharetype = "immutable"
2327 
2328-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2329+    def __init__(self):
2330         """ If max_size is not None then I won't allow more than
2331         max_size to be written to me. If create=True then max_size
2332         must not be None. """
2333hunk ./src/allmydata/storage/backends/null/core.py 32
2334-        precondition((max_size is not None) or (not create), max_size, create)
2335-        self.shnum = shnum
2336-        self.storage_index = storageindex
2337-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2338-        self._max_size = max_size
2339-        if create:
2340-            # touch the file, so later callers will see that we're working on
2341-            # it. Also construct the metadata.
2342-            assert not os.path.exists(self.fname)
2343-            fileutil.make_dirs(os.path.dirname(self.fname))
2344-            f = open(self.fname, 'wb')
2345-            # The second field -- the four-byte share data length -- is no
2346-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2347-            # there in case someone downgrades a storage server from >=
2348-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2349-            # server to another, etc. We do saturation -- a share data length
2350-            # larger than 2**32-1 (what can fit into the field) is marked as
2351-            # the largest length that can fit into the field. That way, even
2352-            # if this does happen, the old < v1.3.0 server will still allow
2353-            # clients to read the first part of the share.
2354-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2355-            f.close()
2356-            self._lease_offset = max_size + 0x0c
2357-            self._num_leases = 0
2358-        else:
2359-            f = open(self.fname, 'rb')
2360-            filesize = os.path.getsize(self.fname)
2361-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2362-            f.close()
2363-            if version != 1:
2364-                msg = "sharefile %s had version %d but we wanted 1" % \
2365-                      (self.fname, version)
2366-                raise UnknownImmutableContainerVersionError(msg)
2367-            self._num_leases = num_leases
2368-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2369-        self._data_offset = 0xc
2370+        pass
2371 
2372     def get_shnum(self):
2373         return self.shnum
2374hunk ./src/allmydata/storage/backends/null/core.py 54
2375         return f.read(actuallength)
2376 
2377     def write_share_data(self, offset, data):
2378-        length = len(data)
2379-        precondition(offset >= 0, offset)
2380-        if self._max_size is not None and offset+length > self._max_size:
2381-            raise DataTooLargeError(self._max_size, offset, length)
2382-        f = open(self.fname, 'rb+')
2383-        real_offset = self._data_offset+offset
2384-        f.seek(real_offset)
2385-        assert f.tell() == real_offset
2386-        f.write(data)
2387-        f.close()
2388+        pass
2389 
2390     def _write_lease_record(self, f, lease_number, lease_info):
2391         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2392hunk ./src/allmydata/storage/backends/null/core.py 84
2393             if data:
2394                 yield LeaseInfo().from_immutable_data(data)
2395 
2396-    def add_lease(self, lease_info):
2397-        f = open(self.fname, 'rb+')
2398-        num_leases = self._read_num_leases(f)
2399-        self._write_lease_record(f, num_leases, lease_info)
2400-        self._write_num_leases(f, num_leases+1)
2401-        f.close()
2402+    def add_lease(self, lease):
2403+        pass
2404 
2405     def renew_lease(self, renew_secret, new_expire_time):
2406         for i,lease in enumerate(self.get_leases()):
2407hunk ./src/allmydata/test/test_backends.py 32
2408                      'sharetypes' : None}
2409 
2410 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2411-    @mock.patch('time.time')
2412-    @mock.patch('os.mkdir')
2413-    @mock.patch('__builtin__.open')
2414-    @mock.patch('os.listdir')
2415-    @mock.patch('os.path.isdir')
2416-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2417-        """ This tests whether a server instance can be constructed
2418-        with a null backend. The server instance fails the test if it
2419-        tries to read or write to the file system. """
2420-
2421-        # Now begin the test.
2422-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2423-
2424-        self.failIf(mockisdir.called)
2425-        self.failIf(mocklistdir.called)
2426-        self.failIf(mockopen.called)
2427-        self.failIf(mockmkdir.called)
2428-
2429-        # You passed!
2430-
2431     @mock.patch('time.time')
2432     @mock.patch('os.mkdir')
2433     @mock.patch('__builtin__.open')
2434hunk ./src/allmydata/test/test_backends.py 53
2435                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2436         mockopen.side_effect = call_open
2437 
2438-        # Now begin the test.
2439-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2440-
2441-        self.failIf(mockisdir.called)
2442-        self.failIf(mocklistdir.called)
2443-        self.failIf(mockopen.called)
2444-        self.failIf(mockmkdir.called)
2445-        self.failIf(mocktime.called)
2446-
2447-        # You passed!
2448-
2449-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2450-    def setUp(self):
2451-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2452-
2453-    @mock.patch('os.mkdir')
2454-    @mock.patch('__builtin__.open')
2455-    @mock.patch('os.listdir')
2456-    @mock.patch('os.path.isdir')
2457-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2458-        """ Write a new share. """
2459-
2460-        # Now begin the test.
2461-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2462-        bs[0].remote_write(0, 'a')
2463-        self.failIf(mockisdir.called)
2464-        self.failIf(mocklistdir.called)
2465-        self.failIf(mockopen.called)
2466-        self.failIf(mockmkdir.called)
2467+        def call_isdir(fname):
2468+            if fname == os.path.join(tempdir,'shares'):
2469+                return True
2470+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2471+                return True
2472+            else:
2473+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2474+        mockisdir.side_effect = call_isdir
2475 
2476hunk ./src/allmydata/test/test_backends.py 62
2477-    @mock.patch('os.path.exists')
2478-    @mock.patch('os.path.getsize')
2479-    @mock.patch('__builtin__.open')
2480-    @mock.patch('os.listdir')
2481-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2482-        """ This tests whether the code correctly finds and reads
2483-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2484-        servers. There is a similar test in test_download, but that one
2485-        is from the perspective of the client and exercises a deeper
2486-        stack of code. This one is for exercising just the
2487-        StorageServer object. """
2488+        def call_mkdir(fname, mode):
2489+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2490+            self.failUnlessEqual(0777, mode)
2491+            if fname == tempdir:
2492+                return None
2493+            elif fname == os.path.join(tempdir,'shares'):
2494+                return None
2495+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2496+                return None
2497+            else:
2498+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2499+        mockmkdir.side_effect = call_mkdir
2500 
2501         # Now begin the test.
2502hunk ./src/allmydata/test/test_backends.py 76
2503-        bs = self.s.remote_get_buckets('teststorage_index')
2504+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2505 
2506hunk ./src/allmydata/test/test_backends.py 78
2507-        self.failUnlessEqual(len(bs), 0)
2508-        self.failIf(mocklistdir.called)
2509-        self.failIf(mockopen.called)
2510-        self.failIf(mockgetsize.called)
2511-        self.failIf(mockexists.called)
2512+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2513 
2514 
2515 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2516hunk ./src/allmydata/test/test_backends.py 193
2517         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2518 
2519 
2520+
2521+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2522+    @mock.patch('time.time')
2523+    @mock.patch('os.mkdir')
2524+    @mock.patch('__builtin__.open')
2525+    @mock.patch('os.listdir')
2526+    @mock.patch('os.path.isdir')
2527+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2528+        """ This tests whether a file system backend instance can be
2529+        constructed. To pass the test, it has to use the
2530+        filesystem in only the prescribed ways. """
2531+
2532+        def call_open(fname, mode):
2533+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2534+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2535+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2536+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2537+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2538+                return StringIO()
2539+            else:
2540+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2541+        mockopen.side_effect = call_open
2542+
2543+        def call_isdir(fname):
2544+            if fname == os.path.join(tempdir,'shares'):
2545+                return True
2546+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2547+                return True
2548+            else:
2549+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2550+        mockisdir.side_effect = call_isdir
2551+
2552+        def call_mkdir(fname, mode):
2553+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2554+            self.failUnlessEqual(0777, mode)
2555+            if fname == tempdir:
2556+                return None
2557+            elif fname == os.path.join(tempdir,'shares'):
2558+                return None
2559+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2560+                return None
2561+            else:
2562+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2563+        mockmkdir.side_effect = call_mkdir
2564+
2565+        # Now begin the test.
2566+        DASCore('teststoredir', expiration_policy)
2567+
2568+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2569}
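
A note on the pattern used throughout these tests: each @mock.patch decorator replaces one filesystem entry point, and the side_effect functions act as whitelists, so any call the backend was not expected to make fails the test immediately. Decorators apply bottom-up, so the decorator closest to the method supplies the first mock argument. A minimal, self-contained sketch of the pattern (the class and method names here are made up for illustration):

    import mock
    from twisted.trial import unittest

    class WhitelistingExample(unittest.TestCase):
        @mock.patch('os.mkdir')          # outermost decorator -> last mock argument
        @mock.patch('__builtin__.open')  # innermost decorator -> first mock argument
        def test_no_unexpected_fs_calls(self, mockopen, mockmkdir):
            def call_open(fname, mode):
                if fname == 'expected.state':
                    raise IOError(2, "No such file or directory: '%s'" % fname)
                self.fail("tried to open '%s' in mode '%s'" % (fname, mode))
            mockopen.side_effect = call_open

            # the code under test would run here; afterwards we can assert
            # that whole classes of calls never happened at all
            self.failIf(mockmkdir.called)
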
2570[checkpoint 6
2571wilcoxjg@gmail.com**20110706190824
2572 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2573] {
2574hunk ./src/allmydata/interfaces.py 100
2575                          renew_secret=LeaseRenewSecret,
2576                          cancel_secret=LeaseCancelSecret,
2577                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2578-                         allocated_size=Offset, canary=Referenceable):
2579+                         allocated_size=Offset,
2580+                         canary=Referenceable):
2581         """
2582hunk ./src/allmydata/interfaces.py 103
2583-        @param storage_index: the index of the bucket to be created or
2584+        @param storage_index: the index of the shares to be created or
2585                               increfed.
2586hunk ./src/allmydata/interfaces.py 105
2587-        @param sharenums: these are the share numbers (probably between 0 and
2588-                          99) that the sender is proposing to store on this
2589-                          server.
2590-        @param renew_secret: This is the secret used to protect bucket refresh
2591+        @param renew_secret: This is the secret used to protect the refresh of shares.
2592                              This secret is generated by the client and
2593                              stored for later comparison by the server. Each
2594                              server is given a different secret.
2595hunk ./src/allmydata/interfaces.py 109
2596-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2597-        @param canary: If the canary is lost before close(), the bucket is
2598+        @param cancel_secret: Like renew_secret, but protects the decref of shares.
2599+        @param sharenums: these are the share numbers (probably between 0 and
2600+                          99) that the sender is proposing to store on this
2601+                          server.
2602+        @param allocated_size: XXX The size of the shares the client wishes to store.
2603+        @param canary: If the canary is lost before close(), the shares are
2604                        deleted.
2605hunk ./src/allmydata/interfaces.py 116
2606+
2607         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2608                  already have and allocated is what we hereby agree to accept.
2609                  New leases are added for shares in both lists.
2610hunk ./src/allmydata/interfaces.py 128
2611                   renew_secret=LeaseRenewSecret,
2612                   cancel_secret=LeaseCancelSecret):
2613         """
2614-        Add a new lease on the given bucket. If the renew_secret matches an
2615+        Add a new lease on the given shares. If the renew_secret matches an
2616         existing lease, that lease will be renewed instead. If there is no
2617         bucket for the given storage_index, return silently. (note that in
2618         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2619hunk ./src/allmydata/storage/server.py 17
2620 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2621      create_mutable_sharefile
2622 
2623-from zope.interface import implements
2624-
2625 # storage/
2626 # storage/shares/incoming
2627 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2628hunk ./src/allmydata/test/test_backends.py 6
2629 from StringIO import StringIO
2630 
2631 from allmydata.test.common_util import ReallyEqualMixin
2632+from allmydata.util.assertutil import _assert
2633 
2634 import mock, os
2635 
2636hunk ./src/allmydata/test/test_backends.py 92
2637                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2638             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2639                 return StringIO()
2640+            else:
2641+                _assert(False, "The tester code doesn't recognize this case.") 
2642+
2643         mockopen.side_effect = call_open
2644         testbackend = DASCore(tempdir, expiration_policy)
2645         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2646hunk ./src/allmydata/test/test_backends.py 109
2647 
2648         def call_listdir(dirname):
2649             self.failUnlessReallyEqual(dirname, sharedirname)
2650-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2651+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2652 
2653         mocklistdir.side_effect = call_listdir
2654 
2655hunk ./src/allmydata/test/test_backends.py 113
2656+        def call_isdir(dirname):
2657+            self.failUnlessReallyEqual(dirname, sharedirname)
2658+            return True
2659+
2660+        mockisdir.side_effect = call_isdir
2661+
2662+        def call_mkdir(dirname, permissions):
2663+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2664+                self.Fail
2665+            else:
2666+                return True
2667+
2668+        mockmkdir.side_effect = call_mkdir
2669+
2670         class MockFile:
2671             def __init__(self):
2672                 self.buffer = ''
2673hunk ./src/allmydata/test/test_backends.py 156
2674             return sharefile
2675 
2676         mockopen.side_effect = call_open
2677+
2678         # Now begin the test.
2679         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2680         bs[0].remote_write(0, 'a')
2681hunk ./src/allmydata/test/test_backends.py 161
2682         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2683+       
2684+        # Now test the allocated_size method.
2685+        spaceint = self.s.allocated_size()
2686 
2687     @mock.patch('os.path.exists')
2688     @mock.patch('os.path.getsize')
2689}
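
The interface changes above describe the remote_allocate_buckets() contract: the return value is a pair of (shnums already held, dict of shnum to BucketWriter for the shnums the server agrees to accept). A rough usage sketch, with the same placeholder secrets and canary the tests use; 'server' stands in for a StorageServer and is not defined here:

    import mock

    alreadygot, bucketwriters = server.remote_allocate_buckets(
        'teststorage_index', 'x'*32, 'y'*32, set((0, 1)), 1, mock.Mock())

    # alreadygot: shnums the server already has (leases were added/renewed)
    # bucketwriters: shnum -> BucketWriter for the shnums it accepted
    for shnum, bw in bucketwriters.items():
        bw.remote_write(0, 'a')   # write share data at offset 0
        bw.remote_close()         # finalize the share
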
2690[checkpoint 7
2691wilcoxjg@gmail.com**20110706200820
2692 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2693] hunk ./src/allmydata/test/test_backends.py 164
2694         
2695         # Now test the allocated_size method.
2696         spaceint = self.s.allocated_size()
2697+        self.failUnlessReallyEqual(spaceint, 1)
2698 
2699     @mock.patch('os.path.exists')
2700     @mock.patch('os.path.getsize')
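
The new assertion expects allocated_size() to be 1 because exactly one BucketWriter with a maximum size of 1 is still open. The patch does not show allocated_size() itself; a plausible sketch, assuming it sums the space promised to the writers tracked in _active_writers, is:

    class SketchServer:
        """Illustration only; not the real StorageServer."""
        def __init__(self):
            self._active_writers = {}      # BucketWriter -> 1, as in server.py

        def allocated_size(self):
            return sum(bw.allocated_size() for bw in self._active_writers)

    class SketchWriter:
        def __init__(self, max_size):
            self._max_size = max_size
        def allocated_size(self):
            return self._max_size

    s = SketchServer()
    s._active_writers[SketchWriter(1)] = 1
    assert s.allocated_size() == 1         # matches the failUnlessReallyEqual above
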
2701[checkpoint8
2702wilcoxjg@gmail.com**20110706223126
2703 Ignore-this: 97336180883cb798b16f15411179f827
2704   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2705] hunk ./src/allmydata/test/test_backends.py 32
2706                      'cutoff_date' : None,
2707                      'sharetypes' : None}
2708 
2709+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2710+    def setUp(self):
2711+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2712+
2713+    @mock.patch('os.mkdir')
2714+    @mock.patch('__builtin__.open')
2715+    @mock.patch('os.listdir')
2716+    @mock.patch('os.path.isdir')
2717+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2718+        """ Write a new share. """
2719+
2720+        # Now begin the test.
2721+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2722+        bs[0].remote_write(0, 'a')
2723+        self.failIf(mockisdir.called)
2724+        self.failIf(mocklistdir.called)
2725+        self.failIf(mockopen.called)
2726+        self.failIf(mockmkdir.called)
2727+
2728 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2729     @mock.patch('time.time')
2730     @mock.patch('os.mkdir')
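
The NullCore backend used by TestServerNullBackend (and stubbed out in the ./src/allmydata/storage/backends/null/core.py hunks near the top of this bundle) exists so the server's bookkeeping can be exercised with no filesystem and effectively unlimited space. A stripped-down sketch of the idea; this is not the real NullCore code, and the "None means no limit" convention is an assumption:

    class NullShareSketch:
        """A share object that remembers nothing and touches no files."""
        def __init__(self, shnum):
            self.shnum = shnum
        def write_share_data(self, offset, data):
            pass                           # the bytes are simply discarded
        def add_lease(self, lease):
            pass                           # leases are not persisted either

    class NullBackendSketch:
        def get_shares(self, storage_index):
            return []                      # nothing is ever stored
        def get_available_space(self):
            return None                    # assumed to mean "no limit"

    sh = NullShareSketch(0)
    sh.write_share_data(0, 'a')            # succeeds without any I/O
    assert NullBackendSketch().get_shares('teststorage_index') == []
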
2731[checkpoint 9
2732wilcoxjg@gmail.com**20110707042942
2733 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2734] {
2735hunk ./src/allmydata/storage/backends/das/core.py 88
2736                     filename = os.path.join(finalstoragedir, f)
2737                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2738         except OSError:
2739-            # Commonly caused by there being no buckets at all.
2740+            # Commonly caused by there being no shares at all.
2741             pass
2742         
2743     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2744hunk ./src/allmydata/storage/backends/das/core.py 141
2745         self.storage_index = storageindex
2746         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2747         self._max_size = max_size
2748+        self.incomingdir = os.path.join(sharedir, 'incoming')
2749+        si_dir = storage_index_to_dir(storageindex)
2750+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2751+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2752         if create:
2753             # touch the file, so later callers will see that we're working on
2754             # it. Also construct the metadata.
2755hunk ./src/allmydata/storage/backends/das/core.py 177
2756             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2757         self._data_offset = 0xc
2758 
2759+    def close(self):
2760+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2761+        fileutil.rename(self.incominghome, self.finalhome)
2762+        try:
2763+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2764+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2765+            # these directories lying around forever, but the delete might
2766+            # fail if we're working on another share for the same storage
2767+            # index (like ab/abcde/5). The alternative approach would be to
2768+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2769+            # ShareWriter), each of which is responsible for a single
2770+            # directory on disk, and have them use reference counting of
2771+            # their children to know when they should do the rmdir. This
2772+            # approach is simpler, but relies on os.rmdir refusing to delete
2773+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2774+            os.rmdir(os.path.dirname(self.incominghome))
2775+            # we also delete the grandparent (prefix) directory, .../ab ,
2776+            # again to avoid leaving directories lying around. This might
2777+            # fail if there is another bucket open that shares a prefix (like
2778+            # ab/abfff).
2779+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2780+            # we leave the great-grandparent (incoming/) directory in place.
2781+        except EnvironmentError:
2782+            # ignore the "can't rmdir because the directory is not empty"
2783+            # exceptions, those are normal consequences of the
2784+            # above-mentioned conditions.
2785+            pass
2786+        pass
2787+       
2788+    def stat(self):
2789+        return os.stat(self.finalhome)[stat.ST_SIZE]
2790+
2791     def get_shnum(self):
2792         return self.shnum
2793 
2794hunk ./src/allmydata/storage/immutable.py 7
2795 
2796 from zope.interface import implements
2797 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2798-from allmydata.util import base32, fileutil, log
2799+from allmydata.util import base32, log
2800 from allmydata.util.assertutil import precondition
2801 from allmydata.util.hashutil import constant_time_compare
2802 from allmydata.storage.lease import LeaseInfo
2803hunk ./src/allmydata/storage/immutable.py 44
2804     def remote_close(self):
2805         precondition(not self.closed)
2806         start = time.time()
2807-
2808-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2809-        fileutil.rename(self.incominghome, self.finalhome)
2810-        try:
2811-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2812-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2813-            # these directories lying around forever, but the delete might
2814-            # fail if we're working on another share for the same storage
2815-            # index (like ab/abcde/5). The alternative approach would be to
2816-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2817-            # ShareWriter), each of which is responsible for a single
2818-            # directory on disk, and have them use reference counting of
2819-            # their children to know when they should do the rmdir. This
2820-            # approach is simpler, but relies on os.rmdir refusing to delete
2821-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2822-            os.rmdir(os.path.dirname(self.incominghome))
2823-            # we also delete the grandparent (prefix) directory, .../ab ,
2824-            # again to avoid leaving directories lying around. This might
2825-            # fail if there is another bucket open that shares a prefix (like
2826-            # ab/abfff).
2827-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2828-            # we leave the great-grandparent (incoming/) directory in place.
2829-        except EnvironmentError:
2830-            # ignore the "can't rmdir because the directory is not empty"
2831-            # exceptions, those are normal consequences of the
2832-            # above-mentioned conditions.
2833-            pass
2834+        self._sharefile.close()
2835         self._sharefile = None
2836         self.closed = True
2837         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2838hunk ./src/allmydata/storage/immutable.py 49
2839 
2840-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2841+        filelen = self._sharefile.stat()
2842         self.ss.bucket_writer_closed(self, filelen)
2843         self.ss.add_latency("close", time.time() - start)
2844         self.ss.count("close")
2845hunk ./src/allmydata/storage/server.py 45
2846         self._active_writers = weakref.WeakKeyDictionary()
2847         self.backend = backend
2848         self.backend.setServiceParent(self)
2849+        self.backend.set_storage_server(self)
2850         log.msg("StorageServer created", facility="tahoe.storage")
2851 
2852         self.latencies = {"allocate": [], # immutable
2853hunk ./src/allmydata/storage/server.py 220
2854 
2855         for shnum in (sharenums - alreadygot):
2856             if (not limited) or (remaining_space >= max_space_per_bucket):
2857-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2858-                self.backend.set_storage_server(self)
2859                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2860                                                      max_space_per_bucket, lease_info, canary)
2861                 bucketwriters[shnum] = bw
2862hunk ./src/allmydata/test/test_backends.py 117
2863         mockopen.side_effect = call_open
2864         testbackend = DASCore(tempdir, expiration_policy)
2865         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2866-
2867+   
2868+    @mock.patch('allmydata.util.fileutil.get_available_space')
2869     @mock.patch('time.time')
2870     @mock.patch('os.mkdir')
2871     @mock.patch('__builtin__.open')
2872hunk ./src/allmydata/test/test_backends.py 124
2873     @mock.patch('os.listdir')
2874     @mock.patch('os.path.isdir')
2875-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2876+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2877+                             mockget_available_space):
2878         """ Write a new share. """
2879 
2880         def call_listdir(dirname):
2881hunk ./src/allmydata/test/test_backends.py 148
2882 
2883         mockmkdir.side_effect = call_mkdir
2884 
2885+        def call_get_available_space(storedir, reserved_space):
2886+            self.failUnlessReallyEqual(storedir, tempdir)
2887+            return 1
2888+
2889+        mockget_available_space.side_effect = call_get_available_space
2890+
2891         class MockFile:
2892             def __init__(self):
2893                 self.buffer = ''
2894hunk ./src/allmydata/test/test_backends.py 188
2895         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2896         bs[0].remote_write(0, 'a')
2897         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2898-       
2899+
2900+        # What happens when there's not enough space for the client's request?
2901+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2902+
2903         # Now test the allocated_size method.
2904         spaceint = self.s.allocated_size()
2905         self.failUnlessReallyEqual(spaceint, 1)
2906}
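
The close() added to ImmutableShare is what moves a share from the incoming area to its final home. The directory layout the tests assume is a 'shares' tree (with an 'incoming' subtree) containing a two-character prefix directory, the full base32 storage index, and the share number as the file name. A small sketch of the two path families, reusing the constants the tests hard-code for 'teststorage_index' (POSIX separators assumed):

    import os

    storedir = 'teststoredir'
    si_dir   = os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')  # storage_index_to_dir() output used by the tests
    shnum    = 0

    incominghome = os.path.join(storedir, 'shares', 'incoming', si_dir, "%d" % shnum)
    finalhome    = os.path.join(storedir, 'shares', si_dir, "%d" % shnum)

    # close() makes the final directory, renames incominghome to finalhome, then
    # does a best-effort rmdir of the now-empty incoming parent and prefix dirs.
    assert incominghome == 'teststoredir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0'
    assert finalhome    == 'teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
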
2907[checkpoint10
2908wilcoxjg@gmail.com**20110707172049
2909 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2910] {
2911hunk ./src/allmydata/test/test_backends.py 20
2912 # The following share file contents was generated with
2913 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2914 # with share data == 'a'.
2915-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2916+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2917+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2918+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2919 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2920 
2921hunk ./src/allmydata/test/test_backends.py 25
2922+testnodeid = 'testnodeidxxxxxxxxxx'
2923 tempdir = 'teststoredir'
2924 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2925 sharefname = os.path.join(sharedirname, '0')
2926hunk ./src/allmydata/test/test_backends.py 37
2927 
2928 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2929     def setUp(self):
2930-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2931+        self.s = StorageServer(testnodeid, backend=NullCore())
2932 
2933     @mock.patch('os.mkdir')
2934     @mock.patch('__builtin__.open')
2935hunk ./src/allmydata/test/test_backends.py 99
2936         mockmkdir.side_effect = call_mkdir
2937 
2938         # Now begin the test.
2939-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2940+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2941 
2942         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2943 
2944hunk ./src/allmydata/test/test_backends.py 119
2945 
2946         mockopen.side_effect = call_open
2947         testbackend = DASCore(tempdir, expiration_policy)
2948-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2949-   
2950+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2951+       
2952+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2953     @mock.patch('allmydata.util.fileutil.get_available_space')
2954     @mock.patch('time.time')
2955     @mock.patch('os.mkdir')
2956hunk ./src/allmydata/test/test_backends.py 129
2957     @mock.patch('os.listdir')
2958     @mock.patch('os.path.isdir')
2959     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2960-                             mockget_available_space):
2961+                             mockget_available_space, mockget_shares):
2962         """ Write a new share. """
2963 
2964         def call_listdir(dirname):
2965hunk ./src/allmydata/test/test_backends.py 139
2966         mocklistdir.side_effect = call_listdir
2967 
2968         def call_isdir(dirname):
2969+            #XXX Should there be any other tests here?
2970             self.failUnlessReallyEqual(dirname, sharedirname)
2971             return True
2972 
2973hunk ./src/allmydata/test/test_backends.py 159
2974 
2975         mockget_available_space.side_effect = call_get_available_space
2976 
2977+        mocktime.return_value = 0
2978+        class MockShare:
2979+            def __init__(self):
2980+                self.shnum = 1
2981+               
2982+            def add_or_renew_lease(elf, lease_info):
2983+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
2984+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
2985+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
2986+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
2987+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
2988+               
2989+
2990+        share = MockShare()
2991+        def call_get_shares(storageindex):
2992+            return [share]
2993+
2994+        mockget_shares.side_effect = call_get_shares
2995+
2996         class MockFile:
2997             def __init__(self):
2998                 self.buffer = ''
2999hunk ./src/allmydata/test/test_backends.py 199
3000             def tell(self):
3001                 return self.pos
3002 
3003-        mocktime.return_value = 0
3004 
3005         sharefile = MockFile()
3006         def call_open(fname, mode):
3007}
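
One detail of the MockShare above that is easy to misread as a typo: the first parameter of add_or_renew_lease is deliberately named elf, so that the name self inside the body still refers, through the closure, to the enclosing TestCase and its assertion helpers. The same trick in isolation, with made-up names:

    from twisted.trial import unittest

    class ClosureTrickExample(unittest.TestCase):
        def test_inner_class_sees_outer_testcase(self):
            expected = 42

            class MockThing:
                # 'elf' instead of 'self': 'self' stays bound to the TestCase
                def check(elf, value):
                    self.failUnlessEqual(value, expected)

            MockThing().check(42)
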
3008[jacp 11
3009wilcoxjg@gmail.com**20110708213919
3010 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3011] {
3012hunk ./src/allmydata/storage/backends/das/core.py 144
3013         self.incomingdir = os.path.join(sharedir, 'incoming')
3014         si_dir = storage_index_to_dir(storageindex)
3015         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3016+        #XXX  self.fname and self.finalhome need to be resolved/merged.
3017         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3018         if create:
3019             # touch the file, so later callers will see that we're working on
3020hunk ./src/allmydata/storage/backends/das/core.py 208
3021         pass
3022         
3023     def stat(self):
3024-        return os.stat(self.finalhome)[stat.ST_SIZE]
3025+        return os.stat(self.finalhome).st_size
3026 
3027     def get_shnum(self):
3028         return self.shnum
3029hunk ./src/allmydata/storage/immutable.py 44
3030     def remote_close(self):
3031         precondition(not self.closed)
3032         start = time.time()
3033+
3034         self._sharefile.close()
3035hunk ./src/allmydata/storage/immutable.py 46
3036+        filelen = self._sharefile.stat()
3037         self._sharefile = None
3038hunk ./src/allmydata/storage/immutable.py 48
3039+
3040         self.closed = True
3041         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3042 
3043hunk ./src/allmydata/storage/immutable.py 52
3044-        filelen = self._sharefile.stat()
3045         self.ss.bucket_writer_closed(self, filelen)
3046         self.ss.add_latency("close", time.time() - start)
3047         self.ss.count("close")
3048hunk ./src/allmydata/storage/server.py 220
3049 
3050         for shnum in (sharenums - alreadygot):
3051             if (not limited) or (remaining_space >= max_space_per_bucket):
3052-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3053-                                                     max_space_per_bucket, lease_info, canary)
3054+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3055                 bucketwriters[shnum] = bw
3056                 self._active_writers[bw] = 1
3057                 if limited:
3058hunk ./src/allmydata/test/test_backends.py 20
3059 # The following share file contents was generated with
3060 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3061 # with share data == 'a'.
3062-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3063-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3064+renew_secret  = 'x'*32
3065+cancel_secret = 'y'*32
3066 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3067 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3068 
3069hunk ./src/allmydata/test/test_backends.py 27
3070 testnodeid = 'testnodeidxxxxxxxxxx'
3071 tempdir = 'teststoredir'
3072-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3073-sharefname = os.path.join(sharedirname, '0')
3074+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3075+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3076+shareincomingname = os.path.join(sharedirincomingname, '0')
3077+sharefname = os.path.join(sharedirfinalname, '0')
3078+
3079 expiration_policy = {'enabled' : False,
3080                      'mode' : 'age',
3081                      'override_lease_duration' : None,
3082hunk ./src/allmydata/test/test_backends.py 123
3083         mockopen.side_effect = call_open
3084         testbackend = DASCore(tempdir, expiration_policy)
3085         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3086-       
3087+
3088+    @mock.patch('allmydata.util.fileutil.rename')
3089+    @mock.patch('allmydata.util.fileutil.make_dirs')
3090+    @mock.patch('os.path.exists')
3091+    @mock.patch('os.stat')
3092     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3093     @mock.patch('allmydata.util.fileutil.get_available_space')
3094     @mock.patch('time.time')
3095hunk ./src/allmydata/test/test_backends.py 136
3096     @mock.patch('os.listdir')
3097     @mock.patch('os.path.isdir')
3098     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3099-                             mockget_available_space, mockget_shares):
3100+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3101+                             mockmake_dirs, mockrename):
3102         """ Write a new share. """
3103 
3104         def call_listdir(dirname):
3105hunk ./src/allmydata/test/test_backends.py 141
3106-            self.failUnlessReallyEqual(dirname, sharedirname)
3107+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3108             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3109 
3110         mocklistdir.side_effect = call_listdir
3111hunk ./src/allmydata/test/test_backends.py 148
3112 
3113         def call_isdir(dirname):
3114             #XXX Should there be any other tests here?
3115-            self.failUnlessReallyEqual(dirname, sharedirname)
3116+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3117             return True
3118 
3119         mockisdir.side_effect = call_isdir
3120hunk ./src/allmydata/test/test_backends.py 154
3121 
3122         def call_mkdir(dirname, permissions):
3123-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3124+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3125                 self.Fail
3126             else:
3127                 return True
3128hunk ./src/allmydata/test/test_backends.py 208
3129                 return self.pos
3130 
3131 
3132-        sharefile = MockFile()
3133+        fobj = MockFile()
3134         def call_open(fname, mode):
3135             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3136hunk ./src/allmydata/test/test_backends.py 211
3137-            return sharefile
3138+            return fobj
3139 
3140         mockopen.side_effect = call_open
3141 
3142hunk ./src/allmydata/test/test_backends.py 215
3143+        def call_make_dirs(dname):
3144+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3145+           
3146+        mockmake_dirs.side_effect = call_make_dirs
3147+
3148+        def call_rename(src, dst):
3149+           self.failUnlessReallyEqual(src, shareincomingname)
3150+           self.failUnlessReallyEqual(dst, sharefname)
3151+           
3152+        mockrename.side_effect = call_rename
3153+
3154+        def call_exists(fname):
3155+            self.failUnlessReallyEqual(fname, sharefname)
3156+
3157+        mockexists.side_effect = call_exists
3158+
3159         # Now begin the test.
3160         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3161         bs[0].remote_write(0, 'a')
3162hunk ./src/allmydata/test/test_backends.py 234
3163-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3164+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3165+        spaceint = self.s.allocated_size()
3166+        self.failUnlessReallyEqual(spaceint, 1)
3167+
3168+        bs[0].remote_close()
3169 
3170         # What happens when there's not enough space for the client's request?
3171hunk ./src/allmydata/test/test_backends.py 241
3172-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3173+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3174 
3175         # Now test the allocated_size method.
3176hunk ./src/allmydata/test/test_backends.py 244
3177-        spaceint = self.s.allocated_size()
3178-        self.failUnlessReallyEqual(spaceint, 1)
3179+        #self.failIf(mockexists.called, mockexists.call_args_list)
3180+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3181+        #self.failIf(mockrename.called, mockrename.call_args_list)
3182+        #self.failIf(mockstat.called, mockstat.call_args_list)
3183 
3184     @mock.patch('os.path.exists')
3185     @mock.patch('os.path.getsize')
3186}
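
The fobj.buffer comparison checks the exact byte layout the ShareFile code earlier in this bundle writes: a 12-byte big-endian header (version, share-data length saturated at 2**32-1, lease count), the share data, then one 72-byte lease record per lease, read here as owner number, renew secret, cancel secret and expiration time (the lease layout these tests assume). A short decoding sketch of the test's share_file_data:

    import struct

    renew_secret  = 'x'*32
    cancel_secret = 'y'*32
    share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
    share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data

    version, datalen, num_leases = struct.unpack(">LLL", share_file_data[:0xc])
    assert (version, datalen, num_leases) == (1, 1, 1)

    data  = share_file_data[0xc:0xc+datalen]     # the single share byte 'a'
    lease = share_file_data[0xc+datalen:]        # one 72-byte lease record
    owner_num,  = struct.unpack(">L", lease[:4])
    expiration, = struct.unpack(">L", lease[-4:])
    assert (owner_num, expiration) == (0, 31*24*60*60)   # 0x0028de80 seconds
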
3187[checkpoint12 testing correct behavior with regard to incoming and final
3188wilcoxjg@gmail.com**20110710191915
3189 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3190] {
3191hunk ./src/allmydata/storage/backends/das/core.py 74
3192         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3193         self.lease_checker.setServiceParent(self)
3194 
3195+    def get_incoming(self, storageindex):
3196+        return set((1,))
3197+
3198     def get_available_space(self):
3199         if self.readonly:
3200             return 0
3201hunk ./src/allmydata/storage/server.py 77
3202         """Return a dict, indexed by category, that contains a dict of
3203         latency numbers for each category. If there are sufficient samples
3204         for unambiguous interpretation, each dict will contain the
3205-        following keys: mean, 01_0_percentile, 10_0_percentile,
3206+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3207         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3208         99_0_percentile, 99_9_percentile.  If there are insufficient
3209         samples for a given percentile to be interpreted unambiguously
3210hunk ./src/allmydata/storage/server.py 120
3211 
3212     def get_stats(self):
3213         # remember: RIStatsProvider requires that our return dict
3214-        # contains numeric values.
3215+        # contains numeric, or None values.
3216         stats = { 'storage_server.allocated': self.allocated_size(), }
3217         stats['storage_server.reserved_space'] = self.reserved_space
3218         for category,ld in self.get_latencies().items():
3219hunk ./src/allmydata/storage/server.py 185
3220         start = time.time()
3221         self.count("allocate")
3222         alreadygot = set()
3223+        incoming = set()
3224         bucketwriters = {} # k: shnum, v: BucketWriter
3225 
3226         si_s = si_b2a(storage_index)
3227hunk ./src/allmydata/storage/server.py 219
3228             alreadygot.add(share.shnum)
3229             share.add_or_renew_lease(lease_info)
3230 
3231-        for shnum in (sharenums - alreadygot):
3232+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3233+        incoming = self.backend.get_incoming(storageindex)
3234+
3235+        for shnum in ((sharenums - alreadygot) - incoming):
3236             if (not limited) or (remaining_space >= max_space_per_bucket):
3237                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3238                 bucketwriters[shnum] = bw
3239hunk ./src/allmydata/storage/server.py 229
3240                 self._active_writers[bw] = 1
3241                 if limited:
3242                     remaining_space -= max_space_per_bucket
3243-
3244-        #XXX We SHOULD DOCUMENT LATER.
3245+            else:
3246+                # Bummer not enough space to accept this share.
3247+                pass
3248 
3249         self.add_latency("allocate", time.time() - start)
3250         return alreadygot, bucketwriters
3251hunk ./src/allmydata/storage/server.py 323
3252         self.add_latency("get", time.time() - start)
3253         return bucketreaders
3254 
3255-    def get_leases(self, storage_index):
3256+    def remote_get_incoming(self, storageindex):
3257+        incoming_share_set = self.backend.get_incoming(storageindex)
3258+        return incoming_share_set
3259+
3260+    def get_leases(self, storageindex):
3261         """Provide an iterator that yields all of the leases attached to this
3262         bucket. Each lease is returned as a LeaseInfo instance.
3263 
3264hunk ./src/allmydata/storage/server.py 337
3265         # since all shares get the same lease data, we just grab the leases
3266         # from the first share
3267         try:
3268-            shnum, filename = self._get_shares(storage_index).next()
3269+            shnum, filename = self._get_shares(storageindex).next()
3270             sf = ShareFile(filename)
3271             return sf.get_leases()
3272         except StopIteration:
3273hunk ./src/allmydata/test/test_backends.py 182
3274 
3275         share = MockShare()
3276         def call_get_shares(storageindex):
3277-            return [share]
3278+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3279+            return []#share]
3280 
3281         mockget_shares.side_effect = call_get_shares
3282 
3283hunk ./src/allmydata/test/test_backends.py 222
3284         mockmake_dirs.side_effect = call_make_dirs
3285 
3286         def call_rename(src, dst):
3287-           self.failUnlessReallyEqual(src, shareincomingname)
3288-           self.failUnlessReallyEqual(dst, sharefname)
3289+            self.failUnlessReallyEqual(src, shareincomingname)
3290+            self.failUnlessReallyEqual(dst, sharefname)
3291             
3292         mockrename.side_effect = call_rename
3293 
3294hunk ./src/allmydata/test/test_backends.py 233
3295         mockexists.side_effect = call_exists
3296 
3297         # Now begin the test.
3298+
3299+        # XXX (0) ???  Fail unless something is not properly set-up?
3300         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3301hunk ./src/allmydata/test/test_backends.py 236
3302+
3303+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3304+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3305+
3306+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3307+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3308+        # with the same si, until BucketWriter.remote_close() has been called.
3309+        # self.failIf(bsa)
3310+
3311+        # XXX (3) Inspect final and fail unless there's nothing there.
3312         bs[0].remote_write(0, 'a')
3313hunk ./src/allmydata/test/test_backends.py 247
3314+        # XXX (4a) Inspect final and fail unless share 0 is there.
3315+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3316         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3317         spaceint = self.s.allocated_size()
3318         self.failUnlessReallyEqual(spaceint, 1)
3319hunk ./src/allmydata/test/test_backends.py 253
3320 
3321+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3322         bs[0].remote_close()
3323 
3324         # What happens when there's not enough space for the client's request?
3325hunk ./src/allmydata/test/test_backends.py 260
3326         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3327 
3328         # Now test the allocated_size method.
3329-        #self.failIf(mockexists.called, mockexists.call_args_list)
3330+        # self.failIf(mockexists.called, mockexists.call_args_list)
3331         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3332         #self.failIf(mockrename.called, mockrename.call_args_list)
3333         #self.failIf(mockstat.called, mockstat.call_args_list)
3334}
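
The heart of checkpoint12 is the set expression in remote_allocate_buckets: a share number only gets a new BucketWriter if it is neither already stored in final nor already being uploaded into incoming. A tiny worked example of that expression:

    sharenums  = set((0, 1, 2, 3))   # what the client proposes to store
    alreadygot = set((1,))           # already present in the final area
    incoming   = set((2,))           # currently being uploaded by someone

    to_accept = (sharenums - alreadygot) - incoming
    assert to_accept == set((0, 3))  # only these get new BucketWriters
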
3335[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3336wilcoxjg@gmail.com**20110710195139
3337 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3338] {
3339hunk ./src/allmydata/storage/server.py 220
3340             share.add_or_renew_lease(lease_info)
3341 
3342         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3343-        incoming = self.backend.get_incoming(storageindex)
3344+        incoming = self.backend.get_incoming(storage_index)
3345 
3346         for shnum in ((sharenums - alreadygot) - incoming):
3347             if (not limited) or (remaining_space >= max_space_per_bucket):
3348hunk ./src/allmydata/storage/server.py 323
3349         self.add_latency("get", time.time() - start)
3350         return bucketreaders
3351 
3352-    def remote_get_incoming(self, storageindex):
3353-        incoming_share_set = self.backend.get_incoming(storageindex)
3354+    def remote_get_incoming(self, storage_index):
3355+        incoming_share_set = self.backend.get_incoming(storage_index)
3356         return incoming_share_set
3357 
3358hunk ./src/allmydata/storage/server.py 327
3359-    def get_leases(self, storageindex):
3360+    def get_leases(self, storage_index):
3361         """Provide an iterator that yields all of the leases attached to this
3362         bucket. Each lease is returned as a LeaseInfo instance.
3363 
3364hunk ./src/allmydata/storage/server.py 337
3365         # since all shares get the same lease data, we just grab the leases
3366         # from the first share
3367         try:
3368-            shnum, filename = self._get_shares(storageindex).next()
3369+            shnum, filename = self._get_shares(storage_index).next()
3370             sf = ShareFile(filename)
3371             return sf.get_leases()
3372         except StopIteration:
3373replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3374}
3375[adding comments to clarify what I'm about to do.
3376wilcoxjg@gmail.com**20110710220623
3377 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3378] {
3379hunk ./src/allmydata/storage/backends/das/core.py 8
3380 
3381 import os, re, weakref, struct, time
3382 
3383-from foolscap.api import Referenceable
3384+#from foolscap.api import Referenceable
3385 from twisted.application import service
3386 
3387 from zope.interface import implements
3388hunk ./src/allmydata/storage/backends/das/core.py 12
3389-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3390+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3391 from allmydata.util import fileutil, idlib, log, time_format
3392 import allmydata # for __full_version__
3393 
3394hunk ./src/allmydata/storage/server.py 219
3395             alreadygot.add(share.shnum)
3396             share.add_or_renew_lease(lease_info)
3397 
3398-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3399+        # fill incoming with the shnums of all incoming shares; use a set
3400+        # operation since there's no need to operate on individual pieces
3401         incoming = self.backend.get_incoming(storageindex)
3402 
3403         for shnum in ((sharenums - alreadygot) - incoming):
3404hunk ./src/allmydata/test/test_backends.py 245
3405         # with the same si, until BucketWriter.remote_close() has been called.
3406         # self.failIf(bsa)
3407 
3408-        # XXX (3) Inspect final and fail unless there's nothing there.
3409         bs[0].remote_write(0, 'a')
3410hunk ./src/allmydata/test/test_backends.py 246
3411-        # XXX (4a) Inspect final and fail unless share 0 is there.
3412-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3413         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3414         spaceint = self.s.allocated_size()
3415         self.failUnlessReallyEqual(spaceint, 1)
3416hunk ./src/allmydata/test/test_backends.py 250
3417 
3418-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3419+        # XXX (3) Inspect final and fail unless there's nothing there.
3420         bs[0].remote_close()
3421hunk ./src/allmydata/test/test_backends.py 252
3422+        # XXX (4a) Inspect final and fail unless share 0 is there.
3423+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3424 
3425         # What happens when there's not enough space for the client's request?
3426         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3427}
3428[branching back, no longer attempting to mock inside TestServerFSBackend
3429wilcoxjg@gmail.com**20110711190849
3430 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3431] {
3432hunk ./src/allmydata/storage/backends/das/core.py 75
3433         self.lease_checker.setServiceParent(self)
3434 
3435     def get_incoming(self, storageindex):
3436-        return set((1,))
3437-
3438-    def get_available_space(self):
3439-        if self.readonly:
3440-            return 0
3441-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3442+        """Return the set of incoming shnums."""
3443+        return set(os.listdir(self.incomingdir))
3444 
3445     def get_shares(self, storage_index):
3446         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3447hunk ./src/allmydata/storage/backends/das/core.py 90
3448             # Commonly caused by there being no shares at all.
3449             pass
3450         
3451+    def get_available_space(self):
3452+        if self.readonly:
3453+            return 0
3454+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3455+
3456     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3457         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3458         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3459hunk ./src/allmydata/test/test_backends.py 27
3460 
3461 testnodeid = 'testnodeidxxxxxxxxxx'
3462 tempdir = 'teststoredir'
3463-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3464-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3465+basedir = os.path.join(tempdir, 'shares')
3466+baseincdir = os.path.join(basedir, 'incoming')
3467+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3468+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3469 shareincomingname = os.path.join(sharedirincomingname, '0')
3470 sharefname = os.path.join(sharedirfinalname, '0')
3471 
3472hunk ./src/allmydata/test/test_backends.py 142
3473                              mockmake_dirs, mockrename):
3474         """ Write a new share. """
3475 
3476-        def call_listdir(dirname):
3477-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3478-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3479-
3480-        mocklistdir.side_effect = call_listdir
3481-
3482-        def call_isdir(dirname):
3483-            #XXX Should there be any other tests here?
3484-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3485-            return True
3486-
3487-        mockisdir.side_effect = call_isdir
3488-
3489-        def call_mkdir(dirname, permissions):
3490-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3491-                self.Fail
3492-            else:
3493-                return True
3494-
3495-        mockmkdir.side_effect = call_mkdir
3496-
3497-        def call_get_available_space(storedir, reserved_space):
3498-            self.failUnlessReallyEqual(storedir, tempdir)
3499-            return 1
3500-
3501-        mockget_available_space.side_effect = call_get_available_space
3502-
3503-        mocktime.return_value = 0
3504         class MockShare:
3505             def __init__(self):
3506                 self.shnum = 1
3507hunk ./src/allmydata/test/test_backends.py 152
3508                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3509                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3510                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3511-               
3512 
3513         share = MockShare()
3514hunk ./src/allmydata/test/test_backends.py 154
3515-        def call_get_shares(storageindex):
3516-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3517-            return []#share]
3518-
3519-        mockget_shares.side_effect = call_get_shares
3520 
3521         class MockFile:
3522             def __init__(self):
3523hunk ./src/allmydata/test/test_backends.py 176
3524             def tell(self):
3525                 return self.pos
3526 
3527-
3528         fobj = MockFile()
3529hunk ./src/allmydata/test/test_backends.py 177
3530+
3531+        directories = {}
3532+        def call_listdir(dirname):
3533+            if dirname not in directories:
3534+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3535+            else:
3536+                return directories[dirname].get_contents()
3537+
3538+        mocklistdir.side_effect = call_listdir
3539+
3540+        class MockDir:
3541+            def __init__(self, dirname):
3542+                self.name = dirname
3543+                self.contents = []
3544+   
3545+            def get_contents(self):
3546+                return self.contents
3547+
3548+        def call_isdir(dirname):
3549+            #XXX Should there be any other tests here?
3550+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3551+            return True
3552+
3553+        mockisdir.side_effect = call_isdir
3554+
3555+        def call_mkdir(dirname, permissions):
3556+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3557+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
3558+            if dirname in directories:
3559+                raise OSError(17, "File exists: '%s'" % dirname)
3561+            elif dirname not in directories:
3562+                directories[dirname] = MockDir(dirname)
3563+                return True
3564+
3565+        mockmkdir.side_effect = call_mkdir
3566+
3567+        def call_get_available_space(storedir, reserved_space):
3568+            self.failUnlessReallyEqual(storedir, tempdir)
3569+            return 1
3570+
3571+        mockget_available_space.side_effect = call_get_available_space
3572+
3573+        mocktime.return_value = 0
3574+        def call_get_shares(storageindex):
3575+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3576+            return []#share]
3577+
3578+        mockget_shares.side_effect = call_get_shares
3579+
3580         def call_open(fname, mode):
3581             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3582             return fobj
3583}
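
The directories dict and MockDir above form a small in-memory stand-in for the slice of the filesystem the backend is allowed to touch: mkdir populates the dict, listdir consults it, and anything unexpected fails the test. A condensed, self-contained sketch of that pattern (the function names are illustrative, not the test's):

    directories = {}

    def fake_mkdir(dirname, mode):
        if dirname in directories:
            raise OSError(17, "File exists: '%s'" % dirname)
        directories[dirname] = []        # a bare list stands in for MockDir.contents

    def fake_listdir(dirname):
        if dirname not in directories:
            raise OSError(2, "No such file or directory: '%s'" % dirname)
        return directories[dirname]

    fake_mkdir('teststoredir/shares', 0777)
    assert fake_listdir('teststoredir/shares') == []
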
3584
3585Context:
3586
3587[add Protovis.js-based download-status timeline visualization
3588Brian Warner <warner@lothar.com>**20110629222606
3589 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
3590 
3591 provide status overlap info on the webapi t=json output, add decode/decrypt
3592 rate tooltips, add zoomin/zoomout buttons
3593]
3594[add more download-status data, fix tests
3595Brian Warner <warner@lothar.com>**20110629222555
3596 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
3597]
3598[prepare for viz: improve DownloadStatus events
3599Brian Warner <warner@lothar.com>**20110629222542
3600 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
3601 
3602 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
3603]
3604[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
3605zooko@zooko.com**20110629185711
3606 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
3607]
3608[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
3609david-sarah@jacaranda.org**20110130235809
3610 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
3611]
3612[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
3613david-sarah@jacaranda.org**20110626054124
3614 Ignore-this: abb864427a1b91bd10d5132b4589fd90
3615]
3616[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
3617david-sarah@jacaranda.org**20110623205528
3618 Ignore-this: c63e23146c39195de52fb17c7c49b2da
3619]
3620[Rename test_package_initialization.py to (much shorter) test_import.py .
3621Brian Warner <warner@lothar.com>**20110611190234
3622 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
3623 
3624 The former name was making my 'ls' listings hard to read, by forcing them
3625 down to just two columns.
3626]
3627[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
3628zooko@zooko.com**20110611163741
3629 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
3630 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
3631 fixes #1412
3632]
3633[wui: right-align the size column in the WUI
3634zooko@zooko.com**20110611153758
3635 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
3636 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
3637 fixes #1412
3638]
3639[docs: three minor fixes
3640zooko@zooko.com**20110610121656
3641 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
3642 CREDITS for arc for stats tweak
3643 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
3644 English usage tweak
3645]
3646[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
3647david-sarah@jacaranda.org**20110609223719
3648 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
3649]
3650[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
3651wilcoxjg@gmail.com**20110527120135
3652 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
3653 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
3654 NEWS.rst, stats.py: documentation of change to get_latencies
3655 stats.rst: now documents percentile modification in get_latencies
3656 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
3657 fixes #1392
3658]
3659[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
3660david-sarah@jacaranda.org**20110517011214
3661 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
3662]
3663[docs: convert NEWS to NEWS.rst and change all references to it.
3664david-sarah@jacaranda.org**20110517010255
3665 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
3666]
3667[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
3668david-sarah@jacaranda.org**20110512140559
3669 Ignore-this: 784548fc5367fac5450df1c46890876d
3670]
3671[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
3672david-sarah@jacaranda.org**20110130164923
3673 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
3674]
3675[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
3676zooko@zooko.com**20110128142006
3677 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
3678 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
3679]
3680[M-x whitespace-cleanup
3681zooko@zooko.com**20110510193653
3682 Ignore-this: dea02f831298c0f65ad096960e7df5c7
3683]
3684[docs: fix typo in running.rst, thanks to arch_o_median
3685zooko@zooko.com**20110510193633
3686 Ignore-this: ca06de166a46abbc61140513918e79e8
3687]
3688[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
3689david-sarah@jacaranda.org**20110204204902
3690 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
3691]
3692[relnotes.txt: forseeable -> foreseeable. refs #1342
3693david-sarah@jacaranda.org**20110204204116
3694 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
3695]
3696[replace remaining .html docs with .rst docs
3697zooko@zooko.com**20110510191650
3698 Ignore-this: d557d960a986d4ac8216d1677d236399
3699 Remove install.html (long since deprecated).
3700 Also replace some obsolete references to install.html with references to quickstart.rst.
3701 Fix some broken internal references within docs/historical/historical_known_issues.txt.
3702 Thanks to Ravi Pinjala and Patrick McDonald.
3703 refs #1227
3704]
3705[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
3706zooko@zooko.com**20110428055232
3707 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
3708]
3709[munin tahoe_files plugin: fix incorrect file count
3710francois@ctrlaltdel.ch**20110428055312
3711 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
3712 fixes #1391
3713]
3714[corrected "k must never be smaller than N" to "k must never be greater than N"
3715secorp@allmydata.org**20110425010308
3716 Ignore-this: 233129505d6c70860087f22541805eac
3717]
3718[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
3719david-sarah@jacaranda.org**20110411190738
3720 Ignore-this: 7847d26bc117c328c679f08a7baee519
3721]
3722[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
3723david-sarah@jacaranda.org**20110410155844
3724 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
3725]
3726[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
3727david-sarah@jacaranda.org**20110410155705
3728 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
3729]
3730[remove unused variable detected by pyflakes
3731zooko@zooko.com**20110407172231
3732 Ignore-this: 7344652d5e0720af822070d91f03daf9
3733]
3734[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
3735david-sarah@jacaranda.org**20110401202750
3736 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
3737]
3738[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
3739Brian Warner <warner@lothar.com>**20110325232511
3740 Ignore-this: d5307faa6900f143193bfbe14e0f01a
3741]
3742[control.py: remove all uses of s.get_serverid()
3743warner@lothar.com**20110227011203
3744 Ignore-this: f80a787953bd7fa3d40e828bde00e855
3745]
3746[web: remove some uses of s.get_serverid(), not all
3747warner@lothar.com**20110227011159
3748 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
3749]
3750[immutable/downloader/fetcher.py: remove all get_serverid() calls
3751warner@lothar.com**20110227011156
3752 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
3753]
3754[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
3755warner@lothar.com**20110227011153
3756 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
3757 
3758 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
3759 _shares_from_server dict was being popped incorrectly (using shnum as the
3760 index instead of serverid). I'm still thinking through the consequences of
3761 this bug. It was probably benign and really hard to detect. I think it would
3762 cause us to incorrectly believe that we're pulling too many shares from a
3763 server, and thus prefer a different server rather than asking for a second
3764 share from the first server. The diversity code is intended to spread out the
3765 number of shares simultaneously being requested from each server, but with
3766 this bug, it might be spreading out the total number of shares requested at
3767 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
3768 segment, so the effect doesn't last very long).
3769]
3770[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
3771warner@lothar.com**20110227011150
3772 Ignore-this: d8d56dd8e7b280792b40105e13664554
3773 
3774 test_download.py: create+check MyShare instances better, make sure they share
3775 Server objects, now that finder.py cares
3776]
3777[immutable/downloader/finder.py: reduce use of get_serverid(), one left
3778warner@lothar.com**20110227011146
3779 Ignore-this: 5785be173b491ae8a78faf5142892020
3780]
3781[immutable/offloaded.py: reduce use of get_serverid() a bit more
3782warner@lothar.com**20110227011142
3783 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
3784]
3785[immutable/upload.py: reduce use of get_serverid()
3786warner@lothar.com**20110227011138
3787 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
3788]
3789[immutable/checker.py: remove some uses of s.get_serverid(), not all
3790warner@lothar.com**20110227011134
3791 Ignore-this: e480a37efa9e94e8016d826c492f626e
3792]
3793[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
3794warner@lothar.com**20110227011132
3795 Ignore-this: 6078279ddf42b179996a4b53bee8c421
3796 MockIServer stubs
3797]
3798[upload.py: rearrange _make_trackers a bit, no behavior changes
3799warner@lothar.com**20110227011128
3800 Ignore-this: 296d4819e2af452b107177aef6ebb40f
3801]
3802[happinessutil.py: finally rename merge_peers to merge_servers
3803warner@lothar.com**20110227011124
3804 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
3805]
3806[test_upload.py: factor out FakeServerTracker
3807warner@lothar.com**20110227011120
3808 Ignore-this: 6c182cba90e908221099472cc159325b
3809]
3810[test_upload.py: server-vs-tracker cleanup
3811warner@lothar.com**20110227011115
3812 Ignore-this: 2915133be1a3ba456e8603885437e03
3813]
3814[happinessutil.py: server-vs-tracker cleanup
3815warner@lothar.com**20110227011111
3816 Ignore-this: b856c84033562d7d718cae7cb01085a9
3817]
3818[upload.py: more tracker-vs-server cleanup
3819warner@lothar.com**20110227011107
3820 Ignore-this: bb75ed2afef55e47c085b35def2de315
3821]
3822[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
3823warner@lothar.com**20110227011103
3824 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
3825]
3826[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
3827warner@lothar.com**20110227011100
3828 Ignore-this: 7ea858755cbe5896ac212a925840fe68
3829 
3830 No behavioral changes, just updating variable/method names and log messages.
3831 The effects outside these three files should be minimal: some exception
3832 messages changed (to say "server" instead of "peer"), and some internal class
3833 names were changed. A few things still use "peer" to minimize external
3834 changes, like UploadResults.timings["peer_selection"] and
3835 happinessutil.merge_peers, which can be changed later.
3836]
3837[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
3838warner@lothar.com**20110227011056
3839 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
3840]
3841[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
3842warner@lothar.com**20110227011051
3843 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
3844]
3845[test: increase timeout on a network test because Francois's ARM machine hit that timeout
3846zooko@zooko.com**20110317165909
3847 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
3848 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
3849]
3850[docs/configuration.rst: add a "Frontend Configuration" section
3851Brian Warner <warner@lothar.com>**20110222014323
3852 Ignore-this: 657018aa501fe4f0efef9851628444ca
3853 
3854 this points to docs/frontends/*.rst, which were previously underlinked
3855]
3856[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
3857"Brian Warner <warner@lothar.com>"**20110221061544
3858 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
3859]
3860[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
3861david-sarah@jacaranda.org**20110221015817
3862 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
3863]
3864[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
3865david-sarah@jacaranda.org**20110221020125
3866 Ignore-this: b0744ed58f161bf188e037bad077fc48
3867]
3868[Refactor StorageFarmBroker handling of servers
3869Brian Warner <warner@lothar.com>**20110221015804
3870 Ignore-this: 842144ed92f5717699b8f580eab32a51
3871 
3872 Pass around IServer instance instead of (peerid, rref) tuple. Replace
3873 "descriptor" with "server". Other replacements:
3874 
3875  get_all_servers -> get_connected_servers/get_known_servers
3876  get_servers_for_index -> get_servers_for_psi (now returns IServers)
3877 
3878 This change still needs to be pushed further down: lots of code is now
3879 getting the IServer and then distributing (peerid, rref) internally.
3880 Instead, it ought to distribute the IServer internally and delay
3881 extracting a serverid or rref until the last moment.
3882 
3883 no_network.py was updated to retain parallelism.
3884]
3885[TAG allmydata-tahoe-1.8.2
3886warner@lothar.com**20110131020101]
3887Patch bundle hash:
3888ee4fcac34706b6a817b8560ce9231a527e1b7ab4