Ticket #999: checkpoint6.darcs.patch

File checkpoint6.darcs.patch, 127.2 KB (added by arch_o_median, at 2011-07-06T19:08:50Z)

Backing myself up: some comments cleaned up in interfaces.py, new tests in test_backends.py

1Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
2  * storage: new mocking tests of storage server read and write
3  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
4
5Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
6  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
7  sloppy; not for production
8
9Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
10  * a temp patch used as a snapshot
11
12Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
13  * snapshot of progress on backend implementation (not suitable for trunk)
14
15Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
16  * checkpoint patch
17
18Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
19  * checkpoint4
20
21Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
22  * checkpoint5
23
24Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
25  * checkpoint 6
26
27New patches:
28
29[storage: new mocking tests of storage server read and write
30wilcoxjg@gmail.com**20110325203514
31 Ignore-this: df65c3c4f061dd1516f88662023fdb41
32 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
33] {
34addfile ./src/allmydata/test/test_server.py
35hunk ./src/allmydata/test/test_server.py 1
36+from twisted.trial import unittest
37+
38+from StringIO import StringIO
39+
40+from allmydata.test.common_util import ReallyEqualMixin
41+
42+import mock
43+
44+# This is the code that we're going to be testing.
45+from allmydata.storage.server import StorageServer
46+
47+# The following share file contents were generated with
48+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
49+# with share data == 'a'.
50+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
51+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
52+
53+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
54+
55+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
56+    @mock.patch('__builtin__.open')
57+    def test_create_server(self, mockopen):
58+        """ This tests whether a server instance can be constructed. """
59+
60+        def call_open(fname, mode):
61+            if fname == 'testdir/bucket_counter.state':
62+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
63+            elif fname == 'testdir/lease_checker.state':
64+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
65+            elif fname == 'testdir/lease_checker.history':
66+                return StringIO()
67+        mockopen.side_effect = call_open
68+
69+        # Now begin the test.
70+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
71+
72+        # You passed!
73+
74+class TestServer(unittest.TestCase, ReallyEqualMixin):
75+    @mock.patch('__builtin__.open')
76+    def setUp(self, mockopen):
77+        def call_open(fname, mode):
78+            if fname == 'testdir/bucket_counter.state':
79+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
80+            elif fname == 'testdir/lease_checker.state':
81+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
82+            elif fname == 'testdir/lease_checker.history':
83+                return StringIO()
84+        mockopen.side_effect = call_open
85+
86+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
87+
88+
89+    @mock.patch('time.time')
90+    @mock.patch('os.mkdir')
91+    @mock.patch('__builtin__.open')
92+    @mock.patch('os.listdir')
93+    @mock.patch('os.path.isdir')
94+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
95+        """ Write a new share. """
96+
97+        def call_listdir(dirname):
98+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
99+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
100+
101+        mocklistdir.side_effect = call_listdir
102+
103+        class MockFile:
104+            def __init__(self):
105+                self.buffer = ''
106+                self.pos = 0
107+            def write(self, instring):
108+                begin = self.pos
109+                padlen = begin - len(self.buffer)
110+                if padlen > 0:
111+                    self.buffer += '\x00' * padlen
112+                end = self.pos + len(instring)
113+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
114+                self.pos = end
115+            def close(self):
116+                pass
117+            def seek(self, pos):
118+                self.pos = pos
119+            def read(self, numberbytes):
120+                return self.buffer[self.pos:self.pos+numberbytes]
121+            def tell(self):
122+                return self.pos
123+
124+        mocktime.return_value = 0
125+
126+        sharefile = MockFile()
127+        def call_open(fname, mode):
128+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
129+            return sharefile
130+
131+        mockopen.side_effect = call_open
132+        # Now begin the test.
133+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
134+        print bs
135+        bs[0].remote_write(0, 'a')
136+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
137+
138+
139+    @mock.patch('os.path.exists')
140+    @mock.patch('os.path.getsize')
141+    @mock.patch('__builtin__.open')
142+    @mock.patch('os.listdir')
143+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
144+        """ This tests whether the code correctly finds and reads
145+        shares written out by old (Tahoe-LAFS <= v1.8.2)
146+        servers. There is a similar test in test_download, but that one
147+        is from the perspective of the client and exercises a deeper
148+        stack of code. This one is for exercising just the
149+        StorageServer object. """
150+
151+        def call_listdir(dirname):
152+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
153+            return ['0']
154+
155+        mocklistdir.side_effect = call_listdir
156+
157+        def call_open(fname, mode):
158+            self.failUnlessReallyEqual(fname, sharefname)
159+            self.failUnless('r' in mode, mode)
160+            self.failUnless('b' in mode, mode)
161+
162+            return StringIO(share_file_data)
163+        mockopen.side_effect = call_open
164+
165+        datalen = len(share_file_data)
166+        def call_getsize(fname):
167+            self.failUnlessReallyEqual(fname, sharefname)
168+            return datalen
169+        mockgetsize.side_effect = call_getsize
170+
171+        def call_exists(fname):
172+            self.failUnlessReallyEqual(fname, sharefname)
173+            return True
174+        mockexists.side_effect = call_exists
175+
176+        # Now begin the test.
177+        bs = self.s.remote_get_buckets('teststorage_index')
178+
179+        self.failUnlessEqual(len(bs), 1)
180+        b = bs[0]
181+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
182+        # If you try to read past the end you get as much data as is there.
183+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
184+        # If you start reading past the end of the file you get the empty string.
185+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
186}
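The mocking pattern these tests rely on — patch `open` with a `side_effect` callable that serves canned results for the paths the server is expected to touch, and fails loudly on anything else — can be sketched in isolation. The sketch below is illustrative only: it targets Python 3 (`builtins.open` rather than `__builtin__.open`), and `read_state` is a hypothetical stand-in for the server's state-file loading, not code from this patch.

```python
import io
from unittest import mock

def read_state(path):
    # Hypothetical stand-in for StorageServer-style state loading:
    # a missing state file is not an error, it just means "no state yet".
    try:
        f = open(path)
    except IOError:  # alias of OSError on Python 3
        return None
    try:
        return f.read()
    finally:
        f.close()

def fake_open(fname, mode='r'):
    # Serve canned results for expected paths; any other path is a test bug.
    if fname == 'testdir/lease_checker.history':
        return io.StringIO('history-data')
    elif fname == 'testdir/bucket_counter.state':
        raise IOError(2, "No such file or directory: %r" % fname)
    raise AssertionError("unexpected open(%r, %r)" % (fname, mode))

with mock.patch('builtins.open', side_effect=fake_open):
    # The code under test never touches the real filesystem.
    assert read_state('testdir/lease_checker.history') == 'history-data'
    assert read_state('testdir/bucket_counter.state') is None
```

Raising on unexpected paths is what makes this style strict: it is the same idea that lets a test assert the server uses the filesystem "in only the prescribed ways".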
187[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
188wilcoxjg@gmail.com**20110624202850
189 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
190 sloppy; not for production
191] {
192move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
193hunk ./src/allmydata/storage/crawler.py 13
194     pass
195 
196 class ShareCrawler(service.MultiService):
197-    """A ShareCrawler subclass is attached to a StorageServer, and
198+    """A subclass of ShareCrawler is attached to a StorageServer, and
199     periodically walks all of its shares, processing each one in some
200     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
201     since large servers can easily have a terabyte of shares, in several
202hunk ./src/allmydata/storage/crawler.py 31
203     We assume that the normal upload/download/get_buckets traffic of a tahoe
204     grid will cause the prefixdir contents to be mostly cached in the kernel,
205     or that the number of buckets in each prefixdir will be small enough to
206-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
207+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
208     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
209     prefix. On this server, each prefixdir took 130ms-200ms to list the first
210     time, and 17ms to list the second time.
211hunk ./src/allmydata/storage/crawler.py 68
212     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
213     minimum_cycle_time = 300 # don't run a cycle faster than this
214 
215-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
216+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
217         service.MultiService.__init__(self)
218         if allowed_cpu_percentage is not None:
219             self.allowed_cpu_percentage = allowed_cpu_percentage
220hunk ./src/allmydata/storage/crawler.py 72
221-        self.server = server
222-        self.sharedir = server.sharedir
223-        self.statefile = statefile
224+        self.backend = backend
225         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
226                          for i in range(2**10)]
227         self.prefixes.sort()
228hunk ./src/allmydata/storage/crawler.py 446
229 
230     minimum_cycle_time = 60*60 # we don't need this more than once an hour
231 
232-    def __init__(self, server, statefile, num_sample_prefixes=1):
233-        ShareCrawler.__init__(self, server, statefile)
234+    def __init__(self, statefile, num_sample_prefixes=1):
235+        ShareCrawler.__init__(self, statefile)
236         self.num_sample_prefixes = num_sample_prefixes
237 
238     def add_initial_state(self):
239hunk ./src/allmydata/storage/expirer.py 15
240     removed.
241 
242     I collect statistics on the leases and make these available to a web
243-    status page, including::
244+    status page, including:
245 
246     Space recovered during this cycle-so-far:
247      actual (only if expiration_enabled=True):
248hunk ./src/allmydata/storage/expirer.py 51
249     slow_start = 360 # wait 6 minutes after startup
250     minimum_cycle_time = 12*60*60 # not more than twice per day
251 
252-    def __init__(self, server, statefile, historyfile,
253+    def __init__(self, statefile, historyfile,
254                  expiration_enabled, mode,
255                  override_lease_duration, # used if expiration_mode=="age"
256                  cutoff_date, # used if expiration_mode=="cutoff-date"
257hunk ./src/allmydata/storage/expirer.py 71
258         else:
259             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
260         self.sharetypes_to_expire = sharetypes
261-        ShareCrawler.__init__(self, server, statefile)
262+        ShareCrawler.__init__(self, statefile)
263 
264     def add_initial_state(self):
265         # we fill ["cycle-to-date"] here (even though they will be reset in
266hunk ./src/allmydata/storage/immutable.py 44
267     sharetype = "immutable"
268 
269     def __init__(self, filename, max_size=None, create=False):
270-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
271+        """ If max_size is not None then I won't allow more than
272+        max_size to be written to me. If create=True then max_size
273+        must not be None. """
274         precondition((max_size is not None) or (not create), max_size, create)
275         self.home = filename
276         self._max_size = max_size
277hunk ./src/allmydata/storage/immutable.py 87
278 
279     def read_share_data(self, offset, length):
280         precondition(offset >= 0)
281-        # reads beyond the end of the data are truncated. Reads that start
282-        # beyond the end of the data return an empty string. I wonder why
283-        # Python doesn't do the following computation for me?
284+        # Reads beyond the end of the data are truncated. Reads that start
285+        # beyond the end of the data return an empty string.
286         seekpos = self._data_offset+offset
287         fsize = os.path.getsize(self.home)
288         actuallength = max(0, min(length, fsize-seekpos))
289hunk ./src/allmydata/storage/immutable.py 198
290             space_freed += os.stat(self.home)[stat.ST_SIZE]
291             self.unlink()
292         return space_freed
293+class NullBucketWriter(Referenceable):
294+    implements(RIBucketWriter)
295 
296hunk ./src/allmydata/storage/immutable.py 201
297+    def remote_write(self, offset, data):
298+        return
299 
300 class BucketWriter(Referenceable):
301     implements(RIBucketWriter)
302hunk ./src/allmydata/storage/server.py 7
303 from twisted.application import service
304 
305 from zope.interface import implements
306-from allmydata.interfaces import RIStorageServer, IStatsProducer
307+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
308 from allmydata.util import fileutil, idlib, log, time_format
309 import allmydata # for __full_version__
310 
311hunk ./src/allmydata/storage/server.py 16
312 from allmydata.storage.lease import LeaseInfo
313 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
314      create_mutable_sharefile
315-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
316+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
317 from allmydata.storage.crawler import BucketCountingCrawler
318 from allmydata.storage.expirer import LeaseCheckingCrawler
319 
320hunk ./src/allmydata/storage/server.py 20
321+from zope.interface import implements
322+
323+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
324+# be started and stopped.
325+class Backend(service.MultiService):
326+    implements(IStatsProducer)
327+    def __init__(self):
328+        service.MultiService.__init__(self)
329+
330+    def get_bucket_shares(self):
331+        """XXX"""
332+        raise NotImplementedError
333+
334+    def get_share(self):
335+        """XXX"""
336+        raise NotImplementedError
337+
338+    def make_bucket_writer(self):
339+        """XXX"""
340+        raise NotImplementedError
341+
342+class NullBackend(Backend):
343+    def __init__(self):
344+        Backend.__init__(self)
345+
346+    def get_available_space(self):
347+        return None
348+
349+    def get_bucket_shares(self, storage_index):
350+        return set()
351+
352+    def get_share(self, storage_index, sharenum):
353+        return None
354+
355+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
356+        return NullBucketWriter()
357+
358+class FSBackend(Backend):
359+    def __init__(self, storedir, readonly=False, reserved_space=0):
360+        Backend.__init__(self)
361+
362+        self._setup_storage(storedir, readonly, reserved_space)
363+        self._setup_corruption_advisory()
364+        self._setup_bucket_counter()
365+        self._setup_lease_checkerf()
366+
367+    def _setup_storage(self, storedir, readonly, reserved_space):
368+        self.storedir = storedir
369+        self.readonly = readonly
370+        self.reserved_space = int(reserved_space)
371+        if self.reserved_space:
372+            if self.get_available_space() is None:
373+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
374+                        umid="0wZ27w", level=log.UNUSUAL)
375+
376+        self.sharedir = os.path.join(self.storedir, "shares")
377+        fileutil.make_dirs(self.sharedir)
378+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
379+        self._clean_incomplete()
380+
381+    def _clean_incomplete(self):
382+        fileutil.rm_dir(self.incomingdir)
383+        fileutil.make_dirs(self.incomingdir)
384+
385+    def _setup_corruption_advisory(self):
386+        # we don't actually create the corruption-advisory dir until necessary
387+        self.corruption_advisory_dir = os.path.join(self.storedir,
388+                                                    "corruption-advisories")
389+
390+    def _setup_bucket_counter(self):
391+        statefile = os.path.join(self.storedir, "bucket_counter.state")
392+        self.bucket_counter = BucketCountingCrawler(statefile)
393+        self.bucket_counter.setServiceParent(self)
394+
395+    def _setup_lease_checkerf(self):
396+        statefile = os.path.join(self.storedir, "lease_checker.state")
397+        historyfile = os.path.join(self.storedir, "lease_checker.history")
398+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
399+                                   expiration_enabled, expiration_mode,
400+                                   expiration_override_lease_duration,
401+                                   expiration_cutoff_date,
402+                                   expiration_sharetypes)
403+        self.lease_checker.setServiceParent(self)
404+
405+    def get_available_space(self):
406+        if self.readonly:
407+            return 0
408+        return fileutil.get_available_space(self.storedir, self.reserved_space)
409+
410+    def get_bucket_shares(self, storage_index):
411+        """Return a list of (shnum, pathname) tuples for files that hold
412+        shares for this storage_index. In each tuple, 'shnum' will always be
413+        the integer form of the last component of 'pathname'."""
414+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
415+        try:
416+            for f in os.listdir(storagedir):
417+                if NUM_RE.match(f):
418+                    filename = os.path.join(storagedir, f)
419+                    yield (int(f), filename)
420+        except OSError:
421+            # Commonly caused by there being no buckets at all.
422+            pass
423+
424 # storage/
425 # storage/shares/incoming
426 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
427hunk ./src/allmydata/storage/server.py 143
428     name = 'storage'
429     LeaseCheckerClass = LeaseCheckingCrawler
430 
431-    def __init__(self, storedir, nodeid, reserved_space=0,
432-                 discard_storage=False, readonly_storage=False,
433+    def __init__(self, nodeid, backend, reserved_space=0,
434+                 readonly_storage=False,
435                  stats_provider=None,
436                  expiration_enabled=False,
437                  expiration_mode="age",
438hunk ./src/allmydata/storage/server.py 155
439         assert isinstance(nodeid, str)
440         assert len(nodeid) == 20
441         self.my_nodeid = nodeid
442-        self.storedir = storedir
443-        sharedir = os.path.join(storedir, "shares")
444-        fileutil.make_dirs(sharedir)
445-        self.sharedir = sharedir
446-        # we don't actually create the corruption-advisory dir until necessary
447-        self.corruption_advisory_dir = os.path.join(storedir,
448-                                                    "corruption-advisories")
449-        self.reserved_space = int(reserved_space)
450-        self.no_storage = discard_storage
451-        self.readonly_storage = readonly_storage
452         self.stats_provider = stats_provider
453         if self.stats_provider:
454             self.stats_provider.register_producer(self)
455hunk ./src/allmydata/storage/server.py 158
456-        self.incomingdir = os.path.join(sharedir, 'incoming')
457-        self._clean_incomplete()
458-        fileutil.make_dirs(self.incomingdir)
459         self._active_writers = weakref.WeakKeyDictionary()
460hunk ./src/allmydata/storage/server.py 159
461+        self.backend = backend
462+        self.backend.setServiceParent(self)
463         log.msg("StorageServer created", facility="tahoe.storage")
464 
465hunk ./src/allmydata/storage/server.py 163
466-        if reserved_space:
467-            if self.get_available_space() is None:
468-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
469-                        umin="0wZ27w", level=log.UNUSUAL)
470-
471         self.latencies = {"allocate": [], # immutable
472                           "write": [],
473                           "close": [],
474hunk ./src/allmydata/storage/server.py 174
475                           "renew": [],
476                           "cancel": [],
477                           }
478-        self.add_bucket_counter()
479-
480-        statefile = os.path.join(self.storedir, "lease_checker.state")
481-        historyfile = os.path.join(self.storedir, "lease_checker.history")
482-        klass = self.LeaseCheckerClass
483-        self.lease_checker = klass(self, statefile, historyfile,
484-                                   expiration_enabled, expiration_mode,
485-                                   expiration_override_lease_duration,
486-                                   expiration_cutoff_date,
487-                                   expiration_sharetypes)
488-        self.lease_checker.setServiceParent(self)
489 
490     def __repr__(self):
491         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
492hunk ./src/allmydata/storage/server.py 178
493 
494-    def add_bucket_counter(self):
495-        statefile = os.path.join(self.storedir, "bucket_counter.state")
496-        self.bucket_counter = BucketCountingCrawler(self, statefile)
497-        self.bucket_counter.setServiceParent(self)
498-
499     def count(self, name, delta=1):
500         if self.stats_provider:
501             self.stats_provider.count("storage_server." + name, delta)
502hunk ./src/allmydata/storage/server.py 233
503             kwargs["facility"] = "tahoe.storage"
504         return log.msg(*args, **kwargs)
505 
506-    def _clean_incomplete(self):
507-        fileutil.rm_dir(self.incomingdir)
508-
509     def get_stats(self):
510         # remember: RIStatsProvider requires that our return dict
511         # contains numeric values.
512hunk ./src/allmydata/storage/server.py 269
513             stats['storage_server.total_bucket_count'] = bucket_count
514         return stats
515 
516-    def get_available_space(self):
517-        """Returns available space for share storage in bytes, or None if no
518-        API to get this information is available."""
519-
520-        if self.readonly_storage:
521-            return 0
522-        return fileutil.get_available_space(self.storedir, self.reserved_space)
523-
524     def allocated_size(self):
525         space = 0
526         for bw in self._active_writers:
527hunk ./src/allmydata/storage/server.py 276
528         return space
529 
530     def remote_get_version(self):
531-        remaining_space = self.get_available_space()
532+        remaining_space = self.backend.get_available_space()
533         if remaining_space is None:
534             # We're on a platform that has no API to get disk stats.
535             remaining_space = 2**64
536hunk ./src/allmydata/storage/server.py 301
537         self.count("allocate")
538         alreadygot = set()
539         bucketwriters = {} # k: shnum, v: BucketWriter
540-        si_dir = storage_index_to_dir(storage_index)
541-        si_s = si_b2a(storage_index)
542 
543hunk ./src/allmydata/storage/server.py 302
544+        si_s = si_b2a(storage_index)
545         log.msg("storage: allocate_buckets %s" % si_s)
546 
547         # in this implementation, the lease information (including secrets)
548hunk ./src/allmydata/storage/server.py 316
549 
550         max_space_per_bucket = allocated_size
551 
552-        remaining_space = self.get_available_space()
553+        remaining_space = self.backend.get_available_space()
554         limited = remaining_space is not None
555         if limited:
556             # this is a bit conservative, since some of this allocated_size()
557hunk ./src/allmydata/storage/server.py 329
558         # they asked about: this will save them a lot of work. Add or update
559         # leases for all of them: if they want us to hold shares for this
560         # file, they'll want us to hold leases for this file.
561-        for (shnum, fn) in self._get_bucket_shares(storage_index):
562+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
563             alreadygot.add(shnum)
564             sf = ShareFile(fn)
565             sf.add_or_renew_lease(lease_info)
566hunk ./src/allmydata/storage/server.py 335
567 
568         for shnum in sharenums:
569-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
570-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
571-            if os.path.exists(finalhome):
572+            share = self.backend.get_share(storage_index, shnum)
573+
574+            if not share:
575+                if (not limited) or (remaining_space >= max_space_per_bucket):
576+                    # ok! we need to create the new share file.
577+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
578+                                      max_space_per_bucket, lease_info, canary)
579+                    bucketwriters[shnum] = bw
580+                    self._active_writers[bw] = 1
581+                    if limited:
582+                        remaining_space -= max_space_per_bucket
583+                else:
584+                    # bummer! not enough space to accept this bucket
585+                    pass
586+
587+            elif share.is_complete():
588                 # great! we already have it. easy.
589                 pass
590hunk ./src/allmydata/storage/server.py 353
591-            elif os.path.exists(incominghome):
592+            elif not share.is_complete():
593                 # Note that we don't create BucketWriters for shnums that
594                 # have a partial share (in incoming/), so if a second upload
595                 # occurs while the first is still in progress, the second
596hunk ./src/allmydata/storage/server.py 359
597                 # uploader will use different storage servers.
598                 pass
599-            elif (not limited) or (remaining_space >= max_space_per_bucket):
600-                # ok! we need to create the new share file.
601-                bw = BucketWriter(self, incominghome, finalhome,
602-                                  max_space_per_bucket, lease_info, canary)
603-                if self.no_storage:
604-                    bw.throw_out_all_data = True
605-                bucketwriters[shnum] = bw
606-                self._active_writers[bw] = 1
607-                if limited:
608-                    remaining_space -= max_space_per_bucket
609-            else:
610-                # bummer! not enough space to accept this bucket
611-                pass
612-
613-        if bucketwriters:
614-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
615 
616         self.add_latency("allocate", time.time() - start)
617         return alreadygot, bucketwriters
618hunk ./src/allmydata/storage/server.py 437
619             self.stats_provider.count('storage_server.bytes_added', consumed_size)
620         del self._active_writers[bw]
621 
622-    def _get_bucket_shares(self, storage_index):
623-        """Return a list of (shnum, pathname) tuples for files that hold
624-        shares for this storage_index. In each tuple, 'shnum' will always be
625-        the integer form of the last component of 'pathname'."""
626-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
627-        try:
628-            for f in os.listdir(storagedir):
629-                if NUM_RE.match(f):
630-                    filename = os.path.join(storagedir, f)
631-                    yield (int(f), filename)
632-        except OSError:
633-            # Commonly caused by there being no buckets at all.
634-            pass
635 
636     def remote_get_buckets(self, storage_index):
637         start = time.time()
638hunk ./src/allmydata/storage/server.py 444
639         si_s = si_b2a(storage_index)
640         log.msg("storage: get_buckets %s" % si_s)
641         bucketreaders = {} # k: sharenum, v: BucketReader
642-        for shnum, filename in self._get_bucket_shares(storage_index):
643+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
644             bucketreaders[shnum] = BucketReader(self, filename,
645                                                 storage_index, shnum)
646         self.add_latency("get", time.time() - start)
647hunk ./src/allmydata/test/test_backends.py 10
648 import mock
649 
650 # This is the code that we're going to be testing.
651-from allmydata.storage.server import StorageServer
652+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
653 
654 # The following share file contents were generated with
655 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
656hunk ./src/allmydata/test/test_backends.py 21
657 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
658 
659 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
660+    @mock.patch('time.time')
661+    @mock.patch('os.mkdir')
662+    @mock.patch('__builtin__.open')
663+    @mock.patch('os.listdir')
664+    @mock.patch('os.path.isdir')
665+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
666+        """ This tests whether a server instance can be constructed
667+        with a null backend. The server instance fails the test if it
668+        tries to read or write to the file system. """
669+
670+        # Now begin the test.
671+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
672+
673+        self.failIf(mockisdir.called)
674+        self.failIf(mocklistdir.called)
675+        self.failIf(mockopen.called)
676+        self.failIf(mockmkdir.called)
677+
678+        # You passed!
679+
680+    @mock.patch('time.time')
681+    @mock.patch('os.mkdir')
682     @mock.patch('__builtin__.open')
683hunk ./src/allmydata/test/test_backends.py 44
684-    def test_create_server(self, mockopen):
685-        """ This tests whether a server instance can be constructed. """
686+    @mock.patch('os.listdir')
687+    @mock.patch('os.path.isdir')
688+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
689+        """ This tests whether a server instance can be constructed
690+        with a filesystem backend. To pass the test, it has to use the
691+        filesystem in only the prescribed ways. """
692 
693         def call_open(fname, mode):
694             if fname == 'testdir/bucket_counter.state':
695hunk ./src/allmydata/test/test_backends.py 58
696                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
697             elif fname == 'testdir/lease_checker.history':
698                 return StringIO()
699+            else:
700+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
701         mockopen.side_effect = call_open
702 
703         # Now begin the test.
704hunk ./src/allmydata/test/test_backends.py 63
705-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
706+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
707+
708+        self.failIf(mockisdir.called)
709+        self.failIf(mocklistdir.called)
710+        self.failIf(mockopen.called)
711+        self.failIf(mockmkdir.called)
712+        self.failIf(mocktime.called)
713 
714         # You passed!
715 
716hunk ./src/allmydata/test/test_backends.py 73
717-class TestServer(unittest.TestCase, ReallyEqualMixin):
718+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
719+    def setUp(self):
720+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
721+
722+    @mock.patch('os.mkdir')
723+    @mock.patch('__builtin__.open')
724+    @mock.patch('os.listdir')
725+    @mock.patch('os.path.isdir')
726+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
727+        """ Write a new share. """
728+
729+        # Now begin the test.
730+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
731+        bs[0].remote_write(0, 'a')
732+        self.failIf(mockisdir.called)
733+        self.failIf(mocklistdir.called)
734+        self.failIf(mockopen.called)
735+        self.failIf(mockmkdir.called)
736+
737+    @mock.patch('os.path.exists')
738+    @mock.patch('os.path.getsize')
739+    @mock.patch('__builtin__.open')
740+    @mock.patch('os.listdir')
741+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
742+        """ This tests that reading shares from a server with a
743+        null backend returns no buckets and does not touch the
744+        filesystem. """
748+
749+        # Now begin the test.
750+        bs = self.s.remote_get_buckets('teststorage_index')
751+
752+        self.failUnlessEqual(len(bs), 0)
753+        self.failIf(mocklistdir.called)
754+        self.failIf(mockopen.called)
755+        self.failIf(mockgetsize.called)
756+        self.failIf(mockexists.called)
757+
758+
759+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
760     @mock.patch('__builtin__.open')
761     def setUp(self, mockopen):
762         def call_open(fname, mode):
763hunk ./src/allmydata/test/test_backends.py 126
764                 return StringIO()
765         mockopen.side_effect = call_open
766 
767-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
768-
769+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
770 
771     @mock.patch('time.time')
772     @mock.patch('os.mkdir')
773hunk ./src/allmydata/test/test_backends.py 134
774     @mock.patch('os.listdir')
775     @mock.patch('os.path.isdir')
776     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
777-        """Handle a report of corruption."""
778+        """ Write a new share. """
779 
780         def call_listdir(dirname):
781             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
782hunk ./src/allmydata/test/test_backends.py 173
783         mockopen.side_effect = call_open
784         # Now begin the test.
785         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
786-        print bs
787         bs[0].remote_write(0, 'a')
788         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
789 
790hunk ./src/allmydata/test/test_backends.py 176
791-
792     @mock.patch('os.path.exists')
793     @mock.patch('os.path.getsize')
794     @mock.patch('__builtin__.open')
795hunk ./src/allmydata/test/test_backends.py 218
796 
797         self.failUnlessEqual(len(bs), 1)
798         b = bs[0]
799+        # These should match by definition; the next two cases cover behaviors that are less clearly specified.
800         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
801         # If you try to read past the end you get as much data as is there.
802         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
803hunk ./src/allmydata/test/test_backends.py 224
804         # If you start reading past the end of the file you get the empty string.
805         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
806+
807+
808}
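The tests above rely on stacked mock.patch decorators to prove that a null backend never touches the filesystem. As a self-contained sketch of that pattern (NullStore and the test class here are illustrative stand-ins, not code from this patch), note that the bottom-most decorator supplies the first mock argument:

```python
import unittest
from unittest import mock

class NullStore(object):
    """Toy stand-in for a null backend: accepts writes, stores nothing."""
    def put(self, key, data):
        pass
    def get(self, key):
        return None

class TestNullStoreTouchesNoFiles(unittest.TestCase):
    # Decorators apply bottom-up: 'os.listdir' (bottom) becomes the
    # first mock argument, 'os.mkdir' (top) the last.
    @mock.patch('os.mkdir')
    @mock.patch('builtins.open')
    @mock.patch('os.listdir')
    def test_no_fs_calls(self, mocklistdir, mockopen, mockmkdir):
        s = NullStore()
        s.put('k', b'data')
        self.assertIsNone(s.get('k'))
        # Each patched function records whether anything called it.
        self.assertFalse(mocklistdir.called)
        self.assertFalse(mockopen.called)
        self.assertFalse(mockmkdir.called)
```

If the code under test had opened a file or listed a directory, the corresponding `.called` flag would flip to True and the test would fail, which is exactly how the patch's `failIf(mockopen.called)` assertions work.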
809[a temp patch used as a snapshot
810wilcoxjg@gmail.com**20110626052732
811 Ignore-this: 95f05e314eaec870afa04c76d979aa44
812] {
813hunk ./docs/configuration.rst 637
814   [storage]
815   enabled = True
816   readonly = True
817-  sizelimit = 10000000000
818 
819 
820   [helper]
821hunk ./docs/garbage-collection.rst 16
822 
823 When a file or directory in the virtual filesystem is no longer referenced,
824 the space that its shares occupied on each storage server can be freed,
825-making room for other shares. Tahoe currently uses a garbage collection
826+making room for other shares. Tahoe uses a garbage collection
827 ("GC") mechanism to implement this space-reclamation process. Each share has
828 one or more "leases", which are managed by clients who want the
829 file/directory to be retained. The storage server accepts each share for a
830hunk ./docs/garbage-collection.rst 34
831 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
832 If lease renewal occurs quickly and with 100% reliability, then any renewal
833 time that is shorter than the lease duration will suffice, but a larger ratio
834-of duration-over-renewal-time will be more robust in the face of occasional
835+of lease duration to renewal time will be more robust in the face of occasional
836 delays or failures.
837 
838 The current recommended values for a small Tahoe grid are to renew the leases
839replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
840hunk ./src/allmydata/client.py 260
841             sharetypes.append("mutable")
842         expiration_sharetypes = tuple(sharetypes)
843 
844+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
845+            xyz
846+        xyz
847         ss = StorageServer(storedir, self.nodeid,
848                            reserved_space=reserved,
849                            discard_storage=discard,
850hunk ./src/allmydata/storage/crawler.py 234
851         f = open(tmpfile, "wb")
852         pickle.dump(self.state, f)
853         f.close()
854-        fileutil.move_into_place(tmpfile, self.statefile)
855+        fileutil.move_into_place(tmpfile, self.statefname)
856 
857     def startService(self):
858         # arrange things to look like we were just sleeping, so
859}
860[snapshot of progress on backend implementation (not suitable for trunk)
861wilcoxjg@gmail.com**20110626053244
862 Ignore-this: 50c764af791c2b99ada8289546806a0a
863] {
864adddir ./src/allmydata/storage/backends
865adddir ./src/allmydata/storage/backends/das
866move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
867adddir ./src/allmydata/storage/backends/null
868hunk ./src/allmydata/interfaces.py 270
869         store that on disk.
870         """
871 
872+class IStorageBackend(Interface):
873+    """
874+    Objects of this kind live on the server side and are used by the
875+    storage server object.
876+    """
877+    def get_available_space(self, reserved_space):
878+        """ Returns available space for share storage in bytes, or
879+        None if this information is not available or if the available
880+        space is unlimited.
881+
882+        If the backend is configured for read-only mode then this will
883+        return 0.
884+
885+        reserved_space is how many bytes to subtract from the answer, so
886+        you can pass how many bytes you would like to leave unused on this
887+        filesystem as reserved_space. """
888+
889+    def get_bucket_shares(self):
890+        """XXX"""
891+
892+    def get_share(self):
893+        """XXX"""
894+
895+    def make_bucket_writer(self):
896+        """XXX"""
897+
898+class IStorageBackendShare(Interface):
899+    """
900+    This object may contain anything up to all of the share data.  It is
901+    intended for lazy evaluation, such that in many use cases
902+    substantially less than all of the share data will be accessed.
903+    """
904+    def is_complete(self):
905+        """
906+        Returns the share state, or None if the share does not exist.
907+        """
908+
909 class IStorageBucketWriter(Interface):
910     """
911     Objects of this kind live on the client side.
912hunk ./src/allmydata/interfaces.py 2492
913 
914 class EmptyPathnameComponentError(Exception):
915     """The webapi disallows empty pathname components."""
916+
917+class IShareStore(Interface):
918+    pass
919+
920addfile ./src/allmydata/storage/backends/__init__.py
921addfile ./src/allmydata/storage/backends/das/__init__.py
922addfile ./src/allmydata/storage/backends/das/core.py
923hunk ./src/allmydata/storage/backends/das/core.py 1
924+from allmydata.interfaces import IStorageBackend
925+from allmydata.storage.backends.base import Backend
926+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
927+from allmydata.util.assertutil import precondition
928+
929+import os, re, stat, struct, time, weakref
930+
931+from foolscap.api import Referenceable
932+from twisted.application import service
933+
934+from zope.interface import implements
935+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
936+from allmydata.util import fileutil, idlib, log, time_format
937+import allmydata # for __full_version__
938+
939+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir, \
939+     UnknownImmutableContainerVersionError, DataTooLargeError
940+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
941+from allmydata.util.hashutil import constant_time_compare
941+from allmydata.storage.lease import LeaseInfo
942+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
943+     create_mutable_sharefile
944+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
945+from allmydata.storage.crawler import FSBucketCountingCrawler
946+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
947+
948+from zope.interface import implements
949+
949+# Share files are named by their decimal share number.
949+NUM_RE = re.compile("^[0-9]+$")
950+class DASCore(Backend):
951+    implements(IStorageBackend)
952+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
953+        Backend.__init__(self)
954+
955+        self._setup_storage(storedir, readonly, reserved_space)
956+        self._setup_corruption_advisory()
957+        self._setup_bucket_counter()
958+        self._setup_lease_checkerf(expiration_policy)
959+
960+    def _setup_storage(self, storedir, readonly, reserved_space):
961+        self.storedir = storedir
962+        self.readonly = readonly
963+        self.reserved_space = int(reserved_space)
964+        if self.reserved_space:
965+            if self.get_available_space() is None:
966+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
967+                        umid="0wZ27w", level=log.UNUSUAL)
968+
969+        self.sharedir = os.path.join(self.storedir, "shares")
970+        fileutil.make_dirs(self.sharedir)
971+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
972+        self._clean_incomplete()
973+
974+    def _clean_incomplete(self):
975+        fileutil.rm_dir(self.incomingdir)
976+        fileutil.make_dirs(self.incomingdir)
977+
978+    def _setup_corruption_advisory(self):
979+        # we don't actually create the corruption-advisory dir until necessary
980+        self.corruption_advisory_dir = os.path.join(self.storedir,
981+                                                    "corruption-advisories")
982+
983+    def _setup_bucket_counter(self):
984+        statefname = os.path.join(self.storedir, "bucket_counter.state")
985+        self.bucket_counter = FSBucketCountingCrawler(statefname)
986+        self.bucket_counter.setServiceParent(self)
987+
988+    def _setup_lease_checkerf(self, expiration_policy):
989+        statefile = os.path.join(self.storedir, "lease_checker.state")
990+        historyfile = os.path.join(self.storedir, "lease_checker.history")
991+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
992+        self.lease_checker.setServiceParent(self)
993+
994+    def get_available_space(self):
995+        if self.readonly:
996+            return 0
997+        return fileutil.get_available_space(self.storedir, self.reserved_space)
998+
999+    def get_shares(self, storage_index):
1000+        """Yield the FSBShare objects that correspond to the passed storage_index."""
1001+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1002+        try:
1003+            for f in os.listdir(finalstoragedir):
1004+                if NUM_RE.match(f):
1005+                    filename = os.path.join(finalstoragedir, f)
1006+                    yield FSBShare(filename, int(f))
1007+        except OSError:
1008+            # Commonly caused by there being no buckets at all.
1009+            pass
1010+       
1011+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1012+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1013+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1014+        return bw
1015+       
1016+
1017+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1018+# and share data. The share data is accessed by RIBucketWriter.write and
1019+# RIBucketReader.read . The lease information is not accessible through these
1020+# interfaces.
1021+
1022+# The share file has the following layout:
1023+#  0x00: share file version number, four bytes, current version is 1
1024+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1025+#  0x08: number of leases, four bytes big-endian
1026+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1027+#  A+0x0c = B: first lease. Lease format is:
1028+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1029+#   B+0x04: renew secret, 32 bytes (SHA256)
1030+#   B+0x24: cancel secret, 32 bytes (SHA256)
1031+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1032+#   B+0x48: next lease, or end of record
1033+
1034+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1035+# but it is still filled in by storage servers in case the storage server
1036+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1037+# share file is moved from one storage server to another. The value stored in
1038+# this field is truncated, so if the actual share data length is >= 2**32,
1039+# then the value stored in this field will be the actual share data length
1040+# modulo 2**32.
1041+
1042+class ImmutableShare:
1043+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1044+    sharetype = "immutable"
1045+
1046+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1047+        """ If max_size is not None then I won't allow more than
1048+        max_size to be written to me. If create=True then max_size
1049+        must not be None. """
1050+        precondition((max_size is not None) or (not create), max_size, create)
1051+        self.shnum = shnum
1052+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1053+        self._max_size = max_size
1054+        if create:
1055+            # touch the file, so later callers will see that we're working on
1056+            # it. Also construct the metadata.
1057+            assert not os.path.exists(self.fname)
1058+            fileutil.make_dirs(os.path.dirname(self.fname))
1059+            f = open(self.fname, 'wb')
1060+            # The second field -- the four-byte share data length -- is no
1061+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1062+            # there in case someone downgrades a storage server from >=
1063+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1064+            # server to another, etc. We do saturation -- a share data length
1065+            # larger than 2**32-1 (what can fit into the field) is marked as
1066+            # the largest length that can fit into the field. That way, even
1067+            # if this does happen, the old < v1.3.0 server will still allow
1068+            # clients to read the first part of the share.
1069+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1070+            f.close()
1071+            self._lease_offset = max_size + 0x0c
1072+            self._num_leases = 0
1073+        else:
1074+            f = open(self.fname, 'rb')
1075+            filesize = os.path.getsize(self.fname)
1076+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1077+            f.close()
1078+            if version != 1:
1079+                msg = "sharefile %s had version %d but we wanted 1" % \
1080+                      (self.fname, version)
1081+                raise UnknownImmutableContainerVersionError(msg)
1082+            self._num_leases = num_leases
1083+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1084+        self._data_offset = 0xc
1085+
1086+    def unlink(self):
1087+        os.unlink(self.fname)
1088+
1089+    def read_share_data(self, offset, length):
1090+        precondition(offset >= 0)
1091+        # Reads beyond the end of the data are truncated. Reads that start
1092+        # beyond the end of the data return an empty string.
1093+        seekpos = self._data_offset+offset
1094+        fsize = os.path.getsize(self.fname)
1095+        actuallength = max(0, min(length, fsize-seekpos))
1096+        if actuallength == 0:
1097+            return ""
1098+        f = open(self.fname, 'rb')
1099+        f.seek(seekpos)
1100+        return f.read(actuallength)
1101+
1102+    def write_share_data(self, offset, data):
1103+        length = len(data)
1104+        precondition(offset >= 0, offset)
1105+        if self._max_size is not None and offset+length > self._max_size:
1106+            raise DataTooLargeError(self._max_size, offset, length)
1107+        f = open(self.fname, 'rb+')
1108+        real_offset = self._data_offset+offset
1109+        f.seek(real_offset)
1110+        assert f.tell() == real_offset
1111+        f.write(data)
1112+        f.close()
1113+
1114+    def _write_lease_record(self, f, lease_number, lease_info):
1115+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1116+        f.seek(offset)
1117+        assert f.tell() == offset
1118+        f.write(lease_info.to_immutable_data())
1119+
1120+    def _read_num_leases(self, f):
1121+        f.seek(0x08)
1122+        (num_leases,) = struct.unpack(">L", f.read(4))
1123+        return num_leases
1124+
1125+    def _write_num_leases(self, f, num_leases):
1126+        f.seek(0x08)
1127+        f.write(struct.pack(">L", num_leases))
1128+
1129+    def _truncate_leases(self, f, num_leases):
1130+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1131+
1132+    def get_leases(self):
1133+        """Yields a LeaseInfo instance for all leases."""
1134+        f = open(self.fname, 'rb')
1135+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1136+        f.seek(self._lease_offset)
1137+        for i in range(num_leases):
1138+            data = f.read(self.LEASE_SIZE)
1139+            if data:
1140+                yield LeaseInfo().from_immutable_data(data)
1141+
1142+    def add_lease(self, lease_info):
1143+        f = open(self.fname, 'rb+')
1144+        num_leases = self._read_num_leases(f)
1145+        self._write_lease_record(f, num_leases, lease_info)
1146+        self._write_num_leases(f, num_leases+1)
1147+        f.close()
1148+
1149+    def renew_lease(self, renew_secret, new_expire_time):
1150+        for i,lease in enumerate(self.get_leases()):
1151+            if constant_time_compare(lease.renew_secret, renew_secret):
1152+                # yup. See if we need to update the owner time.
1153+                if new_expire_time > lease.expiration_time:
1154+                    # yes
1155+                    lease.expiration_time = new_expire_time
1156+                    f = open(self.fname, 'rb+')
1157+                    self._write_lease_record(f, i, lease)
1158+                    f.close()
1159+                return
1160+        raise IndexError("unable to renew non-existent lease")
1161+
1162+    def add_or_renew_lease(self, lease_info):
1163+        try:
1164+            self.renew_lease(lease_info.renew_secret,
1165+                             lease_info.expiration_time)
1166+        except IndexError:
1167+            self.add_lease(lease_info)
1168+
1169+
1170+    def cancel_lease(self, cancel_secret):
1171+        """Remove a lease with the given cancel_secret. If the last lease is
1172+        cancelled, the file will be removed. Return the number of bytes that
1173+        were freed (by truncating the list of leases, and possibly by
1174+        deleting the file). Raise IndexError if there was no lease with the
1175+        given cancel_secret.
1176+        """
1177+
1178+        leases = list(self.get_leases())
1179+        num_leases_removed = 0
1180+        for i,lease in enumerate(leases):
1181+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1182+                leases[i] = None
1183+                num_leases_removed += 1
1184+        if not num_leases_removed:
1185+            raise IndexError("unable to find matching lease to cancel")
1186+        if num_leases_removed:
1187+            # pack and write out the remaining leases. We write these out in
1188+            # the same order as they were added, so that if we crash while
1189+            # doing this, we won't lose any non-cancelled leases.
1190+            leases = [l for l in leases if l] # remove the cancelled leases
1191+            f = open(self.fname, 'rb+')
1192+            for i,lease in enumerate(leases):
1193+                self._write_lease_record(f, i, lease)
1194+            self._write_num_leases(f, len(leases))
1195+            self._truncate_leases(f, len(leases))
1196+            f.close()
1197+        space_freed = self.LEASE_SIZE * num_leases_removed
1198+        if not len(leases):
1199+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1200+            self.unlink()
1201+        return space_freed
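The layout comment and ImmutableShare code above pack and unpack the 12-byte v1 share file header with `struct.pack(">LLL", ...)`. A small self-contained sketch of just that header format (the names HEADER, HEADER_SIZE, and parse_share_header are mine, for illustration; only the struct format strings come from the patch):

```python
import struct

# Per the layout comment: version, share data length (saturated at
# 2**32-1), and number of leases, all four-byte big-endian.
HEADER = ">LLL"
HEADER_SIZE = struct.calcsize(HEADER)        # 0x0c = 12 bytes
# Lease record: owner number, renew secret, cancel secret, expiry time.
LEASE_SIZE = struct.calcsize(">L32s32sL")    # 72 bytes

def parse_share_header(header_bytes):
    """Unpack a v1 share file header, as ImmutableShare.__init__ does
    when opening an existing share."""
    version, data_length, num_leases = struct.unpack(
        HEADER, header_bytes[:HEADER_SIZE])
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    return version, data_length, num_leases

# Building a header the way the create=True branch does, for a
# 1000-byte share with no leases written yet:
hdr = struct.pack(HEADER, 1, min(2**32 - 1, 1000), 0)
```

Given the file size and the lease count from the header, the lease area starts at `filesize - num_leases * LEASE_SIZE`, which is how `_lease_offset` is computed above for existing shares.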
1202hunk ./src/allmydata/storage/backends/das/expirer.py 2
1203 import time, os, pickle, struct
1204-from allmydata.storage.crawler import ShareCrawler
1205-from allmydata.storage.shares import get_share_file
1206+from allmydata.storage.crawler import FSShareCrawler
1206+from allmydata.storage.mutable import MutableShareFile
1207 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1208      UnknownImmutableContainerVersionError
1209 from twisted.python import log as twlog
1210hunk ./src/allmydata/storage/backends/das/expirer.py 7
1211 
1212-class LeaseCheckingCrawler(ShareCrawler):
1213+class FSLeaseCheckingCrawler(FSShareCrawler):
1214     """I examine the leases on all shares, determining which are still valid
1215     and which have expired. I can remove the expired leases (if so
1216     configured), and the share will be deleted when the last lease is
1217hunk ./src/allmydata/storage/backends/das/expirer.py 50
1218     slow_start = 360 # wait 6 minutes after startup
1219     minimum_cycle_time = 12*60*60 # not more than twice per day
1220 
1221-    def __init__(self, statefile, historyfile,
1222-                 expiration_enabled, mode,
1223-                 override_lease_duration, # used if expiration_mode=="age"
1224-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1225-                 sharetypes):
1226+    def __init__(self, statefile, historyfile, expiration_policy):
1227         self.historyfile = historyfile
1228hunk ./src/allmydata/storage/backends/das/expirer.py 52
1229-        self.expiration_enabled = expiration_enabled
1230-        self.mode = mode
1231+        self.expiration_enabled = expiration_policy['enabled']
1232+        self.mode = expiration_policy['mode']
1233         self.override_lease_duration = None
1234         self.cutoff_date = None
1235         if self.mode == "age":
1236hunk ./src/allmydata/storage/backends/das/expirer.py 57
1237-            assert isinstance(override_lease_duration, (int, type(None)))
1238-            self.override_lease_duration = override_lease_duration # seconds
1239+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1240+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1241         elif self.mode == "cutoff-date":
1242hunk ./src/allmydata/storage/backends/das/expirer.py 60
1243-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1244+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1245-            assert cutoff_date is not None
1245+            assert expiration_policy['cutoff_date'] is not None
1246hunk ./src/allmydata/storage/backends/das/expirer.py 62
1247-            self.cutoff_date = cutoff_date
1248+            self.cutoff_date = expiration_policy['cutoff_date']
1249         else:
1250hunk ./src/allmydata/storage/backends/das/expirer.py 64
1251-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1252-        self.sharetypes_to_expire = sharetypes
1253-        ShareCrawler.__init__(self, statefile)
1254+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1255+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1256+        FSShareCrawler.__init__(self, statefile)
1257 
1258     def add_initial_state(self):
1259         # we fill ["cycle-to-date"] here (even though they will be reset in
1260hunk ./src/allmydata/storage/backends/das/expirer.py 156
1261 
1262     def process_share(self, sharefilename):
1263         # first, find out what kind of a share it is
1264-        sf = get_share_file(sharefilename)
1265+        f = open(sharefilename, "rb")
1266+        prefix = f.read(32)
1267+        f.close()
1268+        if prefix == MutableShareFile.MAGIC:
1269+            sf = MutableShareFile(sharefilename)
1270+        else:
1271+            # otherwise assume it's immutable
1272+            sf = FSBShare(sharefilename)
1273         sharetype = sf.sharetype
1274         now = time.time()
1275         s = self.stat(sharefilename)
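The refactored FSLeaseCheckingCrawler constructor above replaces five separate arguments with a single expiration_policy dict. A sketch of what such a dict looks like, plus a validator mirroring the constructor's checks (the concrete values are illustrative, not defaults from the patch):

```python
# Keys read by the refactored constructor: 'enabled', 'mode',
# 'override_lease_duration', 'cutoff_date', and 'sharetypes'.
age_policy = {
    'enabled': True,
    'mode': 'age',
    'override_lease_duration': 31 * 24 * 60 * 60,  # seconds
    'cutoff_date': None,
    'sharetypes': ('mutable', 'immutable'),
}

def validate_policy(policy):
    """Mirror the constructor's sanity checks: 'age' mode needs an
    int-or-None override_lease_duration, 'cutoff-date' mode needs an
    int cutoff_date (seconds-since-epoch)."""
    if policy['mode'] == 'age':
        assert isinstance(policy['override_lease_duration'], (int, type(None)))
    elif policy['mode'] == 'cutoff-date':
        assert isinstance(policy['cutoff_date'], int)
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'"
                         % policy['mode'])
    return True
```

Bundling the policy into one dict means callers such as client.py can build it from the [storage] config section in one place and pass it through unchanged.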
1276addfile ./src/allmydata/storage/backends/null/__init__.py
1277addfile ./src/allmydata/storage/backends/null/core.py
1278hunk ./src/allmydata/storage/backends/null/core.py 1
1279+from allmydata.storage.backends.base import Backend
1280+
1281+class NullCore(Backend):
1282+    def __init__(self):
1283+        Backend.__init__(self)
1284+
1285+    def get_available_space(self):
1286+        return None
1287+
1288+    def get_shares(self, storage_index):
1289+        return set()
1290+
1291+    def get_share(self, storage_index, sharenum):
1292+        return None
1293+
1294+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1295+        return NullBucketWriter()
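NullCore above is a null-object backend: it satisfies the backend interface while storing nothing, which is what lets the mocked tests construct a StorageServer with no filesystem access. A condensed standalone sketch of the pattern (method names follow the patch; Backend and report_space are simplified stand-ins):

```python
class Backend(object):
    """Minimal stand-in for the patch's backend base class."""
    pass

class NullBackend(Backend):
    """Null-object backend: same interface as a real backend, no state.
    Mirrors the shape of NullCore above with simplified bodies."""
    def get_available_space(self):
        return None          # None means unknown/unlimited space
    def get_shares(self, storage_index):
        return set()         # never holds any shares
    def get_share(self, storage_index, sharenum):
        return None          # no individual share either

def report_space(backend):
    """Callers treat None from get_available_space() as unlimited."""
    avail = backend.get_available_space()
    return "unlimited" if avail is None else str(avail)
```

Because every method is a harmless no-op, server-side code can be exercised against a NullBackend without any of the os.mkdir/open/listdir calls that the FS backend would make.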
1296hunk ./src/allmydata/storage/crawler.py 12
1297 class TimeSliceExceeded(Exception):
1298     pass
1299 
1300-class ShareCrawler(service.MultiService):
1301+class FSShareCrawler(service.MultiService):
1302     """A subclass of ShareCrawler is attached to a StorageServer, and
1303     periodically walks all of its shares, processing each one in some
1304     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1305hunk ./src/allmydata/storage/crawler.py 68
1306     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1307     minimum_cycle_time = 300 # don't run a cycle faster than this
1308 
1309-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1310+    def __init__(self, statefname, allowed_cpu_percentage=None):
1311         service.MultiService.__init__(self)
1312         if allowed_cpu_percentage is not None:
1313             self.allowed_cpu_percentage = allowed_cpu_percentage
1314hunk ./src/allmydata/storage/crawler.py 72
1315-        self.backend = backend
1316+        self.statefname = statefname
1317         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1318                          for i in range(2**10)]
1319         self.prefixes.sort()
1320hunk ./src/allmydata/storage/crawler.py 192
1321         #                            of the last bucket to be processed, or
1322         #                            None if we are sleeping between cycles
1323         try:
1324-            f = open(self.statefile, "rb")
1325+            f = open(self.statefname, "rb")
1326             state = pickle.load(f)
1327             f.close()
1328         except EnvironmentError:
1329hunk ./src/allmydata/storage/crawler.py 230
1330         else:
1331             last_complete_prefix = self.prefixes[lcpi]
1332         self.state["last-complete-prefix"] = last_complete_prefix
1333-        tmpfile = self.statefile + ".tmp"
1334+        tmpfile = self.statefname + ".tmp"
1335         f = open(tmpfile, "wb")
1336         pickle.dump(self.state, f)
1337         f.close()
1338hunk ./src/allmydata/storage/crawler.py 433
1339         pass
1340 
1341 
1342-class BucketCountingCrawler(ShareCrawler):
1343+class FSBucketCountingCrawler(FSShareCrawler):
1344     """I keep track of how many buckets are being managed by this server.
1345     This is equivalent to the number of distributed files and directories for
1346     which I am providing storage. The actual number of files+directories in
1347hunk ./src/allmydata/storage/crawler.py 446
1348 
1349     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1350 
1351-    def __init__(self, statefile, num_sample_prefixes=1):
1352-        ShareCrawler.__init__(self, statefile)
1353+    def __init__(self, statefname, num_sample_prefixes=1):
1354+        FSShareCrawler.__init__(self, statefname)
1355         self.num_sample_prefixes = num_sample_prefixes
1356 
1357     def add_initial_state(self):
1358hunk ./src/allmydata/storage/immutable.py 14
1359 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1360      DataTooLargeError
1361 
1362-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1363-# and share data. The share data is accessed by RIBucketWriter.write and
1364-# RIBucketReader.read . The lease information is not accessible through these
1365-# interfaces.
1366-
1367-# The share file has the following layout:
1368-#  0x00: share file version number, four bytes, current version is 1
1369-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1370-#  0x08: number of leases, four bytes big-endian
1371-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1372-#  A+0x0c = B: first lease. Lease format is:
1373-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1374-#   B+0x04: renew secret, 32 bytes (SHA256)
1375-#   B+0x24: cancel secret, 32 bytes (SHA256)
1376-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1377-#   B+0x48: next lease, or end of record
1378-
1379-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1380-# but it is still filled in by storage servers in case the storage server
1381-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1382-# share file is moved from one storage server to another. The value stored in
1383-# this field is truncated, so if the actual share data length is >= 2**32,
1384-# then the value stored in this field will be the actual share data length
1385-# modulo 2**32.
1386-
1387-class ShareFile:
1388-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1389-    sharetype = "immutable"
1390-
1391-    def __init__(self, filename, max_size=None, create=False):
1392-        """ If max_size is not None then I won't allow more than
1393-        max_size to be written to me. If create=True then max_size
1394-        must not be None. """
1395-        precondition((max_size is not None) or (not create), max_size, create)
1396-        self.home = filename
1397-        self._max_size = max_size
1398-        if create:
1399-            # touch the file, so later callers will see that we're working on
1400-            # it. Also construct the metadata.
1401-            assert not os.path.exists(self.home)
1402-            fileutil.make_dirs(os.path.dirname(self.home))
1403-            f = open(self.home, 'wb')
1404-            # The second field -- the four-byte share data length -- is no
1405-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1406-            # there in case someone downgrades a storage server from >=
1407-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1408-            # server to another, etc. We do saturation -- a share data length
1409-            # larger than 2**32-1 (what can fit into the field) is marked as
1410-            # the largest length that can fit into the field. That way, even
1411-            # if this does happen, the old < v1.3.0 server will still allow
1412-            # clients to read the first part of the share.
1413-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1414-            f.close()
1415-            self._lease_offset = max_size + 0x0c
1416-            self._num_leases = 0
1417-        else:
1418-            f = open(self.home, 'rb')
1419-            filesize = os.path.getsize(self.home)
1420-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1421-            f.close()
1422-            if version != 1:
1423-                msg = "sharefile %s had version %d but we wanted 1" % \
1424-                      (filename, version)
1425-                raise UnknownImmutableContainerVersionError(msg)
1426-            self._num_leases = num_leases
1427-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1428-        self._data_offset = 0xc
1429-
1430-    def unlink(self):
1431-        os.unlink(self.home)
1432-
1433-    def read_share_data(self, offset, length):
1434-        precondition(offset >= 0)
1435-        # Reads beyond the end of the data are truncated. Reads that start
1436-        # beyond the end of the data return an empty string.
1437-        seekpos = self._data_offset+offset
1438-        fsize = os.path.getsize(self.home)
1439-        actuallength = max(0, min(length, fsize-seekpos))
1440-        if actuallength == 0:
1441-            return ""
1442-        f = open(self.home, 'rb')
1443-        f.seek(seekpos)
1444-        return f.read(actuallength)
1445-
1446-    def write_share_data(self, offset, data):
1447-        length = len(data)
1448-        precondition(offset >= 0, offset)
1449-        if self._max_size is not None and offset+length > self._max_size:
1450-            raise DataTooLargeError(self._max_size, offset, length)
1451-        f = open(self.home, 'rb+')
1452-        real_offset = self._data_offset+offset
1453-        f.seek(real_offset)
1454-        assert f.tell() == real_offset
1455-        f.write(data)
1456-        f.close()
1457-
1458-    def _write_lease_record(self, f, lease_number, lease_info):
1459-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1460-        f.seek(offset)
1461-        assert f.tell() == offset
1462-        f.write(lease_info.to_immutable_data())
1463-
1464-    def _read_num_leases(self, f):
1465-        f.seek(0x08)
1466-        (num_leases,) = struct.unpack(">L", f.read(4))
1467-        return num_leases
1468-
1469-    def _write_num_leases(self, f, num_leases):
1470-        f.seek(0x08)
1471-        f.write(struct.pack(">L", num_leases))
1472-
1473-    def _truncate_leases(self, f, num_leases):
1474-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1475-
1476-    def get_leases(self):
1477-        """Yields a LeaseInfo instance for all leases."""
1478-        f = open(self.home, 'rb')
1479-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1480-        f.seek(self._lease_offset)
1481-        for i in range(num_leases):
1482-            data = f.read(self.LEASE_SIZE)
1483-            if data:
1484-                yield LeaseInfo().from_immutable_data(data)
1485-
1486-    def add_lease(self, lease_info):
1487-        f = open(self.home, 'rb+')
1488-        num_leases = self._read_num_leases(f)
1489-        self._write_lease_record(f, num_leases, lease_info)
1490-        self._write_num_leases(f, num_leases+1)
1491-        f.close()
1492-
1493-    def renew_lease(self, renew_secret, new_expire_time):
1494-        for i,lease in enumerate(self.get_leases()):
1495-            if constant_time_compare(lease.renew_secret, renew_secret):
1496-                # yup. See if we need to update the owner time.
1497-                if new_expire_time > lease.expiration_time:
1498-                    # yes
1499-                    lease.expiration_time = new_expire_time
1500-                    f = open(self.home, 'rb+')
1501-                    self._write_lease_record(f, i, lease)
1502-                    f.close()
1503-                return
1504-        raise IndexError("unable to renew non-existent lease")
1505-
1506-    def add_or_renew_lease(self, lease_info):
1507-        try:
1508-            self.renew_lease(lease_info.renew_secret,
1509-                             lease_info.expiration_time)
1510-        except IndexError:
1511-            self.add_lease(lease_info)
1512-
1513-
1514-    def cancel_lease(self, cancel_secret):
1515-        """Remove a lease with the given cancel_secret. If the last lease is
1516-        cancelled, the file will be removed. Return the number of bytes that
1517-        were freed (by truncating the list of leases, and possibly by
1518-        deleting the file. Raise IndexError if there was no lease with the
1519-        given cancel_secret.
1520-        """
1521-
1522-        leases = list(self.get_leases())
1523-        num_leases_removed = 0
1524-        for i,lease in enumerate(leases):
1525-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1526-                leases[i] = None
1527-                num_leases_removed += 1
1528-        if not num_leases_removed:
1529-            raise IndexError("unable to find matching lease to cancel")
1530-        if num_leases_removed:
1531-            # pack and write out the remaining leases. We write these out in
1532-            # the same order as they were added, so that if we crash while
1533-            # doing this, we won't lose any non-cancelled leases.
1534-            leases = [l for l in leases if l] # remove the cancelled leases
1535-            f = open(self.home, 'rb+')
1536-            for i,lease in enumerate(leases):
1537-                self._write_lease_record(f, i, lease)
1538-            self._write_num_leases(f, len(leases))
1539-            self._truncate_leases(f, len(leases))
1540-            f.close()
1541-        space_freed = self.LEASE_SIZE * num_leases_removed
1542-        if not len(leases):
1543-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1544-            self.unlink()
1545-        return space_freed
1546-class NullBucketWriter(Referenceable):
1547-    implements(RIBucketWriter)
1548-
1549-    def remote_write(self, offset, data):
1550-        return
1551-
1552 class BucketWriter(Referenceable):
1553     implements(RIBucketWriter)
1554 
1555hunk ./src/allmydata/storage/immutable.py 17
1556-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1557+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1558         self.ss = ss
1559hunk ./src/allmydata/storage/immutable.py 19
1560-        self.incominghome = incominghome
1561-        self.finalhome = finalhome
1562         self._max_size = max_size # don't allow the client to write more than this
1563         self._canary = canary
1564         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1565hunk ./src/allmydata/storage/immutable.py 24
1566         self.closed = False
1567         self.throw_out_all_data = False
1568-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1569+        self._sharefile = immutableshare
1570         # also, add our lease to the file now, so that other ones can be
1571         # added by simultaneous uploaders
1572         self._sharefile.add_lease(lease_info)
1573hunk ./src/allmydata/storage/server.py 16
1574 from allmydata.storage.lease import LeaseInfo
1575 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1576      create_mutable_sharefile
1577-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1578-from allmydata.storage.crawler import BucketCountingCrawler
1579-from allmydata.storage.expirer import LeaseCheckingCrawler
1580 
1581 from zope.interface import implements
1582 
1583hunk ./src/allmydata/storage/server.py 19
1584-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1585-# be started and stopped.
1586-class Backend(service.MultiService):
1587-    implements(IStatsProducer)
1588-    def __init__(self):
1589-        service.MultiService.__init__(self)
1590-
1591-    def get_bucket_shares(self):
1592-        """XXX"""
1593-        raise NotImplementedError
1594-
1595-    def get_share(self):
1596-        """XXX"""
1597-        raise NotImplementedError
1598-
1599-    def make_bucket_writer(self):
1600-        """XXX"""
1601-        raise NotImplementedError
1602-
1603-class NullBackend(Backend):
1604-    def __init__(self):
1605-        Backend.__init__(self)
1606-
1607-    def get_available_space(self):
1608-        return None
1609-
1610-    def get_bucket_shares(self, storage_index):
1611-        return set()
1612-
1613-    def get_share(self, storage_index, sharenum):
1614-        return None
1615-
1616-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1617-        return NullBucketWriter()
1618-
1619-class FSBackend(Backend):
1620-    def __init__(self, storedir, readonly=False, reserved_space=0):
1621-        Backend.__init__(self)
1622-
1623-        self._setup_storage(storedir, readonly, reserved_space)
1624-        self._setup_corruption_advisory()
1625-        self._setup_bucket_counter()
1626-        self._setup_lease_checkerf()
1627-
1628-    def _setup_storage(self, storedir, readonly, reserved_space):
1629-        self.storedir = storedir
1630-        self.readonly = readonly
1631-        self.reserved_space = int(reserved_space)
1632-        if self.reserved_space:
1633-            if self.get_available_space() is None:
1634-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1635-                        umid="0wZ27w", level=log.UNUSUAL)
1636-
1637-        self.sharedir = os.path.join(self.storedir, "shares")
1638-        fileutil.make_dirs(self.sharedir)
1639-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1640-        self._clean_incomplete()
1641-
1642-    def _clean_incomplete(self):
1643-        fileutil.rm_dir(self.incomingdir)
1644-        fileutil.make_dirs(self.incomingdir)
1645-
1646-    def _setup_corruption_advisory(self):
1647-        # we don't actually create the corruption-advisory dir until necessary
1648-        self.corruption_advisory_dir = os.path.join(self.storedir,
1649-                                                    "corruption-advisories")
1650-
1651-    def _setup_bucket_counter(self):
1652-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1653-        self.bucket_counter = BucketCountingCrawler(statefile)
1654-        self.bucket_counter.setServiceParent(self)
1655-
1656-    def _setup_lease_checkerf(self):
1657-        statefile = os.path.join(self.storedir, "lease_checker.state")
1658-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1659-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1660-                                   expiration_enabled, expiration_mode,
1661-                                   expiration_override_lease_duration,
1662-                                   expiration_cutoff_date,
1663-                                   expiration_sharetypes)
1664-        self.lease_checker.setServiceParent(self)
1665-
1666-    def get_available_space(self):
1667-        if self.readonly:
1668-            return 0
1669-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1670-
1671-    def get_bucket_shares(self, storage_index):
1672-        """Return a list of (shnum, pathname) tuples for files that hold
1673-        shares for this storage_index. In each tuple, 'shnum' will always be
1674-        the integer form of the last component of 'pathname'."""
1675-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1676-        try:
1677-            for f in os.listdir(storagedir):
1678-                if NUM_RE.match(f):
1679-                    filename = os.path.join(storagedir, f)
1680-                    yield (int(f), filename)
1681-        except OSError:
1682-            # Commonly caused by there being no buckets at all.
1683-            pass
1684-
1685 # storage/
1686 # storage/shares/incoming
1687 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1688hunk ./src/allmydata/storage/server.py 32
1689 # $SHARENUM matches this regex:
1690 NUM_RE=re.compile("^[0-9]+$")
1691 
1692-
1693-
1694 class StorageServer(service.MultiService, Referenceable):
1695     implements(RIStorageServer, IStatsProducer)
1696     name = 'storage'
1697hunk ./src/allmydata/storage/server.py 35
1698-    LeaseCheckerClass = LeaseCheckingCrawler
1699 
1700     def __init__(self, nodeid, backend, reserved_space=0,
1701                  readonly_storage=False,
1702hunk ./src/allmydata/storage/server.py 38
1703-                 stats_provider=None,
1704-                 expiration_enabled=False,
1705-                 expiration_mode="age",
1706-                 expiration_override_lease_duration=None,
1707-                 expiration_cutoff_date=None,
1708-                 expiration_sharetypes=("mutable", "immutable")):
1709+                 stats_provider=None ):
1710         service.MultiService.__init__(self)
1711         assert isinstance(nodeid, str)
1712         assert len(nodeid) == 20
1713hunk ./src/allmydata/storage/server.py 217
1714         # they asked about: this will save them a lot of work. Add or update
1715         # leases for all of them: if they want us to hold shares for this
1716         # file, they'll want us to hold leases for this file.
1717-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1718-            alreadygot.add(shnum)
1719-            sf = ShareFile(fn)
1720-            sf.add_or_renew_lease(lease_info)
1721-
1722-        for shnum in sharenums:
1723-            share = self.backend.get_share(storage_index, shnum)
1724+        for share in self.backend.get_shares(storage_index):
1725+            alreadygot.add(share.shnum)
1726+            share.add_or_renew_lease(lease_info)
1727 
1728hunk ./src/allmydata/storage/server.py 221
1729-            if not share:
1730-                if (not limited) or (remaining_space >= max_space_per_bucket):
1731-                    # ok! we need to create the new share file.
1732-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1733-                                      max_space_per_bucket, lease_info, canary)
1734-                    bucketwriters[shnum] = bw
1735-                    self._active_writers[bw] = 1
1736-                    if limited:
1737-                        remaining_space -= max_space_per_bucket
1738-                else:
1739-                    # bummer! not enough space to accept this bucket
1740-                    pass
1741+        for shnum in (sharenums - alreadygot):
1742+            if (not limited) or (remaining_space >= max_space_per_bucket):
1743+                # XXX Should the following line occur in the storage server constructor instead? ok! we need to create the new share file.
1744+                self.backend.set_storage_server(self)
1745+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1746+                                                     max_space_per_bucket, lease_info, canary)
1747+                bucketwriters[shnum] = bw
1748+                self._active_writers[bw] = 1
1749+                if limited:
1750+                    remaining_space -= max_space_per_bucket
1751 
1752hunk ./src/allmydata/storage/server.py 232
1753-            elif share.is_complete():
1754-                # great! we already have it. easy.
1755-                pass
1756-            elif not share.is_complete():
1757-                # Note that we don't create BucketWriters for shnums that
1758-                # have a partial share (in incoming/), so if a second upload
1759-                # occurs while the first is still in progress, the second
1760-                # uploader will use different storage servers.
1761-                pass
1762+        # XXX We should document this behavior (partially-written shares) later.
1763 
1764         self.add_latency("allocate", time.time() - start)
1765         return alreadygot, bucketwriters
1766hunk ./src/allmydata/storage/server.py 238
1767 
1768     def _iter_share_files(self, storage_index):
1769-        for shnum, filename in self._get_bucket_shares(storage_index):
1770+        for shnum, filename in self._get_shares(storage_index):
1771             f = open(filename, 'rb')
1772             header = f.read(32)
1773             f.close()
1774hunk ./src/allmydata/storage/server.py 318
1775         si_s = si_b2a(storage_index)
1776         log.msg("storage: get_buckets %s" % si_s)
1777         bucketreaders = {} # k: sharenum, v: BucketReader
1778-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1779+        for shnum, filename in self.backend.get_shares(storage_index):
1780             bucketreaders[shnum] = BucketReader(self, filename,
1781                                                 storage_index, shnum)
1782         self.add_latency("get", time.time() - start)
1783hunk ./src/allmydata/storage/server.py 334
1784         # since all shares get the same lease data, we just grab the leases
1785         # from the first share
1786         try:
1787-            shnum, filename = self._get_bucket_shares(storage_index).next()
1788+            shnum, filename = self._get_shares(storage_index).next()
1789             sf = ShareFile(filename)
1790             return sf.get_leases()
1791         except StopIteration:
1792hunk ./src/allmydata/storage/shares.py 1
1793-#! /usr/bin/python
1794-
1795-from allmydata.storage.mutable import MutableShareFile
1796-from allmydata.storage.immutable import ShareFile
1797-
1798-def get_share_file(filename):
1799-    f = open(filename, "rb")
1800-    prefix = f.read(32)
1801-    f.close()
1802-    if prefix == MutableShareFile.MAGIC:
1803-        return MutableShareFile(filename)
1804-    # otherwise assume it's immutable
1805-    return ShareFile(filename)
1806-
1807rmfile ./src/allmydata/storage/shares.py
1808hunk ./src/allmydata/test/common_util.py 20
1809 
1810 def flip_one_bit(s, offset=0, size=None):
1811     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1812-    than offset+size. """
1813+    than offset+size. Return the new string. """
1814     if size is None:
1815         size=len(s)-offset
1816     i = randrange(offset, offset+size)
1817hunk ./src/allmydata/test/test_backends.py 7
1818 
1819 from allmydata.test.common_util import ReallyEqualMixin
1820 
1821-import mock
1822+import mock, os
1823 
1824 # This is the code that we're going to be testing.
1825hunk ./src/allmydata/test/test_backends.py 10
1826-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1827+from allmydata.storage.server import StorageServer
1828+
1829+from allmydata.storage.backends.das.core import DASCore
1830+from allmydata.storage.backends.null.core import NullCore
1831+
1832 
1833 # The following share file contents was generated with
1834 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1835hunk ./src/allmydata/test/test_backends.py 22
1836 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1837 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1838 
1839-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1840+tempdir = 'teststoredir'
1841+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1842+sharefname = os.path.join(sharedirname, '0')
1843 
1844 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1845     @mock.patch('time.time')
1846hunk ./src/allmydata/test/test_backends.py 58
1847         filesystem in only the prescribed ways. """
1848 
1849         def call_open(fname, mode):
1850-            if fname == 'testdir/bucket_counter.state':
1851-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1852-            elif fname == 'testdir/lease_checker.state':
1853-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1854-            elif fname == 'testdir/lease_checker.history':
1855+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1856+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1857+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1858+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1859+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1860                 return StringIO()
1861             else:
1862                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1863hunk ./src/allmydata/test/test_backends.py 124
1864     @mock.patch('__builtin__.open')
1865     def setUp(self, mockopen):
1866         def call_open(fname, mode):
1867-            if fname == 'testdir/bucket_counter.state':
1868-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1869-            elif fname == 'testdir/lease_checker.state':
1870-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1871-            elif fname == 'testdir/lease_checker.history':
1872+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1873+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1874+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1875+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1876+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1877                 return StringIO()
1878         mockopen.side_effect = call_open
1879hunk ./src/allmydata/test/test_backends.py 131
1880-
1881-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1882+        expiration_policy = {'enabled' : False,
1883+                             'mode' : 'age',
1884+                             'override_lease_duration' : None,
1885+                             'cutoff_date' : None,
1886+                             'sharetypes' : None}
1887+        testbackend = DASCore(tempdir, expiration_policy)
1888+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
1889 
1890     @mock.patch('time.time')
1891     @mock.patch('os.mkdir')
1892hunk ./src/allmydata/test/test_backends.py 148
1893         """ Write a new share. """
1894 
1895         def call_listdir(dirname):
1896-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1897-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1898+            self.failUnlessReallyEqual(dirname, sharedirname)
1899+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1900 
1901         mocklistdir.side_effect = call_listdir
1902 
1903hunk ./src/allmydata/test/test_backends.py 178
1904 
1905         sharefile = MockFile()
1906         def call_open(fname, mode):
1907-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1908+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1909             return sharefile
1910 
1911         mockopen.side_effect = call_open
1912hunk ./src/allmydata/test/test_backends.py 200
1913         StorageServer object. """
1914 
1915         def call_listdir(dirname):
1916-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1917+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1918             return ['0']
1919 
1920         mocklistdir.side_effect = call_listdir
1921}
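(Editor's aside, not part of the darcs payload: the hunks above replace StorageServer's direct filesystem handling with a pluggable backend. A minimal sketch of the interface the diff implies follows; the method names `get_shares`, `make_bucket_writer`, and `set_storage_server` come from the hunks, but the classes themselves are illustrative, not the shipped Tahoe-LAFS code.)

```python
# Illustrative sketch only -- not the actual Tahoe-LAFS classes.
class Backend(object):
    """Abstract base; concrete backends (filesystem, null) override these."""
    def set_storage_server(self, ss):
        # the server injects itself so the backend can hand it to writers
        self.ss = ss

    def get_shares(self, storage_index):
        """Yield the share objects already stored for storage_index."""
        raise NotImplementedError

    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        """Create a new share and return a writer for it."""
        raise NotImplementedError


class NullBackend(Backend):
    """Stores nothing, in the spirit of the patch's NullCore."""
    def get_shares(self, storage_index):
        return iter([])

    def make_bucket_writer(self, storage_index, shnum,
                           max_space_per_bucket, lease_info, canary):
        return None  # a real backend returns a BucketWriter here
```

With this split, the allocation loop in `remote_allocate_buckets` only iterates `sharenums - alreadygot` and delegates all file creation to the backend.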
1922[checkpoint patch
1923wilcoxjg@gmail.com**20110626165715
1924 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1925] {
1926hunk ./src/allmydata/storage/backends/das/core.py 21
1927 from allmydata.storage.lease import LeaseInfo
1928 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1929      create_mutable_sharefile
1930-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1931+from allmydata.storage.immutable import BucketWriter, BucketReader
1932 from allmydata.storage.crawler import FSBucketCountingCrawler
1933 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1934 
1935hunk ./src/allmydata/storage/backends/das/core.py 27
1936 from zope.interface import implements
1937 
1938+# $SHARENUM matches this regex:
1939+NUM_RE=re.compile("^[0-9]+$")
1940+
1941 class DASCore(Backend):
1942     implements(IStorageBackend)
1943     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1944hunk ./src/allmydata/storage/backends/das/core.py 80
1945         return fileutil.get_available_space(self.storedir, self.reserved_space)
1946 
1947     def get_shares(self, storage_index):
1948-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1949+        """Yield the ImmutableShare objects that correspond to the passed storage_index."""
1950         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1951         try:
1952             for f in os.listdir(finalstoragedir):
1953hunk ./src/allmydata/storage/backends/das/core.py 86
1954                 if NUM_RE.match(f):
1955                     filename = os.path.join(finalstoragedir, f)
1956-                    yield FSBShare(filename, int(f))
1957+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1958         except OSError:
1959             # Commonly caused by there being no buckets at all.
1960             pass
1961hunk ./src/allmydata/storage/backends/das/core.py 95
1962         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1963         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1964         return bw
1965+
1966+    def set_storage_server(self, ss):
1967+        self.ss = ss
1968         
1969 
1970 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1971hunk ./src/allmydata/storage/server.py 29
1972 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1973 # base-32 chars).
1974 
1975-# $SHARENUM matches this regex:
1976-NUM_RE=re.compile("^[0-9]+$")
1977 
1978 class StorageServer(service.MultiService, Referenceable):
1979     implements(RIStorageServer, IStatsProducer)
1980}
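(Editor's aside, outside the patch itself: `ImmutableShare` writes a 12-byte header with struct format `">LLL"`, i.e. version, saturated share-data length, lease count. A hedged sketch of that header handling; the function names are mine, while the format string and the saturation rule are taken from the diff.)

```python
import struct

# Header layout from the patch: ">LLL" = version, data length, lease count.
HEADER_FMT = ">LLL"
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 0xc bytes; share data starts here

def pack_header(max_size):
    # The data-length field saturates at 2**32 - 1 so that pre-v1.3.0
    # servers can still serve the start of an oversized share.
    return struct.pack(HEADER_FMT, 1, min(2**32 - 1, max_size), 0)

def unpack_header(data):
    # Returns (version, stored_length, num_leases).
    return struct.unpack(HEADER_FMT, data[:HEADER_SIZE])
```

Lease records follow the share data; each is `">L32s32sL"` (owner number, renew secret, cancel secret, expiration time), which is why `_lease_offset` is computed as `max_size + 0x0c`.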
1981[checkpoint4
1982wilcoxjg@gmail.com**20110628202202
1983 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
1984] {
1985hunk ./src/allmydata/storage/backends/das/core.py 96
1986         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1987         return bw
1988 
1989+    def make_bucket_reader(self, share):
1990+        return BucketReader(self.ss, share)
1991+
1992     def set_storage_server(self, ss):
1993         self.ss = ss
1994         
1995hunk ./src/allmydata/storage/backends/das/core.py 138
1996         must not be None. """
1997         precondition((max_size is not None) or (not create), max_size, create)
1998         self.shnum = shnum
1999+        self.storage_index = storageindex
2000         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2001         self._max_size = max_size
2002         if create:
2003hunk ./src/allmydata/storage/backends/das/core.py 173
2004             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2005         self._data_offset = 0xc
2006 
2007+    def get_shnum(self):
2008+        return self.shnum
2009+
2010     def unlink(self):
2011         os.unlink(self.fname)
2012 
2013hunk ./src/allmydata/storage/backends/null/core.py 2
2014 from allmydata.storage.backends.base import Backend
2015+from allmydata.storage.immutable import BucketWriter, BucketReader
2016 
2017 class NullCore(Backend):
2018     def __init__(self):
2019hunk ./src/allmydata/storage/backends/null/core.py 17
2020     def get_share(self, storage_index, sharenum):
2021         return None
2022 
2023-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2024-        return NullBucketWriter()
2025+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2026+        # XXX 'immutableshare' is not defined here yet; checkpoint code, not runnable as-is.
2027+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2028+
2029+    def set_storage_server(self, ss):
2030+        self.ss = ss
2031+
2032+class ImmutableShare:
2033+    sharetype = "immutable"
2034+
2035+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2036+        """ If max_size is not None then I won't allow more than
2037+        max_size to be written to me. If create=True then max_size
2038+        must not be None. """
2039+        precondition((max_size is not None) or (not create), max_size, create)
2040+        self.shnum = shnum
2041+        self.storage_index = storageindex
2042+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2043+        self._max_size = max_size
2044+        if create:
2045+            # touch the file, so later callers will see that we're working on
2046+            # it. Also construct the metadata.
2047+            assert not os.path.exists(self.fname)
2048+            fileutil.make_dirs(os.path.dirname(self.fname))
2049+            f = open(self.fname, 'wb')
2050+            # The second field -- the four-byte share data length -- is no
2051+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2052+            # there in case someone downgrades a storage server from >=
2053+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2054+            # server to another, etc. We do saturation -- a share data length
2055+            # larger than 2**32-1 (what can fit into the field) is marked as
2056+            # the largest length that can fit into the field. That way, even
2057+            # if this does happen, the old < v1.3.0 server will still allow
2058+            # clients to read the first part of the share.
2059+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2060+            f.close()
2061+            self._lease_offset = max_size + 0x0c
2062+            self._num_leases = 0
2063+        else:
2064+            f = open(self.fname, 'rb')
2065+            filesize = os.path.getsize(self.fname)
2066+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2067+            f.close()
2068+            if version != 1:
2069+                msg = "sharefile %s had version %d but we wanted 1" % \
2070+                      (self.fname, version)
2071+                raise UnknownImmutableContainerVersionError(msg)
2072+            self._num_leases = num_leases
2073+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2074+        self._data_offset = 0xc
2075+
2076+    def get_shnum(self):
2077+        return self.shnum
2078+
2079+    def unlink(self):
2080+        os.unlink(self.fname)
2081+
2082+    def read_share_data(self, offset, length):
2083+        precondition(offset >= 0)
2084+        # Reads beyond the end of the data are truncated. Reads that start
2085+        # beyond the end of the data return an empty string.
2086+        seekpos = self._data_offset+offset
2087+        fsize = os.path.getsize(self.fname)
2088+        actuallength = max(0, min(length, fsize-seekpos))
2089+        if actuallength == 0:
2090+            return ""
2091+        f = open(self.fname, 'rb')
2092+        f.seek(seekpos)
2093+        return f.read(actuallength)
2094+
2095+    def write_share_data(self, offset, data):
2096+        length = len(data)
2097+        precondition(offset >= 0, offset)
2098+        if self._max_size is not None and offset+length > self._max_size:
2099+            raise DataTooLargeError(self._max_size, offset, length)
2100+        f = open(self.fname, 'rb+')
2101+        real_offset = self._data_offset+offset
2102+        f.seek(real_offset)
2103+        assert f.tell() == real_offset
2104+        f.write(data)
2105+        f.close()
2106+
2107+    def _write_lease_record(self, f, lease_number, lease_info):
2108+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2109+        f.seek(offset)
2110+        assert f.tell() == offset
2111+        f.write(lease_info.to_immutable_data())
2112+
2113+    def _read_num_leases(self, f):
2114+        f.seek(0x08)
2115+        (num_leases,) = struct.unpack(">L", f.read(4))
2116+        return num_leases
2117+
2118+    def _write_num_leases(self, f, num_leases):
2119+        f.seek(0x08)
2120+        f.write(struct.pack(">L", num_leases))
2121+
2122+    def _truncate_leases(self, f, num_leases):
2123+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2124+
2125+    def get_leases(self):
2126+        """Yields a LeaseInfo instance for all leases."""
2127+        f = open(self.fname, 'rb')
2128+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2129+        f.seek(self._lease_offset)
2130+        for i in range(num_leases):
2131+            data = f.read(self.LEASE_SIZE)
2132+            if data:
2133+                yield LeaseInfo().from_immutable_data(data)
2134+
2135+    def add_lease(self, lease_info):
2136+        f = open(self.fname, 'rb+')
2137+        num_leases = self._read_num_leases(f)
2138+        self._write_lease_record(f, num_leases, lease_info)
2139+        self._write_num_leases(f, num_leases+1)
2140+        f.close()
2141+
2142+    def renew_lease(self, renew_secret, new_expire_time):
2143+        for i,lease in enumerate(self.get_leases()):
2144+            if constant_time_compare(lease.renew_secret, renew_secret):
2145+                # yup. See if we need to update the owner time.
2146+                if new_expire_time > lease.expiration_time:
2147+                    # yes
2148+                    lease.expiration_time = new_expire_time
2149+                    f = open(self.fname, 'rb+')
2150+                    self._write_lease_record(f, i, lease)
2151+                    f.close()
2152+                return
2153+        raise IndexError("unable to renew non-existent lease")
2154+
2155+    def add_or_renew_lease(self, lease_info):
2156+        try:
2157+            self.renew_lease(lease_info.renew_secret,
2158+                             lease_info.expiration_time)
2159+        except IndexError:
2160+            self.add_lease(lease_info)
2161+
2162+
2163+    def cancel_lease(self, cancel_secret):
2164+        """Remove a lease with the given cancel_secret. If the last lease is
2165+        cancelled, the file will be removed. Return the number of bytes that
2166+        were freed (by truncating the list of leases, and possibly by
2167+        deleting the file). Raise IndexError if there was no lease with the
2168+        given cancel_secret.
2169+        """
2170+
2171+        leases = list(self.get_leases())
2172+        num_leases_removed = 0
2173+        for i,lease in enumerate(leases):
2174+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2175+                leases[i] = None
2176+                num_leases_removed += 1
2177+        if not num_leases_removed:
2178+            raise IndexError("unable to find matching lease to cancel")
2179+        if num_leases_removed:
2180+            # pack and write out the remaining leases. We write these out in
2181+            # the same order as they were added, so that if we crash while
2182+            # doing this, we won't lose any non-cancelled leases.
2183+            leases = [l for l in leases if l] # remove the cancelled leases
2184+            f = open(self.fname, 'rb+')
2185+            for i,lease in enumerate(leases):
2186+                self._write_lease_record(f, i, lease)
2187+            self._write_num_leases(f, len(leases))
2188+            self._truncate_leases(f, len(leases))
2189+            f.close()
2190+        space_freed = self.LEASE_SIZE * num_leases_removed
2191+        if not len(leases):
2192+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2193+            self.unlink()
2194+        return space_freed
2195hunk ./src/allmydata/storage/immutable.py 114
2196 class BucketReader(Referenceable):
2197     implements(RIBucketReader)
2198 
2199-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2200+    def __init__(self, ss, share):
2201         self.ss = ss
2202hunk ./src/allmydata/storage/immutable.py 116
2203-        self._share_file = ShareFile(sharefname)
2204-        self.storage_index = storage_index
2205-        self.shnum = shnum
2206+        self._share_file = share
2207+        self.storage_index = share.storage_index
2208+        self.shnum = share.shnum
2209 
2210     def __repr__(self):
2211         return "<%s %s %s>" % (self.__class__.__name__,
2212hunk ./src/allmydata/storage/server.py 316
2213         si_s = si_b2a(storage_index)
2214         log.msg("storage: get_buckets %s" % si_s)
2215         bucketreaders = {} # k: sharenum, v: BucketReader
2216-        for shnum, filename in self.backend.get_shares(storage_index):
2217-            bucketreaders[shnum] = BucketReader(self, filename,
2218-                                                storage_index, shnum)
2219+        self.backend.set_storage_server(self)
2220+        for share in self.backend.get_shares(storage_index):
2221+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2222         self.add_latency("get", time.time() - start)
2223         return bucketreaders
2224 
2225hunk ./src/allmydata/test/test_backends.py 25
2226 tempdir = 'teststoredir'
2227 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2228 sharefname = os.path.join(sharedirname, '0')
2229+expiration_policy = {'enabled' : False,
2230+                     'mode' : 'age',
2231+                     'override_lease_duration' : None,
2232+                     'cutoff_date' : None,
2233+                     'sharetypes' : None}
2234 
2235 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2236     @mock.patch('time.time')
2237hunk ./src/allmydata/test/test_backends.py 43
2238         tries to read or write to the file system. """
2239 
2240         # Now begin the test.
2241-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2242+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2243 
2244         self.failIf(mockisdir.called)
2245         self.failIf(mocklistdir.called)
2246hunk ./src/allmydata/test/test_backends.py 74
2247         mockopen.side_effect = call_open
2248 
2249         # Now begin the test.
2250-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2251+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2252 
2253         self.failIf(mockisdir.called)
2254         self.failIf(mocklistdir.called)
2255hunk ./src/allmydata/test/test_backends.py 86
2256 
2257 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2258     def setUp(self):
2259-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2260+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2261 
2262     @mock.patch('os.mkdir')
2263     @mock.patch('__builtin__.open')
2264hunk ./src/allmydata/test/test_backends.py 136
2265             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2266                 return StringIO()
2267         mockopen.side_effect = call_open
2268-        expiration_policy = {'enabled' : False,
2269-                             'mode' : 'age',
2270-                             'override_lease_duration' : None,
2271-                             'cutoff_date' : None,
2272-                             'sharetypes' : None}
2273         testbackend = DASCore(tempdir, expiration_policy)
2274         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2275 
2276}
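The ImmutableShare code in the hunks above reads and writes a fixed v1 on-disk layout: a 12-byte header of three big-endian uint32s (version, saturated data length, lease count), share data starting at offset 0xc, and lease records appended at the end of the file. A minimal standalone sketch of that header handling (illustrative only, not part of the patch):

```python
import struct

def make_header(max_size):
    # The data-length field saturates at 2**32 - 1, as the comment in
    # ImmutableShare.__init__ explains, so a pre-v1.3.0 server can still
    # serve the first part of an oversized share.
    return struct.pack(">LLL", 1, min(2**32 - 1, max_size), 0)

def parse_header(header):
    # Mirrors the struct.unpack(">LLL", f.read(0xc)) done on open.
    version, data_length, num_leases = struct.unpack(">LLL", header)
    return {"version": version,
            "data_length": data_length,
            "num_leases": num_leases}
```

A version other than 1 is what triggers UnknownImmutableContainerVersionError in the patch's read path.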
2277[checkpoint5
2278wilcoxjg@gmail.com**20110705034626
2279 Ignore-this: 255780bd58299b0aa33c027e9d008262
2280] {
2281addfile ./src/allmydata/storage/backends/base.py
2282hunk ./src/allmydata/storage/backends/base.py 1
2283+from twisted.application import service
2284+
2285+class Backend(service.MultiService):
2286+    def __init__(self):
2287+        service.MultiService.__init__(self)
2288hunk ./src/allmydata/storage/backends/null/core.py 19
2289 
2290     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2291         
2292+        immutableshare = ImmutableShare()
2293         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2294 
2295     def set_storage_server(self, ss):
2296hunk ./src/allmydata/storage/backends/null/core.py 28
2297 class ImmutableShare:
2298     sharetype = "immutable"
2299 
2300-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2301+    def __init__(self):
2302         """ If max_size is not None then I won't allow more than
2303         max_size to be written to me. If create=True then max_size
2304         must not be None. """
2305hunk ./src/allmydata/storage/backends/null/core.py 32
2306-        precondition((max_size is not None) or (not create), max_size, create)
2307-        self.shnum = shnum
2308-        self.storage_index = storageindex
2309-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2310-        self._max_size = max_size
2311-        if create:
2312-            # touch the file, so later callers will see that we're working on
2313-            # it. Also construct the metadata.
2314-            assert not os.path.exists(self.fname)
2315-            fileutil.make_dirs(os.path.dirname(self.fname))
2316-            f = open(self.fname, 'wb')
2317-            # The second field -- the four-byte share data length -- is no
2318-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2319-            # there in case someone downgrades a storage server from >=
2320-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2321-            # server to another, etc. We do saturation -- a share data length
2322-            # larger than 2**32-1 (what can fit into the field) is marked as
2323-            # the largest length that can fit into the field. That way, even
2324-            # if this does happen, the old < v1.3.0 server will still allow
2325-            # clients to read the first part of the share.
2326-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2327-            f.close()
2328-            self._lease_offset = max_size + 0x0c
2329-            self._num_leases = 0
2330-        else:
2331-            f = open(self.fname, 'rb')
2332-            filesize = os.path.getsize(self.fname)
2333-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2334-            f.close()
2335-            if version != 1:
2336-                msg = "sharefile %s had version %d but we wanted 1" % \
2337-                      (self.fname, version)
2338-                raise UnknownImmutableContainerVersionError(msg)
2339-            self._num_leases = num_leases
2340-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2341-        self._data_offset = 0xc
2342+        pass
2343 
2344     def get_shnum(self):
2345         return self.shnum
2346hunk ./src/allmydata/storage/backends/null/core.py 54
2347         return f.read(actuallength)
2348 
2349     def write_share_data(self, offset, data):
2350-        length = len(data)
2351-        precondition(offset >= 0, offset)
2352-        if self._max_size is not None and offset+length > self._max_size:
2353-            raise DataTooLargeError(self._max_size, offset, length)
2354-        f = open(self.fname, 'rb+')
2355-        real_offset = self._data_offset+offset
2356-        f.seek(real_offset)
2357-        assert f.tell() == real_offset
2358-        f.write(data)
2359-        f.close()
2360+        pass
2361 
2362     def _write_lease_record(self, f, lease_number, lease_info):
2363         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2364hunk ./src/allmydata/storage/backends/null/core.py 84
2365             if data:
2366                 yield LeaseInfo().from_immutable_data(data)
2367 
2368-    def add_lease(self, lease_info):
2369-        f = open(self.fname, 'rb+')
2370-        num_leases = self._read_num_leases(f)
2371-        self._write_lease_record(f, num_leases, lease_info)
2372-        self._write_num_leases(f, num_leases+1)
2373-        f.close()
2374+    def add_lease(self, lease):
2375+        pass
2376 
2377     def renew_lease(self, renew_secret, new_expire_time):
2378         for i,lease in enumerate(self.get_leases()):
2379hunk ./src/allmydata/test/test_backends.py 32
2380                      'sharetypes' : None}
2381 
2382 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2383-    @mock.patch('time.time')
2384-    @mock.patch('os.mkdir')
2385-    @mock.patch('__builtin__.open')
2386-    @mock.patch('os.listdir')
2387-    @mock.patch('os.path.isdir')
2388-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2389-        """ This tests whether a server instance can be constructed
2390-        with a null backend. The server instance fails the test if it
2391-        tries to read or write to the file system. """
2392-
2393-        # Now begin the test.
2394-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2395-
2396-        self.failIf(mockisdir.called)
2397-        self.failIf(mocklistdir.called)
2398-        self.failIf(mockopen.called)
2399-        self.failIf(mockmkdir.called)
2400-
2401-        # You passed!
2402-
2403     @mock.patch('time.time')
2404     @mock.patch('os.mkdir')
2405     @mock.patch('__builtin__.open')
2406hunk ./src/allmydata/test/test_backends.py 53
2407                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2408         mockopen.side_effect = call_open
2409 
2410-        # Now begin the test.
2411-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2412-
2413-        self.failIf(mockisdir.called)
2414-        self.failIf(mocklistdir.called)
2415-        self.failIf(mockopen.called)
2416-        self.failIf(mockmkdir.called)
2417-        self.failIf(mocktime.called)
2418-
2419-        # You passed!
2420-
2421-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2422-    def setUp(self):
2423-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2424-
2425-    @mock.patch('os.mkdir')
2426-    @mock.patch('__builtin__.open')
2427-    @mock.patch('os.listdir')
2428-    @mock.patch('os.path.isdir')
2429-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2430-        """ Write a new share. """
2431-
2432-        # Now begin the test.
2433-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2434-        bs[0].remote_write(0, 'a')
2435-        self.failIf(mockisdir.called)
2436-        self.failIf(mocklistdir.called)
2437-        self.failIf(mockopen.called)
2438-        self.failIf(mockmkdir.called)
2439+        def call_isdir(fname):
2440+            if fname == os.path.join(tempdir,'shares'):
2441+                return True
2442+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2443+                return True
2444+            else:
2445+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2446+        mockisdir.side_effect = call_isdir
2447 
2448hunk ./src/allmydata/test/test_backends.py 62
2449-    @mock.patch('os.path.exists')
2450-    @mock.patch('os.path.getsize')
2451-    @mock.patch('__builtin__.open')
2452-    @mock.patch('os.listdir')
2453-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2454-        """ This tests whether the code correctly finds and reads
2455-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2456-        servers. There is a similar test in test_download, but that one
2457-        is from the perspective of the client and exercises a deeper
2458-        stack of code. This one is for exercising just the
2459-        StorageServer object. """
2460+        def call_mkdir(fname, mode):
2461+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2462+            self.failUnlessEqual(0777, mode)
2463+            if fname == tempdir:
2464+                return None
2465+            elif fname == os.path.join(tempdir,'shares'):
2466+                return None
2467+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2468+                return None
2469+            else:
2470+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2471+        mockmkdir.side_effect = call_mkdir
2472 
2473         # Now begin the test.
2474hunk ./src/allmydata/test/test_backends.py 76
2475-        bs = self.s.remote_get_buckets('teststorage_index')
2476+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2477 
2478hunk ./src/allmydata/test/test_backends.py 78
2479-        self.failUnlessEqual(len(bs), 0)
2480-        self.failIf(mocklistdir.called)
2481-        self.failIf(mockopen.called)
2482-        self.failIf(mockgetsize.called)
2483-        self.failIf(mockexists.called)
2484+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2485 
2486 
2487 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2488hunk ./src/allmydata/test/test_backends.py 193
2489         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2490 
2491 
2492+
2493+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2494+    @mock.patch('time.time')
2495+    @mock.patch('os.mkdir')
2496+    @mock.patch('__builtin__.open')
2497+    @mock.patch('os.listdir')
2498+    @mock.patch('os.path.isdir')
2499+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2500+        """ This tests whether a file system backend instance can be
2501+        constructed. To pass the test, it has to use the
2502+        filesystem in only the prescribed ways. """
2503+
2504+        def call_open(fname, mode):
2505+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2506+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2507+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2508+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2509+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2510+                return StringIO()
2511+            else:
2512+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2513+        mockopen.side_effect = call_open
2514+
2515+        def call_isdir(fname):
2516+            if fname == os.path.join(tempdir,'shares'):
2517+                return True
2518+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2519+                return True
2520+            else:
2521+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2522+        mockisdir.side_effect = call_isdir
2523+
2524+        def call_mkdir(fname, mode):
2525+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2526+            self.failUnlessEqual(0777, mode)
2527+            if fname == tempdir:
2528+                return None
2529+            elif fname == os.path.join(tempdir,'shares'):
2530+                return None
2531+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2532+                return None
2533+            else:
2534+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2535+        mockmkdir.side_effect = call_mkdir
2536+
2537+        # Now begin the test.
2538+        DASCore('teststoredir', expiration_policy)
2539+
2540+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2541}
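The checkpoint5 tests all follow one isolation pattern: patch a filesystem call, answer the specific paths the backend is allowed to touch, and fail loudly on anything else. A self-contained sketch of that pattern (using the stdlib unittest.mock; the Python 2 tests above use the standalone `mock` package, and the path here is illustrative):

```python
import os
from unittest import mock

def probe():
    # Stands in for the code under test: something that consults the filesystem.
    return os.path.isdir(os.path.join('teststoredir', 'shares'))

def run_isolated():
    expected = os.path.join('teststoredir', 'shares')

    def call_isdir(fname):
        # Answer only the expected path; any other call is a stray
        # filesystem access and fails the test, which is how
        # test_backends.py catches unprescribed filesystem use.
        if fname == expected:
            return True
        raise AssertionError("unexpected isdir(%r)" % (fname,))

    with mock.patch('os.path.isdir', side_effect=call_isdir):
        return probe()
```

The real tests stack several such patches (`os.mkdir`, `__builtin__.open`, `os.listdir`, `os.path.isdir`) as decorators on each test method.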
2542[checkpoint 6
2543wilcoxjg@gmail.com**20110706190824
2544 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2545] {
2546hunk ./src/allmydata/interfaces.py 100
2547                          renew_secret=LeaseRenewSecret,
2548                          cancel_secret=LeaseCancelSecret,
2549                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2550-                         allocated_size=Offset, canary=Referenceable):
2551+                         allocated_size=Offset,
2552+                         canary=Referenceable):
2553         """
2554hunk ./src/allmydata/interfaces.py 103
2555-        @param storage_index: the index of the bucket to be created or
2556+        @param storage_index: the index of the shares to be created or
2557                               increfed.
2558hunk ./src/allmydata/interfaces.py 105
2559-        @param sharenums: these are the share numbers (probably between 0 and
2560-                          99) that the sender is proposing to store on this
2561-                          server.
2562-        @param renew_secret: This is the secret used to protect bucket refresh
2563+        @param renew_secret: This is the secret used to protect shares refresh
2564                              This secret is generated by the client and
2565                              stored for later comparison by the server. Each
2566                              server is given a different secret.
2567hunk ./src/allmydata/interfaces.py 109
2568-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2569-        @param canary: If the canary is lost before close(), the bucket is
2570+        @param cancel_secret: Like renew_secret, but protects shares decref.
2571+        @param sharenums: these are the share numbers (probably between 0 and
2572+                          99) that the sender is proposing to store on this
2573+                          server.
2574+        @param allocated_size: XXX The size of the shares the client wishes to store.
2575+        @param canary: If the canary is lost before close(), the shares are
2576                        deleted.
2577hunk ./src/allmydata/interfaces.py 116
2578+
2579         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2580                  already have and allocated is what we hereby agree to accept.
2581                  New leases are added for shares in both lists.
2582hunk ./src/allmydata/interfaces.py 128
2583                   renew_secret=LeaseRenewSecret,
2584                   cancel_secret=LeaseCancelSecret):
2585         """
2586-        Add a new lease on the given bucket. If the renew_secret matches an
2587+        Add a new lease on the given shares. If the renew_secret matches an
2588         existing lease, that lease will be renewed instead. If there is no
2589         bucket for the given storage_index, return silently. (note that in
2590         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2591hunk ./src/allmydata/storage/server.py 17
2592 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2593      create_mutable_sharefile
2594 
2595-from zope.interface import implements
2596-
2597 # storage/
2598 # storage/shares/incoming
2599 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2600hunk ./src/allmydata/test/test_backends.py 6
2601 from StringIO import StringIO
2602 
2603 from allmydata.test.common_util import ReallyEqualMixin
2604+from allmydata.util.assertutil import _assert
2605 
2606 import mock, os
2607 
2608hunk ./src/allmydata/test/test_backends.py 92
2609                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2610             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2611                 return StringIO()
2612+            else:
2613+                _assert(False, "The tester code doesn't recognize this case.") 
2614+
2615         mockopen.side_effect = call_open
2616         testbackend = DASCore(tempdir, expiration_policy)
2617         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2618hunk ./src/allmydata/test/test_backends.py 109
2619 
2620         def call_listdir(dirname):
2621             self.failUnlessReallyEqual(dirname, sharedirname)
2622-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2623+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2624 
2625         mocklistdir.side_effect = call_listdir
2626 
2627hunk ./src/allmydata/test/test_backends.py 113
2628+        def call_isdir(dirname):
2629+            self.failUnlessReallyEqual(dirname, sharedirname)
2630+            return True
2631+
2632+        mockisdir.side_effect = call_isdir
2633+
2634+        def call_mkdir(dirname, permissions):
2635+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2636+                self.fail("Server with FS backend tried to mkdir '%s' with mode %s" % (dirname, permissions))
2637+            else:
2638+                return True
2639+
2640+        mockmkdir.side_effect = call_mkdir
2641+
2642         class MockFile:
2643             def __init__(self):
2644                 self.buffer = ''
2645hunk ./src/allmydata/test/test_backends.py 156
2646             return sharefile
2647 
2648         mockopen.side_effect = call_open
2649+
2650         # Now begin the test.
2651         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2652         bs[0].remote_write(0, 'a')
2653hunk ./src/allmydata/test/test_backends.py 161
2654         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2655+       
2656+        # Now test the allocated_size method.
2657+        spaceint = self.s.allocated_size()
2658 
2659     @mock.patch('os.path.exists')
2660     @mock.patch('os.path.getsize')
2661}
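The renew_lease method carried through these patches scans the lease records, compares secrets in constant time, and only ever moves the expiration forward. A standalone sketch of that logic (hmac.compare_digest standing in for allmydata's constant_time_compare; the dict fields are an illustrative stand-in for LeaseInfo records):

```python
import hmac

def renew_lease(leases, renew_secret, new_expire_time):
    # leases: list of dicts with 'renew_secret' and 'expiration_time' keys.
    for lease in leases:
        # Timing-safe comparison, as constant_time_compare provides.
        if hmac.compare_digest(lease['renew_secret'], renew_secret):
            # A renewal can extend a lease but never shorten it.
            if new_expire_time > lease['expiration_time']:
                lease['expiration_time'] = new_expire_time
            return
    raise IndexError("unable to renew non-existent lease")
```

add_or_renew_lease in the patch is then just this call with the IndexError caught and routed to add_lease.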
2662
2663Context:
2664
2665[add Protovis.js-based download-status timeline visualization
2666Brian Warner <warner@lothar.com>**20110629222606
2667 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
2668 
2669 provide status overlap info on the webapi t=json output, add decode/decrypt
2670 rate tooltips, add zoomin/zoomout buttons
2671]
2672[add more download-status data, fix tests
2673Brian Warner <warner@lothar.com>**20110629222555
2674 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
2675]
2676[prepare for viz: improve DownloadStatus events
2677Brian Warner <warner@lothar.com>**20110629222542
2678 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
2679 
2680 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
2681]
2682[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
2683zooko@zooko.com**20110629185711
2684 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
2685]
2686[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
2687david-sarah@jacaranda.org**20110130235809
2688 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
2689]
2690[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
2691david-sarah@jacaranda.org**20110626054124
2692 Ignore-this: abb864427a1b91bd10d5132b4589fd90
2693]
2694[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
2695david-sarah@jacaranda.org**20110623205528
2696 Ignore-this: c63e23146c39195de52fb17c7c49b2da
2697]
2698[Rename test_package_initialization.py to (much shorter) test_import.py .
2699Brian Warner <warner@lothar.com>**20110611190234
2700 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
2701 
2702 The former name was making my 'ls' listings hard to read, by forcing them
2703 down to just two columns.
2704]
2705[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
2706zooko@zooko.com**20110611163741
2707 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
2708 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
2709 fixes #1412
2710]
2711[wui: right-align the size column in the WUI
2712zooko@zooko.com**20110611153758
2713 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
2714 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
2715 fixes #1412
2716]
2717[docs: three minor fixes
2718zooko@zooko.com**20110610121656
2719 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
2720 CREDITS for arc for stats tweak
2721 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
2722 English usage tweak
2723]
2724[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
2725david-sarah@jacaranda.org**20110609223719
2726 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
2727]
2728[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
2729wilcoxjg@gmail.com**20110527120135
2730 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
2731 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
2732 NEWS.rst, stats.py: documentation of change to get_latencies
2733 stats.rst: now documents percentile modification in get_latencies
2734 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
2735 fixes #1392
2736]
2737[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
2738david-sarah@jacaranda.org**20110517011214
2739 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
2740]
2741[docs: convert NEWS to NEWS.rst and change all references to it.
2742david-sarah@jacaranda.org**20110517010255
2743 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
2744]
2745[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
2746david-sarah@jacaranda.org**20110512140559
2747 Ignore-this: 784548fc5367fac5450df1c46890876d
2748]
2749[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
2750david-sarah@jacaranda.org**20110130164923
2751 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
2752]
2753[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
2754zooko@zooko.com**20110128142006
2755 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
2756 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
2757]
2758[M-x whitespace-cleanup
2759zooko@zooko.com**20110510193653
2760 Ignore-this: dea02f831298c0f65ad096960e7df5c7
2761]
2762[docs: fix typo in running.rst, thanks to arch_o_median
2763zooko@zooko.com**20110510193633
2764 Ignore-this: ca06de166a46abbc61140513918e79e8
2765]
2766[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
2767david-sarah@jacaranda.org**20110204204902
2768 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
2769]
2770[relnotes.txt: forseeable -> foreseeable. refs #1342
2771david-sarah@jacaranda.org**20110204204116
2772 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
2773]
2774[replace remaining .html docs with .rst docs
2775zooko@zooko.com**20110510191650
2776 Ignore-this: d557d960a986d4ac8216d1677d236399
2777 Remove install.html (long since deprecated).
2778 Also replace some obsolete references to install.html with references to quickstart.rst.
2779 Fix some broken internal references within docs/historical/historical_known_issues.txt.
2780 Thanks to Ravi Pinjala and Patrick McDonald.
2781 refs #1227
2782]
2783[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
2784zooko@zooko.com**20110428055232
2785 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
2786]
2787[munin tahoe_files plugin: fix incorrect file count
2788francois@ctrlaltdel.ch**20110428055312
2789 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
2790 fixes #1391
2791]
2792[corrected "k must never be smaller than N" to "k must never be greater than N"
2793secorp@allmydata.org**20110425010308
2794 Ignore-this: 233129505d6c70860087f22541805eac
2795]
2796[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
2797david-sarah@jacaranda.org**20110411190738
2798 Ignore-this: 7847d26bc117c328c679f08a7baee519
2799]
2800[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
2801david-sarah@jacaranda.org**20110410155844
2802 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
2803]
2804[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
2805david-sarah@jacaranda.org**20110410155705
2806 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
2807]
2808[remove unused variable detected by pyflakes
2809zooko@zooko.com**20110407172231
2810 Ignore-this: 7344652d5e0720af822070d91f03daf9
2811]
2812[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
2813david-sarah@jacaranda.org**20110401202750
2814 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
2815]
2816[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
2817Brian Warner <warner@lothar.com>**20110325232511
2818 Ignore-this: d5307faa6900f143193bfbe14e0f01a
2819]
2820[control.py: remove all uses of s.get_serverid()
2821warner@lothar.com**20110227011203
2822 Ignore-this: f80a787953bd7fa3d40e828bde00e855
2823]
2824[web: remove some uses of s.get_serverid(), not all
2825warner@lothar.com**20110227011159
2826 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
2827]
2828[immutable/downloader/fetcher.py: remove all get_serverid() calls
2829warner@lothar.com**20110227011156
2830 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
2831]
2832[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
2833warner@lothar.com**20110227011153
2834 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
2835 
2836 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
2837 _shares_from_server dict was being popped incorrectly (using shnum as the
2838 index instead of serverid). I'm still thinking through the consequences of
2839 this bug. It was probably benign and really hard to detect. I think it would
2840 cause us to incorrectly believe that we're pulling too many shares from a
2841 server, and thus prefer a different server rather than asking for a second
2842 share from the first server. The diversity code is intended to spread out the
2843 number of shares simultaneously being requested from each server, but with
2844 this bug, it might be spreading out the total number of shares requested at
2845 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
2846 segment, so the effect doesn't last very long).
2847]
2848[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
2849warner@lothar.com**20110227011150
2850 Ignore-this: d8d56dd8e7b280792b40105e13664554
2851 
2852 test_download.py: create+check MyShare instances better, make sure they share
2853 Server objects, now that finder.py cares
2854]
2855[immutable/downloader/finder.py: reduce use of get_serverid(), one left
2856warner@lothar.com**20110227011146
2857 Ignore-this: 5785be173b491ae8a78faf5142892020
2858]
2859[immutable/offloaded.py: reduce use of get_serverid() a bit more
2860warner@lothar.com**20110227011142
2861 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
2862]
2863[immutable/upload.py: reduce use of get_serverid()
2864warner@lothar.com**20110227011138
2865 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
2866]
2867[immutable/checker.py: remove some uses of s.get_serverid(), not all
2868warner@lothar.com**20110227011134
2869 Ignore-this: e480a37efa9e94e8016d826c492f626e
2870]
2871[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
2872warner@lothar.com**20110227011132
2873 Ignore-this: 6078279ddf42b179996a4b53bee8c421
2874 MockIServer stubs
2875]
2876[upload.py: rearrange _make_trackers a bit, no behavior changes
2877warner@lothar.com**20110227011128
2878 Ignore-this: 296d4819e2af452b107177aef6ebb40f
2879]
2880[happinessutil.py: finally rename merge_peers to merge_servers
2881warner@lothar.com**20110227011124
2882 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
2883]
2884[test_upload.py: factor out FakeServerTracker
2885warner@lothar.com**20110227011120
2886 Ignore-this: 6c182cba90e908221099472cc159325b
2887]
2888[test_upload.py: server-vs-tracker cleanup
2889warner@lothar.com**20110227011115
2890 Ignore-this: 2915133be1a3ba456e8603885437e03
2891]
2892[happinessutil.py: server-vs-tracker cleanup
2893warner@lothar.com**20110227011111
2894 Ignore-this: b856c84033562d7d718cae7cb01085a9
2895]
2896[upload.py: more tracker-vs-server cleanup
2897warner@lothar.com**20110227011107
2898 Ignore-this: bb75ed2afef55e47c085b35def2de315
2899]
2900[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
2901warner@lothar.com**20110227011103
2902 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
2903]
2904[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
2905warner@lothar.com**20110227011100
2906 Ignore-this: 7ea858755cbe5896ac212a925840fe68
2907 
2908 No behavioral changes, just updating variable/method names and log messages.
2909 The effects outside these three files should be minimal: some exception
2910 messages changed (to say "server" instead of "peer"), and some internal class
2911 names were changed. A few things still use "peer" to minimize external
2912 changes, like UploadResults.timings["peer_selection"] and
2913 happinessutil.merge_peers, which can be changed later.
2914]
2915[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
2916warner@lothar.com**20110227011056
2917 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
2918]
2919[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
2920warner@lothar.com**20110227011051
2921 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
2922]
2923[test: increase timeout on a network test because Francois's ARM machine hit that timeout
2924zooko@zooko.com**20110317165909
2925 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
2926 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
2927]
2928[docs/configuration.rst: add a "Frontend Configuration" section
2929Brian Warner <warner@lothar.com>**20110222014323
2930 Ignore-this: 657018aa501fe4f0efef9851628444ca
2931 
2932 this points to docs/frontends/*.rst, which were previously underlinked
2933]
2934[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
2935"Brian Warner <warner@lothar.com>"**20110221061544
2936 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
2937]
2938[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
2939david-sarah@jacaranda.org**20110221015817
2940 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
2941]
2942[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
2943david-sarah@jacaranda.org**20110221020125
2944 Ignore-this: b0744ed58f161bf188e037bad077fc48
2945]
2946[Refactor StorageFarmBroker handling of servers
2947Brian Warner <warner@lothar.com>**20110221015804
2948 Ignore-this: 842144ed92f5717699b8f580eab32a51
2949 
2950 Pass around IServer instance instead of (peerid, rref) tuple. Replace
2951 "descriptor" with "server". Other replacements:
2952 
2953  get_all_servers -> get_connected_servers/get_known_servers
2954  get_servers_for_index -> get_servers_for_psi (now returns IServers)
2955 
2956 This change still needs to be pushed further down: lots of code is now
2957 getting the IServer and then distributing (peerid, rref) internally.
2958 Instead, it ought to distribute the IServer internally and delay
2959 extracting a serverid or rref until the last moment.
2960 
2961 no_network.py was updated to retain parallelism.
2962]
2963[TAG allmydata-tahoe-1.8.2
2964warner@lothar.com**20110131020101]
2965Patch bundle hash:
2966945b397d86f7c0286ce8f587b272b5193e005ed5