Ticket #999: checkpoint11.darcs.patch

File checkpoint11.darcs.patch, 149.4 KB (added by arch_o_median, at 2011-07-08T21:39:13Z)

(JACP) Just Another CheckPoint?
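The patch below builds tests that intercept every filesystem call with mock, so the storage-server code under test never touches a real disk. A minimal, self-contained sketch of that technique follows — note the assumptions: it uses Python 3 spellings (`builtins`, `io.StringIO`) rather than the patch's Python 2 `__builtin__`/`StringIO`, and `read_state` is a hypothetical stand-in for the code under test, not a function from this patch:

```python
import io
from unittest import mock  # the patch's tests use the standalone 'mock' package

def read_state(path):
    # Hypothetical stand-in for server code that reads a state file.
    with open(path) as f:
        return f.read()

def fake_open(fname, mode='r'):
    # Mimics the call_open side_effect used throughout the patch: known
    # paths return an in-memory file, everything else "does not exist".
    if fname == 'testdir/lease_checker.history':
        return io.StringIO('history-contents')
    raise IOError(2, "No such file or directory: %r" % fname)

# Patch the builtin open() so read_state() never touches the disk.
with mock.patch('builtins.open', side_effect=fake_open):
    assert read_state('testdir/lease_checker.history') == 'history-contents'
```

The tests in the patch do the same thing via the `@mock.patch('__builtin__.open')` decorator (the Python 2 spelling) and then assert on the mock's recorded calls.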

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy not for production

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The NullBackend is necessary to test unlimited space in a backend.  It is a mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

New patches:

[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents was generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether the server correctly writes an immutable share. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        print bs
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 sloppy not for production
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass
 
 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this
 
-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446
 
     minimum_cycle_time = 60*60 # we don't need this more than once an hour
 
-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes
 
     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.
 
     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:
 
     Space recovered during this cycle-so-far:
      actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day
 
-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)
 
     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"
 
     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87
 
     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)
 
hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return
 
 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service
 
 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
 
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                   expiration_enabled, expiration_mode,
+                                   expiration_override_lease_duration,
+                                   expiration_cutoff_date,
+                                   expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 143
     name = 'storage'
     LeaseCheckerClass = LeaseCheckingCrawler
 
-    def __init__(self, storedir, nodeid, reserved_space=0,
-                 discard_storage=False, readonly_storage=False,
+    def __init__(self, nodeid, backend, reserved_space=0,
+                 readonly_storage=False,
                  stats_provider=None,
                  expiration_enabled=False,
                  expiration_mode="age",
hunk ./src/allmydata/storage/server.py 155
         assert isinstance(nodeid, str)
         assert len(nodeid) == 20
         self.my_nodeid = nodeid
-        self.storedir = storedir
-        sharedir = os.path.join(storedir, "shares")
-        fileutil.make_dirs(sharedir)
-        self.sharedir = sharedir
-        # we don't actually create the corruption-advisory dir until necessary
-        self.corruption_advisory_dir = os.path.join(storedir,
-                                                    "corruption-advisories")
-        self.reserved_space = int(reserved_space)
-        self.no_storage = discard_storage
-        self.readonly_storage = readonly_storage
         self.stats_provider = stats_provider
         if self.stats_provider:
             self.stats_provider.register_producer(self)
hunk ./src/allmydata/storage/server.py 158
-        self.incomingdir = os.path.join(sharedir, 'incoming')
-        self._clean_incomplete()
-        fileutil.make_dirs(self.incomingdir)
         self._active_writers = weakref.WeakKeyDictionary()
hunk ./src/allmydata/storage/server.py 159
+        self.backend = backend
+        self.backend.setServiceParent(self)
         log.msg("StorageServer created", facility="tahoe.storage")
 
hunk ./src/allmydata/storage/server.py 163
-        if reserved_space:
-            if self.get_available_space() is None:
-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
-                        umin="0wZ27w", level=log.UNUSUAL)
-
         self.latencies = {"allocate": [], # immutable
                           "write": [],
                           "close": [],
hunk ./src/allmydata/storage/server.py 174
                           "renew": [],
                           "cancel": [],
                           }
-        self.add_bucket_counter()
-
-        statefile = os.path.join(self.storedir, "lease_checker.state")
-        historyfile = os.path.join(self.storedir, "lease_checker.history")
-        klass = self.LeaseCheckerClass
-        self.lease_checker = klass(self, statefile, historyfile,
-                                   expiration_enabled, expiration_mode,
-                                   expiration_override_lease_duration,
-                                   expiration_cutoff_date,
-                                   expiration_sharetypes)
-        self.lease_checker.setServiceParent(self)
 
     def __repr__(self):
         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
hunk ./src/allmydata/storage/server.py 178
 
-    def add_bucket_counter(self):
-        statefile = os.path.join(self.storedir, "bucket_counter.state")
-        self.bucket_counter = BucketCountingCrawler(self, statefile)
-        self.bucket_counter.setServiceParent(self)
-
     def count(self, name, delta=1):
         if self.stats_provider:
             self.stats_provider.count("storage_server." + name, delta)
hunk ./src/allmydata/storage/server.py 233
             kwargs["facility"] = "tahoe.storage"
         return log.msg(*args, **kwargs)
 
-    def _clean_incomplete(self):
-        fileutil.rm_dir(self.incomingdir)
-
     def get_stats(self):
         # remember: RIStatsProvider requires that our return dict
         # contains numeric values.
hunk ./src/allmydata/storage/server.py 269
             stats['storage_server.total_bucket_count'] = bucket_count
         return stats
 
-    def get_available_space(self):
-        """Returns available space for share storage in bytes, or None if no
-        API to get this information is available."""
-
-        if self.readonly_storage:
-            return 0
-        return fileutil.get_available_space(self.storedir, self.reserved_space)
-
    def allocated_size(self):
         space = 0
         for bw in self._active_writers:
hunk ./src/allmydata/storage/server.py 276
         return space
 
     def remote_get_version(self):
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         if remaining_space is None:
             # We're on a platform that has no API to get disk stats.
             remaining_space = 2**64
hunk ./src/allmydata/storage/server.py 301
         self.count("allocate")
         alreadygot = set()
         bucketwriters = {} # k: shnum, v: BucketWriter
-        si_dir = storage_index_to_dir(storage_index)
-        si_s = si_b2a(storage_index)
 
hunk ./src/allmydata/storage/server.py 302
+        si_s = si_b2a(storage_index)
         log.msg("storage: allocate_buckets %s" % si_s)
 
         # in this implementation, the lease information (including secrets)
hunk ./src/allmydata/storage/server.py 316
 
         max_space_per_bucket = allocated_size
 
-        remaining_space = self.get_available_space()
+        remaining_space = self.backend.get_available_space()
         limited = remaining_space is not None
         if limited:
             # this is a bit conservative, since some of this allocated_size()
hunk ./src/allmydata/storage/server.py 329
         # they asked about: this will save them a lot of work. Add or update
         # leases for all of them: if they want us to hold shares for this
         # file, they'll want us to hold leases for this file.
-        for (shnum, fn) in self._get_bucket_shares(storage_index):
+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
             alreadygot.add(shnum)
             sf = ShareFile(fn)
             sf.add_or_renew_lease(lease_info)
hunk ./src/allmydata/storage/server.py 335
 
         for shnum in sharenums:
-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                      max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                 # great! we already have it. easy.
                 pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                 # Note that we don't create BucketWriters for shnums that
                 # have a partial share (in incoming/), so if a second upload
                 # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                 # uploader will use different storage servers.
                 pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
 
         self.add_latency("allocate", time.time() - start)
         return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
             self.stats_provider.count('storage_server.bytes_added', consumed_size)
         del self._active_writers[bw]
 
-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass
 
     def remote_get_buckets(self, storage_index):
         start = time.time()
hunk ./src/allmydata/storage/server.py 444
         si_s = si_b2a(storage_index)
         log.msg("storage: get_buckets %s" % si_s)
         bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
             bucketreaders[shnum] = BucketReader(self, filename,
                                                 storage_index, shnum)
         self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
 import mock
 
 # This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
 
 # The following share file contents was generated with
 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
 
 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
     @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """
 
         def call_open(fname, mode):
             if fname == 'testdir/bucket_counter.state':
hunk ./src/allmydata/test/test_backends.py 58
                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
             elif fname == 'testdir/lease_checker.history':
                 return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
         mockopen.side_effect = call_open
 
         # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 63
-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+        self.failIf(mocktime.called)
 
         # You passed!
 
hunk ./src/allmydata/test/test_backends.py 73
-class TestServer(unittest.TestCase, ReallyEqualMixin):
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
741+    @mock.patch('os.path.isdir')
742+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
743+        """ Write a new share. """
744+
745+        # Now begin the test.
746+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
747+        bs[0].remote_write(0, 'a')
748+        self.failIf(mockisdir.called)
749+        self.failIf(mocklistdir.called)
750+        self.failIf(mockopen.called)
751+        self.failIf(mockmkdir.called)
752+
753+    @mock.patch('os.path.exists')
754+    @mock.patch('os.path.getsize')
755+    @mock.patch('__builtin__.open')
756+    @mock.patch('os.listdir')
757+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
758+        """ This tests whether the code correctly finds and reads
759+        shares written out by old (Tahoe-LAFS <= v1.8.2)
760+        servers. There is a similar test in test_download, but that one
761+        is from the perspective of the client and exercises a deeper
762+        stack of code. This one is for exercising just the
763+        StorageServer object. """
764+
765+        # Now begin the test.
766+        bs = self.s.remote_get_buckets('teststorage_index')
767+
768+        self.failUnlessEqual(len(bs), 0)
769+        self.failIf(mocklistdir.called)
770+        self.failIf(mockopen.called)
771+        self.failIf(mockgetsize.called)
772+        self.failIf(mockexists.called)
773+
774+
775+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
776     @mock.patch('__builtin__.open')
777     def setUp(self, mockopen):
778         def call_open(fname, mode):
779hunk ./src/allmydata/test/test_backends.py 126
780                 return StringIO()
781         mockopen.side_effect = call_open
782 
783-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
784-
785+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
786 
787     @mock.patch('time.time')
788     @mock.patch('os.mkdir')
789hunk ./src/allmydata/test/test_backends.py 134
790     @mock.patch('os.listdir')
791     @mock.patch('os.path.isdir')
792     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
793-        """Handle a report of corruption."""
794+        """ Write a new share. """
795 
796         def call_listdir(dirname):
797             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
798hunk ./src/allmydata/test/test_backends.py 173
799         mockopen.side_effect = call_open
800         # Now begin the test.
801         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
802-        print bs
803         bs[0].remote_write(0, 'a')
804         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
805 
806hunk ./src/allmydata/test/test_backends.py 176
807-
808     @mock.patch('os.path.exists')
809     @mock.patch('os.path.getsize')
810     @mock.patch('__builtin__.open')
811hunk ./src/allmydata/test/test_backends.py 218
812 
813         self.failUnlessEqual(len(bs), 1)
814         b = bs[0]
815+        # This read should match exactly; the next two cases cover reads whose behavior is less obvious.
816         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
817         # If you try to read past the end you get as much data as is there.
818         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
819hunk ./src/allmydata/test/test_backends.py 224
820         # If you start reading past the end of the file you get the empty string.
821         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
822+
823+
824}
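The mocking strategy used throughout the hunks above — patch the OS-level entry points, run the code under test, then assert the mocks were never called — can be sketched standalone. This is an illustrative sketch using the stdlib `unittest.mock` (the patch itself uses the separate `mock` package); `use_no_filesystem` is a hypothetical stand-in for constructing a StorageServer with a null backend:

```python
from unittest import mock  # the patch's tests use the standalone "mock" package

def use_no_filesystem():
    # Stand-in for code that must complete without touching the real
    # filesystem (e.g. constructing a server with a null backend).
    return "constructed"

# Patch the filesystem entry points; if the code under test calls any of
# them, the mock records the call and the assertions below fail.
with mock.patch('os.mkdir') as mockmkdir, \
     mock.patch('os.listdir') as mocklistdir:
    result = use_no_filesystem()

assert result == "constructed"
assert not mockmkdir.called
assert not mocklistdir.called
```

The same pattern, written as stacked `@mock.patch` decorators, is what the tests above use; note that decorators apply bottom-up, so the innermost decorator supplies the first mock argument.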
825[a temp patch used as a snapshot
826wilcoxjg@gmail.com**20110626052732
827 Ignore-this: 95f05e314eaec870afa04c76d979aa44
828] {
829hunk ./docs/configuration.rst 637
830   [storage]
831   enabled = True
832   readonly = True
833-  sizelimit = 10000000000
834 
835 
836   [helper]
837hunk ./docs/garbage-collection.rst 16
838 
839 When a file or directory in the virtual filesystem is no longer referenced,
840 the space that its shares occupied on each storage server can be freed,
841-making room for other shares. Tahoe currently uses a garbage collection
842+making room for other shares. Tahoe uses a garbage collection
843 ("GC") mechanism to implement this space-reclamation process. Each share has
844 one or more "leases", which are managed by clients who want the
845 file/directory to be retained. The storage server accepts each share for a
846hunk ./docs/garbage-collection.rst 34
847 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
848 If lease renewal occurs quickly and with 100% reliability, then any renewal
849 time that is shorter than the lease duration will suffice, but a larger ratio
850-of duration-over-renewal-time will be more robust in the face of occasional
851+of lease duration to renewal time will be more robust in the face of occasional
852 delays or failures.
853 
854 The current recommended values for a small Tahoe grid are to renew the leases
855replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
856hunk ./src/allmydata/client.py 260
857             sharetypes.append("mutable")
858         expiration_sharetypes = tuple(sharetypes)
859 
860+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
861+            xyz
862+        xyz
863         ss = StorageServer(storedir, self.nodeid,
864                            reserved_space=reserved,
865                            discard_storage=discard,
866hunk ./src/allmydata/storage/crawler.py 234
867         f = open(tmpfile, "wb")
868         pickle.dump(self.state, f)
869         f.close()
870-        fileutil.move_into_place(tmpfile, self.statefile)
871+        fileutil.move_into_place(tmpfile, self.statefname)
872 
873     def startService(self):
874         # arrange things to look like we were just sleeping, so
875}
876[snapshot of progress on backend implementation (not suitable for trunk)
877wilcoxjg@gmail.com**20110626053244
878 Ignore-this: 50c764af791c2b99ada8289546806a0a
879] {
880adddir ./src/allmydata/storage/backends
881adddir ./src/allmydata/storage/backends/das
882move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
883adddir ./src/allmydata/storage/backends/null
884hunk ./src/allmydata/interfaces.py 270
885         store that on disk.
886         """
887 
888+class IStorageBackend(Interface):
889+    """
890+    Objects of this kind live on the server side and are used by the
891+    storage server object.
892+    """
893+    def get_available_space(self, reserved_space):
894+        """ Returns available space for share storage in bytes, or
895+        None if this information is not available or if the available
896+        space is unlimited.
897+
898+        If the backend is configured for read-only mode then this will
899+        return 0.
900+
901+        reserved_space is how many bytes to subtract from the answer, so
902+        you can pass how many bytes you would like to leave unused on this
903+        filesystem as reserved_space. """
904+
905+    def get_bucket_shares(self):
906+        """XXX"""
907+
908+    def get_share(self):
909+        """XXX"""
910+
911+    def make_bucket_writer(self):
912+        """XXX"""
913+
914+class IStorageBackendShare(Interface):
915+    """
 916+    This object may contain up to all of the share data.  It is intended
 917+    for lazy evaluation, so that in many use cases substantially less than
918+    all of the share data will be accessed.
919+    """
920+    def is_complete(self):
921+        """
922+        Returns the share state, or None if the share does not exist.
923+        """
924+
925 class IStorageBucketWriter(Interface):
926     """
927     Objects of this kind live on the client side.
928hunk ./src/allmydata/interfaces.py 2492
929 
930 class EmptyPathnameComponentError(Exception):
931     """The webapi disallows empty pathname components."""
932+
933+class IShareStore(Interface):
934+    pass
935+
936addfile ./src/allmydata/storage/backends/__init__.py
937addfile ./src/allmydata/storage/backends/das/__init__.py
938addfile ./src/allmydata/storage/backends/das/core.py
939hunk ./src/allmydata/storage/backends/das/core.py 1
940+from allmydata.interfaces import IStorageBackend
941+from allmydata.storage.backends.base import Backend
942+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
943+from allmydata.util.assertutil import precondition
944+
945+import os, re, weakref, struct, time
946+
947+from foolscap.api import Referenceable
948+from twisted.application import service
949+
950+from zope.interface import implements
951+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
952+from allmydata.util import fileutil, idlib, log, time_format
953+import allmydata # for __full_version__
954+
955+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
956+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
957+from allmydata.storage.lease import LeaseInfo
958+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
959+     create_mutable_sharefile
960+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
961+from allmydata.storage.crawler import FSBucketCountingCrawler
962+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
963+
964+from zope.interface import implements
965+
966+class DASCore(Backend):
967+    implements(IStorageBackend)
968+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
969+        Backend.__init__(self)
970+
971+        self._setup_storage(storedir, readonly, reserved_space)
972+        self._setup_corruption_advisory()
973+        self._setup_bucket_counter()
974+        self._setup_lease_checkerf(expiration_policy)
975+
976+    def _setup_storage(self, storedir, readonly, reserved_space):
977+        self.storedir = storedir
978+        self.readonly = readonly
979+        self.reserved_space = int(reserved_space)
980+        if self.reserved_space:
981+            if self.get_available_space() is None:
982+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
983+                        umid="0wZ27w", level=log.UNUSUAL)
984+
985+        self.sharedir = os.path.join(self.storedir, "shares")
986+        fileutil.make_dirs(self.sharedir)
987+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
988+        self._clean_incomplete()
989+
990+    def _clean_incomplete(self):
991+        fileutil.rm_dir(self.incomingdir)
992+        fileutil.make_dirs(self.incomingdir)
993+
994+    def _setup_corruption_advisory(self):
995+        # we don't actually create the corruption-advisory dir until necessary
996+        self.corruption_advisory_dir = os.path.join(self.storedir,
997+                                                    "corruption-advisories")
998+
999+    def _setup_bucket_counter(self):
1000+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1001+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1002+        self.bucket_counter.setServiceParent(self)
1003+
1004+    def _setup_lease_checkerf(self, expiration_policy):
1005+        statefile = os.path.join(self.storedir, "lease_checker.state")
1006+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1007+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1008+        self.lease_checker.setServiceParent(self)
1009+
1010+    def get_available_space(self):
1011+        if self.readonly:
1012+            return 0
1013+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1014+
1015+    def get_shares(self, storage_index):
1016+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1017+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1018+        try:
1019+            for f in os.listdir(finalstoragedir):
1020+                if NUM_RE.match(f):
1021+                    filename = os.path.join(finalstoragedir, f)
1022+                    yield FSBShare(filename, int(f))
1023+        except OSError:
1024+            # Commonly caused by there being no buckets at all.
1025+            pass
1026+       
1027+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1028+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1029+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1030+        return bw
1031+       
1032+
1033+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1034+# and share data. The share data is accessed by RIBucketWriter.write and
1035+# RIBucketReader.read . The lease information is not accessible through these
1036+# interfaces.
1037+
1038+# The share file has the following layout:
1039+#  0x00: share file version number, four bytes, current version is 1
1040+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1041+#  0x08: number of leases, four bytes big-endian
1042+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1043+#  A+0x0c = B: first lease. Lease format is:
1044+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1045+#   B+0x04: renew secret, 32 bytes (SHA256)
1046+#   B+0x24: cancel secret, 32 bytes (SHA256)
1047+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1048+#   B+0x48: next lease, or end of record
1049+
1050+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1051+# but it is still filled in by storage servers in case the storage server
1052+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1053+# share file is moved from one storage server to another. The value stored in
1054+# this field is truncated, so if the actual share data length is >= 2**32,
1055+# then the value stored in this field will be the actual share data length
1056+# modulo 2**32.
1057+
1058+class ImmutableShare:
1059+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1060+    sharetype = "immutable"
1061+
1062+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1063+        """ If max_size is not None then I won't allow more than
1064+        max_size to be written to me. If create=True then max_size
1065+        must not be None. """
1066+        precondition((max_size is not None) or (not create), max_size, create)
1067+        self.shnum = shnum
1068+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1069+        self._max_size = max_size
1070+        if create:
1071+            # touch the file, so later callers will see that we're working on
1072+            # it. Also construct the metadata.
1073+            assert not os.path.exists(self.fname)
1074+            fileutil.make_dirs(os.path.dirname(self.fname))
1075+            f = open(self.fname, 'wb')
1076+            # The second field -- the four-byte share data length -- is no
1077+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1078+            # there in case someone downgrades a storage server from >=
1079+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1080+            # server to another, etc. We do saturation -- a share data length
1081+            # larger than 2**32-1 (what can fit into the field) is marked as
1082+            # the largest length that can fit into the field. That way, even
1083+            # if this does happen, the old < v1.3.0 server will still allow
1084+            # clients to read the first part of the share.
1085+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1086+            f.close()
1087+            self._lease_offset = max_size + 0x0c
1088+            self._num_leases = 0
1089+        else:
1090+            f = open(self.fname, 'rb')
1091+            filesize = os.path.getsize(self.fname)
1092+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1093+            f.close()
1094+            if version != 1:
1095+                msg = "sharefile %s had version %d but we wanted 1" % \
1096+                      (self.fname, version)
1097+                raise UnknownImmutableContainerVersionError(msg)
1098+            self._num_leases = num_leases
1099+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1100+        self._data_offset = 0xc
1101+
1102+    def unlink(self):
1103+        os.unlink(self.fname)
1104+
1105+    def read_share_data(self, offset, length):
1106+        precondition(offset >= 0)
1107+        # Reads beyond the end of the data are truncated. Reads that start
1108+        # beyond the end of the data return an empty string.
1109+        seekpos = self._data_offset+offset
1110+        fsize = os.path.getsize(self.fname)
1111+        actuallength = max(0, min(length, fsize-seekpos))
1112+        if actuallength == 0:
1113+            return ""
1114+        f = open(self.fname, 'rb')
1115+        f.seek(seekpos)
1116+        return f.read(actuallength)
1117+
1118+    def write_share_data(self, offset, data):
1119+        length = len(data)
1120+        precondition(offset >= 0, offset)
1121+        if self._max_size is not None and offset+length > self._max_size:
1122+            raise DataTooLargeError(self._max_size, offset, length)
1123+        f = open(self.fname, 'rb+')
1124+        real_offset = self._data_offset+offset
1125+        f.seek(real_offset)
1126+        assert f.tell() == real_offset
1127+        f.write(data)
1128+        f.close()
1129+
1130+    def _write_lease_record(self, f, lease_number, lease_info):
1131+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1132+        f.seek(offset)
1133+        assert f.tell() == offset
1134+        f.write(lease_info.to_immutable_data())
1135+
1136+    def _read_num_leases(self, f):
1137+        f.seek(0x08)
1138+        (num_leases,) = struct.unpack(">L", f.read(4))
1139+        return num_leases
1140+
1141+    def _write_num_leases(self, f, num_leases):
1142+        f.seek(0x08)
1143+        f.write(struct.pack(">L", num_leases))
1144+
1145+    def _truncate_leases(self, f, num_leases):
1146+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1147+
1148+    def get_leases(self):
1149+        """Yields a LeaseInfo instance for all leases."""
1150+        f = open(self.fname, 'rb')
1151+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1152+        f.seek(self._lease_offset)
1153+        for i in range(num_leases):
1154+            data = f.read(self.LEASE_SIZE)
1155+            if data:
1156+                yield LeaseInfo().from_immutable_data(data)
1157+
1158+    def add_lease(self, lease_info):
1159+        f = open(self.fname, 'rb+')
1160+        num_leases = self._read_num_leases(f)
1161+        self._write_lease_record(f, num_leases, lease_info)
1162+        self._write_num_leases(f, num_leases+1)
1163+        f.close()
1164+
1165+    def renew_lease(self, renew_secret, new_expire_time):
1166+        for i,lease in enumerate(self.get_leases()):
1167+            if constant_time_compare(lease.renew_secret, renew_secret):
1168+                # yup. See if we need to update the owner time.
1169+                if new_expire_time > lease.expiration_time:
1170+                    # yes
1171+                    lease.expiration_time = new_expire_time
1172+                    f = open(self.fname, 'rb+')
1173+                    self._write_lease_record(f, i, lease)
1174+                    f.close()
1175+                return
1176+        raise IndexError("unable to renew non-existent lease")
1177+
1178+    def add_or_renew_lease(self, lease_info):
1179+        try:
1180+            self.renew_lease(lease_info.renew_secret,
1181+                             lease_info.expiration_time)
1182+        except IndexError:
1183+            self.add_lease(lease_info)
1184+
1185+
1186+    def cancel_lease(self, cancel_secret):
1187+        """Remove a lease with the given cancel_secret. If the last lease is
1188+        cancelled, the file will be removed. Return the number of bytes that
1189+        were freed (by truncating the list of leases, and possibly by
1190+        deleting the file). Raise IndexError if there was no lease with the
1191+        given cancel_secret.
1192+        """
1193+
1194+        leases = list(self.get_leases())
1195+        num_leases_removed = 0
1196+        for i,lease in enumerate(leases):
1197+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1198+                leases[i] = None
1199+                num_leases_removed += 1
1200+        if not num_leases_removed:
1201+            raise IndexError("unable to find matching lease to cancel")
1202+        if num_leases_removed:
1203+            # pack and write out the remaining leases. We write these out in
1204+            # the same order as they were added, so that if we crash while
1205+            # doing this, we won't lose any non-cancelled leases.
1206+            leases = [l for l in leases if l] # remove the cancelled leases
1207+            f = open(self.fname, 'rb+')
1208+            for i,lease in enumerate(leases):
1209+                self._write_lease_record(f, i, lease)
1210+            self._write_num_leases(f, len(leases))
1211+            self._truncate_leases(f, len(leases))
1212+            f.close()
1213+        space_freed = self.LEASE_SIZE * num_leases_removed
1214+        if not len(leases):
1215+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1216+            self.unlink()
1217+        return space_freed
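The layout comment above fully determines the v1 share-file header: three four-byte big-endian fields (version, saturated data length, lease count) followed by share data at offset 0x0c, with 72-byte lease records after the data. As a sanity check of that layout, here is a minimal standalone sketch of packing and unpacking the header with `struct` (not Tahoe's actual ShareFile code):

```python
import struct

# Three four-byte big-endian fields, per the layout comment:
#   0x00 version (currently 1), 0x04 data length (saturated), 0x08 lease count.
def pack_header(data_length, num_leases):
    # Saturate, as the patch does: lengths >= 2**32 are stored as 2**32-1.
    return struct.pack(">LLL", 1, min(2**32 - 1, data_length), num_leases)

def unpack_header(header):
    return struct.unpack(">LLL", header[:0x0c])  # (version, length, num_leases)

hdr = pack_header(2**40, 0)  # oversized length saturates at 2**32-1
version, length, leases = unpack_header(hdr)
```

Each lease record is `struct.calcsize(">L32s32sL")` = 72 bytes (owner number, renew secret, cancel secret, expiration time), matching `ImmutableShare.LEASE_SIZE` above.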
1218hunk ./src/allmydata/storage/backends/das/expirer.py 2
1219 import time, os, pickle, struct
1220-from allmydata.storage.crawler import ShareCrawler
1221-from allmydata.storage.shares import get_share_file
1222+from allmydata.storage.crawler import FSShareCrawler
1223 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1224      UnknownImmutableContainerVersionError
1225 from twisted.python import log as twlog
1226hunk ./src/allmydata/storage/backends/das/expirer.py 7
1227 
1228-class LeaseCheckingCrawler(ShareCrawler):
1229+class FSLeaseCheckingCrawler(FSShareCrawler):
1230     """I examine the leases on all shares, determining which are still valid
1231     and which have expired. I can remove the expired leases (if so
1232     configured), and the share will be deleted when the last lease is
1233hunk ./src/allmydata/storage/backends/das/expirer.py 50
1234     slow_start = 360 # wait 6 minutes after startup
1235     minimum_cycle_time = 12*60*60 # not more than twice per day
1236 
1237-    def __init__(self, statefile, historyfile,
1238-                 expiration_enabled, mode,
1239-                 override_lease_duration, # used if expiration_mode=="age"
1240-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1241-                 sharetypes):
1242+    def __init__(self, statefile, historyfile, expiration_policy):
1243         self.historyfile = historyfile
1244hunk ./src/allmydata/storage/backends/das/expirer.py 52
1245-        self.expiration_enabled = expiration_enabled
1246-        self.mode = mode
1247+        self.expiration_enabled = expiration_policy['enabled']
1248+        self.mode = expiration_policy['mode']
1249         self.override_lease_duration = None
1250         self.cutoff_date = None
1251         if self.mode == "age":
1252hunk ./src/allmydata/storage/backends/das/expirer.py 57
1253-            assert isinstance(override_lease_duration, (int, type(None)))
1254-            self.override_lease_duration = override_lease_duration # seconds
1255+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1256+            self.override_lease_duration = expiration_policy['override_lease_duration']# seconds
1257         elif self.mode == "cutoff-date":
1258hunk ./src/allmydata/storage/backends/das/expirer.py 60
1259-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1260+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1261             assert cutoff_date is not None
1262hunk ./src/allmydata/storage/backends/das/expirer.py 62
1263-            self.cutoff_date = cutoff_date
1264+            self.cutoff_date = expiration_policy['cutoff_date']
1265         else:
1266hunk ./src/allmydata/storage/backends/das/expirer.py 64
1267-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1268-        self.sharetypes_to_expire = sharetypes
1269-        ShareCrawler.__init__(self, statefile)
1270+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1271+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1272+        FSShareCrawler.__init__(self, statefile)
1273 
1274     def add_initial_state(self):
1275         # we fill ["cycle-to-date"] here (even though they will be reset in
1276hunk ./src/allmydata/storage/backends/das/expirer.py 156
1277 
1278     def process_share(self, sharefilename):
1279         # first, find out what kind of a share it is
1280-        sf = get_share_file(sharefilename)
1281+        f = open(sharefilename, "rb")
1282+        prefix = f.read(32)
1283+        f.close()
1284+        if prefix == MutableShareFile.MAGIC:
1285+            sf = MutableShareFile(sharefilename)
1286+        else:
1287+            # otherwise assume it's immutable
1288+            sf = FSBShare(sharefilename)
1289         sharetype = sf.sharetype
1290         now = time.time()
1291         s = self.stat(sharefilename)
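The hunks above replace the crawler's five positional expiration arguments with a single `expiration_policy` dict. A standalone sketch of the validation logic implied by the new constructor — `validate_expiration_policy` is a hypothetical helper, not part of the patch:

```python
def validate_expiration_policy(policy):
    # Mirrors the checks in FSLeaseCheckingCrawler.__init__: the mode
    # selects which extra key is required and how it is type-checked.
    if policy['mode'] == "age":
        assert isinstance(policy['override_lease_duration'], (int, type(None)))
    elif policy['mode'] == "cutoff-date":
        assert isinstance(policy['cutoff_date'], int)  # seconds-since-epoch
    else:
        raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'"
                         % policy['mode'])
    return policy

policy = validate_expiration_policy({
    'enabled': True,
    'mode': 'age',
    'override_lease_duration': 31 * 24 * 60 * 60,  # 31 days, in seconds
    'sharetypes': ('immutable', 'mutable'),
})
```

Collecting the policy in one dict keeps the crawler's signature stable if more expiration knobs are added later; note the hunk at expirer.py line 60 still references the old `cutoff_date` local in its assert, which looks like a leftover from the refactor.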
1292addfile ./src/allmydata/storage/backends/null/__init__.py
1293addfile ./src/allmydata/storage/backends/null/core.py
1294hunk ./src/allmydata/storage/backends/null/core.py 1
1295+from allmydata.storage.backends.base import Backend
1296+
1297+class NullCore(Backend):
1298+    def __init__(self):
1299+        Backend.__init__(self)
1300+
1301+    def get_available_space(self):
1302+        return None
1303+
1304+    def get_shares(self, storage_index):
1305+        return set()
1306+
1307+    def get_share(self, storage_index, sharenum):
1308+        return None
1309+
1310+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1311+        return NullBucketWriter()
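NullCore above is a null-object backend: every operation succeeds while storing nothing, which is what lets the tests assert that no filesystem call ever happens, and what checkpoint8's note about testing unlimited space refers to. A standalone sketch of the pattern (class name is illustrative, not the patch's):

```python
class NullBackendSketch:
    """Accepts every request, retains nothing."""

    def get_available_space(self):
        return None          # None signals unlimited/unknown space

    def get_shares(self, storage_index):
        return set()         # nothing is ever stored, so nothing is found

    def get_share(self, storage_index, sharenum):
        return None          # no individual share exists either

backend = NullBackendSketch()
```

Because the null backend never touches `os` or `open`, a server wired to it can be exercised under `mock.patch` with all filesystem calls asserted unused, as in TestServerNullBackend above.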
1312hunk ./src/allmydata/storage/crawler.py 12
1313 class TimeSliceExceeded(Exception):
1314     pass
1315 
1316-class ShareCrawler(service.MultiService):
1317+class FSShareCrawler(service.MultiService):
1318     """A subcless of ShareCrawler is attached to a StorageServer, and
1319     periodically walks all of its shares, processing each one in some
1320     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1321hunk ./src/allmydata/storage/crawler.py 68
1322     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1323     minimum_cycle_time = 300 # don't run a cycle faster than this
1324 
1325-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1326+    def __init__(self, statefname, allowed_cpu_percentage=None):
1327         service.MultiService.__init__(self)
1328         if allowed_cpu_percentage is not None:
1329             self.allowed_cpu_percentage = allowed_cpu_percentage
1330hunk ./src/allmydata/storage/crawler.py 72
1331-        self.backend = backend
1332+        self.statefname = statefname
1333         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1334                          for i in range(2**10)]
1335         self.prefixes.sort()
1336hunk ./src/allmydata/storage/crawler.py 192
1337         #                            of the last bucket to be processed, or
1338         #                            None if we are sleeping between cycles
1339         try:
1340-            f = open(self.statefile, "rb")
1341+            f = open(self.statefname, "rb")
1342             state = pickle.load(f)
1343             f.close()
1344         except EnvironmentError:
1345hunk ./src/allmydata/storage/crawler.py 230
1346         else:
1347             last_complete_prefix = self.prefixes[lcpi]
1348         self.state["last-complete-prefix"] = last_complete_prefix
1349-        tmpfile = self.statefile + ".tmp"
1350+        tmpfile = self.statefname + ".tmp"
1351         f = open(tmpfile, "wb")
1352         pickle.dump(self.state, f)
1353         f.close()
1354hunk ./src/allmydata/storage/crawler.py 433
1355         pass
1356 
1357 
1358-class BucketCountingCrawler(ShareCrawler):
1359+class FSBucketCountingCrawler(FSShareCrawler):
1360     """I keep track of how many buckets are being managed by this server.
1361     This is equivalent to the number of distributed files and directories for
1362     which I am providing storage. The actual number of files+directories in
1363hunk ./src/allmydata/storage/crawler.py 446
1364 
1365     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1366 
1367-    def __init__(self, statefile, num_sample_prefixes=1):
1368-        ShareCrawler.__init__(self, statefile)
1369+    def __init__(self, statefname, num_sample_prefixes=1):
1370+        FSShareCrawler.__init__(self, statefname)
1371         self.num_sample_prefixes = num_sample_prefixes
1372 
1373     def add_initial_state(self):
1374hunk ./src/allmydata/storage/immutable.py 14
1375 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1376      DataTooLargeError
1377 
1378-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1379-# and share data. The share data is accessed by RIBucketWriter.write and
1380-# RIBucketReader.read . The lease information is not accessible through these
1381-# interfaces.
1382-
1383-# The share file has the following layout:
1384-#  0x00: share file version number, four bytes, current version is 1
1385-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1386-#  0x08: number of leases, four bytes big-endian
1387-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1388-#  A+0x0c = B: first lease. Lease format is:
1389-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1390-#   B+0x04: renew secret, 32 bytes (SHA256)
1391-#   B+0x24: cancel secret, 32 bytes (SHA256)
1392-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1393-#   B+0x48: next lease, or end of record
1394-
1395-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1396-# but it is still filled in by storage servers in case the storage server
1397-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1398-# share file is moved from one storage server to another. The value stored in
1399-# this field is truncated, so if the actual share data length is >= 2**32,
1400-# then the value stored in this field will be the actual share data length
1401-# modulo 2**32.
1402-
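The 12-byte header layout described above (and the saturation rule from Footnote 1) can be expressed directly with `struct`; this sketch is derived from the comment block, with helper names that are illustrative only:

```python
import struct

HEADER_FMT = ">LLL"   # version, share data length (saturated), lease count
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 0x0c bytes

def pack_header(data_len, num_leases=0, version=1):
    # Saturate the length field at 2**32 - 1, per Footnote 1: an old
    # (< v1.3.0) server reading this field will still serve the first
    # part of an oversized share.
    return struct.pack(HEADER_FMT, version,
                       min(2 ** 32 - 1, data_len), num_leases)

def unpack_header(header):
    return struct.unpack(HEADER_FMT, header)
```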
1403-class ShareFile:
1404-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1405-    sharetype = "immutable"
1406-
1407-    def __init__(self, filename, max_size=None, create=False):
1408-        """ If max_size is not None then I won't allow more than
1409-        max_size to be written to me. If create=True then max_size
1410-        must not be None. """
1411-        precondition((max_size is not None) or (not create), max_size, create)
1412-        self.home = filename
1413-        self._max_size = max_size
1414-        if create:
1415-            # touch the file, so later callers will see that we're working on
1416-            # it. Also construct the metadata.
1417-            assert not os.path.exists(self.home)
1418-            fileutil.make_dirs(os.path.dirname(self.home))
1419-            f = open(self.home, 'wb')
1420-            # The second field -- the four-byte share data length -- is no
1421-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1422-            # there in case someone downgrades a storage server from >=
1423-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1424-            # server to another, etc. We do saturation -- a share data length
1425-            # larger than 2**32-1 (what can fit into the field) is marked as
1426-            # the largest length that can fit into the field. That way, even
1427-            # if this does happen, the old < v1.3.0 server will still allow
1428-            # clients to read the first part of the share.
1429-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1430-            f.close()
1431-            self._lease_offset = max_size + 0x0c
1432-            self._num_leases = 0
1433-        else:
1434-            f = open(self.home, 'rb')
1435-            filesize = os.path.getsize(self.home)
1436-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1437-            f.close()
1438-            if version != 1:
1439-                msg = "sharefile %s had version %d but we wanted 1" % \
1440-                      (filename, version)
1441-                raise UnknownImmutableContainerVersionError(msg)
1442-            self._num_leases = num_leases
1443-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1444-        self._data_offset = 0xc
1445-
1446-    def unlink(self):
1447-        os.unlink(self.home)
1448-
1449-    def read_share_data(self, offset, length):
1450-        precondition(offset >= 0)
1451-        # Reads beyond the end of the data are truncated. Reads that start
1452-        # beyond the end of the data return an empty string.
1453-        seekpos = self._data_offset+offset
1454-        fsize = os.path.getsize(self.home)
1455-        actuallength = max(0, min(length, fsize-seekpos))
1456-        if actuallength == 0:
1457-            return ""
1458-        f = open(self.home, 'rb')
1459-        f.seek(seekpos)
1460-        return f.read(actuallength)
1461-
1462-    def write_share_data(self, offset, data):
1463-        length = len(data)
1464-        precondition(offset >= 0, offset)
1465-        if self._max_size is not None and offset+length > self._max_size:
1466-            raise DataTooLargeError(self._max_size, offset, length)
1467-        f = open(self.home, 'rb+')
1468-        real_offset = self._data_offset+offset
1469-        f.seek(real_offset)
1470-        assert f.tell() == real_offset
1471-        f.write(data)
1472-        f.close()
1473-
1474-    def _write_lease_record(self, f, lease_number, lease_info):
1475-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1476-        f.seek(offset)
1477-        assert f.tell() == offset
1478-        f.write(lease_info.to_immutable_data())
1479-
1480-    def _read_num_leases(self, f):
1481-        f.seek(0x08)
1482-        (num_leases,) = struct.unpack(">L", f.read(4))
1483-        return num_leases
1484-
1485-    def _write_num_leases(self, f, num_leases):
1486-        f.seek(0x08)
1487-        f.write(struct.pack(">L", num_leases))
1488-
1489-    def _truncate_leases(self, f, num_leases):
1490-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1491-
1492-    def get_leases(self):
1493-        """Yields a LeaseInfo instance for all leases."""
1494-        f = open(self.home, 'rb')
1495-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1496-        f.seek(self._lease_offset)
1497-        for i in range(num_leases):
1498-            data = f.read(self.LEASE_SIZE)
1499-            if data:
1500-                yield LeaseInfo().from_immutable_data(data)
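Each lease record is a fixed-size `">L32s32sL"` struct, matching `LEASE_SIZE` above and the layout comment earlier in the file. A small round-trip sketch (helper names are illustrative):

```python
import struct

LEASE_FMT = ">L32s32sL"          # owner, renew secret, cancel secret, expiry
LEASE_SIZE = struct.calcsize(LEASE_FMT)  # 4 + 32 + 32 + 4 = 72 bytes

def pack_lease(owner_num, renew_secret, cancel_secret, expiration_time):
    # The secrets are fixed 32-byte SHA256 outputs; expiry is
    # seconds-since-epoch, big-endian.
    return struct.pack(LEASE_FMT, owner_num, renew_secret,
                       cancel_secret, expiration_time)

def unpack_lease(record):
    return struct.unpack(LEASE_FMT, record)
```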
1501-
1502-    def add_lease(self, lease_info):
1503-        f = open(self.home, 'rb+')
1504-        num_leases = self._read_num_leases(f)
1505-        self._write_lease_record(f, num_leases, lease_info)
1506-        self._write_num_leases(f, num_leases+1)
1507-        f.close()
1508-
1509-    def renew_lease(self, renew_secret, new_expire_time):
1510-        for i,lease in enumerate(self.get_leases()):
1511-            if constant_time_compare(lease.renew_secret, renew_secret):
1512-                # yup. See if we need to update the owner time.
1513-                if new_expire_time > lease.expiration_time:
1514-                    # yes
1515-                    lease.expiration_time = new_expire_time
1516-                    f = open(self.home, 'rb+')
1517-                    self._write_lease_record(f, i, lease)
1518-                    f.close()
1519-                return
1520-        raise IndexError("unable to renew non-existent lease")
1521-
1522-    def add_or_renew_lease(self, lease_info):
1523-        try:
1524-            self.renew_lease(lease_info.renew_secret,
1525-                             lease_info.expiration_time)
1526-        except IndexError:
1527-            self.add_lease(lease_info)
1528-
1529-
1530-    def cancel_lease(self, cancel_secret):
1531-        """Remove a lease with the given cancel_secret. If the last lease is
1532-        cancelled, the file will be removed. Return the number of bytes that
1533-        were freed (by truncating the list of leases, and possibly by
1534-        deleting the file. Raise IndexError if there was no lease with the
1535-        given cancel_secret.
1536-        """
1537-
1538-        leases = list(self.get_leases())
1539-        num_leases_removed = 0
1540-        for i,lease in enumerate(leases):
1541-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1542-                leases[i] = None
1543-                num_leases_removed += 1
1544-        if not num_leases_removed:
1545-            raise IndexError("unable to find matching lease to cancel")
1546-        if num_leases_removed:
1547-            # pack and write out the remaining leases. We write these out in
1548-            # the same order as they were added, so that if we crash while
1549-            # doing this, we won't lose any non-cancelled leases.
1550-            leases = [l for l in leases if l] # remove the cancelled leases
1551-            f = open(self.home, 'rb+')
1552-            for i,lease in enumerate(leases):
1553-                self._write_lease_record(f, i, lease)
1554-            self._write_num_leases(f, len(leases))
1555-            self._truncate_leases(f, len(leases))
1556-            f.close()
1557-        space_freed = self.LEASE_SIZE * num_leases_removed
1558-        if not len(leases):
1559-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1560-            self.unlink()
1561-        return space_freed
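The cancellation logic above can be isolated from the file I/O: drop every lease whose cancel secret matches, preserve the order of the survivors, and raise `IndexError` when nothing matched. A sketch using plain tuples in place of `LeaseInfo` objects, and a plain `!=` where the real code uses `constant_time_compare` (both substitutions keep the sketch dependency-free):

```python
def filter_cancelled(leases, cancel_secret, lease_size):
    # leases: ordered list of (cancel_secret, payload) pairs. Keeping the
    # survivors in their original order mirrors the comment above: a crash
    # while rewriting the lease list must not lose a non-cancelled lease.
    kept = [l for l in leases if l[0] != cancel_secret]
    removed = len(leases) - len(kept)
    if removed == 0:
        raise IndexError("unable to find matching lease to cancel")
    return kept, lease_size * removed
```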
1562-class NullBucketWriter(Referenceable):
1563-    implements(RIBucketWriter)
1564-
1565-    def remote_write(self, offset, data):
1566-        return
1567-
1568 class BucketWriter(Referenceable):
1569     implements(RIBucketWriter)
1570 
1571hunk ./src/allmydata/storage/immutable.py 17
1572-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1573+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1574         self.ss = ss
1575hunk ./src/allmydata/storage/immutable.py 19
1576-        self.incominghome = incominghome
1577-        self.finalhome = finalhome
1578         self._max_size = max_size # don't allow the client to write more than this
1579         self._canary = canary
1580         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1581hunk ./src/allmydata/storage/immutable.py 24
1582         self.closed = False
1583         self.throw_out_all_data = False
1584-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1585+        self._sharefile = immutableshare
1586         # also, add our lease to the file now, so that other ones can be
1587         # added by simultaneous uploaders
1588         self._sharefile.add_lease(lease_info)
1589hunk ./src/allmydata/storage/server.py 16
1590 from allmydata.storage.lease import LeaseInfo
1591 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1592      create_mutable_sharefile
1593-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1594-from allmydata.storage.crawler import BucketCountingCrawler
1595-from allmydata.storage.expirer import LeaseCheckingCrawler
1596 
1597 from zope.interface import implements
1598 
1599hunk ./src/allmydata/storage/server.py 19
1600-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1601-# be started and stopped.
1602-class Backend(service.MultiService):
1603-    implements(IStatsProducer)
1604-    def __init__(self):
1605-        service.MultiService.__init__(self)
1606-
1607-    def get_bucket_shares(self):
1608-        """XXX"""
1609-        raise NotImplementedError
1610-
1611-    def get_share(self):
1612-        """XXX"""
1613-        raise NotImplementedError
1614-
1615-    def make_bucket_writer(self):
1616-        """XXX"""
1617-        raise NotImplementedError
1618-
1619-class NullBackend(Backend):
1620-    def __init__(self):
1621-        Backend.__init__(self)
1622-
1623-    def get_available_space(self):
1624-        return None
1625-
1626-    def get_bucket_shares(self, storage_index):
1627-        return set()
1628-
1629-    def get_share(self, storage_index, sharenum):
1630-        return None
1631-
1632-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1633-        return NullBucketWriter()
1634-
1635-class FSBackend(Backend):
1636-    def __init__(self, storedir, readonly=False, reserved_space=0):
1637-        Backend.__init__(self)
1638-
1639-        self._setup_storage(storedir, readonly, reserved_space)
1640-        self._setup_corruption_advisory()
1641-        self._setup_bucket_counter()
1642-        self._setup_lease_checkerf()
1643-
1644-    def _setup_storage(self, storedir, readonly, reserved_space):
1645-        self.storedir = storedir
1646-        self.readonly = readonly
1647-        self.reserved_space = int(reserved_space)
1648-        if self.reserved_space:
1649-            if self.get_available_space() is None:
1650-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1651-                        umid="0wZ27w", level=log.UNUSUAL)
1652-
1653-        self.sharedir = os.path.join(self.storedir, "shares")
1654-        fileutil.make_dirs(self.sharedir)
1655-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1656-        self._clean_incomplete()
1657-
1658-    def _clean_incomplete(self):
1659-        fileutil.rm_dir(self.incomingdir)
1660-        fileutil.make_dirs(self.incomingdir)
1661-
1662-    def _setup_corruption_advisory(self):
1663-        # we don't actually create the corruption-advisory dir until necessary
1664-        self.corruption_advisory_dir = os.path.join(self.storedir,
1665-                                                    "corruption-advisories")
1666-
1667-    def _setup_bucket_counter(self):
1668-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1669-        self.bucket_counter = BucketCountingCrawler(statefile)
1670-        self.bucket_counter.setServiceParent(self)
1671-
1672-    def _setup_lease_checkerf(self):
1673-        statefile = os.path.join(self.storedir, "lease_checker.state")
1674-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1675-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1676-                                   expiration_enabled, expiration_mode,
1677-                                   expiration_override_lease_duration,
1678-                                   expiration_cutoff_date,
1679-                                   expiration_sharetypes)
1680-        self.lease_checker.setServiceParent(self)
1681-
1682-    def get_available_space(self):
1683-        if self.readonly:
1684-            return 0
1685-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1686-
1687-    def get_bucket_shares(self, storage_index):
1688-        """Return a list of (shnum, pathname) tuples for files that hold
1689-        shares for this storage_index. In each tuple, 'shnum' will always be
1690-        the integer form of the last component of 'pathname'."""
1691-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1692-        try:
1693-            for f in os.listdir(storagedir):
1694-                if NUM_RE.match(f):
1695-                    filename = os.path.join(storagedir, f)
1696-                    yield (int(f), filename)
1697-        except OSError:
1698-            # Commonly caused by there being no buckets at all.
1699-            pass
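The share-enumeration pattern removed here (and reintroduced in the DASCore backend below) is simple enough to sketch on its own: list the bucket directory, keep only all-digit filenames, and treat a missing directory as "no buckets yet":

```python
import os
import re

NUM_RE = re.compile("^[0-9]+$")

def bucket_shares(storagedir):
    # Yield (shnum, pathname) for every share file under storagedir.
    # shnum is always the integer form of the filename.
    try:
        names = os.listdir(storagedir)
    except OSError:
        # Commonly caused by there being no buckets at all.
        return
    for f in names:
        if NUM_RE.match(f):
            yield (int(f), os.path.join(storagedir, f))
```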
1700-
1701 # storage/
1702 # storage/shares/incoming
1703 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1704hunk ./src/allmydata/storage/server.py 32
1705 # $SHARENUM matches this regex:
1706 NUM_RE=re.compile("^[0-9]+$")
1707 
1708-
1709-
1710 class StorageServer(service.MultiService, Referenceable):
1711     implements(RIStorageServer, IStatsProducer)
1712     name = 'storage'
1713hunk ./src/allmydata/storage/server.py 35
1714-    LeaseCheckerClass = LeaseCheckingCrawler
1715 
1716     def __init__(self, nodeid, backend, reserved_space=0,
1717                  readonly_storage=False,
1718hunk ./src/allmydata/storage/server.py 38
1719-                 stats_provider=None,
1720-                 expiration_enabled=False,
1721-                 expiration_mode="age",
1722-                 expiration_override_lease_duration=None,
1723-                 expiration_cutoff_date=None,
1724-                 expiration_sharetypes=("mutable", "immutable")):
1725+                 stats_provider=None):
1726         service.MultiService.__init__(self)
1727         assert isinstance(nodeid, str)
1728         assert len(nodeid) == 20
1729hunk ./src/allmydata/storage/server.py 217
1730         # they asked about: this will save them a lot of work. Add or update
1731         # leases for all of them: if they want us to hold shares for this
1732         # file, they'll want us to hold leases for this file.
1733-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1734-            alreadygot.add(shnum)
1735-            sf = ShareFile(fn)
1736-            sf.add_or_renew_lease(lease_info)
1737-
1738-        for shnum in sharenums:
1739-            share = self.backend.get_share(storage_index, shnum)
1740+        for share in self.backend.get_shares(storage_index):
1741+            alreadygot.add(share.shnum)
1742+            share.add_or_renew_lease(lease_info)
1743 
1744hunk ./src/allmydata/storage/server.py 221
1745-            if not share:
1746-                if (not limited) or (remaining_space >= max_space_per_bucket):
1747-                    # ok! we need to create the new share file.
1748-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1749-                                      max_space_per_bucket, lease_info, canary)
1750-                    bucketwriters[shnum] = bw
1751-                    self._active_writers[bw] = 1
1752-                    if limited:
1753-                        remaining_space -= max_space_per_bucket
1754-                else:
1755-                    # bummer! not enough space to accept this bucket
1756-                    pass
1757+        for shnum in (sharenums - alreadygot):
1758+            if (not limited) or (remaining_space >= max_space_per_bucket):
1759+                # XXX Should the following line occur in the storage server constructor instead? Here we create the new share file.
1760+                self.backend.set_storage_server(self)
1761+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1762+                                                     max_space_per_bucket, lease_info, canary)
1763+                bucketwriters[shnum] = bw
1764+                self._active_writers[bw] = 1
1765+                if limited:
1766+                    remaining_space -= max_space_per_bucket
1767 
1768hunk ./src/allmydata/storage/server.py 232
1769-            elif share.is_complete():
1770-                # great! we already have it. easy.
1771-                pass
1772-            elif not share.is_complete():
1773-                # Note that we don't create BucketWriters for shnums that
1774-                # have a partial share (in incoming/), so if a second upload
1775-                # occurs while the first is still in progress, the second
1776-                # uploader will use different storage servers.
1777-                pass
1778+        # XXX We should document the handling of partial (incoming/) shares here later.
1779 
1780         self.add_latency("allocate", time.time() - start)
1781         return alreadygot, bucketwriters
1782hunk ./src/allmydata/storage/server.py 238
1783 
1784     def _iter_share_files(self, storage_index):
1785-        for shnum, filename in self._get_bucket_shares(storage_index):
1786+        for shnum, filename in self._get_shares(storage_index):
1787             f = open(filename, 'rb')
1788             header = f.read(32)
1789             f.close()
1790hunk ./src/allmydata/storage/server.py 318
1791         si_s = si_b2a(storage_index)
1792         log.msg("storage: get_buckets %s" % si_s)
1793         bucketreaders = {} # k: sharenum, v: BucketReader
1794-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1795+        for shnum, filename in self.backend.get_shares(storage_index):
1796             bucketreaders[shnum] = BucketReader(self, filename,
1797                                                 storage_index, shnum)
1798         self.add_latency("get", time.time() - start)
1799hunk ./src/allmydata/storage/server.py 334
1800         # since all shares get the same lease data, we just grab the leases
1801         # from the first share
1802         try:
1803-            shnum, filename = self._get_bucket_shares(storage_index).next()
1804+            shnum, filename = self._get_shares(storage_index).next()
1805             sf = ShareFile(filename)
1806             return sf.get_leases()
1807         except StopIteration:
1808hunk ./src/allmydata/storage/shares.py 1
1809-#! /usr/bin/python
1810-
1811-from allmydata.storage.mutable import MutableShareFile
1812-from allmydata.storage.immutable import ShareFile
1813-
1814-def get_share_file(filename):
1815-    f = open(filename, "rb")
1816-    prefix = f.read(32)
1817-    f.close()
1818-    if prefix == MutableShareFile.MAGIC:
1819-        return MutableShareFile(filename)
1820-    # otherwise assume it's immutable
1821-    return ShareFile(filename)
1822-
1823rmfile ./src/allmydata/storage/shares.py
1824hunk ./src/allmydata/test/common_util.py 20
1825 
1826 def flip_one_bit(s, offset=0, size=None):
1827     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1828-    than offset+size. """
1829+    than offset+size. Return the new string. """
1830     if size is None:
1831         size=len(s)-offset
1832     i = randrange(offset, offset+size)
1833hunk ./src/allmydata/test/test_backends.py 7
1834 
1835 from allmydata.test.common_util import ReallyEqualMixin
1836 
1837-import mock
1838+import mock, os
1839 
1840 # This is the code that we're going to be testing.
1841hunk ./src/allmydata/test/test_backends.py 10
1842-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1843+from allmydata.storage.server import StorageServer
1844+
1845+from allmydata.storage.backends.das.core import DASCore
1846+from allmydata.storage.backends.null.core import NullCore
1847+
1848 
1849 # The following share file contents was generated with
1850 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1851hunk ./src/allmydata/test/test_backends.py 22
1852 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1853 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1854 
1855-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1856+tempdir = 'teststoredir'
1857+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1858+sharefname = os.path.join(sharedirname, '0')
1859 
1860 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1861     @mock.patch('time.time')
1862hunk ./src/allmydata/test/test_backends.py 58
1863         filesystem in only the prescribed ways. """
1864 
1865         def call_open(fname, mode):
1866-            if fname == 'testdir/bucket_counter.state':
1867-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1868-            elif fname == 'testdir/lease_checker.state':
1869-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1870-            elif fname == 'testdir/lease_checker.history':
1871+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1872+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1873+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1874+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1875+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1876                 return StringIO()
1877             else:
1878                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
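The `side_effect` pattern used by these tests — routing every `open()` through a dispatcher that either raises `IOError` or fails the test — can be sketched independently. This assumes Python 3's `unittest.mock` in place of the standalone `mock` package the tests import; the two share the same API:

```python
from unittest import mock
import os

tempdir = 'teststoredir'

def call_open(fname, mode):
    # Mimic a fresh store: the state file does not exist yet; any other
    # open() is an unexpected filesystem touch.
    if fname == os.path.join(tempdir, 'bucket_counter.state'):
        raise IOError(2, "No such file or directory: '%s'" % fname)
    raise AssertionError("unexpected open(%r, %r)" % (fname, mode))

def probe():
    # Patch the builtin open so code under test never reaches the real
    # filesystem; return the errno the mocked open raised.
    with mock.patch('builtins.open', side_effect=call_open):
        try:
            open(os.path.join(tempdir, 'bucket_counter.state'), 'rb')
        except IOError as e:
            return e.errno
```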
1879hunk ./src/allmydata/test/test_backends.py 124
1880     @mock.patch('__builtin__.open')
1881     def setUp(self, mockopen):
1882         def call_open(fname, mode):
1883-            if fname == 'testdir/bucket_counter.state':
1884-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1885-            elif fname == 'testdir/lease_checker.state':
1886-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1887-            elif fname == 'testdir/lease_checker.history':
1888+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1889+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1890+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1891+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1892+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1893                 return StringIO()
1894         mockopen.side_effect = call_open
1895hunk ./src/allmydata/test/test_backends.py 131
1896-
1897-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1898+        expiration_policy = {'enabled' : False,
1899+                             'mode' : 'age',
1900+                             'override_lease_duration' : None,
1901+                             'cutoff_date' : None,
1902+                             'sharetypes' : None}
1903+        testbackend = DASCore(tempdir, expiration_policy)
1904+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=testbackend)
1905 
1906     @mock.patch('time.time')
1907     @mock.patch('os.mkdir')
1908hunk ./src/allmydata/test/test_backends.py 148
1909         """ Write a new share. """
1910 
1911         def call_listdir(dirname):
1912-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1913-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1914+            self.failUnlessReallyEqual(dirname, sharedirname)
1915+            raise OSError(2, "No such file or directory: '%s'" % sharedirname)
1916 
1917         mocklistdir.side_effect = call_listdir
1918 
1919hunk ./src/allmydata/test/test_backends.py 178
1920 
1921         sharefile = MockFile()
1922         def call_open(fname, mode):
1923-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1924+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1925             return sharefile
1926 
1927         mockopen.side_effect = call_open
1928hunk ./src/allmydata/test/test_backends.py 200
1929         StorageServer object. """
1930 
1931         def call_listdir(dirname):
1932-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1933+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
1934             return ['0']
1935 
1936         mocklistdir.side_effect = call_listdir
1937}
1938[checkpoint patch
1939wilcoxjg@gmail.com**20110626165715
1940 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
1941] {
1942hunk ./src/allmydata/storage/backends/das/core.py 21
1943 from allmydata.storage.lease import LeaseInfo
1944 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1945      create_mutable_sharefile
1946-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1947+from allmydata.storage.immutable import BucketWriter, BucketReader
1948 from allmydata.storage.crawler import FSBucketCountingCrawler
1949 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1950 
1951hunk ./src/allmydata/storage/backends/das/core.py 27
1952 from zope.interface import implements
1953 
1954+# $SHARENUM matches this regex:
1955+NUM_RE=re.compile("^[0-9]+$")
1956+
1957 class DASCore(Backend):
1958     implements(IStorageBackend)
1959     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1960hunk ./src/allmydata/storage/backends/das/core.py 80
1961         return fileutil.get_available_space(self.storedir, self.reserved_space)
1962 
1963     def get_shares(self, storage_index):
1964-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1965+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
1966         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1967         try:
1968             for f in os.listdir(finalstoragedir):
1969hunk ./src/allmydata/storage/backends/das/core.py 86
1970                 if NUM_RE.match(f):
1971                     filename = os.path.join(finalstoragedir, f)
1972-                    yield FSBShare(filename, int(f))
1973+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
1974         except OSError:
1975             # Commonly caused by there being no buckets at all.
1976             pass
1977hunk ./src/allmydata/storage/backends/das/core.py 95
1978         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1979         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1980         return bw
1981+
1982+    def set_storage_server(self, ss):
1983+        self.ss = ss
1984         
1985 
1986 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
1987hunk ./src/allmydata/storage/server.py 29
1988 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
1989 # base-32 chars).
1990 
1991-# $SHARENUM matches this regex:
1992-NUM_RE=re.compile("^[0-9]+$")
1993 
1994 class StorageServer(service.MultiService, Referenceable):
1995     implements(RIStorageServer, IStatsProducer)
1996}
1997[checkpoint4
1998wilcoxjg@gmail.com**20110628202202
1999 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2000] {
2001hunk ./src/allmydata/storage/backends/das/core.py 96
2002         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2003         return bw
2004 
2005+    def make_bucket_reader(self, share):
2006+        return BucketReader(self.ss, share)
2007+
2008     def set_storage_server(self, ss):
2009         self.ss = ss
2010         
2011hunk ./src/allmydata/storage/backends/das/core.py 138
2012         must not be None. """
2013         precondition((max_size is not None) or (not create), max_size, create)
2014         self.shnum = shnum
2015+        self.storage_index = storageindex
2016         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2017         self._max_size = max_size
2018         if create:
2019hunk ./src/allmydata/storage/backends/das/core.py 173
2020             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2021         self._data_offset = 0xc
2022 
2023+    def get_shnum(self):
2024+        return self.shnum
2025+
2026     def unlink(self):
2027         os.unlink(self.fname)
2028 
2029hunk ./src/allmydata/storage/backends/null/core.py 2
2030 from allmydata.storage.backends.base import Backend
2031+from allmydata.storage.immutable import BucketWriter, BucketReader
2032 
2033 class NullCore(Backend):
2034     def __init__(self):
2035hunk ./src/allmydata/storage/backends/null/core.py 17
2036     def get_share(self, storage_index, sharenum):
2037         return None
2038 
2039-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2040-        return NullBucketWriter()
2041+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2042+       
2043+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2044+
2045+    def set_storage_server(self, ss):
2046+        self.ss = ss
2047+
2048+class ImmutableShare:
2049+    sharetype = "immutable"
2050+
2051+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2052+        """ If max_size is not None then I won't allow more than
2053+        max_size to be written to me. If create=True then max_size
2054+        must not be None. """
2055+        precondition((max_size is not None) or (not create), max_size, create)
2056+        self.shnum = shnum
2057+        self.storage_index = storageindex
2058+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2059+        self._max_size = max_size
2060+        if create:
2061+            # touch the file, so later callers will see that we're working on
2062+            # it. Also construct the metadata.
2063+            assert not os.path.exists(self.fname)
2064+            fileutil.make_dirs(os.path.dirname(self.fname))
2065+            f = open(self.fname, 'wb')
2066+            # The second field -- the four-byte share data length -- is no
2067+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2068+            # there in case someone downgrades a storage server from >=
2069+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2070+            # server to another, etc. We do saturation -- a share data length
2071+            # larger than 2**32-1 (what can fit into the field) is marked as
2072+            # the largest length that can fit into the field. That way, even
2073+            # if this does happen, the old < v1.3.0 server will still allow
2074+            # clients to read the first part of the share.
2075+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2076+            f.close()
2077+            self._lease_offset = max_size + 0x0c
2078+            self._num_leases = 0
2079+        else:
2080+            f = open(self.fname, 'rb')
2081+            filesize = os.path.getsize(self.fname)
2082+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2083+            f.close()
2084+            if version != 1:
2085+                msg = "sharefile %s had version %d but we wanted 1" % \
2086+                      (self.fname, version)
2087+                raise UnknownImmutableContainerVersionError(msg)
2088+            self._num_leases = num_leases
2089+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2090+        self._data_offset = 0xc
2091+
2092+    def get_shnum(self):
2093+        return self.shnum
2094+
2095+    def unlink(self):
2096+        os.unlink(self.fname)
2097+
2098+    def read_share_data(self, offset, length):
2099+        precondition(offset >= 0)
2100+        # Reads beyond the end of the data are truncated. Reads that start
2101+        # beyond the end of the data return an empty string.
2102+        seekpos = self._data_offset+offset
2103+        fsize = os.path.getsize(self.fname)
2104+        actuallength = max(0, min(length, fsize-seekpos))
2105+        if actuallength == 0:
2106+            return ""
2107+        f = open(self.fname, 'rb')
2108+        f.seek(seekpos)
2109+        return f.read(actuallength)
2110+
2111+    def write_share_data(self, offset, data):
2112+        length = len(data)
2113+        precondition(offset >= 0, offset)
2114+        if self._max_size is not None and offset+length > self._max_size:
2115+            raise DataTooLargeError(self._max_size, offset, length)
2116+        f = open(self.fname, 'rb+')
2117+        real_offset = self._data_offset+offset
2118+        f.seek(real_offset)
2119+        assert f.tell() == real_offset
2120+        f.write(data)
2121+        f.close()
2122+
2123+    def _write_lease_record(self, f, lease_number, lease_info):
2124+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2125+        f.seek(offset)
2126+        assert f.tell() == offset
2127+        f.write(lease_info.to_immutable_data())
2128+
2129+    def _read_num_leases(self, f):
2130+        f.seek(0x08)
2131+        (num_leases,) = struct.unpack(">L", f.read(4))
2132+        return num_leases
2133+
2134+    def _write_num_leases(self, f, num_leases):
2135+        f.seek(0x08)
2136+        f.write(struct.pack(">L", num_leases))
2137+
2138+    def _truncate_leases(self, f, num_leases):
2139+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2140+
2141+    def get_leases(self):
2142+        """Yields a LeaseInfo instance for all leases."""
2143+        f = open(self.fname, 'rb')
2144+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2145+        f.seek(self._lease_offset)
2146+        for i in range(num_leases):
2147+            data = f.read(self.LEASE_SIZE)
2148+            if data:
2149+                yield LeaseInfo().from_immutable_data(data)
2150+
2151+    def add_lease(self, lease_info):
2152+        f = open(self.fname, 'rb+')
2153+        num_leases = self._read_num_leases(f)
2154+        self._write_lease_record(f, num_leases, lease_info)
2155+        self._write_num_leases(f, num_leases+1)
2156+        f.close()
2157+
2158+    def renew_lease(self, renew_secret, new_expire_time):
2159+        for i,lease in enumerate(self.get_leases()):
2160+            if constant_time_compare(lease.renew_secret, renew_secret):
2161+                # yup. See if we need to update the owner time.
2162+                if new_expire_time > lease.expiration_time:
2163+                    # yes
2164+                    lease.expiration_time = new_expire_time
2165+                    f = open(self.fname, 'rb+')
2166+                    self._write_lease_record(f, i, lease)
2167+                    f.close()
2168+                return
2169+        raise IndexError("unable to renew non-existent lease")
2170+
2171+    def add_or_renew_lease(self, lease_info):
2172+        try:
2173+            self.renew_lease(lease_info.renew_secret,
2174+                             lease_info.expiration_time)
2175+        except IndexError:
2176+            self.add_lease(lease_info)
2177+
2178+
2179+    def cancel_lease(self, cancel_secret):
2180+        """Remove a lease with the given cancel_secret. If the last lease is
2181+        cancelled, the file will be removed. Return the number of bytes that
2182+        were freed (by truncating the list of leases, and possibly by
2183+        deleting the file). Raise IndexError if there was no lease with the
2184+        given cancel_secret.
2185+        """
2186+
2187+        leases = list(self.get_leases())
2188+        num_leases_removed = 0
2189+        for i,lease in enumerate(leases):
2190+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2191+                leases[i] = None
2192+                num_leases_removed += 1
2193+        if not num_leases_removed:
2194+            raise IndexError("unable to find matching lease to cancel")
2195+        if num_leases_removed:
2196+            # pack and write out the remaining leases. We write these out in
2197+            # the same order as they were added, so that if we crash while
2198+            # doing this, we won't lose any non-cancelled leases.
2199+            leases = [l for l in leases if l] # remove the cancelled leases
2200+            f = open(self.fname, 'rb+')
2201+            for i,lease in enumerate(leases):
2202+                self._write_lease_record(f, i, lease)
2203+            self._write_num_leases(f, len(leases))
2204+            self._truncate_leases(f, len(leases))
2205+            f.close()
2206+        space_freed = self.LEASE_SIZE * num_leases_removed
2207+        if not len(leases):
2208+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2209+            self.unlink()
2210+        return space_freed
2211hunk ./src/allmydata/storage/immutable.py 114
2212 class BucketReader(Referenceable):
2213     implements(RIBucketReader)
2214 
2215-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2216+    def __init__(self, ss, share):
2217         self.ss = ss
2218hunk ./src/allmydata/storage/immutable.py 116
2219-        self._share_file = ShareFile(sharefname)
2220-        self.storage_index = storage_index
2221-        self.shnum = shnum
2222+        self._share_file = share
2223+        self.storage_index = share.storage_index
2224+        self.shnum = share.shnum
2225 
2226     def __repr__(self):
2227         return "<%s %s %s>" % (self.__class__.__name__,
2228hunk ./src/allmydata/storage/server.py 316
2229         si_s = si_b2a(storage_index)
2230         log.msg("storage: get_buckets %s" % si_s)
2231         bucketreaders = {} # k: sharenum, v: BucketReader
2232-        for shnum, filename in self.backend.get_shares(storage_index):
2233-            bucketreaders[shnum] = BucketReader(self, filename,
2234-                                                storage_index, shnum)
2235+        self.backend.set_storage_server(self)
2236+        for share in self.backend.get_shares(storage_index):
2237+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2238         self.add_latency("get", time.time() - start)
2239         return bucketreaders
2240 
2241hunk ./src/allmydata/test/test_backends.py 25
2242 tempdir = 'teststoredir'
2243 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2244 sharefname = os.path.join(sharedirname, '0')
2245+expiration_policy = {'enabled' : False,
2246+                     'mode' : 'age',
2247+                     'override_lease_duration' : None,
2248+                     'cutoff_date' : None,
2249+                     'sharetypes' : None}
2250 
2251 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2252     @mock.patch('time.time')
2253hunk ./src/allmydata/test/test_backends.py 43
2254         tries to read or write to the file system. """
2255 
2256         # Now begin the test.
2257-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2258+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2259 
2260         self.failIf(mockisdir.called)
2261         self.failIf(mocklistdir.called)
2262hunk ./src/allmydata/test/test_backends.py 74
2263         mockopen.side_effect = call_open
2264 
2265         # Now begin the test.
2266-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2267+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2268 
2269         self.failIf(mockisdir.called)
2270         self.failIf(mocklistdir.called)
2271hunk ./src/allmydata/test/test_backends.py 86
2272 
2273 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2274     def setUp(self):
2275-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2276+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2277 
2278     @mock.patch('os.mkdir')
2279     @mock.patch('__builtin__.open')
2280hunk ./src/allmydata/test/test_backends.py 136
2281             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2282                 return StringIO()
2283         mockopen.side_effect = call_open
2284-        expiration_policy = {'enabled' : False,
2285-                             'mode' : 'age',
2286-                             'override_lease_duration' : None,
2287-                             'cutoff_date' : None,
2288-                             'sharetypes' : None}
2289         testbackend = DASCore(tempdir, expiration_policy)
2290         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2291 
2292}
2293[checkpoint5
2294wilcoxjg@gmail.com**20110705034626
2295 Ignore-this: 255780bd58299b0aa33c027e9d008262
2296] {
2297addfile ./src/allmydata/storage/backends/base.py
2298hunk ./src/allmydata/storage/backends/base.py 1
2299+from twisted.application import service
2300+
2301+class Backend(service.MultiService):
2302+    def __init__(self):
2303+        service.MultiService.__init__(self)
2304hunk ./src/allmydata/storage/backends/null/core.py 19
2305 
2306     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2307         
2308+        immutableshare = ImmutableShare()
2309         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2310 
2311     def set_storage_server(self, ss):
2312hunk ./src/allmydata/storage/backends/null/core.py 28
2313 class ImmutableShare:
2314     sharetype = "immutable"
2315 
2316-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2317+    def __init__(self):
2318         """ If max_size is not None then I won't allow more than
2319         max_size to be written to me. If create=True then max_size
2320         must not be None. """
2321hunk ./src/allmydata/storage/backends/null/core.py 32
2322-        precondition((max_size is not None) or (not create), max_size, create)
2323-        self.shnum = shnum
2324-        self.storage_index = storageindex
2325-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2326-        self._max_size = max_size
2327-        if create:
2328-            # touch the file, so later callers will see that we're working on
2329-            # it. Also construct the metadata.
2330-            assert not os.path.exists(self.fname)
2331-            fileutil.make_dirs(os.path.dirname(self.fname))
2332-            f = open(self.fname, 'wb')
2333-            # The second field -- the four-byte share data length -- is no
2334-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2335-            # there in case someone downgrades a storage server from >=
2336-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2337-            # server to another, etc. We do saturation -- a share data length
2338-            # larger than 2**32-1 (what can fit into the field) is marked as
2339-            # the largest length that can fit into the field. That way, even
2340-            # if this does happen, the old < v1.3.0 server will still allow
2341-            # clients to read the first part of the share.
2342-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2343-            f.close()
2344-            self._lease_offset = max_size + 0x0c
2345-            self._num_leases = 0
2346-        else:
2347-            f = open(self.fname, 'rb')
2348-            filesize = os.path.getsize(self.fname)
2349-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2350-            f.close()
2351-            if version != 1:
2352-                msg = "sharefile %s had version %d but we wanted 1" % \
2353-                      (self.fname, version)
2354-                raise UnknownImmutableContainerVersionError(msg)
2355-            self._num_leases = num_leases
2356-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2357-        self._data_offset = 0xc
2358+        pass
2359 
2360     def get_shnum(self):
2361         return self.shnum
2362hunk ./src/allmydata/storage/backends/null/core.py 54
2363         return f.read(actuallength)
2364 
2365     def write_share_data(self, offset, data):
2366-        length = len(data)
2367-        precondition(offset >= 0, offset)
2368-        if self._max_size is not None and offset+length > self._max_size:
2369-            raise DataTooLargeError(self._max_size, offset, length)
2370-        f = open(self.fname, 'rb+')
2371-        real_offset = self._data_offset+offset
2372-        f.seek(real_offset)
2373-        assert f.tell() == real_offset
2374-        f.write(data)
2375-        f.close()
2376+        pass
2377 
2378     def _write_lease_record(self, f, lease_number, lease_info):
2379         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2380hunk ./src/allmydata/storage/backends/null/core.py 84
2381             if data:
2382                 yield LeaseInfo().from_immutable_data(data)
2383 
2384-    def add_lease(self, lease_info):
2385-        f = open(self.fname, 'rb+')
2386-        num_leases = self._read_num_leases(f)
2387-        self._write_lease_record(f, num_leases, lease_info)
2388-        self._write_num_leases(f, num_leases+1)
2389-        f.close()
2390+    def add_lease(self, lease):
2391+        pass
2392 
2393     def renew_lease(self, renew_secret, new_expire_time):
2394         for i,lease in enumerate(self.get_leases()):
2395hunk ./src/allmydata/test/test_backends.py 32
2396                      'sharetypes' : None}
2397 
2398 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2399-    @mock.patch('time.time')
2400-    @mock.patch('os.mkdir')
2401-    @mock.patch('__builtin__.open')
2402-    @mock.patch('os.listdir')
2403-    @mock.patch('os.path.isdir')
2404-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2405-        """ This tests whether a server instance can be constructed
2406-        with a null backend. The server instance fails the test if it
2407-        tries to read or write to the file system. """
2408-
2409-        # Now begin the test.
2410-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2411-
2412-        self.failIf(mockisdir.called)
2413-        self.failIf(mocklistdir.called)
2414-        self.failIf(mockopen.called)
2415-        self.failIf(mockmkdir.called)
2416-
2417-        # You passed!
2418-
2419     @mock.patch('time.time')
2420     @mock.patch('os.mkdir')
2421     @mock.patch('__builtin__.open')
2422hunk ./src/allmydata/test/test_backends.py 53
2423                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2424         mockopen.side_effect = call_open
2425 
2426-        # Now begin the test.
2427-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2428-
2429-        self.failIf(mockisdir.called)
2430-        self.failIf(mocklistdir.called)
2431-        self.failIf(mockopen.called)
2432-        self.failIf(mockmkdir.called)
2433-        self.failIf(mocktime.called)
2434-
2435-        # You passed!
2436-
2437-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2438-    def setUp(self):
2439-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2440-
2441-    @mock.patch('os.mkdir')
2442-    @mock.patch('__builtin__.open')
2443-    @mock.patch('os.listdir')
2444-    @mock.patch('os.path.isdir')
2445-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2446-        """ Write a new share. """
2447-
2448-        # Now begin the test.
2449-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2450-        bs[0].remote_write(0, 'a')
2451-        self.failIf(mockisdir.called)
2452-        self.failIf(mocklistdir.called)
2453-        self.failIf(mockopen.called)
2454-        self.failIf(mockmkdir.called)
2455+        def call_isdir(fname):
2456+            if fname == os.path.join(tempdir,'shares'):
2457+                return True
2458+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2459+                return True
2460+            else:
2461+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2462+        mockisdir.side_effect = call_isdir
2463 
2464hunk ./src/allmydata/test/test_backends.py 62
2465-    @mock.patch('os.path.exists')
2466-    @mock.patch('os.path.getsize')
2467-    @mock.patch('__builtin__.open')
2468-    @mock.patch('os.listdir')
2469-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2470-        """ This tests whether the code correctly finds and reads
2471-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2472-        servers. There is a similar test in test_download, but that one
2473-        is from the perspective of the client and exercises a deeper
2474-        stack of code. This one is for exercising just the
2475-        StorageServer object. """
2476+        def call_mkdir(fname, mode):
2477+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2478+            self.failUnlessEqual(0777, mode)
2479+            if fname == tempdir:
2480+                return None
2481+            elif fname == os.path.join(tempdir,'shares'):
2482+                return None
2483+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2484+                return None
2485+            else:
2486+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2487+        mockmkdir.side_effect = call_mkdir
2488 
2489         # Now begin the test.
2490hunk ./src/allmydata/test/test_backends.py 76
2491-        bs = self.s.remote_get_buckets('teststorage_index')
2492+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2493 
2494hunk ./src/allmydata/test/test_backends.py 78
2495-        self.failUnlessEqual(len(bs), 0)
2496-        self.failIf(mocklistdir.called)
2497-        self.failIf(mockopen.called)
2498-        self.failIf(mockgetsize.called)
2499-        self.failIf(mockexists.called)
2500+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2501 
2502 
2503 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2504hunk ./src/allmydata/test/test_backends.py 193
2505         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2506 
2507 
2508+
2509+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2510+    @mock.patch('time.time')
2511+    @mock.patch('os.mkdir')
2512+    @mock.patch('__builtin__.open')
2513+    @mock.patch('os.listdir')
2514+    @mock.patch('os.path.isdir')
2515+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2516+        """ This tests whether a file system backend instance can be
2517+        constructed. To pass the test, it has to use the
2518+        filesystem in only the prescribed ways. """
2519+
2520+        def call_open(fname, mode):
2521+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2522+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2523+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2524+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2525+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2526+                return StringIO()
2527+            else:
2528+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2529+        mockopen.side_effect = call_open
2530+
2531+        def call_isdir(fname):
2532+            if fname == os.path.join(tempdir,'shares'):
2533+                return True
2534+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2535+                return True
2536+            else:
2537+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2538+        mockisdir.side_effect = call_isdir
2539+
2540+        def call_mkdir(fname, mode):
2541+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2542+            self.failUnlessEqual(0777, mode)
2543+            if fname == tempdir:
2544+                return None
2545+            elif fname == os.path.join(tempdir,'shares'):
2546+                return None
2547+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2548+                return None
2549+            else:
2550+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2551+        mockmkdir.side_effect = call_mkdir
2552+
2553+        # Now begin the test.
2554+        DASCore('teststoredir', expiration_policy)
2555+
2556+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2557}
2558[checkpoint 6
2559wilcoxjg@gmail.com**20110706190824
2560 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2561] {
2562hunk ./src/allmydata/interfaces.py 100
2563                          renew_secret=LeaseRenewSecret,
2564                          cancel_secret=LeaseCancelSecret,
2565                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2566-                         allocated_size=Offset, canary=Referenceable):
2567+                         allocated_size=Offset,
2568+                         canary=Referenceable):
2569         """
2570hunk ./src/allmydata/interfaces.py 103
2571-        @param storage_index: the index of the bucket to be created or
2572+        @param storage_index: the index of the shares to be created or
2573                               increfed.
2574hunk ./src/allmydata/interfaces.py 105
2575-        @param sharenums: these are the share numbers (probably between 0 and
2576-                          99) that the sender is proposing to store on this
2577-                          server.
2578-        @param renew_secret: This is the secret used to protect bucket refresh
2579+        @param renew_secret: This is the secret used to protect shares refresh
2580                              This secret is generated by the client and
2581                              stored for later comparison by the server. Each
2582                              server is given a different secret.
2583hunk ./src/allmydata/interfaces.py 109
2584-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2585-        @param canary: If the canary is lost before close(), the bucket is
2586+        @param cancel_secret: Like renew_secret, but protects shares decref.
2587+        @param sharenums: these are the share numbers (probably between 0 and
2588+                          99) that the sender is proposing to store on this
2589+                          server.
2590+        @param allocated_size: XXX The size of the shares the client wishes to store.
2591+        @param canary: If the canary is lost before close(), the shares are
2592                        deleted.
2593hunk ./src/allmydata/interfaces.py 116
2594+
2595         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2596                  already have and allocated is what we hereby agree to accept.
2597                  New leases are added for shares in both lists.
2598hunk ./src/allmydata/interfaces.py 128
2599                   renew_secret=LeaseRenewSecret,
2600                   cancel_secret=LeaseCancelSecret):
2601         """
2602-        Add a new lease on the given bucket. If the renew_secret matches an
2603+        Add a new lease on the given shares. If the renew_secret matches an
2604         existing lease, that lease will be renewed instead. If there is no
2605         bucket for the given storage_index, return silently. (note that in
2606         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2607hunk ./src/allmydata/storage/server.py 17
2608 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2609      create_mutable_sharefile
2610 
2611-from zope.interface import implements
2612-
2613 # storage/
2614 # storage/shares/incoming
2615 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2616hunk ./src/allmydata/test/test_backends.py 6
2617 from StringIO import StringIO
2618 
2619 from allmydata.test.common_util import ReallyEqualMixin
2620+from allmydata.util.assertutil import _assert
2621 
2622 import mock, os
2623 
2624hunk ./src/allmydata/test/test_backends.py 92
2625                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2626             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2627                 return StringIO()
2628+            else:
2629+                _assert(False, "The tester code doesn't recognize this case.") 
2630+
2631         mockopen.side_effect = call_open
2632         testbackend = DASCore(tempdir, expiration_policy)
2633         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2634hunk ./src/allmydata/test/test_backends.py 109
2635 
2636         def call_listdir(dirname):
2637             self.failUnlessReallyEqual(dirname, sharedirname)
2638-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2639+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2640 
2641         mocklistdir.side_effect = call_listdir
2642 
2643hunk ./src/allmydata/test/test_backends.py 113
2644+        def call_isdir(dirname):
2645+            self.failUnlessReallyEqual(dirname, sharedirname)
2646+            return True
2647+
2648+        mockisdir.side_effect = call_isdir
2649+
2650+        def call_mkdir(dirname, permissions):
2651+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2652+                self.fail("Server with FS backend tried to mkdir '%s'" % (dirname,))
2653+            else:
2654+                return True
2655+
2656+        mockmkdir.side_effect = call_mkdir
2657+
2658         class MockFile:
2659             def __init__(self):
2660                 self.buffer = ''
2661hunk ./src/allmydata/test/test_backends.py 156
2662             return sharefile
2663 
2664         mockopen.side_effect = call_open
2665+
2666         # Now begin the test.
2667         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2668         bs[0].remote_write(0, 'a')
2669hunk ./src/allmydata/test/test_backends.py 161
2670         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2671+       
2672+        # Now test the allocated_size method.
2673+        spaceint = self.s.allocated_size()
2674 
2675     @mock.patch('os.path.exists')
2676     @mock.patch('os.path.getsize')
2677}
2678[checkpoint 7
2679wilcoxjg@gmail.com**20110706200820
2680 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2681] hunk ./src/allmydata/test/test_backends.py 164
2682         
2683         # Now test the allocated_size method.
2684         spaceint = self.s.allocated_size()
2685+        self.failUnlessReallyEqual(spaceint, 1)
2686 
2687     @mock.patch('os.path.exists')
2688     @mock.patch('os.path.getsize')
2689[checkpoint8
2690wilcoxjg@gmail.com**20110706223126
2691 Ignore-this: 97336180883cb798b16f15411179f827
2692   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2693] hunk ./src/allmydata/test/test_backends.py 32
2694                      'cutoff_date' : None,
2695                      'sharetypes' : None}
2696 
2697+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2698+    def setUp(self):
2699+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2700+
2701+    @mock.patch('os.mkdir')
2702+    @mock.patch('__builtin__.open')
2703+    @mock.patch('os.listdir')
2704+    @mock.patch('os.path.isdir')
2705+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2706+        """ Write a new share. """
2707+
2708+        # Now begin the test.
2709+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2710+        bs[0].remote_write(0, 'a')
2711+        self.failIf(mockisdir.called)
2712+        self.failIf(mocklistdir.called)
2713+        self.failIf(mockopen.called)
2714+        self.failIf(mockmkdir.called)
2715+
2716 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2717     @mock.patch('time.time')
2718     @mock.patch('os.mkdir')
2719[checkpoint 9
2720wilcoxjg@gmail.com**20110707042942
2721 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2722] {
2723hunk ./src/allmydata/storage/backends/das/core.py 88
2724                     filename = os.path.join(finalstoragedir, f)
2725                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2726         except OSError:
2727-            # Commonly caused by there being no buckets at all.
2728+            # Commonly caused by there being no shares at all.
2729             pass
2730         
2731     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2732hunk ./src/allmydata/storage/backends/das/core.py 141
2733         self.storage_index = storageindex
2734         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2735         self._max_size = max_size
2736+        self.incomingdir = os.path.join(sharedir, 'incoming')
2737+        si_dir = storage_index_to_dir(storageindex)
2738+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2739+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2740         if create:
2741             # touch the file, so later callers will see that we're working on
2742             # it. Also construct the metadata.
2743hunk ./src/allmydata/storage/backends/das/core.py 177
2744             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2745         self._data_offset = 0xc
2746 
2747+    def close(self):
2748+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2749+        fileutil.rename(self.incominghome, self.finalhome)
2750+        try:
2751+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2752+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2753+            # these directories lying around forever, but the delete might
2754+            # fail if we're working on another share for the same storage
2755+            # index (like ab/abcde/5). The alternative approach would be to
2756+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2757+            # ShareWriter), each of which is responsible for a single
2758+            # directory on disk, and have them use reference counting of
2759+            # their children to know when they should do the rmdir. This
2760+            # approach is simpler, but relies on os.rmdir refusing to delete
2761+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2762+            os.rmdir(os.path.dirname(self.incominghome))
2763+            # we also delete the grandparent (prefix) directory, .../ab ,
2764+            # again to avoid leaving directories lying around. This might
2765+            # fail if there is another bucket open that shares a prefix (like
2766+            # ab/abfff).
2767+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2768+            # we leave the great-grandparent (incoming/) directory in place.
2769+        except EnvironmentError:
2770+            # ignore the "can't rmdir because the directory is not empty"
2771+            # exceptions, those are normal consequences of the
2772+            # above-mentioned conditions.
2773+            pass
2775+       
2776+    def stat(self):
2777+        return os.stat(self.finalhome)[stat.ST_SIZE]
2778+
2779     def get_shnum(self):
2780         return self.shnum
2781 
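The `close()` logic added above moves a finished share from `incoming/` to its final location and then opportunistically prunes the now-empty parent and prefix directories. A standalone sketch of that idiom (hypothetical helper name, not the patch's method):

```python
import os
import tempfile

def finalize_share(incominghome, finalhome):
    """Move a finished share from incoming/ to its final home, then try
    to prune the now-empty parent (.../ab/abcde) and prefix (.../ab)
    directories, mirroring the close() idiom above."""
    os.makedirs(os.path.dirname(finalhome), exist_ok=True)
    os.rename(incominghome, finalhome)
    try:
        # os.rmdir refuses to delete a non-empty directory, which is
        # exactly the behavior this cleanup relies on.
        os.rmdir(os.path.dirname(incominghome))                    # .../ab/abcde
        os.rmdir(os.path.dirname(os.path.dirname(incominghome)))   # .../ab
    except OSError:
        pass  # another share in the same bucket or prefix is still in flight
```

If a sibling share (say `ab/abcde/5`) is still being written, the first `rmdir` raises and both directories survive; once the last share is finalized, both are removed.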
2782hunk ./src/allmydata/storage/immutable.py 7
2783 
2784 from zope.interface import implements
2785 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2786-from allmydata.util import base32, fileutil, log
2787+from allmydata.util import base32, log
2788 from allmydata.util.assertutil import precondition
2789 from allmydata.util.hashutil import constant_time_compare
2790 from allmydata.storage.lease import LeaseInfo
2791hunk ./src/allmydata/storage/immutable.py 44
2792     def remote_close(self):
2793         precondition(not self.closed)
2794         start = time.time()
2795-
2796-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2797-        fileutil.rename(self.incominghome, self.finalhome)
2798-        try:
2799-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2800-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2801-            # these directories lying around forever, but the delete might
2802-            # fail if we're working on another share for the same storage
2803-            # index (like ab/abcde/5). The alternative approach would be to
2804-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2805-            # ShareWriter), each of which is responsible for a single
2806-            # directory on disk, and have them use reference counting of
2807-            # their children to know when they should do the rmdir. This
2808-            # approach is simpler, but relies on os.rmdir refusing to delete
2809-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2810-            os.rmdir(os.path.dirname(self.incominghome))
2811-            # we also delete the grandparent (prefix) directory, .../ab ,
2812-            # again to avoid leaving directories lying around. This might
2813-            # fail if there is another bucket open that shares a prefix (like
2814-            # ab/abfff).
2815-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2816-            # we leave the great-grandparent (incoming/) directory in place.
2817-        except EnvironmentError:
2818-            # ignore the "can't rmdir because the directory is not empty"
2819-            # exceptions, those are normal consequences of the
2820-            # above-mentioned conditions.
2821-            pass
2822+        self._sharefile.close()
2823         self._sharefile = None
2824         self.closed = True
2825         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2826hunk ./src/allmydata/storage/immutable.py 49
2827 
2828-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2829+        filelen = self._sharefile.stat()
2830         self.ss.bucket_writer_closed(self, filelen)
2831         self.ss.add_latency("close", time.time() - start)
2832         self.ss.count("close")
2833hunk ./src/allmydata/storage/server.py 45
2834         self._active_writers = weakref.WeakKeyDictionary()
2835         self.backend = backend
2836         self.backend.setServiceParent(self)
2837+        self.backend.set_storage_server(self)
2838         log.msg("StorageServer created", facility="tahoe.storage")
2839 
2840         self.latencies = {"allocate": [], # immutable
2841hunk ./src/allmydata/storage/server.py 220
2842 
2843         for shnum in (sharenums - alreadygot):
2844             if (not limited) or (remaining_space >= max_space_per_bucket):
2845-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2846-                self.backend.set_storage_server(self)
2847                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2848                                                      max_space_per_bucket, lease_info, canary)
2849                 bucketwriters[shnum] = bw
2850hunk ./src/allmydata/test/test_backends.py 117
2851         mockopen.side_effect = call_open
2852         testbackend = DASCore(tempdir, expiration_policy)
2853         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2854-
2855+   
2856+    @mock.patch('allmydata.util.fileutil.get_available_space')
2857     @mock.patch('time.time')
2858     @mock.patch('os.mkdir')
2859     @mock.patch('__builtin__.open')
2860hunk ./src/allmydata/test/test_backends.py 124
2861     @mock.patch('os.listdir')
2862     @mock.patch('os.path.isdir')
2863-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2864+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2865+                             mockget_available_space):
2866         """ Write a new share. """
2867 
2868         def call_listdir(dirname):
2869hunk ./src/allmydata/test/test_backends.py 148
2870 
2871         mockmkdir.side_effect = call_mkdir
2872 
2873+        def call_get_available_space(storedir, reserved_space):
2874+            self.failUnlessReallyEqual(storedir, tempdir)
2875+            return 1
2876+
2877+        mockget_available_space.side_effect = call_get_available_space
2878+
2879         class MockFile:
2880             def __init__(self):
2881                 self.buffer = ''
2882hunk ./src/allmydata/test/test_backends.py 188
2883         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2884         bs[0].remote_write(0, 'a')
2885         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2886-       
2887+
2888+        # What happens when there's not enough space for the client's request?
2889+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2890+
2891         # Now test the allocated_size method.
2892         spaceint = self.s.allocated_size()
2893         self.failUnlessReallyEqual(spaceint, 1)
2894}
2895[checkpoint10
2896wilcoxjg@gmail.com**20110707172049
2897 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2898] {
2899hunk ./src/allmydata/test/test_backends.py 20
2900 # The following share file contents was generated with
2901 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2902 # with share data == 'a'.
2903-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2904+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2905+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2906+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2907 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2908 
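The constants above spell out the v1 immutable share file byte for byte. A sketch reconstructing it with `struct`, on the assumption (inferred from the hex values, not stated in the patch) that the 12-byte header is three big-endian uint32s — version, share-data length, lease count — and the trailer is one lease record of owner number, renew secret, cancel secret, and expiration time:

```python
import struct

renew_secret = b'x' * 32
cancel_secret = b'y' * 32

# Header: assumed to be version=1, data length=1, lease count=1.
header = struct.pack('>LLL', 1, 1, 1)
share_data = b'a'
# One lease: owner 0, the two secrets, expiration at mocked time 0 + 31 days.
expiration = 31 * 24 * 60 * 60          # 2678400 == 0x0028de80
lease = (struct.pack('>L', 0) + renew_secret + cancel_secret
         + struct.pack('>L', expiration))

share_file_data = header + share_data + lease
```

The trailing `'\x00(\xde\x80'` in the test constant is thus just 2678400 seconds — the 31-day lease duration — packed big-endian.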
2909hunk ./src/allmydata/test/test_backends.py 25
2910+testnodeid = 'testnodeidxxxxxxxxxx'
2911 tempdir = 'teststoredir'
2912 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2913 sharefname = os.path.join(sharedirname, '0')
2914hunk ./src/allmydata/test/test_backends.py 37
2915 
2916 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2917     def setUp(self):
2918-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2919+        self.s = StorageServer(testnodeid, backend=NullCore())
2920 
2921     @mock.patch('os.mkdir')
2922     @mock.patch('__builtin__.open')
2923hunk ./src/allmydata/test/test_backends.py 99
2924         mockmkdir.side_effect = call_mkdir
2925 
2926         # Now begin the test.
2927-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2928+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
2929 
2930         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2931 
2932hunk ./src/allmydata/test/test_backends.py 119
2933 
2934         mockopen.side_effect = call_open
2935         testbackend = DASCore(tempdir, expiration_policy)
2936-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2937-   
2938+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
2939+       
2940+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
2941     @mock.patch('allmydata.util.fileutil.get_available_space')
2942     @mock.patch('time.time')
2943     @mock.patch('os.mkdir')
2944hunk ./src/allmydata/test/test_backends.py 129
2945     @mock.patch('os.listdir')
2946     @mock.patch('os.path.isdir')
2947     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2948-                             mockget_available_space):
2949+                             mockget_available_space, mockget_shares):
2950         """ Write a new share. """
2951 
2952         def call_listdir(dirname):
2953hunk ./src/allmydata/test/test_backends.py 139
2954         mocklistdir.side_effect = call_listdir
2955 
2956         def call_isdir(dirname):
2957+            #XXX Should there be any other tests here?
2958             self.failUnlessReallyEqual(dirname, sharedirname)
2959             return True
2960 
2961hunk ./src/allmydata/test/test_backends.py 159
2962 
2963         mockget_available_space.side_effect = call_get_available_space
2964 
2965+        mocktime.return_value = 0
2966+        class MockShare:
2967+            def __init__(self):
2968+                self.shnum = 1
2969+               
2970+            def add_or_renew_lease(elf, lease_info):
2971+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
2972+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
2973+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
2974+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
2975+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
2976+               
2977+
2978+        share = MockShare()
2979+        def call_get_shares(storageindex):
2980+            return [share]
2981+
2982+        mockget_shares.side_effect = call_get_shares
2983+
2984         class MockFile:
2985             def __init__(self):
2986                 self.buffer = ''
2987hunk ./src/allmydata/test/test_backends.py 199
2988             def tell(self):
2989                 return self.pos
2990 
2991-        mocktime.return_value = 0
2992 
2993         sharefile = MockFile()
2994         def call_open(fname, mode):
2995}
2996[jacp 11
2997wilcoxjg@gmail.com**20110708213919
2998 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
2999] {
3000hunk ./src/allmydata/storage/backends/das/core.py 144
3001         self.incomingdir = os.path.join(sharedir, 'incoming')
3002         si_dir = storage_index_to_dir(storageindex)
3003         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3004+        #XXX self.fname and self.finalhome need to be resolved/merged.
3005         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3006         if create:
3007             # touch the file, so later callers will see that we're working on
3008hunk ./src/allmydata/storage/backends/das/core.py 208
3009         pass
3010         
3011     def stat(self):
3012-        return os.stat(self.finalhome)[stat.ST_SIZE]
3013+        return os.stat(self.finalhome).st_size
3014 
3015     def get_shnum(self):
3016         return self.shnum
3017hunk ./src/allmydata/storage/immutable.py 44
3018     def remote_close(self):
3019         precondition(not self.closed)
3020         start = time.time()
3021+
3022         self._sharefile.close()
3023hunk ./src/allmydata/storage/immutable.py 46
3024+        filelen = self._sharefile.stat()
3025         self._sharefile = None
3026hunk ./src/allmydata/storage/immutable.py 48
3027+
3028         self.closed = True
3029         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3030 
3031hunk ./src/allmydata/storage/immutable.py 52
3032-        filelen = self._sharefile.stat()
3033         self.ss.bucket_writer_closed(self, filelen)
3034         self.ss.add_latency("close", time.time() - start)
3035         self.ss.count("close")
3036hunk ./src/allmydata/storage/server.py 220
3037 
3038         for shnum in (sharenums - alreadygot):
3039             if (not limited) or (remaining_space >= max_space_per_bucket):
3040-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3041-                                                     max_space_per_bucket, lease_info, canary)
3042+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3043                 bucketwriters[shnum] = bw
3044                 self._active_writers[bw] = 1
3045                 if limited:
3046hunk ./src/allmydata/test/test_backends.py 20
3047 # The following share file contents was generated with
3048 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3049 # with share data == 'a'.
3050-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3051-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3052+renew_secret  = 'x'*32
3053+cancel_secret = 'y'*32
3054 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3055 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3056 
3057hunk ./src/allmydata/test/test_backends.py 27
3058 testnodeid = 'testnodeidxxxxxxxxxx'
3059 tempdir = 'teststoredir'
3060-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3061-sharefname = os.path.join(sharedirname, '0')
3062+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3063+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3064+shareincomingname = os.path.join(sharedirincomingname, '0')
3065+sharefname = os.path.join(sharedirfinalname, '0')
3066+
3067 expiration_policy = {'enabled' : False,
3068                      'mode' : 'age',
3069                      'override_lease_duration' : None,
3070hunk ./src/allmydata/test/test_backends.py 123
3071         mockopen.side_effect = call_open
3072         testbackend = DASCore(tempdir, expiration_policy)
3073         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3074-       
3075+
3076+    @mock.patch('allmydata.util.fileutil.rename')
3077+    @mock.patch('allmydata.util.fileutil.make_dirs')
3078+    @mock.patch('os.path.exists')
3079+    @mock.patch('os.stat')
3080     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3081     @mock.patch('allmydata.util.fileutil.get_available_space')
3082     @mock.patch('time.time')
3083hunk ./src/allmydata/test/test_backends.py 136
3084     @mock.patch('os.listdir')
3085     @mock.patch('os.path.isdir')
3086     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3087-                             mockget_available_space, mockget_shares):
3088+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3089+                             mockmake_dirs, mockrename):
3090         """ Write a new share. """
3091 
3092         def call_listdir(dirname):
3093hunk ./src/allmydata/test/test_backends.py 141
3094-            self.failUnlessReallyEqual(dirname, sharedirname)
3095+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3096             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3097 
3098         mocklistdir.side_effect = call_listdir
3099hunk ./src/allmydata/test/test_backends.py 148
3100 
3101         def call_isdir(dirname):
3102             #XXX Should there be any other tests here?
3103-            self.failUnlessReallyEqual(dirname, sharedirname)
3104+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3105             return True
3106 
3107         mockisdir.side_effect = call_isdir
3108hunk ./src/allmydata/test/test_backends.py 154
3109 
3110         def call_mkdir(dirname, permissions):
3111-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3112+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3113                 self.Fail
3114             else:
3115                 return True
3116hunk ./src/allmydata/test/test_backends.py 208
3117                 return self.pos
3118 
3119 
3120-        sharefile = MockFile()
3121+        fobj = MockFile()
3122         def call_open(fname, mode):
3123             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3124hunk ./src/allmydata/test/test_backends.py 211
3125-            return sharefile
3126+            return fobj
3127 
3128         mockopen.side_effect = call_open
3129 
3130hunk ./src/allmydata/test/test_backends.py 215
3131+        def call_make_dirs(dname):
3132+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3133+           
3134+        mockmake_dirs.side_effect = call_make_dirs
3135+
3136+        def call_rename(src, dst):
3137+            self.failUnlessReallyEqual(src, shareincomingname)
3138+            self.failUnlessReallyEqual(dst, sharefname)
3139+
3140+        mockrename.side_effect = call_rename
3141+
3142+        def call_exists(fname):
3143+            self.failUnlessReallyEqual(fname, sharefname)
3144+
3145+        mockexists.side_effect = call_exists
3146+
3147         # Now begin the test.
3148         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3149         bs[0].remote_write(0, 'a')
3150hunk ./src/allmydata/test/test_backends.py 234
3151-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3152+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3153+        spaceint = self.s.allocated_size()
3154+        self.failUnlessReallyEqual(spaceint, 1)
3155+
3156+        bs[0].remote_close()
3157 
3158         # What happens when there's not enough space for the client's request?
3159hunk ./src/allmydata/test/test_backends.py 241
3160-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3161+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3162 
3163         # Now test the allocated_size method.
3164hunk ./src/allmydata/test/test_backends.py 244
3165-        spaceint = self.s.allocated_size()
3166-        self.failUnlessReallyEqual(spaceint, 1)
3167+        #self.failIf(mockexists.called, mockexists.call_args_list)
3168+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3169+        #self.failIf(mockrename.called, mockrename.call_args_list)
3170+        #self.failIf(mockstat.called, mockstat.call_args_list)
3171 
3172     @mock.patch('os.path.exists')
3173     @mock.patch('os.path.getsize')
3174}
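The incoming/final path constants introduced in jacp 11 are derived from the storage index by base32-encoding it and using the first two characters as a prefix directory. That derivation can be sketched with the stdlib (mirroring `allmydata.util.base32` and `storage_index_to_dir`; `'teststoredir'` and share number 0 as in the test):

```python
import base64
import os

def storage_index_to_dir(storage_index: bytes) -> str:
    # Tahoe's base32: the RFC 3548 alphabet, lowercased, no '=' padding.
    sia = base64.b32encode(storage_index).decode('ascii').lower().rstrip('=')
    return os.path.join(sia[:2], sia)

storedir = 'teststoredir'
si_dir = storage_index_to_dir(b'teststorage_index')

sharedirfinalname    = os.path.join(storedir, 'shares', si_dir)
sharedirincomingname = os.path.join(storedir, 'shares', 'incoming', si_dir)
shareincomingname    = os.path.join(sharedirincomingname, '0')
sharefname           = os.path.join(sharedirfinalname, '0')
```

This reproduces the `'or'/'orsxg5dtorxxeylhmvpws3temv4a'` components hard-coded in the test, which is why `call_rename` can assert that the share moves from `shareincomingname` to `sharefname`.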
3175
3176Context:
3177
3178[add Protovis.js-based download-status timeline visualization
3179Brian Warner <warner@lothar.com>**20110629222606
3180 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
3181 
3182 provide status overlap info on the webapi t=json output, add decode/decrypt
3183 rate tooltips, add zoomin/zoomout buttons
3184]
3185[add more download-status data, fix tests
3186Brian Warner <warner@lothar.com>**20110629222555
3187 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
3188]
3189[prepare for viz: improve DownloadStatus events
3190Brian Warner <warner@lothar.com>**20110629222542
3191 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
3192 
3193 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
3194]
3195[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
3196zooko@zooko.com**20110629185711
3197 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
3198]
3199[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
3200david-sarah@jacaranda.org**20110130235809
3201 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
3202]
3203[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
3204david-sarah@jacaranda.org**20110626054124
3205 Ignore-this: abb864427a1b91bd10d5132b4589fd90
3206]
3207[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
3208david-sarah@jacaranda.org**20110623205528
3209 Ignore-this: c63e23146c39195de52fb17c7c49b2da
3210]
3211[Rename test_package_initialization.py to (much shorter) test_import.py .
3212Brian Warner <warner@lothar.com>**20110611190234
3213 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
3214 
3215 The former name was making my 'ls' listings hard to read, by forcing them
3216 down to just two columns.
3217]
3218[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
3219zooko@zooko.com**20110611163741
3220 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
3221 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
3222 fixes #1412
3223]
3224[wui: right-align the size column in the WUI
3225zooko@zooko.com**20110611153758
3226 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
3227 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
3228 fixes #1412
3229]
3230[docs: three minor fixes
3231zooko@zooko.com**20110610121656
3232 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
3233 CREDITS for arc for stats tweak
3234 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
3235 English usage tweak
3236]
3237[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
3238david-sarah@jacaranda.org**20110609223719
3239 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
3240]
3241[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
3242wilcoxjg@gmail.com**20110527120135
3243 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
3244 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
3245 NEWS.rst, stats.py: documentation of change to get_latencies
3246 stats.rst: now documents percentile modification in get_latencies
3247 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
3248 fixes #1392
3249]
3250[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
3251david-sarah@jacaranda.org**20110517011214
3252 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
3253]
3254[docs: convert NEWS to NEWS.rst and change all references to it.
3255david-sarah@jacaranda.org**20110517010255
3256 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
3257]
3258[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
3259david-sarah@jacaranda.org**20110512140559
3260 Ignore-this: 784548fc5367fac5450df1c46890876d
3261]
3262[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
3263david-sarah@jacaranda.org**20110130164923
3264 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
3265]
3266[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
3267zooko@zooko.com**20110128142006
3268 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
3269 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
3270]
3271[M-x whitespace-cleanup
3272zooko@zooko.com**20110510193653
3273 Ignore-this: dea02f831298c0f65ad096960e7df5c7
3274]
3275[docs: fix typo in running.rst, thanks to arch_o_median
3276zooko@zooko.com**20110510193633
3277 Ignore-this: ca06de166a46abbc61140513918e79e8
3278]
3279[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
3280david-sarah@jacaranda.org**20110204204902
3281 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
3282]
3283[relnotes.txt: forseeable -> foreseeable. refs #1342
3284david-sarah@jacaranda.org**20110204204116
3285 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
3286]
3287[replace remaining .html docs with .rst docs
3288zooko@zooko.com**20110510191650
3289 Ignore-this: d557d960a986d4ac8216d1677d236399
3290 Remove install.html (long since deprecated).
3291 Also replace some obsolete references to install.html with references to quickstart.rst.
3292 Fix some broken internal references within docs/historical/historical_known_issues.txt.
3293 Thanks to Ravi Pinjala and Patrick McDonald.
3294 refs #1227
3295]
3296[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
3297zooko@zooko.com**20110428055232
3298 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
3299]
3300[munin tahoe_files plugin: fix incorrect file count
3301francois@ctrlaltdel.ch**20110428055312
3302 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
3303 fixes #1391
3304]
3305[corrected "k must never be smaller than N" to "k must never be greater than N"
3306secorp@allmydata.org**20110425010308
3307 Ignore-this: 233129505d6c70860087f22541805eac
3308]
3309[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
3310david-sarah@jacaranda.org**20110411190738
3311 Ignore-this: 7847d26bc117c328c679f08a7baee519
3312]
3313[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
3314david-sarah@jacaranda.org**20110410155844
3315 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
3316]
3317[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
3318david-sarah@jacaranda.org**20110410155705
3319 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
3320]
3321[remove unused variable detected by pyflakes
3322zooko@zooko.com**20110407172231
3323 Ignore-this: 7344652d5e0720af822070d91f03daf9
3324]
3325[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
3326david-sarah@jacaranda.org**20110401202750
3327 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
3328]
3329[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
3330Brian Warner <warner@lothar.com>**20110325232511
3331 Ignore-this: d5307faa6900f143193bfbe14e0f01a
3332]
3333[control.py: remove all uses of s.get_serverid()
3334warner@lothar.com**20110227011203
3335 Ignore-this: f80a787953bd7fa3d40e828bde00e855
3336]
3337[web: remove some uses of s.get_serverid(), not all
3338warner@lothar.com**20110227011159
3339 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
3340]
3341[immutable/downloader/fetcher.py: remove all get_serverid() calls
3342warner@lothar.com**20110227011156
3343 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
3344]
3345[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
3346warner@lothar.com**20110227011153
3347 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
3348 
3349 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
3350 _shares_from_server dict was being popped incorrectly (using shnum as the
3351 index instead of serverid). I'm still thinking through the consequences of
3352 this bug. It was probably benign and really hard to detect. I think it would
3353 cause us to incorrectly believe that we're pulling too many shares from a
3354 server, and thus prefer a different server rather than asking for a second
3355 share from the first server. The diversity code is intended to spread out the
3356 number of shares simultaneously being requested from each server, but with
3357 this bug, it might be spreading out the total number of shares requested at
3358 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
3359 segment, so the effect doesn't last very long).
3360]
3361[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
3362warner@lothar.com**20110227011150
3363 Ignore-this: d8d56dd8e7b280792b40105e13664554
3364 
3365 test_download.py: create+check MyShare instances better, make sure they share
3366 Server objects, now that finder.py cares
3367]
3368[immutable/downloader/finder.py: reduce use of get_serverid(), one left
3369warner@lothar.com**20110227011146
3370 Ignore-this: 5785be173b491ae8a78faf5142892020
3371]
3372[immutable/offloaded.py: reduce use of get_serverid() a bit more
3373warner@lothar.com**20110227011142
3374 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
3375]
3376[immutable/upload.py: reduce use of get_serverid()
3377warner@lothar.com**20110227011138
3378 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
3379]
3380[immutable/checker.py: remove some uses of s.get_serverid(), not all
3381warner@lothar.com**20110227011134
3382 Ignore-this: e480a37efa9e94e8016d826c492f626e
3383]
3384[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
3385warner@lothar.com**20110227011132
3386 Ignore-this: 6078279ddf42b179996a4b53bee8c421
3387 MockIServer stubs
3388]
3389[upload.py: rearrange _make_trackers a bit, no behavior changes
3390warner@lothar.com**20110227011128
3391 Ignore-this: 296d4819e2af452b107177aef6ebb40f
3392]
3393[happinessutil.py: finally rename merge_peers to merge_servers
3394warner@lothar.com**20110227011124
3395 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
3396]
3397[test_upload.py: factor out FakeServerTracker
3398warner@lothar.com**20110227011120
3399 Ignore-this: 6c182cba90e908221099472cc159325b
3400]
3401[test_upload.py: server-vs-tracker cleanup
3402warner@lothar.com**20110227011115
3403 Ignore-this: 2915133be1a3ba456e8603885437e03
3404]
3405[happinessutil.py: server-vs-tracker cleanup
3406warner@lothar.com**20110227011111
3407 Ignore-this: b856c84033562d7d718cae7cb01085a9
3408]
3409[upload.py: more tracker-vs-server cleanup
3410warner@lothar.com**20110227011107
3411 Ignore-this: bb75ed2afef55e47c085b35def2de315
3412]
3413[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
3414warner@lothar.com**20110227011103
3415 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
3416]
3417[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
3418warner@lothar.com**20110227011100
3419 Ignore-this: 7ea858755cbe5896ac212a925840fe68
3420 
3421 No behavioral changes, just updating variable/method names and log messages.
3422 The effects outside these three files should be minimal: some exception
3423 messages changed (to say "server" instead of "peer"), and some internal class
3424 names were changed. A few things still use "peer" to minimize external
3425 changes, like UploadResults.timings["peer_selection"] and
3426 happinessutil.merge_peers, which can be changed later.
3427]
3428[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
3429warner@lothar.com**20110227011056
3430 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
3431]
3432[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
3433warner@lothar.com**20110227011051
3434 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
3435]
3436[test: increase timeout on a network test because Francois's ARM machine hit that timeout
3437zooko@zooko.com**20110317165909
3438 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
3439 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
3440]
3441[docs/configuration.rst: add a "Frontend Configuration" section
3442Brian Warner <warner@lothar.com>**20110222014323
3443 Ignore-this: 657018aa501fe4f0efef9851628444ca
3444 
3445 this points to docs/frontends/*.rst, which were previously underlinked
3446]
3447[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
3448"Brian Warner <warner@lothar.com>"**20110221061544
3449 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
3450]
3451[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
3452david-sarah@jacaranda.org**20110221015817
3453 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
3454]
3455[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
3456david-sarah@jacaranda.org**20110221020125
3457 Ignore-this: b0744ed58f161bf188e037bad077fc48
3458]
3459[Refactor StorageFarmBroker handling of servers
3460Brian Warner <warner@lothar.com>**20110221015804
3461 Ignore-this: 842144ed92f5717699b8f580eab32a51
3462 
3463 Pass around IServer instance instead of (peerid, rref) tuple. Replace
3464 "descriptor" with "server". Other replacements:
3465 
3466  get_all_servers -> get_connected_servers/get_known_servers
3467  get_servers_for_index -> get_servers_for_psi (now returns IServers)
3468 
3469 This change still needs to be pushed further down: lots of code is now
3470 getting the IServer and then distributing (peerid, rref) internally.
3471 Instead, it ought to distribute the IServer internally and delay
3472 extracting a serverid or rref until the last moment.
3473 
3474 no_network.py was updated to retain parallelism.
3475]
3476[TAG allmydata-tahoe-1.8.2
3477warner@lothar.com**20110131020101]
3478Patch bundle hash:
3479bed0514d4c55431ae5c03148b0761249a17a9736