Ticket #999: jacp20Zancas20110728.darcs.patch

File jacp20Zancas20110728.darcs.patch, 350.1 KB (added by Zancas, at 2011-07-28T07:23:47Z)
1Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
2  * storage: new mocking tests of storage server read and write
3  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
4
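A minimal sketch (not from the patch; the helper name is made up for illustration) of the mocking technique these tests use: the Python 2 builtin open() is patched with the mock library so the code under test never touches the real filesystem.

    from StringIO import StringIO
    import mock

    @mock.patch('__builtin__.open')
    def read_without_touching_disk(mockopen):
        # Any open() call inside this function is served by the mock's
        # side_effect instead of the operating system.
        mockopen.side_effect = lambda fname, mode='r': StringIO('fake file contents')
        f = open('some/fake/path', 'rb')
        return f.read()

    print read_without_touching_disk()   # prints 'fake file contents'
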
5Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
6  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
7  sloppy not for production
8
9Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
10  * a temp patch used as a snapshot
11
12Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
13  * snapshot of progress on backend implementation (not suitable for trunk)
14
15Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
16  * checkpoint patch
17
18Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
19  * checkpoint4
20
21Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
22  * checkpoint5
23
24Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
25  * checkpoint 6
26
27Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
28  * checkpoint 7
29
30Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
31  * checkpoint8
32    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
33
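A rough sketch of why a null backend lets the tests exercise "unlimited space" (drawn from the NullBackend and remote_get_version() code later in this bundle; the class name here is only illustrative): the backend reports None for available space, and the server treats None as effectively infinite.

    class NullBackendSketch:
        def get_available_space(self):
            # None means "no limit known" -- unlimited, or no disk-stats API.
            return None

    remaining_space = NullBackendSketch().get_available_space()
    if remaining_space is None:
        # Mirrors the substitution the server makes in remote_get_version().
        remaining_space = 2**64
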
34Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
35  * checkpoint 9
36
37Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
38  * checkpoint10
39
40Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
41  * jacp 11
42
43Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
44  * checkpoint12 testing correct behavior with regard to incoming and final
45
46Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
47  * fix inconsistent naming of storage_index vs storageindex in storage/server.py
48
49Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
50  * adding comments to clarify what I'm about to do.
51
52Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
53  * branching back, no longer attempting to mock inside TestServerFSBackend
54
55Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
56  * checkpoint12 TestServerFSBackend no longer mocks filesystem
57
58Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
59  * JACP
60
61Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
62  * testing get incoming
63
64Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
65  * ImmutableShareFile does not know its StorageIndex
66
67Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
68  * get_incoming correctly reports the 0 share after it has arrived
69
70Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
71  * jacp14
72
73Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
74  * jacp14 or so
75
76Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
77  * temporary work-in-progress patch to be unrecorded
78  tidy up a few tests, work done in pair-programming with Zancas
79
80Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
81  * work in progress intended to be unrecorded and never committed to trunk
82  switch from os.path.join to filepath
83  incomplete refactoring of common "stay in your subtree" tester code into a superclass
84 
85
86Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
87  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
88  In this patch (very incomplete) we started two major changes: the first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests; the second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.
89
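A hedged illustration of the os.path-to-filepath conversion described above (not taken from the patch; 'teststoredir' is only a stand-in name): twisted.python.filepath.FilePath treats paths as objects, and its child() method rejects components that would wander out of the parent directory, which is what the "stay in your subtree" tester code checks.

    import os
    from twisted.python.filepath import FilePath

    storedir = "teststoredir"

    # Standard-library style, as the current code does it:
    incoming_str = os.path.join(storedir, "shares", "incoming")

    # FilePath style: compose child components explicitly; FilePath stores an
    # absolute path internally.
    incoming_fp = FilePath(storedir).child("shares").child("incoming")
    print incoming_fp.path == os.path.abspath(incoming_str)   # True
    print incoming_fp.exists()                                # False unless it already exists

    # A component that would escape the subtree is refused:
    # FilePath(storedir).child("../elsewhere")   # raises InsecurePath
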
90Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
91  * another temporary patch for sharing work-in-progress
92  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
93  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
94  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units insofar as possible...)
95 
96
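A sketch of the shape of that get_available_space() change (an assumption about the before and after, not the patch's literal diff): the point is that an OSError from the disk-statistics call is no longer swallowed and misreported as zero bytes free.

    from allmydata.util import fileutil

    # Old behavior (the misfeature): a failing disk-stats call looked like a full disk.
    def get_available_space_old(storedir, reserved_space):
        try:
            return fileutil.get_available_space(storedir, reserved_space)
        except OSError:
            return 0

    # New behavior: let the error propagate so callers see the real failure.
    def get_available_space_new(storedir, reserved_space):
        return fileutil.get_available_space(storedir, reserved_space)
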
97Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
98  * jacp16 or so
99
100Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
101  * jacp17
102
103Fri Jul 22 21:19:15 MDT 2011  wilcoxjg@gmail.com
104  * jacp18
105
106Sat Jul 23 21:42:30 MDT 2011  wilcoxjg@gmail.com
107  * jacp19orso
108
109Wed Jul 27 02:05:53 MDT 2011  wilcoxjg@gmail.com
110  * jacp19
111
112Thu Jul 28 01:25:14 MDT 2011  wilcoxjg@gmail.com
113  * jacp20
114
115New patches:
116
117[storage: new mocking tests of storage server read and write
118wilcoxjg@gmail.com**20110325203514
119 Ignore-this: df65c3c4f061dd1516f88662023fdb41
120 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.
121] {
122addfile ./src/allmydata/test/test_server.py
123hunk ./src/allmydata/test/test_server.py 1
124+from twisted.trial import unittest
125+
126+from StringIO import StringIO
127+
128+from allmydata.test.common_util import ReallyEqualMixin
129+
130+import mock
131+
132+# This is the code that we're going to be testing.
133+from allmydata.storage.server import StorageServer
134+
135+# The following share file contents was generated with
136+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
137+# with share data == 'a'.
138+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
139+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
140+
141+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
142+
143+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
144+    @mock.patch('__builtin__.open')
145+    def test_create_server(self, mockopen):
146+        """ This tests whether a server instance can be constructed. """
147+
148+        def call_open(fname, mode):
149+            if fname == 'testdir/bucket_counter.state':
150+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
151+            elif fname == 'testdir/lease_checker.state':
152+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
153+            elif fname == 'testdir/lease_checker.history':
154+                return StringIO()
155+        mockopen.side_effect = call_open
156+
157+        # Now begin the test.
158+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
159+
160+        # You passed!
161+
162+class TestServer(unittest.TestCase, ReallyEqualMixin):
163+    @mock.patch('__builtin__.open')
164+    def setUp(self, mockopen):
165+        def call_open(fname, mode):
166+            if fname == 'testdir/bucket_counter.state':
167+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
168+            elif fname == 'testdir/lease_checker.state':
169+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
170+            elif fname == 'testdir/lease_checker.history':
171+                return StringIO()
172+        mockopen.side_effect = call_open
173+
174+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
175+
176+
177+    @mock.patch('time.time')
178+    @mock.patch('os.mkdir')
179+    @mock.patch('__builtin__.open')
180+    @mock.patch('os.listdir')
181+    @mock.patch('os.path.isdir')
182+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
183+        """Handle a report of corruption."""
184+
185+        def call_listdir(dirname):
186+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
187+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
188+
189+        mocklistdir.side_effect = call_listdir
190+
191+        class MockFile:
192+            def __init__(self):
193+                self.buffer = ''
194+                self.pos = 0
195+            def write(self, instring):
196+                begin = self.pos
197+                padlen = begin - len(self.buffer)
198+                if padlen > 0:
199+                    self.buffer += '\x00' * padlen
200+                end = self.pos + len(instring)
201+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
202+                self.pos = end
203+            def close(self):
204+                pass
205+            def seek(self, pos):
206+                self.pos = pos
207+            def read(self, numberbytes):
208+                return self.buffer[self.pos:self.pos+numberbytes]
209+            def tell(self):
210+                return self.pos
211+
212+        mocktime.return_value = 0
213+
214+        sharefile = MockFile()
215+        def call_open(fname, mode):
216+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
217+            return sharefile
218+
219+        mockopen.side_effect = call_open
220+        # Now begin the test.
221+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
222+        print bs
223+        bs[0].remote_write(0, 'a')
224+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
225+
226+
227+    @mock.patch('os.path.exists')
228+    @mock.patch('os.path.getsize')
229+    @mock.patch('__builtin__.open')
230+    @mock.patch('os.listdir')
231+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
232+        """ This tests whether the code correctly finds and reads
233+        shares written out by old (Tahoe-LAFS <= v1.8.2)
234+        servers. There is a similar test in test_download, but that one
235+        is from the perspective of the client and exercises a deeper
236+        stack of code. This one is for exercising just the
237+        StorageServer object. """
238+
239+        def call_listdir(dirname):
240+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
241+            return ['0']
242+
243+        mocklistdir.side_effect = call_listdir
244+
245+        def call_open(fname, mode):
246+            self.failUnlessReallyEqual(fname, sharefname)
247+            self.failUnless('r' in mode, mode)
248+            self.failUnless('b' in mode, mode)
249+
250+            return StringIO(share_file_data)
251+        mockopen.side_effect = call_open
252+
253+        datalen = len(share_file_data)
254+        def call_getsize(fname):
255+            self.failUnlessReallyEqual(fname, sharefname)
256+            return datalen
257+        mockgetsize.side_effect = call_getsize
258+
259+        def call_exists(fname):
260+            self.failUnlessReallyEqual(fname, sharefname)
261+            return True
262+        mockexists.side_effect = call_exists
263+
264+        # Now begin the test.
265+        bs = self.s.remote_get_buckets('teststorage_index')
266+
267+        self.failUnlessEqual(len(bs), 1)
268+        b = bs[0]
269+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
270+        # If you try to read past the end you get as much data as is there.
271+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
272+        # If you start reading past the end of the file you get the empty string.
273+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
274}
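As a worked example (editorial, not part of the patch) of what the canned share_file_data in the test above contains, decoded according to the immutable share layout documented later in this bundle: a 12-byte header (container version, share data length, lease count), one byte of share data, and one 72-byte lease record.

    import struct

    share_data = 'a\x00\x00\x00\x00' + 'x'*32 + 'y'*32 + '\x00(\xde\x80'
    share_file_data = '\x00\x00\x00\x01'*3 + share_data

    version, data_length, num_leases = struct.unpack(">LLL", share_file_data[:0xc])
    print version, data_length, num_leases            # 1 1 1

    lease = share_file_data[0xc + data_length:]
    owner_num = struct.unpack(">L", lease[0:4])[0]            # 0 == no owner
    renew_secret, cancel_secret = lease[4:36], lease[36:68]   # 'x'*32, 'y'*32
    expire_time = struct.unpack(">L", lease[68:72])[0]        # 2678400 s == 31 days
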
275[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
276wilcoxjg@gmail.com**20110624202850
277 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
278 sloppy not for production
279] {
280move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
281hunk ./src/allmydata/storage/crawler.py 13
282     pass
283 
284 class ShareCrawler(service.MultiService):
285-    """A ShareCrawler subclass is attached to a StorageServer, and
286+    """A subcless of ShareCrawler is attached to a StorageServer, and
287     periodically walks all of its shares, processing each one in some
288     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
289     since large servers can easily have a terabyte of shares, in several
290hunk ./src/allmydata/storage/crawler.py 31
291     We assume that the normal upload/download/get_buckets traffic of a tahoe
292     grid will cause the prefixdir contents to be mostly cached in the kernel,
293     or that the number of buckets in each prefixdir will be small enough to
294-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
295+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
296     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
297     prefix. On this server, each prefixdir took 130ms-200ms to list the first
298     time, and 17ms to list the second time.
299hunk ./src/allmydata/storage/crawler.py 68
300     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
301     minimum_cycle_time = 300 # don't run a cycle faster than this
302 
303-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
304+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
305         service.MultiService.__init__(self)
306         if allowed_cpu_percentage is not None:
307             self.allowed_cpu_percentage = allowed_cpu_percentage
308hunk ./src/allmydata/storage/crawler.py 72
309-        self.server = server
310-        self.sharedir = server.sharedir
311-        self.statefile = statefile
312+        self.backend = backend
313         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
314                          for i in range(2**10)]
315         self.prefixes.sort()
316hunk ./src/allmydata/storage/crawler.py 446
317 
318     minimum_cycle_time = 60*60 # we don't need this more than once an hour
319 
320-    def __init__(self, server, statefile, num_sample_prefixes=1):
321-        ShareCrawler.__init__(self, server, statefile)
322+    def __init__(self, statefile, num_sample_prefixes=1):
323+        ShareCrawler.__init__(self, statefile)
324         self.num_sample_prefixes = num_sample_prefixes
325 
326     def add_initial_state(self):
327hunk ./src/allmydata/storage/expirer.py 15
328     removed.
329 
330     I collect statistics on the leases and make these available to a web
331-    status page, including::
332+    status page, including:
333 
334     Space recovered during this cycle-so-far:
335      actual (only if expiration_enabled=True):
336hunk ./src/allmydata/storage/expirer.py 51
337     slow_start = 360 # wait 6 minutes after startup
338     minimum_cycle_time = 12*60*60 # not more than twice per day
339 
340-    def __init__(self, server, statefile, historyfile,
341+    def __init__(self, statefile, historyfile,
342                  expiration_enabled, mode,
343                  override_lease_duration, # used if expiration_mode=="age"
344                  cutoff_date, # used if expiration_mode=="cutoff-date"
345hunk ./src/allmydata/storage/expirer.py 71
346         else:
347             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
348         self.sharetypes_to_expire = sharetypes
349-        ShareCrawler.__init__(self, server, statefile)
350+        ShareCrawler.__init__(self, statefile)
351 
352     def add_initial_state(self):
353         # we fill ["cycle-to-date"] here (even though they will be reset in
354hunk ./src/allmydata/storage/immutable.py 44
355     sharetype = "immutable"
356 
357     def __init__(self, filename, max_size=None, create=False):
358-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
359+        """ If max_size is not None then I won't allow more than
360+        max_size to be written to me. If create=True then max_size
361+        must not be None. """
362         precondition((max_size is not None) or (not create), max_size, create)
363         self.home = filename
364         self._max_size = max_size
365hunk ./src/allmydata/storage/immutable.py 87
366 
367     def read_share_data(self, offset, length):
368         precondition(offset >= 0)
369-        # reads beyond the end of the data are truncated. Reads that start
370-        # beyond the end of the data return an empty string. I wonder why
371-        # Python doesn't do the following computation for me?
372+        # Reads beyond the end of the data are truncated. Reads that start
373+        # beyond the end of the data return an empty string.
374         seekpos = self._data_offset+offset
375         fsize = os.path.getsize(self.home)
376         actuallength = max(0, min(length, fsize-seekpos))
377hunk ./src/allmydata/storage/immutable.py 198
378             space_freed += os.stat(self.home)[stat.ST_SIZE]
379             self.unlink()
380         return space_freed
381+class NullBucketWriter(Referenceable):
382+    implements(RIBucketWriter)
383 
384hunk ./src/allmydata/storage/immutable.py 201
385+    def remote_write(self, offset, data):
386+        return
387 
388 class BucketWriter(Referenceable):
389     implements(RIBucketWriter)
390hunk ./src/allmydata/storage/server.py 7
391 from twisted.application import service
392 
393 from zope.interface import implements
394-from allmydata.interfaces import RIStorageServer, IStatsProducer
395+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
396 from allmydata.util import fileutil, idlib, log, time_format
397 import allmydata # for __full_version__
398 
399hunk ./src/allmydata/storage/server.py 16
400 from allmydata.storage.lease import LeaseInfo
401 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
402      create_mutable_sharefile
403-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
404+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
405 from allmydata.storage.crawler import BucketCountingCrawler
406 from allmydata.storage.expirer import LeaseCheckingCrawler
407 
408hunk ./src/allmydata/storage/server.py 20
409+from zope.interface import implements
410+
411+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
412+# be started and stopped.
413+class Backend(service.MultiService):
414+    implements(IStatsProducer)
415+    def __init__(self):
416+        service.MultiService.__init__(self)
417+
418+    def get_bucket_shares(self):
419+        """XXX"""
420+        raise NotImplementedError
421+
422+    def get_share(self):
423+        """XXX"""
424+        raise NotImplementedError
425+
426+    def make_bucket_writer(self):
427+        """XXX"""
428+        raise NotImplementedError
429+
430+class NullBackend(Backend):
431+    def __init__(self):
432+        Backend.__init__(self)
433+
434+    def get_available_space(self):
435+        return None
436+
437+    def get_bucket_shares(self, storage_index):
438+        return set()
439+
440+    def get_share(self, storage_index, sharenum):
441+        return None
442+
443+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
444+        return NullBucketWriter()
445+
446+class FSBackend(Backend):
447+    def __init__(self, storedir, readonly=False, reserved_space=0):
448+        Backend.__init__(self)
449+
450+        self._setup_storage(storedir, readonly, reserved_space)
451+        self._setup_corruption_advisory()
452+        self._setup_bucket_counter()
453+        self._setup_lease_checkerf()
454+
455+    def _setup_storage(self, storedir, readonly, reserved_space):
456+        self.storedir = storedir
457+        self.readonly = readonly
458+        self.reserved_space = int(reserved_space)
459+        if self.reserved_space:
460+            if self.get_available_space() is None:
461+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
462+                        umid="0wZ27w", level=log.UNUSUAL)
463+
464+        self.sharedir = os.path.join(self.storedir, "shares")
465+        fileutil.make_dirs(self.sharedir)
466+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
467+        self._clean_incomplete()
468+
469+    def _clean_incomplete(self):
470+        fileutil.rm_dir(self.incomingdir)
471+        fileutil.make_dirs(self.incomingdir)
472+
473+    def _setup_corruption_advisory(self):
474+        # we don't actually create the corruption-advisory dir until necessary
475+        self.corruption_advisory_dir = os.path.join(self.storedir,
476+                                                    "corruption-advisories")
477+
478+    def _setup_bucket_counter(self):
479+        statefile = os.path.join(self.storedir, "bucket_counter.state")
480+        self.bucket_counter = BucketCountingCrawler(statefile)
481+        self.bucket_counter.setServiceParent(self)
482+
483+    def _setup_lease_checkerf(self):
484+        statefile = os.path.join(self.storedir, "lease_checker.state")
485+        historyfile = os.path.join(self.storedir, "lease_checker.history")
486+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
487+                                   expiration_enabled, expiration_mode,
488+                                   expiration_override_lease_duration,
489+                                   expiration_cutoff_date,
490+                                   expiration_sharetypes)
491+        self.lease_checker.setServiceParent(self)
492+
493+    def get_available_space(self):
494+        if self.readonly:
495+            return 0
496+        return fileutil.get_available_space(self.storedir, self.reserved_space)
497+
498+    def get_bucket_shares(self, storage_index):
499+        """Return a list of (shnum, pathname) tuples for files that hold
500+        shares for this storage_index. In each tuple, 'shnum' will always be
501+        the integer form of the last component of 'pathname'."""
502+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
503+        try:
504+            for f in os.listdir(storagedir):
505+                if NUM_RE.match(f):
506+                    filename = os.path.join(storagedir, f)
507+                    yield (int(f), filename)
508+        except OSError:
509+            # Commonly caused by there being no buckets at all.
510+            pass
511+
512 # storage/
513 # storage/shares/incoming
514 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
515hunk ./src/allmydata/storage/server.py 143
516     name = 'storage'
517     LeaseCheckerClass = LeaseCheckingCrawler
518 
519-    def __init__(self, storedir, nodeid, reserved_space=0,
520-                 discard_storage=False, readonly_storage=False,
521+    def __init__(self, nodeid, backend, reserved_space=0,
522+                 readonly_storage=False,
523                  stats_provider=None,
524                  expiration_enabled=False,
525                  expiration_mode="age",
526hunk ./src/allmydata/storage/server.py 155
527         assert isinstance(nodeid, str)
528         assert len(nodeid) == 20
529         self.my_nodeid = nodeid
530-        self.storedir = storedir
531-        sharedir = os.path.join(storedir, "shares")
532-        fileutil.make_dirs(sharedir)
533-        self.sharedir = sharedir
534-        # we don't actually create the corruption-advisory dir until necessary
535-        self.corruption_advisory_dir = os.path.join(storedir,
536-                                                    "corruption-advisories")
537-        self.reserved_space = int(reserved_space)
538-        self.no_storage = discard_storage
539-        self.readonly_storage = readonly_storage
540         self.stats_provider = stats_provider
541         if self.stats_provider:
542             self.stats_provider.register_producer(self)
543hunk ./src/allmydata/storage/server.py 158
544-        self.incomingdir = os.path.join(sharedir, 'incoming')
545-        self._clean_incomplete()
546-        fileutil.make_dirs(self.incomingdir)
547         self._active_writers = weakref.WeakKeyDictionary()
548hunk ./src/allmydata/storage/server.py 159
549+        self.backend = backend
550+        self.backend.setServiceParent(self)
551         log.msg("StorageServer created", facility="tahoe.storage")
552 
553hunk ./src/allmydata/storage/server.py 163
554-        if reserved_space:
555-            if self.get_available_space() is None:
556-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
557-                        umin="0wZ27w", level=log.UNUSUAL)
558-
559         self.latencies = {"allocate": [], # immutable
560                           "write": [],
561                           "close": [],
562hunk ./src/allmydata/storage/server.py 174
563                           "renew": [],
564                           "cancel": [],
565                           }
566-        self.add_bucket_counter()
567-
568-        statefile = os.path.join(self.storedir, "lease_checker.state")
569-        historyfile = os.path.join(self.storedir, "lease_checker.history")
570-        klass = self.LeaseCheckerClass
571-        self.lease_checker = klass(self, statefile, historyfile,
572-                                   expiration_enabled, expiration_mode,
573-                                   expiration_override_lease_duration,
574-                                   expiration_cutoff_date,
575-                                   expiration_sharetypes)
576-        self.lease_checker.setServiceParent(self)
577 
578     def __repr__(self):
579         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
580hunk ./src/allmydata/storage/server.py 178
581 
582-    def add_bucket_counter(self):
583-        statefile = os.path.join(self.storedir, "bucket_counter.state")
584-        self.bucket_counter = BucketCountingCrawler(self, statefile)
585-        self.bucket_counter.setServiceParent(self)
586-
587     def count(self, name, delta=1):
588         if self.stats_provider:
589             self.stats_provider.count("storage_server." + name, delta)
590hunk ./src/allmydata/storage/server.py 233
591             kwargs["facility"] = "tahoe.storage"
592         return log.msg(*args, **kwargs)
593 
594-    def _clean_incomplete(self):
595-        fileutil.rm_dir(self.incomingdir)
596-
597     def get_stats(self):
598         # remember: RIStatsProvider requires that our return dict
599         # contains numeric values.
600hunk ./src/allmydata/storage/server.py 269
601             stats['storage_server.total_bucket_count'] = bucket_count
602         return stats
603 
604-    def get_available_space(self):
605-        """Returns available space for share storage in bytes, or None if no
606-        API to get this information is available."""
607-
608-        if self.readonly_storage:
609-            return 0
610-        return fileutil.get_available_space(self.storedir, self.reserved_space)
611-
612     def allocated_size(self):
613         space = 0
614         for bw in self._active_writers:
615hunk ./src/allmydata/storage/server.py 276
616         return space
617 
618     def remote_get_version(self):
619-        remaining_space = self.get_available_space()
620+        remaining_space = self.backend.get_available_space()
621         if remaining_space is None:
622             # We're on a platform that has no API to get disk stats.
623             remaining_space = 2**64
624hunk ./src/allmydata/storage/server.py 301
625         self.count("allocate")
626         alreadygot = set()
627         bucketwriters = {} # k: shnum, v: BucketWriter
628-        si_dir = storage_index_to_dir(storage_index)
629-        si_s = si_b2a(storage_index)
630 
631hunk ./src/allmydata/storage/server.py 302
632+        si_s = si_b2a(storage_index)
633         log.msg("storage: allocate_buckets %s" % si_s)
634 
635         # in this implementation, the lease information (including secrets)
636hunk ./src/allmydata/storage/server.py 316
637 
638         max_space_per_bucket = allocated_size
639 
640-        remaining_space = self.get_available_space()
641+        remaining_space = self.backend.get_available_space()
642         limited = remaining_space is not None
643         if limited:
644             # this is a bit conservative, since some of this allocated_size()
645hunk ./src/allmydata/storage/server.py 329
646         # they asked about: this will save them a lot of work. Add or update
647         # leases for all of them: if they want us to hold shares for this
648         # file, they'll want us to hold leases for this file.
649-        for (shnum, fn) in self._get_bucket_shares(storage_index):
650+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
651             alreadygot.add(shnum)
652             sf = ShareFile(fn)
653             sf.add_or_renew_lease(lease_info)
654hunk ./src/allmydata/storage/server.py 335
655 
656         for shnum in sharenums:
657-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
658-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
659-            if os.path.exists(finalhome):
660+            share = self.backend.get_share(storage_index, shnum)
661+
662+            if not share:
663+                if (not limited) or (remaining_space >= max_space_per_bucket):
664+                    # ok! we need to create the new share file.
665+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
666+                                      max_space_per_bucket, lease_info, canary)
667+                    bucketwriters[shnum] = bw
668+                    self._active_writers[bw] = 1
669+                    if limited:
670+                        remaining_space -= max_space_per_bucket
671+                else:
672+                    # bummer! not enough space to accept this bucket
673+                    pass
674+
675+            elif share.is_complete():
676                 # great! we already have it. easy.
677                 pass
678hunk ./src/allmydata/storage/server.py 353
679-            elif os.path.exists(incominghome):
680+            elif not share.is_complete():
681                 # Note that we don't create BucketWriters for shnums that
682                 # have a partial share (in incoming/), so if a second upload
683                 # occurs while the first is still in progress, the second
684hunk ./src/allmydata/storage/server.py 359
685                 # uploader will use different storage servers.
686                 pass
687-            elif (not limited) or (remaining_space >= max_space_per_bucket):
688-                # ok! we need to create the new share file.
689-                bw = BucketWriter(self, incominghome, finalhome,
690-                                  max_space_per_bucket, lease_info, canary)
691-                if self.no_storage:
692-                    bw.throw_out_all_data = True
693-                bucketwriters[shnum] = bw
694-                self._active_writers[bw] = 1
695-                if limited:
696-                    remaining_space -= max_space_per_bucket
697-            else:
698-                # bummer! not enough space to accept this bucket
699-                pass
700-
701-        if bucketwriters:
702-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
703 
704         self.add_latency("allocate", time.time() - start)
705         return alreadygot, bucketwriters
706hunk ./src/allmydata/storage/server.py 437
707             self.stats_provider.count('storage_server.bytes_added', consumed_size)
708         del self._active_writers[bw]
709 
710-    def _get_bucket_shares(self, storage_index):
711-        """Return a list of (shnum, pathname) tuples for files that hold
712-        shares for this storage_index. In each tuple, 'shnum' will always be
713-        the integer form of the last component of 'pathname'."""
714-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
715-        try:
716-            for f in os.listdir(storagedir):
717-                if NUM_RE.match(f):
718-                    filename = os.path.join(storagedir, f)
719-                    yield (int(f), filename)
720-        except OSError:
721-            # Commonly caused by there being no buckets at all.
722-            pass
723 
724     def remote_get_buckets(self, storage_index):
725         start = time.time()
726hunk ./src/allmydata/storage/server.py 444
727         si_s = si_b2a(storage_index)
728         log.msg("storage: get_buckets %s" % si_s)
729         bucketreaders = {} # k: sharenum, v: BucketReader
730-        for shnum, filename in self._get_bucket_shares(storage_index):
731+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
732             bucketreaders[shnum] = BucketReader(self, filename,
733                                                 storage_index, shnum)
734         self.add_latency("get", time.time() - start)
735hunk ./src/allmydata/test/test_backends.py 10
736 import mock
737 
738 # This is the code that we're going to be testing.
739-from allmydata.storage.server import StorageServer
740+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
741 
742 # The following share file contents was generated with
743 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
744hunk ./src/allmydata/test/test_backends.py 21
745 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
746 
747 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
748+    @mock.patch('time.time')
749+    @mock.patch('os.mkdir')
750+    @mock.patch('__builtin__.open')
751+    @mock.patch('os.listdir')
752+    @mock.patch('os.path.isdir')
753+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
754+        """ This tests whether a server instance can be constructed
755+        with a null backend. The server instance fails the test if it
756+        tries to read or write to the file system. """
757+
758+        # Now begin the test.
759+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
760+
761+        self.failIf(mockisdir.called)
762+        self.failIf(mocklistdir.called)
763+        self.failIf(mockopen.called)
764+        self.failIf(mockmkdir.called)
765+
766+        # You passed!
767+
768+    @mock.patch('time.time')
769+    @mock.patch('os.mkdir')
770     @mock.patch('__builtin__.open')
771hunk ./src/allmydata/test/test_backends.py 44
772-    def test_create_server(self, mockopen):
773-        """ This tests whether a server instance can be constructed. """
774+    @mock.patch('os.listdir')
775+    @mock.patch('os.path.isdir')
776+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
777+        """ This tests whether a server instance can be constructed
778+        with a filesystem backend. To pass the test, it has to use the
779+        filesystem in only the prescribed ways. """
780 
781         def call_open(fname, mode):
782             if fname == 'testdir/bucket_counter.state':
783hunk ./src/allmydata/test/test_backends.py 58
784                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
785             elif fname == 'testdir/lease_checker.history':
786                 return StringIO()
787+            else:
788+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
789         mockopen.side_effect = call_open
790 
791         # Now begin the test.
792hunk ./src/allmydata/test/test_backends.py 63
793-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
794+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
795+
796+        self.failIf(mockisdir.called)
797+        self.failIf(mocklistdir.called)
798+        self.failIf(mockopen.called)
799+        self.failIf(mockmkdir.called)
800+        self.failIf(mocktime.called)
801 
802         # You passed!
803 
804hunk ./src/allmydata/test/test_backends.py 73
805-class TestServer(unittest.TestCase, ReallyEqualMixin):
806+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
807+    def setUp(self):
808+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
809+
810+    @mock.patch('os.mkdir')
811+    @mock.patch('__builtin__.open')
812+    @mock.patch('os.listdir')
813+    @mock.patch('os.path.isdir')
814+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
815+        """ Write a new share. """
816+
817+        # Now begin the test.
818+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
819+        bs[0].remote_write(0, 'a')
820+        self.failIf(mockisdir.called)
821+        self.failIf(mocklistdir.called)
822+        self.failIf(mockopen.called)
823+        self.failIf(mockmkdir.called)
824+
825+    @mock.patch('os.path.exists')
826+    @mock.patch('os.path.getsize')
827+    @mock.patch('__builtin__.open')
828+    @mock.patch('os.listdir')
829+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
830+        """ This tests whether the code correctly finds and reads
831+        shares written out by old (Tahoe-LAFS <= v1.8.2)
832+        servers. There is a similar test in test_download, but that one
833+        is from the perspective of the client and exercises a deeper
834+        stack of code. This one is for exercising just the
835+        StorageServer object. """
836+
837+        # Now begin the test.
838+        bs = self.s.remote_get_buckets('teststorage_index')
839+
840+        self.failUnlessEqual(len(bs), 0)
841+        self.failIf(mocklistdir.called)
842+        self.failIf(mockopen.called)
843+        self.failIf(mockgetsize.called)
844+        self.failIf(mockexists.called)
845+
846+
847+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
848     @mock.patch('__builtin__.open')
849     def setUp(self, mockopen):
850         def call_open(fname, mode):
851hunk ./src/allmydata/test/test_backends.py 126
852                 return StringIO()
853         mockopen.side_effect = call_open
854 
855-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
856-
857+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
858 
859     @mock.patch('time.time')
860     @mock.patch('os.mkdir')
861hunk ./src/allmydata/test/test_backends.py 134
862     @mock.patch('os.listdir')
863     @mock.patch('os.path.isdir')
864     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
865-        """Handle a report of corruption."""
866+        """ Write a new share. """
867 
868         def call_listdir(dirname):
869             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
870hunk ./src/allmydata/test/test_backends.py 173
871         mockopen.side_effect = call_open
872         # Now begin the test.
873         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
874-        print bs
875         bs[0].remote_write(0, 'a')
876         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
877 
878hunk ./src/allmydata/test/test_backends.py 176
879-
880     @mock.patch('os.path.exists')
881     @mock.patch('os.path.getsize')
882     @mock.patch('__builtin__.open')
883hunk ./src/allmydata/test/test_backends.py 218
884 
885         self.failUnlessEqual(len(bs), 1)
886         b = bs[0]
887+        # These should match by definition; the next two cases cover cases whose behavior is not completely unambiguous.
888         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
889         # If you try to read past the end you get as much data as is there.
890         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
891hunk ./src/allmydata/test/test_backends.py 224
892         # If you start reading past the end of the file you get the empty string.
893         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
894+
895+
896}
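After this patch, a StorageServer is constructed around a backend object rather than a storage directory; a brief usage sketch, mirroring the constructions that appear in test_backends.py above (it assumes this patch is applied, since trunk's StorageServer takes a storedir):

    from allmydata.storage.server import StorageServer, FSBackend, NullBackend

    # A server whose backend never touches the disk -- handy for exercising
    # "unlimited space" behavior in tests.
    null_server = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())

    # A server backed by the local filesystem under 'teststoredir'.
    fs_server = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
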
897[a temp patch used as a snapshot
898wilcoxjg@gmail.com**20110626052732
899 Ignore-this: 95f05e314eaec870afa04c76d979aa44
900] {
901hunk ./docs/configuration.rst 637
902   [storage]
903   enabled = True
904   readonly = True
905-  sizelimit = 10000000000
906 
907 
908   [helper]
909hunk ./docs/garbage-collection.rst 16
910 
911 When a file or directory in the virtual filesystem is no longer referenced,
912 the space that its shares occupied on each storage server can be freed,
913-making room for other shares. Tahoe currently uses a garbage collection
914+making room for other shares. Tahoe uses a garbage collection
915 ("GC") mechanism to implement this space-reclamation process. Each share has
916 one or more "leases", which are managed by clients who want the
917 file/directory to be retained. The storage server accepts each share for a
918hunk ./docs/garbage-collection.rst 34
919 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
920 If lease renewal occurs quickly and with 100% reliability, than any renewal
921 time that is shorter than the lease duration will suffice, but a larger ratio
922-of duration-over-renewal-time will be more robust in the face of occasional
923+of lease duration to renewal time will be more robust in the face of occasional
924 delays or failures.
925 
926 The current recommended values for a small Tahoe grid are to renew the leases
927replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
928hunk ./src/allmydata/client.py 260
929             sharetypes.append("mutable")
930         expiration_sharetypes = tuple(sharetypes)
931 
932+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
933+            xyz
934+        xyz
935         ss = StorageServer(storedir, self.nodeid,
936                            reserved_space=reserved,
937                            discard_storage=discard,
938hunk ./src/allmydata/storage/crawler.py 234
939         f = open(tmpfile, "wb")
940         pickle.dump(self.state, f)
941         f.close()
942-        fileutil.move_into_place(tmpfile, self.statefile)
943+        fileutil.move_into_place(tmpfile, self.statefname)
944 
945     def startService(self):
946         # arrange things to look like we were just sleeping, so
947}
948[snapshot of progress on backend implementation (not suitable for trunk)
949wilcoxjg@gmail.com**20110626053244
950 Ignore-this: 50c764af791c2b99ada8289546806a0a
951] {
952adddir ./src/allmydata/storage/backends
953adddir ./src/allmydata/storage/backends/das
954move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
955adddir ./src/allmydata/storage/backends/null
956hunk ./src/allmydata/interfaces.py 270
957         store that on disk.
958         """
959 
960+class IStorageBackend(Interface):
961+    """
962+    Objects of this kind live on the server side and are used by the
963+    storage server object.
964+    """
965+    def get_available_space(self, reserved_space):
966+        """ Returns available space for share storage in bytes, or
967+        None if this information is not available or if the available
968+        space is unlimited.
969+
970+        If the backend is configured for read-only mode then this will
971+        return 0.
972+
973+        reserved_space is how many bytes to subtract from the answer, so
974+        you can pass how many bytes you would like to leave unused on this
975+        filesystem as reserved_space. """
976+
977+    def get_bucket_shares(self):
978+        """XXX"""
979+
980+    def get_share(self):
981+        """XXX"""
982+
983+    def make_bucket_writer(self):
984+        """XXX"""
985+
986+class IStorageBackendShare(Interface):
987+    """
988+    This object may contain as much as all of the share data.  It is intended
989+    for lazy evaluation such that in many use cases substantially less than
990+    all of the share data will be accessed.
991+    """
992+    def is_complete(self):
993+        """
994+        Returns the share state, or None if the share does not exist.
995+        """
996+
997 class IStorageBucketWriter(Interface):
998     """
999     Objects of this kind live on the client side.
1000hunk ./src/allmydata/interfaces.py 2492
1001 
1002 class EmptyPathnameComponentError(Exception):
1003     """The webapi disallows empty pathname components."""
1004+
1005+class IShareStore(Interface):
1006+    pass
1007+
1008addfile ./src/allmydata/storage/backends/__init__.py
1009addfile ./src/allmydata/storage/backends/das/__init__.py
1010addfile ./src/allmydata/storage/backends/das/core.py
1011hunk ./src/allmydata/storage/backends/das/core.py 1
1012+from allmydata.interfaces import IStorageBackend
1013+from allmydata.storage.backends.base import Backend
1014+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1015+from allmydata.util.assertutil import precondition
1016+
1017+import os, re, weakref, struct, time
1018+
1019+from foolscap.api import Referenceable
1020+from twisted.application import service
1021+
1022+from zope.interface import implements
1023+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1024+from allmydata.util import fileutil, idlib, log, time_format
1025+import allmydata # for __full_version__
1026+
1027+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1028+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1029+from allmydata.storage.lease import LeaseInfo
1030+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1031+     create_mutable_sharefile
1032+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1033+from allmydata.storage.crawler import FSBucketCountingCrawler
1034+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1035+
1036+from zope.interface import implements
1037+
1038+class DASCore(Backend):
1039+    implements(IStorageBackend)
1040+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1041+        Backend.__init__(self)
1042+
1043+        self._setup_storage(storedir, readonly, reserved_space)
1044+        self._setup_corruption_advisory()
1045+        self._setup_bucket_counter()
1046+        self._setup_lease_checkerf(expiration_policy)
1047+
1048+    def _setup_storage(self, storedir, readonly, reserved_space):
1049+        self.storedir = storedir
1050+        self.readonly = readonly
1051+        self.reserved_space = int(reserved_space)
1052+        if self.reserved_space:
1053+            if self.get_available_space() is None:
1054+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1055+                        umid="0wZ27w", level=log.UNUSUAL)
1056+
1057+        self.sharedir = os.path.join(self.storedir, "shares")
1058+        fileutil.make_dirs(self.sharedir)
1059+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1060+        self._clean_incomplete()
1061+
1062+    def _clean_incomplete(self):
1063+        fileutil.rm_dir(self.incomingdir)
1064+        fileutil.make_dirs(self.incomingdir)
1065+
1066+    def _setup_corruption_advisory(self):
1067+        # we don't actually create the corruption-advisory dir until necessary
1068+        self.corruption_advisory_dir = os.path.join(self.storedir,
1069+                                                    "corruption-advisories")
1070+
1071+    def _setup_bucket_counter(self):
1072+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1073+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1074+        self.bucket_counter.setServiceParent(self)
1075+
1076+    def _setup_lease_checkerf(self, expiration_policy):
1077+        statefile = os.path.join(self.storedir, "lease_checker.state")
1078+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1079+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1080+        self.lease_checker.setServiceParent(self)
1081+
1082+    def get_available_space(self):
1083+        if self.readonly:
1084+            return 0
1085+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1086+
1087+    def get_shares(self, storage_index):
1088+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1089+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1090+        try:
1091+            for f in os.listdir(finalstoragedir):
1092+                if NUM_RE.match(f):
1093+                    filename = os.path.join(finalstoragedir, f)
1094+                    yield FSBShare(filename, int(f))
1095+        except OSError:
1096+            # Commonly caused by there being no buckets at all.
1097+            pass
1098+       
1099+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1100+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1101+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1102+        return bw
1103+       
1104+
1105+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1106+# and share data. The share data is accessed by RIBucketWriter.write and
1107+# RIBucketReader.read . The lease information is not accessible through these
1108+# interfaces.
1109+
1110+# The share file has the following layout:
1111+#  0x00: share file version number, four bytes, current version is 1
1112+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1113+#  0x08: number of leases, four bytes big-endian
1114+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1115+#  A+0x0c = B: first lease. Lease format is:
1116+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1117+#   B+0x04: renew secret, 32 bytes (SHA256)
1118+#   B+0x24: cancel secret, 32 bytes (SHA256)
1119+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1120+#   B+0x48: next lease, or end of record
1121+
1122+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1123+# but it is still filled in by storage servers in case the storage server
1124+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1125+# share file is moved from one storage server to another. The value stored in
1126+# this field is truncated, so if the actual share data length is >= 2**32,
1127+# then the value stored in this field will be the actual share data length
1128+# modulo 2**32.
1129+
1130+class ImmutableShare:
1131+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1132+    sharetype = "immutable"
1133+
1134+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1135+        """ If max_size is not None then I won't allow more than
1136+        max_size to be written to me. If create=True then max_size
1137+        must not be None. """
1138+        precondition((max_size is not None) or (not create), max_size, create)
1139+        self.shnum = shnum
1140+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1141+        self._max_size = max_size
1142+        if create:
1143+            # touch the file, so later callers will see that we're working on
1144+            # it. Also construct the metadata.
1145+            assert not os.path.exists(self.fname)
1146+            fileutil.make_dirs(os.path.dirname(self.fname))
1147+            f = open(self.fname, 'wb')
1148+            # The second field -- the four-byte share data length -- is no
1149+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1150+            # there in case someone downgrades a storage server from >=
1151+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1152+            # server to another, etc. We do saturation -- a share data length
1153+            # larger than 2**32-1 (what can fit into the field) is marked as
1154+            # the largest length that can fit into the field. That way, even
1155+            # if this does happen, the old < v1.3.0 server will still allow
1156+            # clients to read the first part of the share.
1157+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1158+            f.close()
1159+            self._lease_offset = max_size + 0x0c
1160+            self._num_leases = 0
1161+        else:
1162+            f = open(self.fname, 'rb')
1163+            filesize = os.path.getsize(self.fname)
1164+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1165+            f.close()
1166+            if version != 1:
1167+                msg = "sharefile %s had version %d but we wanted 1" % \
1168+                      (self.fname, version)
1169+                raise UnknownImmutableContainerVersionError(msg)
1170+            self._num_leases = num_leases
1171+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1172+        self._data_offset = 0xc
1173+
1174+    def unlink(self):
1175+        os.unlink(self.fname)
1176+
1177+    def read_share_data(self, offset, length):
1178+        precondition(offset >= 0)
1179+        # Reads beyond the end of the data are truncated. Reads that start
1180+        # beyond the end of the data return an empty string.
1181+        seekpos = self._data_offset+offset
1182+        fsize = os.path.getsize(self.fname)
1183+        actuallength = max(0, min(length, fsize-seekpos))
1184+        if actuallength == 0:
1185+            return ""
1186+        f = open(self.fname, 'rb')
1187+        f.seek(seekpos)
1188+        return f.read(actuallength)
1189+
1190+    def write_share_data(self, offset, data):
1191+        length = len(data)
1192+        precondition(offset >= 0, offset)
1193+        if self._max_size is not None and offset+length > self._max_size:
1194+            raise DataTooLargeError(self._max_size, offset, length)
1195+        f = open(self.fname, 'rb+')
1196+        real_offset = self._data_offset+offset
1197+        f.seek(real_offset)
1198+        assert f.tell() == real_offset
1199+        f.write(data)
1200+        f.close()
1201+
1202+    def _write_lease_record(self, f, lease_number, lease_info):
1203+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1204+        f.seek(offset)
1205+        assert f.tell() == offset
1206+        f.write(lease_info.to_immutable_data())
1207+
1208+    def _read_num_leases(self, f):
1209+        f.seek(0x08)
1210+        (num_leases,) = struct.unpack(">L", f.read(4))
1211+        return num_leases
1212+
1213+    def _write_num_leases(self, f, num_leases):
1214+        f.seek(0x08)
1215+        f.write(struct.pack(">L", num_leases))
1216+
1217+    def _truncate_leases(self, f, num_leases):
1218+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1219+
1220+    def get_leases(self):
1221+        """Yields a LeaseInfo instance for all leases."""
1222+        f = open(self.fname, 'rb')
1223+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1224+        f.seek(self._lease_offset)
1225+        for i in range(num_leases):
1226+            data = f.read(self.LEASE_SIZE)
1227+            if data:
1228+                yield LeaseInfo().from_immutable_data(data)
1229+
1230+    def add_lease(self, lease_info):
1231+        f = open(self.fname, 'rb+')
1232+        num_leases = self._read_num_leases(f)
1233+        self._write_lease_record(f, num_leases, lease_info)
1234+        self._write_num_leases(f, num_leases+1)
1235+        f.close()
1236+
1237+    def renew_lease(self, renew_secret, new_expire_time):
1238+        for i,lease in enumerate(self.get_leases()):
1239+            if constant_time_compare(lease.renew_secret, renew_secret):
1240+                # yup. See if we need to update the owner time.
1241+                if new_expire_time > lease.expiration_time:
1242+                    # yes
1243+                    lease.expiration_time = new_expire_time
1244+                    f = open(self.fname, 'rb+')
1245+                    self._write_lease_record(f, i, lease)
1246+                    f.close()
1247+                return
1248+        raise IndexError("unable to renew non-existent lease")
1249+
1250+    def add_or_renew_lease(self, lease_info):
1251+        try:
1252+            self.renew_lease(lease_info.renew_secret,
1253+                             lease_info.expiration_time)
1254+        except IndexError:
1255+            self.add_lease(lease_info)
1256+
1257+
1258+    def cancel_lease(self, cancel_secret):
1259+        """Remove a lease with the given cancel_secret. If the last lease is
1260+        cancelled, the file will be removed. Return the number of bytes that
1261+        were freed (by truncating the list of leases, and possibly by
1262+        deleting the file). Raise IndexError if there was no lease with the
1263+        given cancel_secret.
1264+        """
1265+
1266+        leases = list(self.get_leases())
1267+        num_leases_removed = 0
1268+        for i,lease in enumerate(leases):
1269+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1270+                leases[i] = None
1271+                num_leases_removed += 1
1272+        if not num_leases_removed:
1273+            raise IndexError("unable to find matching lease to cancel")
1274+        if num_leases_removed:
1275+            # pack and write out the remaining leases. We write these out in
1276+            # the same order as they were added, so that if we crash while
1277+            # doing this, we won't lose any non-cancelled leases.
1278+            leases = [l for l in leases if l] # remove the cancelled leases
1279+            f = open(self.fname, 'rb+')
1280+            for i,lease in enumerate(leases):
1281+                self._write_lease_record(f, i, lease)
1282+            self._write_num_leases(f, len(leases))
1283+            self._truncate_leases(f, len(leases))
1284+            f.close()
1285+        space_freed = self.LEASE_SIZE * num_leases_removed
1286+        if not len(leases):
1287+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1288+            self.unlink()
1289+        return space_freed
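
For reference, a small sketch of the space accounting used by cancel_lease() above: each lease record is a fixed-size struct (owner number, renew secret, cancel secret, expiration time), as in the existing ShareFile class, so the lease bytes freed are just LEASE_SIZE times the number of leases removed.

import struct

# 4-byte owner number, 32-byte renew secret, 32-byte cancel secret,
# 4-byte expiration time = 72 bytes per lease record.
LEASE_SIZE = struct.calcsize(">L32s32sL")
assert LEASE_SIZE == 72
num_leases_removed = 3                    # hypothetical
print LEASE_SIZE * num_leases_removed     # 216 bytes of lease data freed
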
1290hunk ./src/allmydata/storage/backends/das/expirer.py 2
1291 import time, os, pickle, struct
1292-from allmydata.storage.crawler import ShareCrawler
1293-from allmydata.storage.shares import get_share_file
1294+from allmydata.storage.crawler import FSShareCrawler
1295 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1296      UnknownImmutableContainerVersionError
1297 from twisted.python import log as twlog
1298hunk ./src/allmydata/storage/backends/das/expirer.py 7
1299 
1300-class LeaseCheckingCrawler(ShareCrawler):
1301+class FSLeaseCheckingCrawler(FSShareCrawler):
1302     """I examine the leases on all shares, determining which are still valid
1303     and which have expired. I can remove the expired leases (if so
1304     configured), and the share will be deleted when the last lease is
1305hunk ./src/allmydata/storage/backends/das/expirer.py 50
1306     slow_start = 360 # wait 6 minutes after startup
1307     minimum_cycle_time = 12*60*60 # not more than twice per day
1308 
1309-    def __init__(self, statefile, historyfile,
1310-                 expiration_enabled, mode,
1311-                 override_lease_duration, # used if expiration_mode=="age"
1312-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1313-                 sharetypes):
1314+    def __init__(self, statefile, historyfile, expiration_policy):
1315         self.historyfile = historyfile
1316hunk ./src/allmydata/storage/backends/das/expirer.py 52
1317-        self.expiration_enabled = expiration_enabled
1318-        self.mode = mode
1319+        self.expiration_enabled = expiration_policy['enabled']
1320+        self.mode = expiration_policy['mode']
1321         self.override_lease_duration = None
1322         self.cutoff_date = None
1323         if self.mode == "age":
1324hunk ./src/allmydata/storage/backends/das/expirer.py 57
1325-            assert isinstance(override_lease_duration, (int, type(None)))
1326-            self.override_lease_duration = override_lease_duration # seconds
1327+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1328+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1329         elif self.mode == "cutoff-date":
1330hunk ./src/allmydata/storage/backends/das/expirer.py 60
1331-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1332+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1333             assert cutoff_date is not None
1334hunk ./src/allmydata/storage/backends/das/expirer.py 62
1335-            self.cutoff_date = cutoff_date
1336+            self.cutoff_date = expiration_policy['cutoff_date']
1337         else:
1338hunk ./src/allmydata/storage/backends/das/expirer.py 64
1339-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1340-        self.sharetypes_to_expire = sharetypes
1341-        ShareCrawler.__init__(self, statefile)
1342+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1343+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1344+        FSShareCrawler.__init__(self, statefile)
1345 
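
An illustrative expiration_policy dict of the shape the rewritten constructor above expects; only the keys come from the code, the values shown here are hypothetical.

expiration_policy = {'enabled': True,
                     'mode': 'cutoff-date',            # or 'age'
                     'override_lease_duration': None,  # seconds, used when mode == 'age'
                     'cutoff_date': 1309478400,        # seconds-since-epoch, used when mode == 'cutoff-date'
                     'sharetypes': ('mutable', 'immutable')}
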
1346     def add_initial_state(self):
1347         # we fill ["cycle-to-date"] here (even though they will be reset in
1348hunk ./src/allmydata/storage/backends/das/expirer.py 156
1349 
1350     def process_share(self, sharefilename):
1351         # first, find out what kind of a share it is
1352-        sf = get_share_file(sharefilename)
1353+        f = open(sharefilename, "rb")
1354+        prefix = f.read(32)
1355+        f.close()
1356+        if prefix == MutableShareFile.MAGIC:
1357+            sf = MutableShareFile(sharefilename)
1358+        else:
1359+            # otherwise assume it's immutable
1360+            sf = FSBShare(sharefilename)
1361         sharetype = sf.sharetype
1362         now = time.time()
1363         s = self.stat(sharefilename)
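
A hedged restatement of the dispatch in process_share() above, pulled out as a stand-alone helper for clarity; it assumes MutableShareFile is importable from allmydata.storage.mutable, as it is elsewhere in this patch.

from allmydata.storage.mutable import MutableShareFile

def sniff_sharetype(sharefilename):
    # Mutable containers start with MutableShareFile.MAGIC; anything else
    # is treated as an immutable share, mirroring process_share().
    f = open(sharefilename, "rb")
    prefix = f.read(32)
    f.close()
    if prefix == MutableShareFile.MAGIC:
        return "mutable"
    return "immutable"
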
1364addfile ./src/allmydata/storage/backends/null/__init__.py
1365addfile ./src/allmydata/storage/backends/null/core.py
1366hunk ./src/allmydata/storage/backends/null/core.py 1
1367+from allmydata.storage.backends.base import Backend
1368+
1369+class NullCore(Backend):
1370+    def __init__(self):
1371+        Backend.__init__(self)
1372+
1373+    def get_available_space(self):
1374+        return None
1375+
1376+    def get_shares(self, storage_index):
1377+        return set()
1378+
1379+    def get_share(self, storage_index, sharenum):
1380+        return None
1381+
1382+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1383+        return NullBucketWriter()
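
A short usage sketch of the null backend defined above (illustrative only): it reports unlimited space and never holds shares, which is what lets the tests below exercise a server without touching the filesystem.

from allmydata.storage.backends.null.core import NullCore

backend = NullCore()
print backend.get_available_space()           # None, i.e. no space limit
print backend.get_shares('fakestorageindex')  # set([]) -- no shares held
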
1384hunk ./src/allmydata/storage/crawler.py 12
1385 class TimeSliceExceeded(Exception):
1386     pass
1387 
1388-class ShareCrawler(service.MultiService):
1389+class FSShareCrawler(service.MultiService):
1390     """A subclass of ShareCrawler is attached to a StorageServer, and
1391     periodically walks all of its shares, processing each one in some
1392     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1393hunk ./src/allmydata/storage/crawler.py 68
1394     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1395     minimum_cycle_time = 300 # don't run a cycle faster than this
1396 
1397-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1398+    def __init__(self, statefname, allowed_cpu_percentage=None):
1399         service.MultiService.__init__(self)
1400         if allowed_cpu_percentage is not None:
1401             self.allowed_cpu_percentage = allowed_cpu_percentage
1402hunk ./src/allmydata/storage/crawler.py 72
1403-        self.backend = backend
1404+        self.statefname = statefname
1405         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1406                          for i in range(2**10)]
1407         self.prefixes.sort()
1408hunk ./src/allmydata/storage/crawler.py 192
1409         #                            of the last bucket to be processed, or
1410         #                            None if we are sleeping between cycles
1411         try:
1412-            f = open(self.statefile, "rb")
1413+            f = open(self.statefname, "rb")
1414             state = pickle.load(f)
1415             f.close()
1416         except EnvironmentError:
1417hunk ./src/allmydata/storage/crawler.py 230
1418         else:
1419             last_complete_prefix = self.prefixes[lcpi]
1420         self.state["last-complete-prefix"] = last_complete_prefix
1421-        tmpfile = self.statefile + ".tmp"
1422+        tmpfile = self.statefname + ".tmp"
1423         f = open(tmpfile, "wb")
1424         pickle.dump(self.state, f)
1425         f.close()
1426hunk ./src/allmydata/storage/crawler.py 433
1427         pass
1428 
1429 
1430-class BucketCountingCrawler(ShareCrawler):
1431+class FSBucketCountingCrawler(FSShareCrawler):
1432     """I keep track of how many buckets are being managed by this server.
1433     This is equivalent to the number of distributed files and directories for
1434     which I am providing storage. The actual number of files+directories in
1435hunk ./src/allmydata/storage/crawler.py 446
1436 
1437     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1438 
1439-    def __init__(self, statefile, num_sample_prefixes=1):
1440-        ShareCrawler.__init__(self, statefile)
1441+    def __init__(self, statefname, num_sample_prefixes=1):
1442+        FSShareCrawler.__init__(self, statefname)
1443         self.num_sample_prefixes = num_sample_prefixes
1444 
1445     def add_initial_state(self):
1446hunk ./src/allmydata/storage/immutable.py 14
1447 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1448      DataTooLargeError
1449 
1450-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1451-# and share data. The share data is accessed by RIBucketWriter.write and
1452-# RIBucketReader.read . The lease information is not accessible through these
1453-# interfaces.
1454-
1455-# The share file has the following layout:
1456-#  0x00: share file version number, four bytes, current version is 1
1457-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1458-#  0x08: number of leases, four bytes big-endian
1459-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1460-#  A+0x0c = B: first lease. Lease format is:
1461-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1462-#   B+0x04: renew secret, 32 bytes (SHA256)
1463-#   B+0x24: cancel secret, 32 bytes (SHA256)
1464-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1465-#   B+0x48: next lease, or end of record
1466-
1467-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1468-# but it is still filled in by storage servers in case the storage server
1469-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1470-# share file is moved from one storage server to another. The value stored in
1471-# this field is truncated, so if the actual share data length is >= 2**32,
1472-# then the value stored in this field will be the actual share data length
1473-# modulo 2**32.
1474-
1475-class ShareFile:
1476-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1477-    sharetype = "immutable"
1478-
1479-    def __init__(self, filename, max_size=None, create=False):
1480-        """ If max_size is not None then I won't allow more than
1481-        max_size to be written to me. If create=True then max_size
1482-        must not be None. """
1483-        precondition((max_size is not None) or (not create), max_size, create)
1484-        self.home = filename
1485-        self._max_size = max_size
1486-        if create:
1487-            # touch the file, so later callers will see that we're working on
1488-            # it. Also construct the metadata.
1489-            assert not os.path.exists(self.home)
1490-            fileutil.make_dirs(os.path.dirname(self.home))
1491-            f = open(self.home, 'wb')
1492-            # The second field -- the four-byte share data length -- is no
1493-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1494-            # there in case someone downgrades a storage server from >=
1495-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1496-            # server to another, etc. We do saturation -- a share data length
1497-            # larger than 2**32-1 (what can fit into the field) is marked as
1498-            # the largest length that can fit into the field. That way, even
1499-            # if this does happen, the old < v1.3.0 server will still allow
1500-            # clients to read the first part of the share.
1501-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1502-            f.close()
1503-            self._lease_offset = max_size + 0x0c
1504-            self._num_leases = 0
1505-        else:
1506-            f = open(self.home, 'rb')
1507-            filesize = os.path.getsize(self.home)
1508-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1509-            f.close()
1510-            if version != 1:
1511-                msg = "sharefile %s had version %d but we wanted 1" % \
1512-                      (filename, version)
1513-                raise UnknownImmutableContainerVersionError(msg)
1514-            self._num_leases = num_leases
1515-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1516-        self._data_offset = 0xc
1517-
1518-    def unlink(self):
1519-        os.unlink(self.home)
1520-
1521-    def read_share_data(self, offset, length):
1522-        precondition(offset >= 0)
1523-        # Reads beyond the end of the data are truncated. Reads that start
1524-        # beyond the end of the data return an empty string.
1525-        seekpos = self._data_offset+offset
1526-        fsize = os.path.getsize(self.home)
1527-        actuallength = max(0, min(length, fsize-seekpos))
1528-        if actuallength == 0:
1529-            return ""
1530-        f = open(self.home, 'rb')
1531-        f.seek(seekpos)
1532-        return f.read(actuallength)
1533-
1534-    def write_share_data(self, offset, data):
1535-        length = len(data)
1536-        precondition(offset >= 0, offset)
1537-        if self._max_size is not None and offset+length > self._max_size:
1538-            raise DataTooLargeError(self._max_size, offset, length)
1539-        f = open(self.home, 'rb+')
1540-        real_offset = self._data_offset+offset
1541-        f.seek(real_offset)
1542-        assert f.tell() == real_offset
1543-        f.write(data)
1544-        f.close()
1545-
1546-    def _write_lease_record(self, f, lease_number, lease_info):
1547-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1548-        f.seek(offset)
1549-        assert f.tell() == offset
1550-        f.write(lease_info.to_immutable_data())
1551-
1552-    def _read_num_leases(self, f):
1553-        f.seek(0x08)
1554-        (num_leases,) = struct.unpack(">L", f.read(4))
1555-        return num_leases
1556-
1557-    def _write_num_leases(self, f, num_leases):
1558-        f.seek(0x08)
1559-        f.write(struct.pack(">L", num_leases))
1560-
1561-    def _truncate_leases(self, f, num_leases):
1562-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1563-
1564-    def get_leases(self):
1565-        """Yields a LeaseInfo instance for all leases."""
1566-        f = open(self.home, 'rb')
1567-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1568-        f.seek(self._lease_offset)
1569-        for i in range(num_leases):
1570-            data = f.read(self.LEASE_SIZE)
1571-            if data:
1572-                yield LeaseInfo().from_immutable_data(data)
1573-
1574-    def add_lease(self, lease_info):
1575-        f = open(self.home, 'rb+')
1576-        num_leases = self._read_num_leases(f)
1577-        self._write_lease_record(f, num_leases, lease_info)
1578-        self._write_num_leases(f, num_leases+1)
1579-        f.close()
1580-
1581-    def renew_lease(self, renew_secret, new_expire_time):
1582-        for i,lease in enumerate(self.get_leases()):
1583-            if constant_time_compare(lease.renew_secret, renew_secret):
1584-                # yup. See if we need to update the owner time.
1585-                if new_expire_time > lease.expiration_time:
1586-                    # yes
1587-                    lease.expiration_time = new_expire_time
1588-                    f = open(self.home, 'rb+')
1589-                    self._write_lease_record(f, i, lease)
1590-                    f.close()
1591-                return
1592-        raise IndexError("unable to renew non-existent lease")
1593-
1594-    def add_or_renew_lease(self, lease_info):
1595-        try:
1596-            self.renew_lease(lease_info.renew_secret,
1597-                             lease_info.expiration_time)
1598-        except IndexError:
1599-            self.add_lease(lease_info)
1600-
1601-
1602-    def cancel_lease(self, cancel_secret):
1603-        """Remove a lease with the given cancel_secret. If the last lease is
1604-        cancelled, the file will be removed. Return the number of bytes that
1605-        were freed (by truncating the list of leases, and possibly by
1606-        deleting the file. Raise IndexError if there was no lease with the
1607-        given cancel_secret.
1608-        """
1609-
1610-        leases = list(self.get_leases())
1611-        num_leases_removed = 0
1612-        for i,lease in enumerate(leases):
1613-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1614-                leases[i] = None
1615-                num_leases_removed += 1
1616-        if not num_leases_removed:
1617-            raise IndexError("unable to find matching lease to cancel")
1618-        if num_leases_removed:
1619-            # pack and write out the remaining leases. We write these out in
1620-            # the same order as they were added, so that if we crash while
1621-            # doing this, we won't lose any non-cancelled leases.
1622-            leases = [l for l in leases if l] # remove the cancelled leases
1623-            f = open(self.home, 'rb+')
1624-            for i,lease in enumerate(leases):
1625-                self._write_lease_record(f, i, lease)
1626-            self._write_num_leases(f, len(leases))
1627-            self._truncate_leases(f, len(leases))
1628-            f.close()
1629-        space_freed = self.LEASE_SIZE * num_leases_removed
1630-        if not len(leases):
1631-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1632-            self.unlink()
1633-        return space_freed
1634-class NullBucketWriter(Referenceable):
1635-    implements(RIBucketWriter)
1636-
1637-    def remote_write(self, offset, data):
1638-        return
1639-
1640 class BucketWriter(Referenceable):
1641     implements(RIBucketWriter)
1642 
1643hunk ./src/allmydata/storage/immutable.py 17
1644-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1645+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1646         self.ss = ss
1647hunk ./src/allmydata/storage/immutable.py 19
1648-        self.incominghome = incominghome
1649-        self.finalhome = finalhome
1650         self._max_size = max_size # don't allow the client to write more than this
1651         self._canary = canary
1652         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1653hunk ./src/allmydata/storage/immutable.py 24
1654         self.closed = False
1655         self.throw_out_all_data = False
1656-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1657+        self._sharefile = immutableshare
1658         # also, add our lease to the file now, so that other ones can be
1659         # added by simultaneous uploaders
1660         self._sharefile.add_lease(lease_info)
1661hunk ./src/allmydata/storage/server.py 16
1662 from allmydata.storage.lease import LeaseInfo
1663 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1664      create_mutable_sharefile
1665-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1666-from allmydata.storage.crawler import BucketCountingCrawler
1667-from allmydata.storage.expirer import LeaseCheckingCrawler
1668 
1669 from zope.interface import implements
1670 
1671hunk ./src/allmydata/storage/server.py 19
1672-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1673-# be started and stopped.
1674-class Backend(service.MultiService):
1675-    implements(IStatsProducer)
1676-    def __init__(self):
1677-        service.MultiService.__init__(self)
1678-
1679-    def get_bucket_shares(self):
1680-        """XXX"""
1681-        raise NotImplementedError
1682-
1683-    def get_share(self):
1684-        """XXX"""
1685-        raise NotImplementedError
1686-
1687-    def make_bucket_writer(self):
1688-        """XXX"""
1689-        raise NotImplementedError
1690-
1691-class NullBackend(Backend):
1692-    def __init__(self):
1693-        Backend.__init__(self)
1694-
1695-    def get_available_space(self):
1696-        return None
1697-
1698-    def get_bucket_shares(self, storage_index):
1699-        return set()
1700-
1701-    def get_share(self, storage_index, sharenum):
1702-        return None
1703-
1704-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1705-        return NullBucketWriter()
1706-
1707-class FSBackend(Backend):
1708-    def __init__(self, storedir, readonly=False, reserved_space=0):
1709-        Backend.__init__(self)
1710-
1711-        self._setup_storage(storedir, readonly, reserved_space)
1712-        self._setup_corruption_advisory()
1713-        self._setup_bucket_counter()
1714-        self._setup_lease_checkerf()
1715-
1716-    def _setup_storage(self, storedir, readonly, reserved_space):
1717-        self.storedir = storedir
1718-        self.readonly = readonly
1719-        self.reserved_space = int(reserved_space)
1720-        if self.reserved_space:
1721-            if self.get_available_space() is None:
1722-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1723-                        umid="0wZ27w", level=log.UNUSUAL)
1724-
1725-        self.sharedir = os.path.join(self.storedir, "shares")
1726-        fileutil.make_dirs(self.sharedir)
1727-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1728-        self._clean_incomplete()
1729-
1730-    def _clean_incomplete(self):
1731-        fileutil.rm_dir(self.incomingdir)
1732-        fileutil.make_dirs(self.incomingdir)
1733-
1734-    def _setup_corruption_advisory(self):
1735-        # we don't actually create the corruption-advisory dir until necessary
1736-        self.corruption_advisory_dir = os.path.join(self.storedir,
1737-                                                    "corruption-advisories")
1738-
1739-    def _setup_bucket_counter(self):
1740-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1741-        self.bucket_counter = BucketCountingCrawler(statefile)
1742-        self.bucket_counter.setServiceParent(self)
1743-
1744-    def _setup_lease_checkerf(self):
1745-        statefile = os.path.join(self.storedir, "lease_checker.state")
1746-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1747-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1748-                                   expiration_enabled, expiration_mode,
1749-                                   expiration_override_lease_duration,
1750-                                   expiration_cutoff_date,
1751-                                   expiration_sharetypes)
1752-        self.lease_checker.setServiceParent(self)
1753-
1754-    def get_available_space(self):
1755-        if self.readonly:
1756-            return 0
1757-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1758-
1759-    def get_bucket_shares(self, storage_index):
1760-        """Return a list of (shnum, pathname) tuples for files that hold
1761-        shares for this storage_index. In each tuple, 'shnum' will always be
1762-        the integer form of the last component of 'pathname'."""
1763-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1764-        try:
1765-            for f in os.listdir(storagedir):
1766-                if NUM_RE.match(f):
1767-                    filename = os.path.join(storagedir, f)
1768-                    yield (int(f), filename)
1769-        except OSError:
1770-            # Commonly caused by there being no buckets at all.
1771-            pass
1772-
1773 # storage/
1774 # storage/shares/incoming
1775 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1776hunk ./src/allmydata/storage/server.py 32
1777 # $SHARENUM matches this regex:
1778 NUM_RE=re.compile("^[0-9]+$")
1779 
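
A tiny illustration of the $SHARENUM convention above: share filenames inside a bucket directory are plain decimal integers, and anything else is skipped by the share-listing loops that filter on NUM_RE.

import re

NUM_RE = re.compile("^[0-9]+$")
print bool(NUM_RE.match("0")), bool(NUM_RE.match("12"))   # True True
print bool(NUM_RE.match("0.tmp"))                         # False
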
1780-
1781-
1782 class StorageServer(service.MultiService, Referenceable):
1783     implements(RIStorageServer, IStatsProducer)
1784     name = 'storage'
1785hunk ./src/allmydata/storage/server.py 35
1786-    LeaseCheckerClass = LeaseCheckingCrawler
1787 
1788     def __init__(self, nodeid, backend, reserved_space=0,
1789                  readonly_storage=False,
1790hunk ./src/allmydata/storage/server.py 38
1791-                 stats_provider=None,
1792-                 expiration_enabled=False,
1793-                 expiration_mode="age",
1794-                 expiration_override_lease_duration=None,
1795-                 expiration_cutoff_date=None,
1796-                 expiration_sharetypes=("mutable", "immutable")):
1797+                 stats_provider=None ):
1798         service.MultiService.__init__(self)
1799         assert isinstance(nodeid, str)
1800         assert len(nodeid) == 20
1801hunk ./src/allmydata/storage/server.py 217
1802         # they asked about: this will save them a lot of work. Add or update
1803         # leases for all of them: if they want us to hold shares for this
1804         # file, they'll want us to hold leases for this file.
1805-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1806-            alreadygot.add(shnum)
1807-            sf = ShareFile(fn)
1808-            sf.add_or_renew_lease(lease_info)
1809-
1810-        for shnum in sharenums:
1811-            share = self.backend.get_share(storage_index, shnum)
1812+        for share in self.backend.get_shares(storage_index):
1813+            alreadygot.add(share.shnum)
1814+            share.add_or_renew_lease(lease_info)
1815 
1816hunk ./src/allmydata/storage/server.py 221
1817-            if not share:
1818-                if (not limited) or (remaining_space >= max_space_per_bucket):
1819-                    # ok! we need to create the new share file.
1820-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1821-                                      max_space_per_bucket, lease_info, canary)
1822-                    bucketwriters[shnum] = bw
1823-                    self._active_writers[bw] = 1
1824-                    if limited:
1825-                        remaining_space -= max_space_per_bucket
1826-                else:
1827-                    # bummer! not enough space to accept this bucket
1828-                    pass
1829+        for shnum in (sharenums - alreadygot):
1830+            if (not limited) or (remaining_space >= max_space_per_bucket):
1831+                #XXX or should the following line occur in the storage server constructor? ok! we need to create the new share file.
1832+                self.backend.set_storage_server(self)
1833+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1834+                                                     max_space_per_bucket, lease_info, canary)
1835+                bucketwriters[shnum] = bw
1836+                self._active_writers[bw] = 1
1837+                if limited:
1838+                    remaining_space -= max_space_per_bucket
1839 
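
An illustration of the allocation loop above: bucket writers are created only for the requested share numbers that the backend does not already hold (the values here are hypothetical).

sharenums = set([0, 1, 2, 3])          # shares the client asked to allocate
alreadygot = set([1, 3])               # shares the backend already has
print sorted(sharenums - alreadygot)   # [0, 2] get new BucketWriters
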
1840hunk ./src/allmydata/storage/server.py 232
1841-            elif share.is_complete():
1842-                # great! we already have it. easy.
1843-                pass
1844-            elif not share.is_complete():
1845-                # Note that we don't create BucketWriters for shnums that
1846-                # have a partial share (in incoming/), so if a second upload
1847-                # occurs while the first is still in progress, the second
1848-                # uploader will use different storage servers.
1849-                pass
1850+        #XXX We should document later how already-complete and partial (incoming) shares are handled here.
1851 
1852         self.add_latency("allocate", time.time() - start)
1853         return alreadygot, bucketwriters
1854hunk ./src/allmydata/storage/server.py 238
1855 
1856     def _iter_share_files(self, storage_index):
1857-        for shnum, filename in self._get_bucket_shares(storage_index):
1858+        for shnum, filename in self._get_shares(storage_index):
1859             f = open(filename, 'rb')
1860             header = f.read(32)
1861             f.close()
1862hunk ./src/allmydata/storage/server.py 318
1863         si_s = si_b2a(storage_index)
1864         log.msg("storage: get_buckets %s" % si_s)
1865         bucketreaders = {} # k: sharenum, v: BucketReader
1866-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1867+        for shnum, filename in self.backend.get_shares(storage_index):
1868             bucketreaders[shnum] = BucketReader(self, filename,
1869                                                 storage_index, shnum)
1870         self.add_latency("get", time.time() - start)
1871hunk ./src/allmydata/storage/server.py 334
1872         # since all shares get the same lease data, we just grab the leases
1873         # from the first share
1874         try:
1875-            shnum, filename = self._get_bucket_shares(storage_index).next()
1876+            shnum, filename = self._get_shares(storage_index).next()
1877             sf = ShareFile(filename)
1878             return sf.get_leases()
1879         except StopIteration:
1880hunk ./src/allmydata/storage/shares.py 1
1881-#! /usr/bin/python
1882-
1883-from allmydata.storage.mutable import MutableShareFile
1884-from allmydata.storage.immutable import ShareFile
1885-
1886-def get_share_file(filename):
1887-    f = open(filename, "rb")
1888-    prefix = f.read(32)
1889-    f.close()
1890-    if prefix == MutableShareFile.MAGIC:
1891-        return MutableShareFile(filename)
1892-    # otherwise assume it's immutable
1893-    return ShareFile(filename)
1894-
1895rmfile ./src/allmydata/storage/shares.py
1896hunk ./src/allmydata/test/common_util.py 20
1897 
1898 def flip_one_bit(s, offset=0, size=None):
1899     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1900-    than offset+size. """
1901+    than offset+size. Return the new string. """
1902     if size is None:
1903         size=len(s)-offset
1904     i = randrange(offset, offset+size)
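
A hedged usage sketch of flip_one_bit() as documented above; the string and offsets are made up for illustration.

from allmydata.test.common_util import flip_one_bit

s = "example share bytes"
t = flip_one_bit(s, offset=len(s)-4, size=4)  # corrupt one of the last 4 bytes
assert t != s and len(t) == len(s)
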
1905hunk ./src/allmydata/test/test_backends.py 7
1906 
1907 from allmydata.test.common_util import ReallyEqualMixin
1908 
1909-import mock
1910+import mock, os
1911 
1912 # This is the code that we're going to be testing.
1913hunk ./src/allmydata/test/test_backends.py 10
1914-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1915+from allmydata.storage.server import StorageServer
1916+
1917+from allmydata.storage.backends.das.core import DASCore
1918+from allmydata.storage.backends.null.core import NullCore
1919+
1920 
1921 # The following share file contents were generated with
1922 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1923hunk ./src/allmydata/test/test_backends.py 22
1924 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1925 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1926 
1927-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1928+tempdir = 'teststoredir'
1929+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1930+sharefname = os.path.join(sharedirname, '0')
1931 
1932 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1933     @mock.patch('time.time')
1934hunk ./src/allmydata/test/test_backends.py 58
1935         filesystem in only the prescribed ways. """
1936 
1937         def call_open(fname, mode):
1938-            if fname == 'testdir/bucket_counter.state':
1939-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1940-            elif fname == 'testdir/lease_checker.state':
1941-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1942-            elif fname == 'testdir/lease_checker.history':
1943+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1944+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1945+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1946+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1947+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1948                 return StringIO()
1949             else:
1950                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1951hunk ./src/allmydata/test/test_backends.py 124
1952     @mock.patch('__builtin__.open')
1953     def setUp(self, mockopen):
1954         def call_open(fname, mode):
1955-            if fname == 'testdir/bucket_counter.state':
1956-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1957-            elif fname == 'testdir/lease_checker.state':
1958-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1959-            elif fname == 'testdir/lease_checker.history':
1960+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1961+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1962+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1963+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1964+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1965                 return StringIO()
1966         mockopen.side_effect = call_open
1967hunk ./src/allmydata/test/test_backends.py 131
1968-
1969-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1970+        expiration_policy = {'enabled' : False,
1971+                             'mode' : 'age',
1972+                             'override_lease_duration' : None,
1973+                             'cutoff_date' : None,
1974+                             'sharetypes' : None}
1975+        testbackend = DASCore(tempdir, expiration_policy)
1976+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1977 
1978     @mock.patch('time.time')
1979     @mock.patch('os.mkdir')
1980hunk ./src/allmydata/test/test_backends.py 148
1981         """ Write a new share. """
1982 
1983         def call_listdir(dirname):
1984-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1985-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1986+            self.failUnlessReallyEqual(dirname, sharedirname)
1987+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1988 
1989         mocklistdir.side_effect = call_listdir
1990 
1991hunk ./src/allmydata/test/test_backends.py 178
1992 
1993         sharefile = MockFile()
1994         def call_open(fname, mode):
1995-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
1996+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
1997             return sharefile
1998 
1999         mockopen.side_effect = call_open
2000hunk ./src/allmydata/test/test_backends.py 200
2001         StorageServer object. """
2002 
2003         def call_listdir(dirname):
2004-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
2005+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2006             return ['0']
2007 
2008         mocklistdir.side_effect = call_listdir
2009}
2010[checkpoint patch
2011wilcoxjg@gmail.com**20110626165715
2012 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
2013] {
2014hunk ./src/allmydata/storage/backends/das/core.py 21
2015 from allmydata.storage.lease import LeaseInfo
2016 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2017      create_mutable_sharefile
2018-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2019+from allmydata.storage.immutable import BucketWriter, BucketReader
2020 from allmydata.storage.crawler import FSBucketCountingCrawler
2021 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2022 
2023hunk ./src/allmydata/storage/backends/das/core.py 27
2024 from zope.interface import implements
2025 
2026+# $SHARENUM matches this regex:
2027+NUM_RE=re.compile("^[0-9]+$")
2028+
2029 class DASCore(Backend):
2030     implements(IStorageBackend)
2031     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2032hunk ./src/allmydata/storage/backends/das/core.py 80
2033         return fileutil.get_available_space(self.storedir, self.reserved_space)
2034 
2035     def get_shares(self, storage_index):
2036-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2037+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2038         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2039         try:
2040             for f in os.listdir(finalstoragedir):
2041hunk ./src/allmydata/storage/backends/das/core.py 86
2042                 if NUM_RE.match(f):
2043                     filename = os.path.join(finalstoragedir, f)
2044-                    yield FSBShare(filename, int(f))
2045+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2046         except OSError:
2047             # Commonly caused by there being no buckets at all.
2048             pass
2049hunk ./src/allmydata/storage/backends/das/core.py 95
2050         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2051         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2052         return bw
2053+
2054+    def set_storage_server(self, ss):
2055+        self.ss = ss
2056         
2057 
2058 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2059hunk ./src/allmydata/storage/server.py 29
2060 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2061 # base-32 chars).
2062 
2063-# $SHARENUM matches this regex:
2064-NUM_RE=re.compile("^[0-9]+$")
2065 
2066 class StorageServer(service.MultiService, Referenceable):
2067     implements(RIStorageServer, IStatsProducer)
2068}
2069[checkpoint4
2070wilcoxjg@gmail.com**20110628202202
2071 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2072] {
2073hunk ./src/allmydata/storage/backends/das/core.py 96
2074         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2075         return bw
2076 
2077+    def make_bucket_reader(self, share):
2078+        return BucketReader(self.ss, share)
2079+
2080     def set_storage_server(self, ss):
2081         self.ss = ss
2082         
2083hunk ./src/allmydata/storage/backends/das/core.py 138
2084         must not be None. """
2085         precondition((max_size is not None) or (not create), max_size, create)
2086         self.shnum = shnum
2087+        self.storage_index = storageindex
2088         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2089         self._max_size = max_size
2090         if create:
2091hunk ./src/allmydata/storage/backends/das/core.py 173
2092             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2093         self._data_offset = 0xc
2094 
2095+    def get_shnum(self):
2096+        return self.shnum
2097+
2098     def unlink(self):
2099         os.unlink(self.fname)
2100 
2101hunk ./src/allmydata/storage/backends/null/core.py 2
2102 from allmydata.storage.backends.base import Backend
2103+from allmydata.storage.immutable import BucketWriter, BucketReader
2104 
2105 class NullCore(Backend):
2106     def __init__(self):
2107hunk ./src/allmydata/storage/backends/null/core.py 17
2108     def get_share(self, storage_index, sharenum):
2109         return None
2110 
2111-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2112-        return NullBucketWriter()
2113+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2114+       
2115+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2116+
2117+    def set_storage_server(self, ss):
2118+        self.ss = ss
2119+
2120+class ImmutableShare:
2121+    sharetype = "immutable"
2122+
2123+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2124+        """ If max_size is not None then I won't allow more than
2125+        max_size to be written to me. If create=True then max_size
2126+        must not be None. """
2127+        precondition((max_size is not None) or (not create), max_size, create)
2128+        self.shnum = shnum
2129+        self.storage_index = storageindex
2130+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2131+        self._max_size = max_size
2132+        if create:
2133+            # touch the file, so later callers will see that we're working on
2134+            # it. Also construct the metadata.
2135+            assert not os.path.exists(self.fname)
2136+            fileutil.make_dirs(os.path.dirname(self.fname))
2137+            f = open(self.fname, 'wb')
2138+            # The second field -- the four-byte share data length -- is no
2139+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2140+            # there in case someone downgrades a storage server from >=
2141+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2142+            # server to another, etc. We do saturation -- a share data length
2143+            # larger than 2**32-1 (what can fit into the field) is marked as
2144+            # the largest length that can fit into the field. That way, even
2145+            # if this does happen, the old < v1.3.0 server will still allow
2146+            # clients to read the first part of the share.
2147+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2148+            f.close()
2149+            self._lease_offset = max_size + 0x0c
2150+            self._num_leases = 0
2151+        else:
2152+            f = open(self.fname, 'rb')
2153+            filesize = os.path.getsize(self.fname)
2154+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2155+            f.close()
2156+            if version != 1:
2157+                msg = "sharefile %s had version %d but we wanted 1" % \
2158+                      (self.fname, version)
2159+                raise UnknownImmutableContainerVersionError(msg)
2160+            self._num_leases = num_leases
2161+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2162+        self._data_offset = 0xc
2163+
2164+    def get_shnum(self):
2165+        return self.shnum
2166+
2167+    def unlink(self):
2168+        os.unlink(self.fname)
2169+
2170+    def read_share_data(self, offset, length):
2171+        precondition(offset >= 0)
2172+        # Reads beyond the end of the data are truncated. Reads that start
2173+        # beyond the end of the data return an empty string.
2174+        seekpos = self._data_offset+offset
2175+        fsize = os.path.getsize(self.fname)
2176+        actuallength = max(0, min(length, fsize-seekpos))
2177+        if actuallength == 0:
2178+            return ""
2179+        f = open(self.fname, 'rb')
2180+        f.seek(seekpos)
2181+        return f.read(actuallength)
2182+
2183+    def write_share_data(self, offset, data):
2184+        length = len(data)
2185+        precondition(offset >= 0, offset)
2186+        if self._max_size is not None and offset+length > self._max_size:
2187+            raise DataTooLargeError(self._max_size, offset, length)
2188+        f = open(self.fname, 'rb+')
2189+        real_offset = self._data_offset+offset
2190+        f.seek(real_offset)
2191+        assert f.tell() == real_offset
2192+        f.write(data)
2193+        f.close()
2194+
2195+    def _write_lease_record(self, f, lease_number, lease_info):
2196+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2197+        f.seek(offset)
2198+        assert f.tell() == offset
2199+        f.write(lease_info.to_immutable_data())
2200+
2201+    def _read_num_leases(self, f):
2202+        f.seek(0x08)
2203+        (num_leases,) = struct.unpack(">L", f.read(4))
2204+        return num_leases
2205+
2206+    def _write_num_leases(self, f, num_leases):
2207+        f.seek(0x08)
2208+        f.write(struct.pack(">L", num_leases))
2209+
2210+    def _truncate_leases(self, f, num_leases):
2211+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2212+
2213+    def get_leases(self):
2214+        """Yields a LeaseInfo instance for all leases."""
2215+        f = open(self.fname, 'rb')
2216+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2217+        f.seek(self._lease_offset)
2218+        for i in range(num_leases):
2219+            data = f.read(self.LEASE_SIZE)
2220+            if data:
2221+                yield LeaseInfo().from_immutable_data(data)
2222+
2223+    def add_lease(self, lease_info):
2224+        f = open(self.fname, 'rb+')
2225+        num_leases = self._read_num_leases(f)
2226+        self._write_lease_record(f, num_leases, lease_info)
2227+        self._write_num_leases(f, num_leases+1)
2228+        f.close()
2229+
2230+    def renew_lease(self, renew_secret, new_expire_time):
2231+        for i,lease in enumerate(self.get_leases()):
2232+            if constant_time_compare(lease.renew_secret, renew_secret):
2233+                # yup. See if we need to update the owner time.
2234+                if new_expire_time > lease.expiration_time:
2235+                    # yes
2236+                    lease.expiration_time = new_expire_time
2237+                    f = open(self.fname, 'rb+')
2238+                    self._write_lease_record(f, i, lease)
2239+                    f.close()
2240+                return
2241+        raise IndexError("unable to renew non-existent lease")
2242+
2243+    def add_or_renew_lease(self, lease_info):
2244+        try:
2245+            self.renew_lease(lease_info.renew_secret,
2246+                             lease_info.expiration_time)
2247+        except IndexError:
2248+            self.add_lease(lease_info)
2249+
2250+
2251+    def cancel_lease(self, cancel_secret):
2252+        """Remove a lease with the given cancel_secret. If the last lease is
2253+        cancelled, the file will be removed. Return the number of bytes that
2254+        were freed (by truncating the list of leases, and possibly by
2255+        deleting the file). Raise IndexError if there was no lease with the
2256+        given cancel_secret.
2257+        """
2258+
2259+        leases = list(self.get_leases())
2260+        num_leases_removed = 0
2261+        for i,lease in enumerate(leases):
2262+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2263+                leases[i] = None
2264+                num_leases_removed += 1
2265+        if not num_leases_removed:
2266+            raise IndexError("unable to find matching lease to cancel")
2267+        if num_leases_removed:
2268+            # pack and write out the remaining leases. We write these out in
2269+            # the same order as they were added, so that if we crash while
2270+            # doing this, we won't lose any non-cancelled leases.
2271+            leases = [l for l in leases if l] # remove the cancelled leases
2272+            f = open(self.fname, 'rb+')
2273+            for i,lease in enumerate(leases):
2274+                self._write_lease_record(f, i, lease)
2275+            self._write_num_leases(f, len(leases))
2276+            self._truncate_leases(f, len(leases))
2277+            f.close()
2278+        space_freed = self.LEASE_SIZE * num_leases_removed
2279+        if not len(leases):
2280+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2281+            self.unlink()
2282+        return space_freed
2283hunk ./src/allmydata/storage/immutable.py 114
2284 class BucketReader(Referenceable):
2285     implements(RIBucketReader)
2286 
2287-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2288+    def __init__(self, ss, share):
2289         self.ss = ss
2290hunk ./src/allmydata/storage/immutable.py 116
2291-        self._share_file = ShareFile(sharefname)
2292-        self.storage_index = storage_index
2293-        self.shnum = shnum
2294+        self._share_file = share
2295+        self.storage_index = share.storage_index
2296+        self.shnum = share.shnum
2297 
2298     def __repr__(self):
2299         return "<%s %s %s>" % (self.__class__.__name__,
2300hunk ./src/allmydata/storage/server.py 316
2301         si_s = si_b2a(storage_index)
2302         log.msg("storage: get_buckets %s" % si_s)
2303         bucketreaders = {} # k: sharenum, v: BucketReader
2304-        for shnum, filename in self.backend.get_shares(storage_index):
2305-            bucketreaders[shnum] = BucketReader(self, filename,
2306-                                                storage_index, shnum)
2307+        self.backend.set_storage_server(self)
2308+        for share in self.backend.get_shares(storage_index):
2309+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2310         self.add_latency("get", time.time() - start)
2311         return bucketreaders
2312 
2313hunk ./src/allmydata/test/test_backends.py 25
2314 tempdir = 'teststoredir'
2315 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2316 sharefname = os.path.join(sharedirname, '0')
2317+expiration_policy = {'enabled' : False,
2318+                     'mode' : 'age',
2319+                     'override_lease_duration' : None,
2320+                     'cutoff_date' : None,
2321+                     'sharetypes' : None}
2322 
2323 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2324     @mock.patch('time.time')
2325hunk ./src/allmydata/test/test_backends.py 43
2326         tries to read or write to the file system. """
2327 
2328         # Now begin the test.
2329-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2330+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2331 
2332         self.failIf(mockisdir.called)
2333         self.failIf(mocklistdir.called)
2334hunk ./src/allmydata/test/test_backends.py 74
2335         mockopen.side_effect = call_open
2336 
2337         # Now begin the test.
2338-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2339+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2340 
2341         self.failIf(mockisdir.called)
2342         self.failIf(mocklistdir.called)
2343hunk ./src/allmydata/test/test_backends.py 86
2344 
2345 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2346     def setUp(self):
2347-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2348+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2349 
2350     @mock.patch('os.mkdir')
2351     @mock.patch('__builtin__.open')
2352hunk ./src/allmydata/test/test_backends.py 136
2353             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2354                 return StringIO()
2355         mockopen.side_effect = call_open
2356-        expiration_policy = {'enabled' : False,
2357-                             'mode' : 'age',
2358-                             'override_lease_duration' : None,
2359-                             'cutoff_date' : None,
2360-                             'sharetypes' : None}
2361         testbackend = DASCore(tempdir, expiration_policy)
2362         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2363 
2364}
2365[checkpoint5
2366wilcoxjg@gmail.com**20110705034626
2367 Ignore-this: 255780bd58299b0aa33c027e9d008262
2368] {
2369addfile ./src/allmydata/storage/backends/base.py
2370hunk ./src/allmydata/storage/backends/base.py 1
2371+from twisted.application import service
2372+
2373+class Backend(service.MultiService):
2374+    def __init__(self):
2375+        service.MultiService.__init__(self)
2376hunk ./src/allmydata/storage/backends/null/core.py 19
2377 
2378     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2379         
2380+        immutableshare = ImmutableShare()
2381         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2382 
2383     def set_storage_server(self, ss):
2384hunk ./src/allmydata/storage/backends/null/core.py 28
2385 class ImmutableShare:
2386     sharetype = "immutable"
2387 
2388-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2389+    def __init__(self):
2390         """ If max_size is not None then I won't allow more than
2391         max_size to be written to me. If create=True then max_size
2392         must not be None. """
2393hunk ./src/allmydata/storage/backends/null/core.py 32
2394-        precondition((max_size is not None) or (not create), max_size, create)
2395-        self.shnum = shnum
2396-        self.storage_index = storageindex
2397-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2398-        self._max_size = max_size
2399-        if create:
2400-            # touch the file, so later callers will see that we're working on
2401-            # it. Also construct the metadata.
2402-            assert not os.path.exists(self.fname)
2403-            fileutil.make_dirs(os.path.dirname(self.fname))
2404-            f = open(self.fname, 'wb')
2405-            # The second field -- the four-byte share data length -- is no
2406-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2407-            # there in case someone downgrades a storage server from >=
2408-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2409-            # server to another, etc. We do saturation -- a share data length
2410-            # larger than 2**32-1 (what can fit into the field) is marked as
2411-            # the largest length that can fit into the field. That way, even
2412-            # if this does happen, the old < v1.3.0 server will still allow
2413-            # clients to read the first part of the share.
2414-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2415-            f.close()
2416-            self._lease_offset = max_size + 0x0c
2417-            self._num_leases = 0
2418-        else:
2419-            f = open(self.fname, 'rb')
2420-            filesize = os.path.getsize(self.fname)
2421-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2422-            f.close()
2423-            if version != 1:
2424-                msg = "sharefile %s had version %d but we wanted 1" % \
2425-                      (self.fname, version)
2426-                raise UnknownImmutableContainerVersionError(msg)
2427-            self._num_leases = num_leases
2428-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2429-        self._data_offset = 0xc
2430+        pass
2431 
2432     def get_shnum(self):
2433         return self.shnum
2434hunk ./src/allmydata/storage/backends/null/core.py 54
2435         return f.read(actuallength)
2436 
2437     def write_share_data(self, offset, data):
2438-        length = len(data)
2439-        precondition(offset >= 0, offset)
2440-        if self._max_size is not None and offset+length > self._max_size:
2441-            raise DataTooLargeError(self._max_size, offset, length)
2442-        f = open(self.fname, 'rb+')
2443-        real_offset = self._data_offset+offset
2444-        f.seek(real_offset)
2445-        assert f.tell() == real_offset
2446-        f.write(data)
2447-        f.close()
2448+        pass
2449 
2450     def _write_lease_record(self, f, lease_number, lease_info):
2451         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2452hunk ./src/allmydata/storage/backends/null/core.py 84
2453             if data:
2454                 yield LeaseInfo().from_immutable_data(data)
2455 
2456-    def add_lease(self, lease_info):
2457-        f = open(self.fname, 'rb+')
2458-        num_leases = self._read_num_leases(f)
2459-        self._write_lease_record(f, num_leases, lease_info)
2460-        self._write_num_leases(f, num_leases+1)
2461-        f.close()
2462+    def add_lease(self, lease):
2463+        pass
2464 
2465     def renew_lease(self, renew_secret, new_expire_time):
2466         for i,lease in enumerate(self.get_leases()):
2467hunk ./src/allmydata/test/test_backends.py 32
2468                      'sharetypes' : None}
2469 
2470 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2471-    @mock.patch('time.time')
2472-    @mock.patch('os.mkdir')
2473-    @mock.patch('__builtin__.open')
2474-    @mock.patch('os.listdir')
2475-    @mock.patch('os.path.isdir')
2476-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2477-        """ This tests whether a server instance can be constructed
2478-        with a null backend. The server instance fails the test if it
2479-        tries to read or write to the file system. """
2480-
2481-        # Now begin the test.
2482-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2483-
2484-        self.failIf(mockisdir.called)
2485-        self.failIf(mocklistdir.called)
2486-        self.failIf(mockopen.called)
2487-        self.failIf(mockmkdir.called)
2488-
2489-        # You passed!
2490-
2491     @mock.patch('time.time')
2492     @mock.patch('os.mkdir')
2493     @mock.patch('__builtin__.open')
2494hunk ./src/allmydata/test/test_backends.py 53
2495                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2496         mockopen.side_effect = call_open
2497 
2498-        # Now begin the test.
2499-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2500-
2501-        self.failIf(mockisdir.called)
2502-        self.failIf(mocklistdir.called)
2503-        self.failIf(mockopen.called)
2504-        self.failIf(mockmkdir.called)
2505-        self.failIf(mocktime.called)
2506-
2507-        # You passed!
2508-
2509-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2510-    def setUp(self):
2511-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2512-
2513-    @mock.patch('os.mkdir')
2514-    @mock.patch('__builtin__.open')
2515-    @mock.patch('os.listdir')
2516-    @mock.patch('os.path.isdir')
2517-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2518-        """ Write a new share. """
2519-
2520-        # Now begin the test.
2521-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2522-        bs[0].remote_write(0, 'a')
2523-        self.failIf(mockisdir.called)
2524-        self.failIf(mocklistdir.called)
2525-        self.failIf(mockopen.called)
2526-        self.failIf(mockmkdir.called)
2527+        def call_isdir(fname):
2528+            if fname == os.path.join(tempdir,'shares'):
2529+                return True
2530+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2531+                return True
2532+            else:
2533+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2534+        mockisdir.side_effect = call_isdir
2535 
2536hunk ./src/allmydata/test/test_backends.py 62
2537-    @mock.patch('os.path.exists')
2538-    @mock.patch('os.path.getsize')
2539-    @mock.patch('__builtin__.open')
2540-    @mock.patch('os.listdir')
2541-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2542-        """ This tests whether the code correctly finds and reads
2543-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2544-        servers. There is a similar test in test_download, but that one
2545-        is from the perspective of the client and exercises a deeper
2546-        stack of code. This one is for exercising just the
2547-        StorageServer object. """
2548+        def call_mkdir(fname, mode):
2549+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2550+            self.failUnlessEqual(0777, mode)
2551+            if fname == tempdir:
2552+                return None
2553+            elif fname == os.path.join(tempdir,'shares'):
2554+                return None
2555+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2556+                return None
2557+            else:
2558+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2559+        mockmkdir.side_effect = call_mkdir
2560 
2561         # Now begin the test.
2562hunk ./src/allmydata/test/test_backends.py 76
2563-        bs = self.s.remote_get_buckets('teststorage_index')
2564+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2565 
2566hunk ./src/allmydata/test/test_backends.py 78
2567-        self.failUnlessEqual(len(bs), 0)
2568-        self.failIf(mocklistdir.called)
2569-        self.failIf(mockopen.called)
2570-        self.failIf(mockgetsize.called)
2571-        self.failIf(mockexists.called)
2572+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2573 
2574 
2575 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2576hunk ./src/allmydata/test/test_backends.py 193
2577         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2578 
2579 
2580+
2581+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2582+    @mock.patch('time.time')
2583+    @mock.patch('os.mkdir')
2584+    @mock.patch('__builtin__.open')
2585+    @mock.patch('os.listdir')
2586+    @mock.patch('os.path.isdir')
2587+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2588+        """ This tests whether a file system backend instance can be
2589+        constructed. To pass the test, it has to use the
2590+        filesystem in only the prescribed ways. """
2591+
2592+        def call_open(fname, mode):
2593+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2594+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2595+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2596+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2597+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2598+                return StringIO()
2599+            else:
2600+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2601+        mockopen.side_effect = call_open
2602+
2603+        def call_isdir(fname):
2604+            if fname == os.path.join(tempdir,'shares'):
2605+                return True
2606+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2607+                return True
2608+            else:
2609+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2610+        mockisdir.side_effect = call_isdir
2611+
2612+        def call_mkdir(fname, mode):
2613+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2614+            self.failUnlessEqual(0777, mode)
2615+            if fname == tempdir:
2616+                return None
2617+            elif fname == os.path.join(tempdir,'shares'):
2618+                return None
2619+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2620+                return None
2621+            else:
2622+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2623+        mockmkdir.side_effect = call_mkdir
2624+
2625+        # Now begin the test.
2626+        DASCore('teststoredir', expiration_policy)
2627+
2628+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2629}
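
For readers following the test strategy above, here is a minimal stand-alone sketch of the mock-the-filesystem pattern these tests rely on: each patched call gets a side_effect that whitelists the paths the backend may touch and fails the test otherwise. The class name and STOREDIR value are illustrative (not from test_backends.py), and the snippet assumes the same mock library and Twisted trial the patches use.

import os
import mock
from twisted.trial import unittest

STOREDIR = 'examplestoredir'   # illustrative store directory

class MockedFilesystemExample(unittest.TestCase):
    @mock.patch('os.path.isdir')
    @mock.patch('os.listdir')
    @mock.patch('__builtin__.open')
    @mock.patch('os.mkdir')
    def test_touches_only_whitelisted_paths(self, mockmkdir, mockopen,
                                            mocklistdir, mockisdir):
        allowed = set([STOREDIR,
                       os.path.join(STOREDIR, 'shares'),
                       os.path.join(STOREDIR, 'shares', 'incoming')])

        def call_mkdir(dirname, mode):
            # Fail the test as soon as the code under test strays outside
            # the prescribed directories.
            if dirname not in allowed:
                self.fail("tried to mkdir '%s'" % (dirname,))
        mockmkdir.side_effect = call_mkdir
        mockisdir.side_effect = lambda d: d in allowed

        # ... construct the backend under test here ...

        # Calls that were never expected must not have happened at all.
        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
        self.failIf(mockopen.called, mockopen.call_args_list)
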
2630[checkpoint 6
2631wilcoxjg@gmail.com**20110706190824
2632 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2633] {
2634hunk ./src/allmydata/interfaces.py 100
2635                          renew_secret=LeaseRenewSecret,
2636                          cancel_secret=LeaseCancelSecret,
2637                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2638-                         allocated_size=Offset, canary=Referenceable):
2639+                         allocated_size=Offset,
2640+                         canary=Referenceable):
2641         """
2642hunk ./src/allmydata/interfaces.py 103
2643-        @param storage_index: the index of the bucket to be created or
2644+        @param storage_index: the index of the shares to be created or
2645                               increfed.
2646hunk ./src/allmydata/interfaces.py 105
2647-        @param sharenums: these are the share numbers (probably between 0 and
2648-                          99) that the sender is proposing to store on this
2649-                          server.
2650-        @param renew_secret: This is the secret used to protect bucket refresh
2651+        @param renew_secret: This is the secret used to protect shares refresh
2652                              This secret is generated by the client and
2653                              stored for later comparison by the server. Each
2654                              server is given a different secret.
2655hunk ./src/allmydata/interfaces.py 109
2656-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2657-        @param canary: If the canary is lost before close(), the bucket is
2658+        @param cancel_secret: Like renew_secret, but protects shares decref.
2659+        @param sharenums: these are the share numbers (probably between 0 and
2660+                          99) that the sender is proposing to store on this
2661+                          server.
2662+        @param allocated_size: XXX The size of the shares the client wishes to store.
2663+        @param canary: If the canary is lost before close(), the shares are
2664                        deleted.
2665hunk ./src/allmydata/interfaces.py 116
2666+
2667         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2668                  already have and allocated is what we hereby agree to accept.
2669                  New leases are added for shares in both lists.
2670hunk ./src/allmydata/interfaces.py 128
2671                   renew_secret=LeaseRenewSecret,
2672                   cancel_secret=LeaseCancelSecret):
2673         """
2674-        Add a new lease on the given bucket. If the renew_secret matches an
2675+        Add a new lease on the given shares. If the renew_secret matches an
2676         existing lease, that lease will be renewed instead. If there is no
2677         bucket for the given storage_index, return silently. (note that in
2678         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2679hunk ./src/allmydata/storage/server.py 17
2680 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2681      create_mutable_sharefile
2682 
2683-from zope.interface import implements
2684-
2685 # storage/
2686 # storage/shares/incoming
2687 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2688hunk ./src/allmydata/test/test_backends.py 6
2689 from StringIO import StringIO
2690 
2691 from allmydata.test.common_util import ReallyEqualMixin
2692+from allmydata.util.assertutil import _assert
2693 
2694 import mock, os
2695 
2696hunk ./src/allmydata/test/test_backends.py 92
2697                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2698             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2699                 return StringIO()
2700+            else:
2701+                _assert(False, "The tester code doesn't recognize this case.") 
2702+
2703         mockopen.side_effect = call_open
2704         testbackend = DASCore(tempdir, expiration_policy)
2705         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2706hunk ./src/allmydata/test/test_backends.py 109
2707 
2708         def call_listdir(dirname):
2709             self.failUnlessReallyEqual(dirname, sharedirname)
2710-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2711+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2712 
2713         mocklistdir.side_effect = call_listdir
2714 
2715hunk ./src/allmydata/test/test_backends.py 113
2716+        def call_isdir(dirname):
2717+            self.failUnlessReallyEqual(dirname, sharedirname)
2718+            return True
2719+
2720+        mockisdir.side_effect = call_isdir
2721+
2722+        def call_mkdir(dirname, permissions):
2723+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2724+                self.Fail
2725+            else:
2726+                return True
2727+
2728+        mockmkdir.side_effect = call_mkdir
2729+
2730         class MockFile:
2731             def __init__(self):
2732                 self.buffer = ''
2733hunk ./src/allmydata/test/test_backends.py 156
2734             return sharefile
2735 
2736         mockopen.side_effect = call_open
2737+
2738         # Now begin the test.
2739         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2740         bs[0].remote_write(0, 'a')
2741hunk ./src/allmydata/test/test_backends.py 161
2742         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2743+       
2744+        # Now test the allocated_size method.
2745+        spaceint = self.s.allocated_size()
2746 
2747     @mock.patch('os.path.exists')
2748     @mock.patch('os.path.getsize')
2749}
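
The parameter order described in the revised allocate_buckets docstring is the one the tests above exercise through remote_allocate_buckets. A schematic restatement of that call, with the same throw-away values the tests use (the helper function and its name are illustrative only):

import mock

def allocate_one_share(s):
    # s: a StorageServer built as in the setUp() methods above.
    alreadygot, bucketwriters = s.remote_allocate_buckets(
        'teststorage_index',   # storage_index
        'x' * 32,              # renew_secret
        'y' * 32,              # cancel_secret
        set((0,)),             # sharenums the client proposes to store
        1,                     # allocated_size in bytes
        mock.Mock())           # canary Referenceable
    # alreadygot: shnums the server already holds
    # bucketwriters: dict mapping each newly allocated shnum to a BucketWriter
    return alreadygot, bucketwriters
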
2750[checkpoint 7
2751wilcoxjg@gmail.com**20110706200820
2752 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2753] hunk ./src/allmydata/test/test_backends.py 164
2754         
2755         # Now test the allocated_size method.
2756         spaceint = self.s.allocated_size()
2757+        self.failUnlessReallyEqual(spaceint, 1)
2758 
2759     @mock.patch('os.path.exists')
2760     @mock.patch('os.path.getsize')
2761[checkpoint8
2762wilcoxjg@gmail.com**20110706223126
2763 Ignore-this: 97336180883cb798b16f15411179f827
2764   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2765] hunk ./src/allmydata/test/test_backends.py 32
2766                      'cutoff_date' : None,
2767                      'sharetypes' : None}
2768 
2769+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2770+    def setUp(self):
2771+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2772+
2773+    @mock.patch('os.mkdir')
2774+    @mock.patch('__builtin__.open')
2775+    @mock.patch('os.listdir')
2776+    @mock.patch('os.path.isdir')
2777+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2778+        """ Write a new share. """
2779+
2780+        # Now begin the test.
2781+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2782+        bs[0].remote_write(0, 'a')
2783+        self.failIf(mockisdir.called)
2784+        self.failIf(mocklistdir.called)
2785+        self.failIf(mockopen.called)
2786+        self.failIf(mockmkdir.called)
2787+
2788 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2789     @mock.patch('time.time')
2790     @mock.patch('os.mkdir')
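
The shape of that mock-like object, condensed from the null/core.py hunks in checkpoint5: every filesystem-touching method becomes a no-op, so the StorageServer sees a backend with effectively unlimited space. The class name and fixed shnum below are illustrative.

class NullImmutableShare:
    """Stands in for a share without ever touching the filesystem."""
    sharetype = "immutable"

    def __init__(self):
        self.shnum = 0          # illustrative fixed share number

    def write_share_data(self, offset, data):
        pass                    # discard the bytes instead of writing a file

    def add_lease(self, lease):
        pass                    # leases are not persisted either
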
2791[checkpoint 9
2792wilcoxjg@gmail.com**20110707042942
2793 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2794] {
2795hunk ./src/allmydata/storage/backends/das/core.py 88
2796                     filename = os.path.join(finalstoragedir, f)
2797                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2798         except OSError:
2799-            # Commonly caused by there being no buckets at all.
2800+            # Commonly caused by there being no shares at all.
2801             pass
2802         
2803     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2804hunk ./src/allmydata/storage/backends/das/core.py 141
2805         self.storage_index = storageindex
2806         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2807         self._max_size = max_size
2808+        self.incomingdir = os.path.join(sharedir, 'incoming')
2809+        si_dir = storage_index_to_dir(storageindex)
2810+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2811+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2812         if create:
2813             # touch the file, so later callers will see that we're working on
2814             # it. Also construct the metadata.
2815hunk ./src/allmydata/storage/backends/das/core.py 177
2816             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2817         self._data_offset = 0xc
2818 
2819+    def close(self):
2820+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2821+        fileutil.rename(self.incominghome, self.finalhome)
2822+        try:
2823+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2824+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2825+            # these directories lying around forever, but the delete might
2826+            # fail if we're working on another share for the same storage
2827+            # index (like ab/abcde/5). The alternative approach would be to
2828+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2829+            # ShareWriter), each of which is responsible for a single
2830+            # directory on disk, and have them use reference counting of
2831+            # their children to know when they should do the rmdir. This
2832+            # approach is simpler, but relies on os.rmdir refusing to delete
2833+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2834+            os.rmdir(os.path.dirname(self.incominghome))
2835+            # we also delete the grandparent (prefix) directory, .../ab ,
2836+            # again to avoid leaving directories lying around. This might
2837+            # fail if there is another bucket open that shares a prefix (like
2838+            # ab/abfff).
2839+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2840+            # we leave the great-grandparent (incoming/) directory in place.
2841+        except EnvironmentError:
2842+            # ignore the "can't rmdir because the directory is not empty"
2843+            # exceptions, those are normal consequences of the
2844+            # above-mentioned conditions.
2845+            pass
2846+        pass
2847+       
2848+    def stat(self):
2849+        return os.stat(self.finalhome)[stat.ST_SIZE]
2850+
2851     def get_shnum(self):
2852         return self.shnum
2853 
2854hunk ./src/allmydata/storage/immutable.py 7
2855 
2856 from zope.interface import implements
2857 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2858-from allmydata.util import base32, fileutil, log
2859+from allmydata.util import base32, log
2860 from allmydata.util.assertutil import precondition
2861 from allmydata.util.hashutil import constant_time_compare
2862 from allmydata.storage.lease import LeaseInfo
2863hunk ./src/allmydata/storage/immutable.py 44
2864     def remote_close(self):
2865         precondition(not self.closed)
2866         start = time.time()
2867-
2868-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2869-        fileutil.rename(self.incominghome, self.finalhome)
2870-        try:
2871-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2872-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2873-            # these directories lying around forever, but the delete might
2874-            # fail if we're working on another share for the same storage
2875-            # index (like ab/abcde/5). The alternative approach would be to
2876-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2877-            # ShareWriter), each of which is responsible for a single
2878-            # directory on disk, and have them use reference counting of
2879-            # their children to know when they should do the rmdir. This
2880-            # approach is simpler, but relies on os.rmdir refusing to delete
2881-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2882-            os.rmdir(os.path.dirname(self.incominghome))
2883-            # we also delete the grandparent (prefix) directory, .../ab ,
2884-            # again to avoid leaving directories lying around. This might
2885-            # fail if there is another bucket open that shares a prefix (like
2886-            # ab/abfff).
2887-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2888-            # we leave the great-grandparent (incoming/) directory in place.
2889-        except EnvironmentError:
2890-            # ignore the "can't rmdir because the directory is not empty"
2891-            # exceptions, those are normal consequences of the
2892-            # above-mentioned conditions.
2893-            pass
2894+        self._sharefile.close()
2895         self._sharefile = None
2896         self.closed = True
2897         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2898hunk ./src/allmydata/storage/immutable.py 49
2899 
2900-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2901+        filelen = self._sharefile.stat()
2902         self.ss.bucket_writer_closed(self, filelen)
2903         self.ss.add_latency("close", time.time() - start)
2904         self.ss.count("close")
2905hunk ./src/allmydata/storage/server.py 45
2906         self._active_writers = weakref.WeakKeyDictionary()
2907         self.backend = backend
2908         self.backend.setServiceParent(self)
2909+        self.backend.set_storage_server(self)
2910         log.msg("StorageServer created", facility="tahoe.storage")
2911 
2912         self.latencies = {"allocate": [], # immutable
2913hunk ./src/allmydata/storage/server.py 220
2914 
2915         for shnum in (sharenums - alreadygot):
2916             if (not limited) or (remaining_space >= max_space_per_bucket):
2917-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2918-                self.backend.set_storage_server(self)
2919                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2920                                                      max_space_per_bucket, lease_info, canary)
2921                 bucketwriters[shnum] = bw
2922hunk ./src/allmydata/test/test_backends.py 117
2923         mockopen.side_effect = call_open
2924         testbackend = DASCore(tempdir, expiration_policy)
2925         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2926-
2927+   
2928+    @mock.patch('allmydata.util.fileutil.get_available_space')
2929     @mock.patch('time.time')
2930     @mock.patch('os.mkdir')
2931     @mock.patch('__builtin__.open')
2932hunk ./src/allmydata/test/test_backends.py 124
2933     @mock.patch('os.listdir')
2934     @mock.patch('os.path.isdir')
2935-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2936+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2937+                             mockget_available_space):
2938         """ Write a new share. """
2939 
2940         def call_listdir(dirname):
2941hunk ./src/allmydata/test/test_backends.py 148
2942 
2943         mockmkdir.side_effect = call_mkdir
2944 
2945+        def call_get_available_space(storedir, reserved_space):
2946+            self.failUnlessReallyEqual(storedir, tempdir)
2947+            return 1
2948+
2949+        mockget_available_space.side_effect = call_get_available_space
2950+
2951         class MockFile:
2952             def __init__(self):
2953                 self.buffer = ''
2954hunk ./src/allmydata/test/test_backends.py 188
2955         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2956         bs[0].remote_write(0, 'a')
2957         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2958-       
2959+
2960+        # What happens when there's not enough space for the client's request?
2961+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2962+
2963         # Now test the allocated_size method.
2964         spaceint = self.s.allocated_size()
2965         self.failUnlessReallyEqual(spaceint, 1)
2966}
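
Condensed from the ImmutableShare.close() body added above, the incoming-to-final move looks roughly like this; the helper function name is illustrative, and the paths follow the storage/shares/incoming layout described in the comments.

import os
from allmydata.util import fileutil

def finalize_share(incominghome, finalhome):
    # Move the completed share from .../incoming/ab/abcde/N to .../ab/abcde/N.
    fileutil.make_dirs(os.path.dirname(finalhome))
    fileutil.rename(incominghome, finalhome)
    try:
        # Clean up the now-empty parent and prefix directories; os.rmdir
        # refuses to remove them if another share is still in flight.
        os.rmdir(os.path.dirname(incominghome))
        os.rmdir(os.path.dirname(os.path.dirname(incominghome)))
    except EnvironmentError:
        pass
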
2967[checkpoint10
2968wilcoxjg@gmail.com**20110707172049
2969 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2970] {
2971hunk ./src/allmydata/test/test_backends.py 20
2972 # The following share file contents was generated with
2973 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2974 # with share data == 'a'.
2975-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2976+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2977+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2978+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2979 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2980 
2981hunk ./src/allmydata/test/test_backends.py 25
2982+testnodeid = 'testnodeidxxxxxxxxxx'
2983 tempdir = 'teststoredir'
2984 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2985 sharefname = os.path.join(sharedirname, '0')
2986hunk ./src/allmydata/test/test_backends.py 37
2987 
2988 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2989     def setUp(self):
2990-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2991+        self.s = StorageServer(testnodeid, backend=NullCore())
2992 
2993     @mock.patch('os.mkdir')
2994     @mock.patch('__builtin__.open')
2995hunk ./src/allmydata/test/test_backends.py 99
2996         mockmkdir.side_effect = call_mkdir
2997 
2998         # Now begin the test.
2999-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
3000+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
3001 
3002         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
3003 
3004hunk ./src/allmydata/test/test_backends.py 119
3005 
3006         mockopen.side_effect = call_open
3007         testbackend = DASCore(tempdir, expiration_policy)
3008-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
3009-   
3010+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3011+       
3012+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3013     @mock.patch('allmydata.util.fileutil.get_available_space')
3014     @mock.patch('time.time')
3015     @mock.patch('os.mkdir')
3016hunk ./src/allmydata/test/test_backends.py 129
3017     @mock.patch('os.listdir')
3018     @mock.patch('os.path.isdir')
3019     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3020-                             mockget_available_space):
3021+                             mockget_available_space, mockget_shares):
3022         """ Write a new share. """
3023 
3024         def call_listdir(dirname):
3025hunk ./src/allmydata/test/test_backends.py 139
3026         mocklistdir.side_effect = call_listdir
3027 
3028         def call_isdir(dirname):
3029+            #XXX Should there be any other tests here?
3030             self.failUnlessReallyEqual(dirname, sharedirname)
3031             return True
3032 
3033hunk ./src/allmydata/test/test_backends.py 159
3034 
3035         mockget_available_space.side_effect = call_get_available_space
3036 
3037+        mocktime.return_value = 0
3038+        class MockShare:
3039+            def __init__(self):
3040+                self.shnum = 1
3041+               
3042+            def add_or_renew_lease(elf, lease_info):
3043+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3044+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3045+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3046+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3047+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3048+               
3049+
3050+        share = MockShare()
3051+        def call_get_shares(storageindex):
3052+            return [share]
3053+
3054+        mockget_shares.side_effect = call_get_shares
3055+
3056         class MockFile:
3057             def __init__(self):
3058                 self.buffer = ''
3059hunk ./src/allmydata/test/test_backends.py 199
3060             def tell(self):
3061                 return self.pos
3062 
3063-        mocktime.return_value = 0
3064 
3065         sharefile = MockFile()
3066         def call_open(fname, mode):
3067}
3068[jacp 11
3069wilcoxjg@gmail.com**20110708213919
3070 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3071] {
3072hunk ./src/allmydata/storage/backends/das/core.py 144
3073         self.incomingdir = os.path.join(sharedir, 'incoming')
3074         si_dir = storage_index_to_dir(storageindex)
3075         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3076+        #XXX  self.fname and self.finalhome need to be resolve/merged.
3077         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3078         if create:
3079             # touch the file, so later callers will see that we're working on
3080hunk ./src/allmydata/storage/backends/das/core.py 208
3081         pass
3082         
3083     def stat(self):
3084-        return os.stat(self.finalhome)[stat.ST_SIZE]
3085+        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3086 
3087     def get_shnum(self):
3088         return self.shnum
3089hunk ./src/allmydata/storage/immutable.py 44
3090     def remote_close(self):
3091         precondition(not self.closed)
3092         start = time.time()
3093+
3094         self._sharefile.close()
3095hunk ./src/allmydata/storage/immutable.py 46
3096+        filelen = self._sharefile.stat()
3097         self._sharefile = None
3098hunk ./src/allmydata/storage/immutable.py 48
3099+
3100         self.closed = True
3101         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3102 
3103hunk ./src/allmydata/storage/immutable.py 52
3104-        filelen = self._sharefile.stat()
3105         self.ss.bucket_writer_closed(self, filelen)
3106         self.ss.add_latency("close", time.time() - start)
3107         self.ss.count("close")
3108hunk ./src/allmydata/storage/server.py 220
3109 
3110         for shnum in (sharenums - alreadygot):
3111             if (not limited) or (remaining_space >= max_space_per_bucket):
3112-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3113-                                                     max_space_per_bucket, lease_info, canary)
3114+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3115                 bucketwriters[shnum] = bw
3116                 self._active_writers[bw] = 1
3117                 if limited:
3118hunk ./src/allmydata/test/test_backends.py 20
3119 # The following share file contents was generated with
3120 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3121 # with share data == 'a'.
3122-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3123-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3124+renew_secret  = 'x'*32
3125+cancel_secret = 'y'*32
3126 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3127 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3128 
3129hunk ./src/allmydata/test/test_backends.py 27
3130 testnodeid = 'testnodeidxxxxxxxxxx'
3131 tempdir = 'teststoredir'
3132-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3133-sharefname = os.path.join(sharedirname, '0')
3134+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3135+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3136+shareincomingname = os.path.join(sharedirincomingname, '0')
3137+sharefname = os.path.join(sharedirfinalname, '0')
3138+
3139 expiration_policy = {'enabled' : False,
3140                      'mode' : 'age',
3141                      'override_lease_duration' : None,
3142hunk ./src/allmydata/test/test_backends.py 123
3143         mockopen.side_effect = call_open
3144         testbackend = DASCore(tempdir, expiration_policy)
3145         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3146-       
3147+
3148+    @mock.patch('allmydata.util.fileutil.rename')
3149+    @mock.patch('allmydata.util.fileutil.make_dirs')
3150+    @mock.patch('os.path.exists')
3151+    @mock.patch('os.stat')
3152     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3153     @mock.patch('allmydata.util.fileutil.get_available_space')
3154     @mock.patch('time.time')
3155hunk ./src/allmydata/test/test_backends.py 136
3156     @mock.patch('os.listdir')
3157     @mock.patch('os.path.isdir')
3158     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3159-                             mockget_available_space, mockget_shares):
3160+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3161+                             mockmake_dirs, mockrename):
3162         """ Write a new share. """
3163 
3164         def call_listdir(dirname):
3165hunk ./src/allmydata/test/test_backends.py 141
3166-            self.failUnlessReallyEqual(dirname, sharedirname)
3167+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3168             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3169 
3170         mocklistdir.side_effect = call_listdir
3171hunk ./src/allmydata/test/test_backends.py 148
3172 
3173         def call_isdir(dirname):
3174             #XXX Should there be any other tests here?
3175-            self.failUnlessReallyEqual(dirname, sharedirname)
3176+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3177             return True
3178 
3179         mockisdir.side_effect = call_isdir
3180hunk ./src/allmydata/test/test_backends.py 154
3181 
3182         def call_mkdir(dirname, permissions):
3183-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3184+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3185                 self.Fail
3186             else:
3187                 return True
3188hunk ./src/allmydata/test/test_backends.py 208
3189                 return self.pos
3190 
3191 
3192-        sharefile = MockFile()
3193+        fobj = MockFile()
3194         def call_open(fname, mode):
3195             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3196hunk ./src/allmydata/test/test_backends.py 211
3197-            return sharefile
3198+            return fobj
3199 
3200         mockopen.side_effect = call_open
3201 
3202hunk ./src/allmydata/test/test_backends.py 215
3203+        def call_make_dirs(dname):
3204+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3205+           
3206+        mockmake_dirs.side_effect = call_make_dirs
3207+
3208+        def call_rename(src, dst):
3209+           self.failUnlessReallyEqual(src, shareincomingname)
3210+           self.failUnlessReallyEqual(dst, sharefname)
3211+           
3212+        mockrename.side_effect = call_rename
3213+
3214+        def call_exists(fname):
3215+            self.failUnlessReallyEqual(fname, sharefname)
3216+
3217+        mockexists.side_effect = call_exists
3218+
3219         # Now begin the test.
3220         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3221         bs[0].remote_write(0, 'a')
3222hunk ./src/allmydata/test/test_backends.py 234
3223-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3224+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3225+        spaceint = self.s.allocated_size()
3226+        self.failUnlessReallyEqual(spaceint, 1)
3227+
3228+        bs[0].remote_close()
3229 
3230         # What happens when there's not enough space for the client's request?
3231hunk ./src/allmydata/test/test_backends.py 241
3232-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3233+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3234 
3235         # Now test the allocated_size method.
3236hunk ./src/allmydata/test/test_backends.py 244
3237-        spaceint = self.s.allocated_size()
3238-        self.failUnlessReallyEqual(spaceint, 1)
3239+        #self.failIf(mockexists.called, mockexists.call_args_list)
3240+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3241+        #self.failIf(mockrename.called, mockrename.call_args_list)
3242+        #self.failIf(mockstat.called, mockstat.call_args_list)
3243 
3244     @mock.patch('os.path.exists')
3245     @mock.patch('os.path.getsize')
3246}
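
The path constants jacp 11 adds to test_backends.py fit together as below for share 0 of 'teststorage_index'; call_rename in the test expects exactly this incoming source and final destination. Values are reproduced from the hunk above, and the spelled-out strings assume a POSIX path separator.

import os

tempdir = 'teststoredir'
si_dir  = os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a')

shareincomingname = os.path.join(tempdir, 'shares', 'incoming', si_dir, '0')
sharefname        = os.path.join(tempdir, 'shares', si_dir, '0')

assert shareincomingname == 'teststoredir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0'
assert sharefname        == 'teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
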
3247[checkpoint12 testing correct behavior with regard to incoming and final
3248wilcoxjg@gmail.com**20110710191915
3249 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3250] {
3251hunk ./src/allmydata/storage/backends/das/core.py 74
3252         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3253         self.lease_checker.setServiceParent(self)
3254 
3255+    def get_incoming(self, storageindex):
3256+        return set((1,))
3257+
3258     def get_available_space(self):
3259         if self.readonly:
3260             return 0
3261hunk ./src/allmydata/storage/server.py 77
3262         """Return a dict, indexed by category, that contains a dict of
3263         latency numbers for each category. If there are sufficient samples
3264         for unambiguous interpretation, each dict will contain the
3265-        following keys: mean, 01_0_percentile, 10_0_percentile,
3266+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3267         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3268         99_0_percentile, 99_9_percentile.  If there are insufficient
3269         samples for a given percentile to be interpreted unambiguously
3270hunk ./src/allmydata/storage/server.py 120
3271 
3272     def get_stats(self):
3273         # remember: RIStatsProvider requires that our return dict
3274-        # contains numeric values.
3275+        # contains numeric, or None values.
3276         stats = { 'storage_server.allocated': self.allocated_size(), }
3277         stats['storage_server.reserved_space'] = self.reserved_space
3278         for category,ld in self.get_latencies().items():
3279hunk ./src/allmydata/storage/server.py 185
3280         start = time.time()
3281         self.count("allocate")
3282         alreadygot = set()
3283+        incoming = set()
3284         bucketwriters = {} # k: shnum, v: BucketWriter
3285 
3286         si_s = si_b2a(storage_index)
3287hunk ./src/allmydata/storage/server.py 219
3288             alreadygot.add(share.shnum)
3289             share.add_or_renew_lease(lease_info)
3290 
3291-        for shnum in (sharenums - alreadygot):
3292+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3293+        incoming = self.backend.get_incoming(storageindex)
3294+
3295+        for shnum in ((sharenums - alreadygot) - incoming):
3296             if (not limited) or (remaining_space >= max_space_per_bucket):
3297                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3298                 bucketwriters[shnum] = bw
3299hunk ./src/allmydata/storage/server.py 229
3300                 self._active_writers[bw] = 1
3301                 if limited:
3302                     remaining_space -= max_space_per_bucket
3303-
3304-        #XXX We SHOULD DOCUMENT LATER.
3305+            else:
3306+                # Bummer not enough space to accept this share.
3307+                pass
3308 
3309         self.add_latency("allocate", time.time() - start)
3310         return alreadygot, bucketwriters
3311hunk ./src/allmydata/storage/server.py 323
3312         self.add_latency("get", time.time() - start)
3313         return bucketreaders
3314 
3315-    def get_leases(self, storage_index):
3316+    def remote_get_incoming(self, storageindex):
3317+        incoming_share_set = self.backend.get_incoming(storageindex)
3318+        return incoming_share_set
3319+
3320+    def get_leases(self, storageindex):
3321         """Provide an iterator that yields all of the leases attached to this
3322         bucket. Each lease is returned as a LeaseInfo instance.
3323 
3324hunk ./src/allmydata/storage/server.py 337
3325         # since all shares get the same lease data, we just grab the leases
3326         # from the first share
3327         try:
3328-            shnum, filename = self._get_shares(storage_index).next()
3329+            shnum, filename = self._get_shares(storageindex).next()
3330             sf = ShareFile(filename)
3331             return sf.get_leases()
3332         except StopIteration:
3333hunk ./src/allmydata/test/test_backends.py 182
3334 
3335         share = MockShare()
3336         def call_get_shares(storageindex):
3337-            return [share]
3338+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3339+            return []#share]
3340 
3341         mockget_shares.side_effect = call_get_shares
3342 
3343hunk ./src/allmydata/test/test_backends.py 222
3344         mockmake_dirs.side_effect = call_make_dirs
3345 
3346         def call_rename(src, dst):
3347-           self.failUnlessReallyEqual(src, shareincomingname)
3348-           self.failUnlessReallyEqual(dst, sharefname)
3349+            self.failUnlessReallyEqual(src, shareincomingname)
3350+            self.failUnlessReallyEqual(dst, sharefname)
3351             
3352         mockrename.side_effect = call_rename
3353 
3354hunk ./src/allmydata/test/test_backends.py 233
3355         mockexists.side_effect = call_exists
3356 
3357         # Now begin the test.
3358+
3359+        # XXX (0) ???  Fail unless something is not properly set-up?
3360         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3361hunk ./src/allmydata/test/test_backends.py 236
3362+
3363+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3364+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3365+
3366+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3367+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3368+        # with the same si, until BucketWriter.remote_close() has been called.
3369+        # self.failIf(bsa)
3370+
3371+        # XXX (3) Inspect final and fail unless there's nothing there.
3372         bs[0].remote_write(0, 'a')
3373hunk ./src/allmydata/test/test_backends.py 247
3374+        # XXX (4a) Inspect final and fail unless share 0 is there.
3375+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3376         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3377         spaceint = self.s.allocated_size()
3378         self.failUnlessReallyEqual(spaceint, 1)
3379hunk ./src/allmydata/test/test_backends.py 253
3380 
3381+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3382         bs[0].remote_close()
3383 
3384         # What happens when there's not enough space for the client's request?
3385hunk ./src/allmydata/test/test_backends.py 260
3386         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3387 
3388         # Now test the allocated_size method.
3389-        #self.failIf(mockexists.called, mockexists.call_args_list)
3390+        # self.failIf(mockexists.called, mockexists.call_args_list)
3391         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3392         #self.failIf(mockrename.called, mockrename.call_args_list)
3393         #self.failIf(mockstat.called, mockstat.call_args_list)
3394}
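
The allocation rule checkpoint12 introduces in remote_allocate_buckets is plain set arithmetic; with made-up share numbers it behaves like this:

sharenums  = set([0, 1, 2, 3])   # what the client proposes to store
alreadygot = set([0])            # shares already in final storage
incoming   = set([1])            # shares currently being uploaded
to_allocate = (sharenums - alreadygot) - incoming
assert to_allocate == set([2, 3])   # only these get a new BucketWriter
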
3395[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3396wilcoxjg@gmail.com**20110710195139
3397 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3398] {
3399hunk ./src/allmydata/storage/server.py 220
3400             share.add_or_renew_lease(lease_info)
3401 
3402         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3403-        incoming = self.backend.get_incoming(storageindex)
3404+        incoming = self.backend.get_incoming(storage_index)
3405 
3406         for shnum in ((sharenums - alreadygot) - incoming):
3407             if (not limited) or (remaining_space >= max_space_per_bucket):
3408hunk ./src/allmydata/storage/server.py 323
3409         self.add_latency("get", time.time() - start)
3410         return bucketreaders
3411 
3412-    def remote_get_incoming(self, storageindex):
3413-        incoming_share_set = self.backend.get_incoming(storageindex)
3414+    def remote_get_incoming(self, storage_index):
3415+        incoming_share_set = self.backend.get_incoming(storage_index)
3416         return incoming_share_set
3417 
3418hunk ./src/allmydata/storage/server.py 327
3419-    def get_leases(self, storageindex):
3420+    def get_leases(self, storage_index):
3421         """Provide an iterator that yields all of the leases attached to this
3422         bucket. Each lease is returned as a LeaseInfo instance.
3423 
3424hunk ./src/allmydata/storage/server.py 337
3425         # since all shares get the same lease data, we just grab the leases
3426         # from the first share
3427         try:
3428-            shnum, filename = self._get_shares(storageindex).next()
3429+            shnum, filename = self._get_shares(storage_index).next()
3430             sf = ShareFile(filename)
3431             return sf.get_leases()
3432         except StopIteration:
3433replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3434}
3435[adding comments to clarify what I'm about to do.
3436wilcoxjg@gmail.com**20110710220623
3437 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3438] {
3439hunk ./src/allmydata/storage/backends/das/core.py 8
3440 
3441 import os, re, weakref, struct, time
3442 
3443-from foolscap.api import Referenceable
3444+#from foolscap.api import Referenceable
3445 from twisted.application import service
3446 
3447 from zope.interface import implements
3448hunk ./src/allmydata/storage/backends/das/core.py 12
3449-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3450+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3451 from allmydata.util import fileutil, idlib, log, time_format
3452 import allmydata # for __full_version__
3453 
3454hunk ./src/allmydata/storage/server.py 219
3455             alreadygot.add(share.shnum)
3456             share.add_or_renew_lease(lease_info)
3457 
3458-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3459+        # fill incoming with all shares that are incoming use a set operation
3460+        # since there's no need to operate on individual pieces
3461         incoming = self.backend.get_incoming(storageindex)
3462 
3463         for shnum in ((sharenums - alreadygot) - incoming):
3464hunk ./src/allmydata/test/test_backends.py 245
3465         # with the same si, until BucketWriter.remote_close() has been called.
3466         # self.failIf(bsa)
3467 
3468-        # XXX (3) Inspect final and fail unless there's nothing there.
3469         bs[0].remote_write(0, 'a')
3470hunk ./src/allmydata/test/test_backends.py 246
3471-        # XXX (4a) Inspect final and fail unless share 0 is there.
3472-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3473         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3474         spaceint = self.s.allocated_size()
3475         self.failUnlessReallyEqual(spaceint, 1)
3476hunk ./src/allmydata/test/test_backends.py 250
3477 
3478-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3479+        # XXX (3) Inspect final and fail unless there's nothing there.
3480         bs[0].remote_close()
3481hunk ./src/allmydata/test/test_backends.py 252
3482+        # XXX (4a) Inspect final and fail unless share 0 is there.
3483+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3484 
3485         # What happens when there's not enough space for the client's request?
3486         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3487}
3488[branching back, no longer attempting to mock inside TestServerFSBackend
3489wilcoxjg@gmail.com**20110711190849
3490 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3491] {
3492hunk ./src/allmydata/storage/backends/das/core.py 75
3493         self.lease_checker.setServiceParent(self)
3494 
3495     def get_incoming(self, storageindex):
3496-        return set((1,))
3497-
3498-    def get_available_space(self):
3499-        if self.readonly:
3500-            return 0
3501-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3502+        """Return the set of incoming shnums."""
3503+        return set(os.listdir(self.incomingdir))
3504 
3505     def get_shares(self, storage_index):
3506         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3507hunk ./src/allmydata/storage/backends/das/core.py 90
3508             # Commonly caused by there being no shares at all.
3509             pass
3510         
3511+    def get_available_space(self):
3512+        if self.readonly:
3513+            return 0
3514+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3515+
3516     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3517         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3518         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3519hunk ./src/allmydata/test/test_backends.py 27
3520 
3521 testnodeid = 'testnodeidxxxxxxxxxx'
3522 tempdir = 'teststoredir'
3523-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3524-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3525+basedir = os.path.join(tempdir, 'shares')
3526+baseincdir = os.path.join(basedir, 'incoming')
3527+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3528+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3529 shareincomingname = os.path.join(sharedirincomingname, '0')
3530 sharefname = os.path.join(sharedirfinalname, '0')
3531 
3532hunk ./src/allmydata/test/test_backends.py 142
3533                              mockmake_dirs, mockrename):
3534         """ Write a new share. """
3535 
3536-        def call_listdir(dirname):
3537-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3538-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3539-
3540-        mocklistdir.side_effect = call_listdir
3541-
3542-        def call_isdir(dirname):
3543-            #XXX Should there be any other tests here?
3544-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3545-            return True
3546-
3547-        mockisdir.side_effect = call_isdir
3548-
3549-        def call_mkdir(dirname, permissions):
3550-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3551-                self.Fail
3552-            else:
3553-                return True
3554-
3555-        mockmkdir.side_effect = call_mkdir
3556-
3557-        def call_get_available_space(storedir, reserved_space):
3558-            self.failUnlessReallyEqual(storedir, tempdir)
3559-            return 1
3560-
3561-        mockget_available_space.side_effect = call_get_available_space
3562-
3563-        mocktime.return_value = 0
3564         class MockShare:
3565             def __init__(self):
3566                 self.shnum = 1
3567hunk ./src/allmydata/test/test_backends.py 152
3568                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3569                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3570                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3571-               
3572 
3573         share = MockShare()
3574hunk ./src/allmydata/test/test_backends.py 154
3575-        def call_get_shares(storageindex):
3576-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3577-            return []#share]
3578-
3579-        mockget_shares.side_effect = call_get_shares
3580 
3581         class MockFile:
3582             def __init__(self):
3583hunk ./src/allmydata/test/test_backends.py 176
3584             def tell(self):
3585                 return self.pos
3586 
3587-
3588         fobj = MockFile()
3589hunk ./src/allmydata/test/test_backends.py 177
3590+
3591+        directories = {}
3592+        def call_listdir(dirname):
3593+            if dirname not in directories:
3594+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3595+            else:
3596+                return directories[dirname].get_contents()
3597+
3598+        mocklistdir.side_effect = call_listdir
3599+
3600+        class MockDir:
3601+            def __init__(self, dirname):
3602+                self.name = dirname
3603+                self.contents = []
3604+   
3605+            def get_contents(self):
3606+                return self.contents
3607+
3608+        def call_isdir(dirname):
3609+            #XXX Should there be any other tests here?
3610+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3611+            return True
3612+
3613+        mockisdir.side_effect = call_isdir
3614+
3615+        def call_mkdir(dirname, permissions):
3616+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3617+                self.Fail
3618+            if dirname in directories:
3619+                raise OSError(17, "File exists: '%s'" % dirname)
3620+                self.Fail
3621+            elif dirname not in directories:
3622+                directories[dirname] = MockDir(dirname)
3623+                return True
3624+
3625+        mockmkdir.side_effect = call_mkdir
3626+
3627+        def call_get_available_space(storedir, reserved_space):
3628+            self.failUnlessReallyEqual(storedir, tempdir)
3629+            return 1
3630+
3631+        mockget_available_space.side_effect = call_get_available_space
3632+
3633+        mocktime.return_value = 0
3634+        def call_get_shares(storageindex):
3635+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3636+            return []#share]
3637+
3638+        mockget_shares.side_effect = call_get_shares
3639+
3640         def call_open(fname, mode):
3641             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3642             return fobj
3643}
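
The hunk above replaces the earlier one-off listdir/mkdir stubs with versions backed by a shared `directories` dict (plus MockDir), i.e. a tiny fake filesystem that the mocked os calls consult. A minimal, self-contained sketch of that pattern, using the same `mock` library these tests already import (FakeFS and its method names are ours, not from the patch):

    import errno, os
    import mock   # the standalone 'mock' package the tests above depend on

    class FakeFS(object):
        """Dict-backed stand-in for the few os calls the test exercises."""
        def __init__(self):
            self.dirs = {}                     # dirname -> list of entries

        def listdir(self, dirname):
            if dirname not in self.dirs:
                raise OSError(errno.ENOENT, "No such file or directory: '%s'" % dirname)
            return list(self.dirs[dirname])

        def mkdir(self, dirname, mode=511):    # 511 == 0777, the value the test checks
            if dirname in self.dirs:
                raise OSError(errno.EEXIST, "File exists: '%s'" % dirname)
            self.dirs[dirname] = []

    fs = FakeFS()
    with mock.patch('os.listdir', side_effect=fs.listdir), \
         mock.patch('os.mkdir', side_effect=fs.mkdir):
        os.mkdir('teststoredir')               # lands in fs.dirs, never on disk
        assert os.listdir('teststoredir') == []
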
3644[checkpoint12 TestServerFSBackend no longer mocks filesystem
3645wilcoxjg@gmail.com**20110711193357
3646 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3647] {
3648hunk ./src/allmydata/storage/backends/das/core.py 23
3649      create_mutable_sharefile
3650 from allmydata.storage.immutable import BucketWriter, BucketReader
3651 from allmydata.storage.crawler import FSBucketCountingCrawler
3652+from allmydata.util.hashutil import constant_time_compare
3653 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3654 
3655 from zope.interface import implements
3656hunk ./src/allmydata/storage/backends/das/core.py 28
3657 
3658+# storage/
3659+# storage/shares/incoming
3660+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3661+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3662+# storage/shares/$START/$STORAGEINDEX
3663+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3664+
3665+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3666+# base-32 chars).
3667 # $SHARENUM matches this regex:
3668 NUM_RE=re.compile("^[0-9]+$")
3669 
3670hunk ./src/allmydata/test/test_backends.py 126
3671         testbackend = DASCore(tempdir, expiration_policy)
3672         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3673 
3674-    @mock.patch('allmydata.util.fileutil.rename')
3675-    @mock.patch('allmydata.util.fileutil.make_dirs')
3676-    @mock.patch('os.path.exists')
3677-    @mock.patch('os.stat')
3678-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3679-    @mock.patch('allmydata.util.fileutil.get_available_space')
3680     @mock.patch('time.time')
3681hunk ./src/allmydata/test/test_backends.py 127
3682-    @mock.patch('os.mkdir')
3683-    @mock.patch('__builtin__.open')
3684-    @mock.patch('os.listdir')
3685-    @mock.patch('os.path.isdir')
3686-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3687-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3688-                             mockmake_dirs, mockrename):
3689+    def test_write_share(self, mocktime):
3690         """ Write a new share. """
3691 
3692         class MockShare:
3693hunk ./src/allmydata/test/test_backends.py 143
3694 
3695         share = MockShare()
3696 
3697-        class MockFile:
3698-            def __init__(self):
3699-                self.buffer = ''
3700-                self.pos = 0
3701-            def write(self, instring):
3702-                begin = self.pos
3703-                padlen = begin - len(self.buffer)
3704-                if padlen > 0:
3705-                    self.buffer += '\x00' * padlen
3706-                end = self.pos + len(instring)
3707-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3708-                self.pos = end
3709-            def close(self):
3710-                pass
3711-            def seek(self, pos):
3712-                self.pos = pos
3713-            def read(self, numberbytes):
3714-                return self.buffer[self.pos:self.pos+numberbytes]
3715-            def tell(self):
3716-                return self.pos
3717-
3718-        fobj = MockFile()
3719-
3720-        directories = {}
3721-        def call_listdir(dirname):
3722-            if dirname not in directories:
3723-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3724-            else:
3725-                return directories[dirname].get_contents()
3726-
3727-        mocklistdir.side_effect = call_listdir
3728-
3729-        class MockDir:
3730-            def __init__(self, dirname):
3731-                self.name = dirname
3732-                self.contents = []
3733-   
3734-            def get_contents(self):
3735-                return self.contents
3736-
3737-        def call_isdir(dirname):
3738-            #XXX Should there be any other tests here?
3739-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3740-            return True
3741-
3742-        mockisdir.side_effect = call_isdir
3743-
3744-        def call_mkdir(dirname, permissions):
3745-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3746-                self.Fail
3747-            if dirname in directories:
3748-                raise OSError(17, "File exists: '%s'" % dirname)
3749-                self.Fail
3750-            elif dirname not in directories:
3751-                directories[dirname] = MockDir(dirname)
3752-                return True
3753-
3754-        mockmkdir.side_effect = call_mkdir
3755-
3756-        def call_get_available_space(storedir, reserved_space):
3757-            self.failUnlessReallyEqual(storedir, tempdir)
3758-            return 1
3759-
3760-        mockget_available_space.side_effect = call_get_available_space
3761-
3762-        mocktime.return_value = 0
3763-        def call_get_shares(storageindex):
3764-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3765-            return []#share]
3766-
3767-        mockget_shares.side_effect = call_get_shares
3768-
3769-        def call_open(fname, mode):
3770-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3771-            return fobj
3772-
3773-        mockopen.side_effect = call_open
3774-
3775-        def call_make_dirs(dname):
3776-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3777-           
3778-        mockmake_dirs.side_effect = call_make_dirs
3779-
3780-        def call_rename(src, dst):
3781-            self.failUnlessReallyEqual(src, shareincomingname)
3782-            self.failUnlessReallyEqual(dst, sharefname)
3783-           
3784-        mockrename.side_effect = call_rename
3785-
3786-        def call_exists(fname):
3787-            self.failUnlessReallyEqual(fname, sharefname)
3788-
3789-        mockexists.side_effect = call_exists
3790-
3791         # Now begin the test.
3792 
3793         # XXX (0) ???  Fail unless something is not properly set-up?
3794}
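
The directory-layout comment relocated into backends/das/core.py above is the contract the rest of these patches code against. For orientation, here is a rough illustration of how a storage index maps onto that layout; Tahoe's own si_b2a()/storage_index_to_dir() are the real implementations, and base64.b32encode merely happens to yield the same lowercase base-32 string for this example:

    import base64, os

    def si_to_dir(storage_index):
        # the first 2 base-32 characters (10 bits) form the $START prefix bucket
        sia = base64.b32encode(storage_index).lower().rstrip('=')
        return os.path.join(sia[:2], sia)

    def share_path(storedir, storage_index, shnum, incoming=False):
        parts = [storedir, 'shares']
        if incoming:
            parts.append('incoming')           # temporary home until the upload closes
        parts.append(si_to_dir(storage_index))
        parts.append(str(shnum))
        return os.path.join(*parts)

    # Matches the sharefname constant used by test_backends.py:
    print share_path('teststoredir', 'teststorage_index', 0)
    # -> teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0
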
3795[JACP
3796wilcoxjg@gmail.com**20110711194407
3797 Ignore-this: b54745de777c4bb58d68d708f010bbb
3798] {
3799hunk ./src/allmydata/storage/backends/das/core.py 86
3800 
3801     def get_incoming(self, storageindex):
3802         """Return the set of incoming shnums."""
3803-        return set(os.listdir(self.incomingdir))
3804+        try:
3805+            incominglist = os.listdir(self.incomingdir)
3806+            print "incominglist: ", incominglist
3807+            return set(incominglist)
3808+        except OSError:
3809+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3810+            pass
3811 
3812     def get_shares(self, storage_index):
3813         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3814hunk ./src/allmydata/storage/server.py 17
3815 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3816      create_mutable_sharefile
3817 
3818-# storage/
3819-# storage/shares/incoming
3820-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3821-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3822-# storage/shares/$START/$STORAGEINDEX
3823-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3824-
3825-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3826-# base-32 chars).
3827-
3828-
3829 class StorageServer(service.MultiService, Referenceable):
3830     implements(RIStorageServer, IStatsProducer)
3831     name = 'storage'
3832}
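
get_incoming() above learns to tolerate a missing incoming directory by catching OSError. The shape the method settles into over the next two patches, sketched with an explicit errno check (the check is our addition; the patch simply swallows every OSError):

    import errno, os

    def get_incoming(incomingdir, si_dir):
        """Return the set of share numbers currently being uploaded for one SI."""
        incomingsharesdir = os.path.join(incomingdir, si_dir)
        try:
            return set(int(name) for name in os.listdir(incomingsharesdir))
        except OSError as e:
            if e.errno != errno.ENOENT:        # only "no such directory" means "no shares yet"
                raise
            return set()
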
3833[testing get incoming
3834wilcoxjg@gmail.com**20110711210224
3835 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3836] {
3837hunk ./src/allmydata/storage/backends/das/core.py 87
3838     def get_incoming(self, storageindex):
3839         """Return the set of incoming shnums."""
3840         try:
3841-            incominglist = os.listdir(self.incomingdir)
3842+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3843+            incominglist = os.listdir(incomingsharesdir)
3844             print "incominglist: ", incominglist
3845             return set(incominglist)
3846         except OSError:
3847hunk ./src/allmydata/storage/backends/das/core.py 92
3848-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3849-            pass
3850-
3851+            # XXX I'd like to make this more specific. If there are no shares at all.
3852+            return set()
3853+           
3854     def get_shares(self, storage_index):
3855         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3856         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3857hunk ./src/allmydata/test/test_backends.py 149
3858         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3859 
3860         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3861+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3862         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3863 
3864hunk ./src/allmydata/test/test_backends.py 152
3865-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3866         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3867         # with the same si, until BucketWriter.remote_close() has been called.
3868         # self.failIf(bsa)
3869}
3870[ImmutableShareFile does not know its StorageIndex
3871wilcoxjg@gmail.com**20110711211424
3872 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3873] {
3874hunk ./src/allmydata/storage/backends/das/core.py 112
3875             return 0
3876         return fileutil.get_available_space(self.storedir, self.reserved_space)
3877 
3878-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3879-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3880+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3881+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3882+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3883+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3884         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3885         return bw
3886 
3887hunk ./src/allmydata/storage/backends/das/core.py 155
3888     LEASE_SIZE = struct.calcsize(">L32s32sL")
3889     sharetype = "immutable"
3890 
3891-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3892+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3893         """ If max_size is not None then I won't allow more than
3894         max_size to be written to me. If create=True then max_size
3895         must not be None. """
3896}
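
This patch moves path knowledge out of ImmutableShare: the backend computes a finalhome and an incominghome and hands them to the share object, which no longer needs the storage index to locate itself. Roughly what make_bucket_writer() now does (the str() around shnum arrives one patch later; apart from the join order this is paraphrase, not a copy of the patch):

    import os

    def make_share_homes(sharedir, si_dir, shnum):
        # si_dir is the '$START/$STORAGEINDEX' fragment, e.g. 'or/orsxg5dt...'
        finalhome = os.path.join(sharedir, si_dir, str(shnum))
        incominghome = os.path.join(sharedir, 'incoming', si_dir, str(shnum))
        return finalhome, incominghome

    finalhome, incominghome = make_share_homes(
        os.path.join('teststoredir', 'shares'),
        os.path.join('or', 'orsxg5dtorxxeylhmvpws3temv4a'), 0)
    # incominghome: teststoredir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0
    # finalhome:    teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0
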
3897[get_incoming correctly reports the 0 share after it has arrived
3898wilcoxjg@gmail.com**20110712025157
3899 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3900] {
3901hunk ./src/allmydata/storage/backends/das/core.py 1
3902+import os, re, weakref, struct, time, stat
3903+
3904 from allmydata.interfaces import IStorageBackend
3905 from allmydata.storage.backends.base import Backend
3906 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3907hunk ./src/allmydata/storage/backends/das/core.py 8
3908 from allmydata.util.assertutil import precondition
3909 
3910-import os, re, weakref, struct, time
3911-
3912 #from foolscap.api import Referenceable
3913 from twisted.application import service
3914 
3915hunk ./src/allmydata/storage/backends/das/core.py 89
3916         try:
3917             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3918             incominglist = os.listdir(incomingsharesdir)
3919-            print "incominglist: ", incominglist
3920-            return set(incominglist)
3921+            incomingshnums = [int(x) for x in incominglist]
3922+            return set(incomingshnums)
3923         except OSError:
3924             # XXX I'd like to make this more specific. If there are no shares at all.
3925             return set()
3926hunk ./src/allmydata/storage/backends/das/core.py 113
3927         return fileutil.get_available_space(self.storedir, self.reserved_space)
3928 
3929     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3930-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3931-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3932-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3933+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3934+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3935+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3936         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3937         return bw
3938 
3939hunk ./src/allmydata/storage/backends/das/core.py 160
3940         max_size to be written to me. If create=True then max_size
3941         must not be None. """
3942         precondition((max_size is not None) or (not create), max_size, create)
3943-        self.shnum = shnum
3944-        self.storage_index = storageindex
3945-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3946         self._max_size = max_size
3947hunk ./src/allmydata/storage/backends/das/core.py 161
3948-        self.incomingdir = os.path.join(sharedir, 'incoming')
3949-        si_dir = storage_index_to_dir(storageindex)
3950-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3951-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3952-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3953+        self.incominghome = incominghome
3954+        self.finalhome = finalhome
3955         if create:
3956             # touch the file, so later callers will see that we're working on
3957             # it. Also construct the metadata.
3958hunk ./src/allmydata/storage/backends/das/core.py 166
3959-            assert not os.path.exists(self.fname)
3960-            fileutil.make_dirs(os.path.dirname(self.fname))
3961-            f = open(self.fname, 'wb')
3962+            assert not os.path.exists(self.finalhome)
3963+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3964+            f = open(self.incominghome, 'wb')
3965             # The second field -- the four-byte share data length -- is no
3966             # longer used as of Tahoe v1.3.0, but we continue to write it in
3967             # there in case someone downgrades a storage server from >=
3968hunk ./src/allmydata/storage/backends/das/core.py 183
3969             self._lease_offset = max_size + 0x0c
3970             self._num_leases = 0
3971         else:
3972-            f = open(self.fname, 'rb')
3973-            filesize = os.path.getsize(self.fname)
3974+            f = open(self.finalhome, 'rb')
3975+            filesize = os.path.getsize(self.finalhome)
3976             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3977             f.close()
3978             if version != 1:
3979hunk ./src/allmydata/storage/backends/das/core.py 189
3980                 msg = "sharefile %s had version %d but we wanted 1" % \
3981-                      (self.fname, version)
3982+                      (self.finalhome, version)
3983                 raise UnknownImmutableContainerVersionError(msg)
3984             self._num_leases = num_leases
3985             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3986hunk ./src/allmydata/storage/backends/das/core.py 225
3987         pass
3988         
3989     def stat(self):
3990-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3991+        return os.stat(self.finalhome)[stat.ST_SIZE]
3992+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3993 
3994     def get_shnum(self):
3995         return self.shnum
3996hunk ./src/allmydata/storage/backends/das/core.py 232
3997 
3998     def unlink(self):
3999-        os.unlink(self.fname)
4000+        os.unlink(self.finalhome)
4001 
4002     def read_share_data(self, offset, length):
4003         precondition(offset >= 0)
4004hunk ./src/allmydata/storage/backends/das/core.py 239
4005         # Reads beyond the end of the data are truncated. Reads that start
4006         # beyond the end of the data return an empty string.
4007         seekpos = self._data_offset+offset
4008-        fsize = os.path.getsize(self.fname)
4009+        fsize = os.path.getsize(self.finalhome)
4010         actuallength = max(0, min(length, fsize-seekpos))
4011         if actuallength == 0:
4012             return ""
4013hunk ./src/allmydata/storage/backends/das/core.py 243
4014-        f = open(self.fname, 'rb')
4015+        f = open(self.finalhome, 'rb')
4016         f.seek(seekpos)
4017         return f.read(actuallength)
4018 
4019hunk ./src/allmydata/storage/backends/das/core.py 252
4020         precondition(offset >= 0, offset)
4021         if self._max_size is not None and offset+length > self._max_size:
4022             raise DataTooLargeError(self._max_size, offset, length)
4023-        f = open(self.fname, 'rb+')
4024+        f = open(self.incominghome, 'rb+')
4025         real_offset = self._data_offset+offset
4026         f.seek(real_offset)
4027         assert f.tell() == real_offset
4028hunk ./src/allmydata/storage/backends/das/core.py 279
4029 
4030     def get_leases(self):
4031         """Yields a LeaseInfo instance for all leases."""
4032-        f = open(self.fname, 'rb')
4033+        f = open(self.finalhome, 'rb')
4034         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4035         f.seek(self._lease_offset)
4036         for i in range(num_leases):
4037hunk ./src/allmydata/storage/backends/das/core.py 288
4038                 yield LeaseInfo().from_immutable_data(data)
4039 
4040     def add_lease(self, lease_info):
4041-        f = open(self.fname, 'rb+')
4042+        f = open(self.incominghome, 'rb+')
4043         num_leases = self._read_num_leases(f)
4044         self._write_lease_record(f, num_leases, lease_info)
4045         self._write_num_leases(f, num_leases+1)
4046hunk ./src/allmydata/storage/backends/das/core.py 301
4047                 if new_expire_time > lease.expiration_time:
4048                     # yes
4049                     lease.expiration_time = new_expire_time
4050-                    f = open(self.fname, 'rb+')
4051+                    f = open(self.finalhome, 'rb+')
4052                     self._write_lease_record(f, i, lease)
4053                     f.close()
4054                 return
4055hunk ./src/allmydata/storage/backends/das/core.py 336
4056             # the same order as they were added, so that if we crash while
4057             # doing this, we won't lose any non-cancelled leases.
4058             leases = [l for l in leases if l] # remove the cancelled leases
4059-            f = open(self.fname, 'rb+')
4060+            f = open(self.finalhome, 'rb+')
4061             for i,lease in enumerate(leases):
4062                 self._write_lease_record(f, i, lease)
4063             self._write_num_leases(f, len(leases))
4064hunk ./src/allmydata/storage/backends/das/core.py 344
4065             f.close()
4066         space_freed = self.LEASE_SIZE * num_leases_removed
4067         if not len(leases):
4068-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4069+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4070             self.unlink()
4071         return space_freed
4072hunk ./src/allmydata/test/test_backends.py 129
4073     @mock.patch('time.time')
4074     def test_write_share(self, mocktime):
4075         """ Write a new share. """
4076-
4077-        class MockShare:
4078-            def __init__(self):
4079-                self.shnum = 1
4080-               
4081-            def add_or_renew_lease(elf, lease_info):
4082-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4083-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4084-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4085-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4086-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4087-
4088-        share = MockShare()
4089-
4090         # Now begin the test.
4091 
4092         # XXX (0) ???  Fail unless something is not properly set-up?
4093hunk ./src/allmydata/test/test_backends.py 143
4094         # self.failIf(bsa)
4095 
4096         bs[0].remote_write(0, 'a')
4097-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4098+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4099         spaceint = self.s.allocated_size()
4100         self.failUnlessReallyEqual(spaceint, 1)
4101 
4102hunk ./src/allmydata/test/test_backends.py 161
4103         #self.failIf(mockrename.called, mockrename.call_args_list)
4104         #self.failIf(mockstat.called, mockstat.call_args_list)
4105 
4106+    def test_handle_incoming(self):
4107+        incomingset = self.s.backend.get_incoming('teststorage_index')
4108+        self.failUnlessReallyEqual(incomingset, set())
4109+
4110+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4111+       
4112+        incomingset = self.s.backend.get_incoming('teststorage_index')
4113+        self.failUnlessReallyEqual(incomingset, set((0,)))
4114+
4115+        bs[0].remote_close()
4116+        self.failUnlessReallyEqual(incomingset, set())
4117+
4118     @mock.patch('os.path.exists')
4119     @mock.patch('os.path.getsize')
4120     @mock.patch('__builtin__.open')
4121hunk ./src/allmydata/test/test_backends.py 223
4122         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4123 
4124 
4125-
4126 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4127     @mock.patch('time.time')
4128     @mock.patch('os.mkdir')
4129hunk ./src/allmydata/test/test_backends.py 271
4130         DASCore('teststoredir', expiration_policy)
4131 
4132         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4133+
4134}
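
The refactor recorded above is what makes the new test_handle_incoming possible: while a BucketWriter is open the bytes live under incominghome, and only the close path publishes them to finalhome, so get_incoming() sees shnum 0 appear and then disappear. (Note that the final assertion in test_handle_incoming re-uses the stale incomingset variable rather than calling get_incoming() again.) A compressed sketch of that lifecycle under the stated layout, with os calls standing in for Tahoe's fileutil helpers:

    import os

    def _ensure_dir(d):
        if not os.path.isdir(d):
            os.makedirs(d)

    class IncomingToFinalSketch(object):
        """Write into incoming/, publish into final/ on close."""
        def __init__(self, incominghome, finalhome):
            self.incominghome, self.finalhome = incominghome, finalhome
            _ensure_dir(os.path.dirname(incominghome))
            open(incominghome, 'wb').close()   # touch: "this share is being uploaded"

        def write(self, offset, data):
            f = open(self.incominghome, 'rb+') # all writes land in incoming/
            f.seek(offset)
            f.write(data)
            f.close()

        def close(self):
            _ensure_dir(os.path.dirname(self.finalhome))
            os.rename(self.incominghome, self.finalhome)   # publish the finished share
            os.rmdir(os.path.dirname(self.incominghome))   # prune the now-empty SI dir
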
4135[jacp14
4136wilcoxjg@gmail.com**20110712061211
4137 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4138] {
4139hunk ./src/allmydata/storage/backends/das/core.py 95
4140             # XXX I'd like to make this more specific. If there are no shares at all.
4141             return set()
4142             
4143-    def get_shares(self, storage_index):
4144+    def get_shares(self, storageindex):
4145         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4146hunk ./src/allmydata/storage/backends/das/core.py 97
4147-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4148+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4149         try:
4150             for f in os.listdir(finalstoragedir):
4151                 if NUM_RE.match(f):
4152hunk ./src/allmydata/storage/backends/das/core.py 102
4153                     filename = os.path.join(finalstoragedir, f)
4154-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4155+                    yield ImmutableShare(filename, storageindex, f)
4156         except OSError:
4157             # Commonly caused by there being no shares at all.
4158             pass
4159hunk ./src/allmydata/storage/backends/das/core.py 115
4160     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4161         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4162         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4163-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4164+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4165         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4166         return bw
4167 
4168hunk ./src/allmydata/storage/backends/das/core.py 155
4169     LEASE_SIZE = struct.calcsize(">L32s32sL")
4170     sharetype = "immutable"
4171 
4172-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4173+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4174         """ If max_size is not None then I won't allow more than
4175         max_size to be written to me. If create=True then max_size
4176         must not be None. """
4177hunk ./src/allmydata/storage/backends/das/core.py 160
4178         precondition((max_size is not None) or (not create), max_size, create)
4179+        self.storageindex = storageindex
4180         self._max_size = max_size
4181         self.incominghome = incominghome
4182         self.finalhome = finalhome
4183hunk ./src/allmydata/storage/backends/das/core.py 164
4184+        self.shnum = shnum
4185         if create:
4186             # touch the file, so later callers will see that we're working on
4187             # it. Also construct the metadata.
4188hunk ./src/allmydata/storage/backends/das/core.py 212
4189             # their children to know when they should do the rmdir. This
4190             # approach is simpler, but relies on os.rmdir refusing to delete
4191             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4192+            #print "os.path.dirname(self.incominghome): "
4193+            #print os.path.dirname(self.incominghome)
4194             os.rmdir(os.path.dirname(self.incominghome))
4195             # we also delete the grandparent (prefix) directory, .../ab ,
4196             # again to avoid leaving directories lying around. This might
4197hunk ./src/allmydata/storage/immutable.py 93
4198     def __init__(self, ss, share):
4199         self.ss = ss
4200         self._share_file = share
4201-        self.storage_index = share.storage_index
4202+        self.storageindex = share.storageindex
4203         self.shnum = share.shnum
4204 
4205     def __repr__(self):
4206hunk ./src/allmydata/storage/immutable.py 98
4207         return "<%s %s %s>" % (self.__class__.__name__,
4208-                               base32.b2a_l(self.storage_index[:8], 60),
4209+                               base32.b2a_l(self.storageindex[:8], 60),
4210                                self.shnum)
4211 
4212     def remote_read(self, offset, length):
4213hunk ./src/allmydata/storage/immutable.py 110
4214 
4215     def remote_advise_corrupt_share(self, reason):
4216         return self.ss.remote_advise_corrupt_share("immutable",
4217-                                                   self.storage_index,
4218+                                                   self.storageindex,
4219                                                    self.shnum,
4220                                                    reason)
4221hunk ./src/allmydata/test/test_backends.py 20
4222 # The following share file contents was generated with
4223 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4224 # with share data == 'a'.
4225-renew_secret  = 'x'*32
4226-cancel_secret = 'y'*32
4227-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4228-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4229+shareversionnumber = '\x00\x00\x00\x01'
4230+sharedatalength = '\x00\x00\x00\x01'
4231+numberofleases = '\x00\x00\x00\x01'
4232+shareinputdata = 'a'
4233+ownernumber = '\x00\x00\x00\x00'
4234+renewsecret  = 'x'*32
4235+cancelsecret = 'y'*32
4236+expirationtime = '\x00(\xde\x80'
4237+nextlease = ''
4238+containerdata = shareversionnumber + sharedatalength + numberofleases
4239+client_data = shareinputdata + ownernumber + renewsecret + \
4240+    cancelsecret + expirationtime + nextlease
4241+share_data = containerdata + client_data
4242+
4243 
4244 testnodeid = 'testnodeidxxxxxxxxxx'
4245 tempdir = 'teststoredir'
4246hunk ./src/allmydata/test/test_backends.py 52
4247 
4248 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4249     def setUp(self):
4250-        self.s = StorageServer(testnodeid, backend=NullCore())
4251+        self.ss = StorageServer(testnodeid, backend=NullCore())
4252 
4253     @mock.patch('os.mkdir')
4254     @mock.patch('__builtin__.open')
4255hunk ./src/allmydata/test/test_backends.py 62
4256         """ Write a new share. """
4257 
4258         # Now begin the test.
4259-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4260+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4261         bs[0].remote_write(0, 'a')
4262         self.failIf(mockisdir.called)
4263         self.failIf(mocklistdir.called)
4264hunk ./src/allmydata/test/test_backends.py 133
4265                 _assert(False, "The tester code doesn't recognize this case.") 
4266 
4267         mockopen.side_effect = call_open
4268-        testbackend = DASCore(tempdir, expiration_policy)
4269-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4270+        self.backend = DASCore(tempdir, expiration_policy)
4271+        self.ss = StorageServer(testnodeid, self.backend)
4272+        self.ssinf = StorageServer(testnodeid, self.backend)
4273 
4274     @mock.patch('time.time')
4275     def test_write_share(self, mocktime):
4276hunk ./src/allmydata/test/test_backends.py 142
4277         """ Write a new share. """
4278         # Now begin the test.
4279 
4280-        # XXX (0) ???  Fail unless something is not properly set-up?
4281-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4282+        mocktime.return_value = 0
4283+        # Inspect incoming and fail unless it's empty.
4284+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4285+        self.failUnlessReallyEqual(incomingset, set())
4286+       
4287+        # Among other things, populate incoming with the sharenum: 0.
4288+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4289 
4290hunk ./src/allmydata/test/test_backends.py 150
4291-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4292-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4293-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4294+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4295+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4296+       
4297+        # Attempt to create a second share writer with the same share.
4298+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4299 
4300hunk ./src/allmydata/test/test_backends.py 156
4301-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4302+        # Show that no sharewriter results from a remote_allocate_buckets
4303         # with the same si, until BucketWriter.remote_close() has been called.
4304hunk ./src/allmydata/test/test_backends.py 158
4305-        # self.failIf(bsa)
4306+        self.failIf(bsa)
4307 
4308hunk ./src/allmydata/test/test_backends.py 160
4309+        # Write 'a' to shnum 0. Only tested together with close and read.
4310         bs[0].remote_write(0, 'a')
4311hunk ./src/allmydata/test/test_backends.py 162
4312-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4313-        spaceint = self.s.allocated_size()
4314+
4315+        # Test allocated size.
4316+        spaceint = self.ss.allocated_size()
4317         self.failUnlessReallyEqual(spaceint, 1)
4318 
4319         # XXX (3) Inspect final and fail unless there's nothing there.
4320hunk ./src/allmydata/test/test_backends.py 168
4321+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4322         bs[0].remote_close()
4323         # XXX (4a) Inspect final and fail unless share 0 is there.
4324hunk ./src/allmydata/test/test_backends.py 171
4325+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4326+        #contents = sharesinfinal[0].read_share_data(0,999)
4327+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4328         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4329 
4330         # What happens when there's not enough space for the client's request?
4331hunk ./src/allmydata/test/test_backends.py 177
4332-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4333+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4334 
4335         # Now test the allocated_size method.
4336         # self.failIf(mockexists.called, mockexists.call_args_list)
4337hunk ./src/allmydata/test/test_backends.py 185
4338         #self.failIf(mockrename.called, mockrename.call_args_list)
4339         #self.failIf(mockstat.called, mockstat.call_args_list)
4340 
4341-    def test_handle_incoming(self):
4342-        incomingset = self.s.backend.get_incoming('teststorage_index')
4343-        self.failUnlessReallyEqual(incomingset, set())
4344-
4345-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4346-       
4347-        incomingset = self.s.backend.get_incoming('teststorage_index')
4348-        self.failUnlessReallyEqual(incomingset, set((0,)))
4349-
4350-        bs[0].remote_close()
4351-        self.failUnlessReallyEqual(incomingset, set())
4352-
4353     @mock.patch('os.path.exists')
4354     @mock.patch('os.path.getsize')
4355     @mock.patch('__builtin__.open')
4356hunk ./src/allmydata/test/test_backends.py 208
4357             self.failUnless('r' in mode, mode)
4358             self.failUnless('b' in mode, mode)
4359 
4360-            return StringIO(share_file_data)
4361+            return StringIO(share_data)
4362         mockopen.side_effect = call_open
4363 
4364hunk ./src/allmydata/test/test_backends.py 211
4365-        datalen = len(share_file_data)
4366+        datalen = len(share_data)
4367         def call_getsize(fname):
4368             self.failUnlessReallyEqual(fname, sharefname)
4369             return datalen
4370hunk ./src/allmydata/test/test_backends.py 223
4371         mockexists.side_effect = call_exists
4372 
4373         # Now begin the test.
4374-        bs = self.s.remote_get_buckets('teststorage_index')
4375+        bs = self.ss.remote_get_buckets('teststorage_index')
4376 
4377         self.failUnlessEqual(len(bs), 1)
4378hunk ./src/allmydata/test/test_backends.py 226
4379-        b = bs[0]
4380+        b = bs['0']
4381         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4382hunk ./src/allmydata/test/test_backends.py 228
4383-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4384+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4385         # If you try to read past the end you get the as much data as is there.
4386hunk ./src/allmydata/test/test_backends.py 230
4387-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4388+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4389         # If you start reading past the end of the file you get the empty string.
4390         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4391 
4392}
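
The new test constants spell out, byte for byte, the v1 immutable share container that storage.immutable.ShareFile wrote in Tahoe-LAFS v1.8.2: a 12-byte header, the share data, then one 72-byte lease record. Where those literal strings come from can be reproduced with struct (an illustration of the constants above, not Tahoe's writer code):

    import struct

    version       = struct.pack(">L", 1)                  # shareversionnumber
    data_length   = struct.pack(">L", 1)                  # sharedatalength ('a' is 1 byte)
    num_leases    = struct.pack(">L", 1)                  # numberofleases
    containerdata = version + data_length + num_leases

    share_input   = 'a'                                   # shareinputdata
    owner_num     = struct.pack(">L", 0)                  # ownernumber
    renew_secret  = 'x' * 32
    cancel_secret = 'y' * 32
    expiration    = struct.pack(">L", 31 * 24 * 60 * 60)  # 31 days past mocktime() == 0
    client_data   = share_input + owner_num + renew_secret + cancel_secret + expiration

    share_data    = containerdata + client_data
    assert expiration == '\x00(\xde\x80'                  # the literal used in the test
    assert struct.calcsize(">L32s32sL") == 72             # LEASE_SIZE in das/core.py
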
4393[jacp14 or so
4394wilcoxjg@gmail.com**20110713060346
4395 Ignore-this: 7026810f60879d65b525d450e43ff87a
4396] {
4397hunk ./src/allmydata/storage/backends/das/core.py 102
4398             for f in os.listdir(finalstoragedir):
4399                 if NUM_RE.match(f):
4400                     filename = os.path.join(finalstoragedir, f)
4401-                    yield ImmutableShare(filename, storageindex, f)
4402+                    yield ImmutableShare(filename, storageindex, int(f))
4403         except OSError:
4404             # Commonly caused by there being no shares at all.
4405             pass
4406hunk ./src/allmydata/storage/backends/null/core.py 25
4407     def set_storage_server(self, ss):
4408         self.ss = ss
4409 
4410+    def get_incoming(self, storageindex):
4411+        return set()
4412+
4413 class ImmutableShare:
4414     sharetype = "immutable"
4415 
4416hunk ./src/allmydata/storage/immutable.py 19
4417 
4418     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4419         self.ss = ss
4420-        self._max_size = max_size # don't allow the client to write more than this
4421+        self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
4422+
4423         self._canary = canary
4424         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4425         self.closed = False
4426hunk ./src/allmydata/test/test_backends.py 135
4427         mockopen.side_effect = call_open
4428         self.backend = DASCore(tempdir, expiration_policy)
4429         self.ss = StorageServer(testnodeid, self.backend)
4430-        self.ssinf = StorageServer(testnodeid, self.backend)
4431+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4432+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4433 
4434     @mock.patch('time.time')
4435     def test_write_share(self, mocktime):
4436hunk ./src/allmydata/test/test_backends.py 161
4437         # with the same si, until BucketWriter.remote_close() has been called.
4438         self.failIf(bsa)
4439 
4440-        # Write 'a' to shnum 0. Only tested together with close and read.
4441-        bs[0].remote_write(0, 'a')
4442-
4443         # Test allocated size.
4444         spaceint = self.ss.allocated_size()
4445         self.failUnlessReallyEqual(spaceint, 1)
4446hunk ./src/allmydata/test/test_backends.py 165
4447 
4448-        # XXX (3) Inspect final and fail unless there's nothing there.
4449+        # Write 'a' to shnum 0. Only tested together with close and read.
4450+        bs[0].remote_write(0, 'a')
4451+       
4452+        # Preclose: Inspect final, failUnless nothing there.
4453         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4454         bs[0].remote_close()
4455hunk ./src/allmydata/test/test_backends.py 171
4456-        # XXX (4a) Inspect final and fail unless share 0 is there.
4457-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4458-        #contents = sharesinfinal[0].read_share_data(0,999)
4459-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4460-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4461 
4462hunk ./src/allmydata/test/test_backends.py 172
4463-        # What happens when there's not enough space for the client's request?
4464-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4465+        # Postclose: (Omnibus) failUnless written data is in final.
4466+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4467+        contents = sharesinfinal[0].read_share_data(0,73)
4468+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4469 
4470hunk ./src/allmydata/test/test_backends.py 177
4471-        # Now test the allocated_size method.
4472-        # self.failIf(mockexists.called, mockexists.call_args_list)
4473-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4474-        #self.failIf(mockrename.called, mockrename.call_args_list)
4475-        #self.failIf(mockstat.called, mockstat.call_args_list)
4476+        # Cover interior of for share in get_shares loop.
4477+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4478+       
4479+    @mock.patch('time.time')
4480+    @mock.patch('allmydata.util.fileutil.get_available_space')
4481+    def test_out_of_space(self, mockget_available_space, mocktime):
4482+        mocktime.return_value = 0
4483+       
4484+        def call_get_available_space(dir, reserve):
4485+            return 0
4486+
4487+        mockget_available_space.side_effect = call_get_available_space
4488+       
4489+       
4490+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4491 
4492     @mock.patch('os.path.exists')
4493     @mock.patch('os.path.getsize')
4494hunk ./src/allmydata/test/test_backends.py 234
4495         bs = self.ss.remote_get_buckets('teststorage_index')
4496 
4497         self.failUnlessEqual(len(bs), 1)
4498-        b = bs['0']
4499+        b = bs[0]
4500         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4501         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4502         # If you try to read past the end you get the as much data as is there.
4503}
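
test_out_of_space above pins fileutil.get_available_space() to zero and allocates against the reserved_space=1 backend; the expectation (not yet asserted at this checkpoint) is that no bucket writers come back. The policy being probed can be summarised as a tiny stand-alone function (a simplification under assumed semantics, not code from the patch):

    def writable_sharenums(sharenums, allocated_size, remaining_space, alreadygot=()):
        """Which requested share numbers a server with remaining_space bytes
        free would actually create BucketWriters for."""
        accepted = set()
        budget = remaining_space
        for shnum in sorted(sharenums):
            if shnum in alreadygot:
                continue                       # already on disk, nothing to write
            if allocated_size <= budget:
                accepted.add(shnum)
                budget -= allocated_size       # reserve room for this writer
        return accepted

    assert writable_sharenums(set([0]), allocated_size=1, remaining_space=0) == set()
    assert writable_sharenums(set([0, 1]), allocated_size=1, remaining_space=1) == set([0])
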
4504[temporary work-in-progress patch to be unrecorded
4505zooko@zooko.com**20110714003008
4506 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4507 tidy up a few tests, work done in pair-programming with Zancas
4508] {
4509hunk ./src/allmydata/storage/backends/das/core.py 65
4510         self._clean_incomplete()
4511 
4512     def _clean_incomplete(self):
4513-        fileutil.rm_dir(self.incomingdir)
4514+        fileutil.rmtree(self.incomingdir)
4515         fileutil.make_dirs(self.incomingdir)
4516 
4517     def _setup_corruption_advisory(self):
4518hunk ./src/allmydata/storage/immutable.py 1
4519-import os, stat, struct, time
4520+import os, time
4521 
4522 from foolscap.api import Referenceable
4523 
4524hunk ./src/allmydata/storage/server.py 1
4525-import os, re, weakref, struct, time
4526+import os, weakref, struct, time
4527 
4528 from foolscap.api import Referenceable
4529 from twisted.application import service
4530hunk ./src/allmydata/storage/server.py 7
4531 
4532 from zope.interface import implements
4533-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4534+from allmydata.interfaces import RIStorageServer, IStatsProducer
4535 from allmydata.util import fileutil, idlib, log, time_format
4536 import allmydata # for __full_version__
4537 
4538hunk ./src/allmydata/storage/server.py 313
4539         self.add_latency("get", time.time() - start)
4540         return bucketreaders
4541 
4542-    def remote_get_incoming(self, storageindex):
4543-        incoming_share_set = self.backend.get_incoming(storageindex)
4544-        return incoming_share_set
4545-
4546     def get_leases(self, storageindex):
4547         """Provide an iterator that yields all of the leases attached to this
4548         bucket. Each lease is returned as a LeaseInfo instance.
4549hunk ./src/allmydata/test/test_backends.py 3
4550 from twisted.trial import unittest
4551 
4552+from twisted.python.filepath import FilePath
4553+
4554 from StringIO import StringIO
4555 
4556 from allmydata.test.common_util import ReallyEqualMixin
4557hunk ./src/allmydata/test/test_backends.py 38
4558 
4559 
4560 testnodeid = 'testnodeidxxxxxxxxxx'
4561-tempdir = 'teststoredir'
4562-basedir = os.path.join(tempdir, 'shares')
4563+storedir = 'teststoredir'
4564+storedirfp = FilePath(storedir)
4565+basedir = os.path.join(storedir, 'shares')
4566 baseincdir = os.path.join(basedir, 'incoming')
4567 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4568 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4569hunk ./src/allmydata/test/test_backends.py 53
4570                      'cutoff_date' : None,
4571                      'sharetypes' : None}
4572 
4573-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4574+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4575+    """ NullBackend is just for testing and executable documentation, so
4576+    this test is actually a test of StorageServer in which we're using
4577+    NullBackend as helper code for the test, rather than a test of
4578+    NullBackend. """
4579     def setUp(self):
4580         self.ss = StorageServer(testnodeid, backend=NullCore())
4581 
4582hunk ./src/allmydata/test/test_backends.py 62
4583     @mock.patch('os.mkdir')
4584+
4585     @mock.patch('__builtin__.open')
4586     @mock.patch('os.listdir')
4587     @mock.patch('os.path.isdir')
4588hunk ./src/allmydata/test/test_backends.py 69
4589     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4590         """ Write a new share. """
4591 
4592-        # Now begin the test.
4593         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4594         bs[0].remote_write(0, 'a')
4595         self.failIf(mockisdir.called)
4596hunk ./src/allmydata/test/test_backends.py 83
4597     @mock.patch('os.listdir')
4598     @mock.patch('os.path.isdir')
4599     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4600-        """ This tests whether a server instance can be constructed
4601-        with a filesystem backend. To pass the test, it has to use the
4602-        filesystem in only the prescribed ways. """
4603+        """ This tests whether a server instance can be constructed with a
4604+        filesystem backend. To pass the test, it mustn't use the filesystem
4605+        outside of its configured storedir. """
4606 
4607         def call_open(fname, mode):
4608hunk ./src/allmydata/test/test_backends.py 88
4609-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4610-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4611-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4612-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4613-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4614+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4615+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4616+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4617+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4618+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4619                 return StringIO()
4620             else:
4621hunk ./src/allmydata/test/test_backends.py 95
4622-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4623+                fnamefp = FilePath(fname)
4624+                self.failUnless(storedirfp in fnamefp.parents(),
4625+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4626         mockopen.side_effect = call_open
4627 
4628         def call_isdir(fname):
4629hunk ./src/allmydata/test/test_backends.py 101
4630-            if fname == os.path.join(tempdir,'shares'):
4631+            if fname == os.path.join(storedir, 'shares'):
4632                 return True
4633hunk ./src/allmydata/test/test_backends.py 103
4634-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4635+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4636                 return True
4637             else:
4638                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4639hunk ./src/allmydata/test/test_backends.py 109
4640         mockisdir.side_effect = call_isdir
4641 
4642+        mocklistdir.return_value = []
4643+
4644         def call_mkdir(fname, mode):
4645hunk ./src/allmydata/test/test_backends.py 112
4646-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4647             self.failUnlessEqual(0777, mode)
4648hunk ./src/allmydata/test/test_backends.py 113
4649-            if fname == tempdir:
4650-                return None
4651-            elif fname == os.path.join(tempdir,'shares'):
4652-                return None
4653-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4654-                return None
4655-            else:
4656-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4657+            self.failUnlessIn(fname,
4658+                              [storedir,
4659+                               os.path.join(storedir, 'shares'),
4660+                               os.path.join(storedir, 'shares', 'incoming')],
4661+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4662         mockmkdir.side_effect = call_mkdir
4663 
4664         # Now begin the test.
4665hunk ./src/allmydata/test/test_backends.py 121
4666-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4667+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4668 
4669         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4670 
4671hunk ./src/allmydata/test/test_backends.py 126
4672 
4673-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4674+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4675+    """ This tests both the StorageServer xyz """
4676     @mock.patch('__builtin__.open')
4677     def setUp(self, mockopen):
4678         def call_open(fname, mode):
4679hunk ./src/allmydata/test/test_backends.py 131
4680-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4681-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4682-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4683-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4684-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4685+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4686+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4687+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4688+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4689+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4690                 return StringIO()
4691             else:
4692                 _assert(False, "The tester code doesn't recognize this case.") 
4693hunk ./src/allmydata/test/test_backends.py 141
4694 
4695         mockopen.side_effect = call_open
4696-        self.backend = DASCore(tempdir, expiration_policy)
4697+        self.backend = DASCore(storedir, expiration_policy)
4698         self.ss = StorageServer(testnodeid, self.backend)
4699hunk ./src/allmydata/test/test_backends.py 143
4700-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4701+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4702         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4703 
4704     @mock.patch('time.time')
4705hunk ./src/allmydata/test/test_backends.py 147
4706-    def test_write_share(self, mocktime):
4707-        """ Write a new share. """
4708-        # Now begin the test.
4709+    def test_write_and_read_share(self, mocktime):
4710+        """
4711+        Write a new share, read it, and test the server's (and FS backend's)
4712+        handling of simultaneous and successive attempts to write the same
4713+        share.
4714+        """
4715 
4716         mocktime.return_value = 0
4717         # Inspect incoming and fail unless it's empty.
4718hunk ./src/allmydata/test/test_backends.py 159
4719         incomingset = self.ss.backend.get_incoming('teststorage_index')
4720         self.failUnlessReallyEqual(incomingset, set())
4721         
4722-        # Among other things, populate incoming with the sharenum: 0.
4723+        # Populate incoming with the sharenum: 0.
4724         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4725 
4726         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4727hunk ./src/allmydata/test/test_backends.py 163
4728-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4729+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4730         
4731hunk ./src/allmydata/test/test_backends.py 165
4732-        # Attempt to create a second share writer with the same share.
4733+        # Attempt to create a second share writer with the same sharenum.
4734         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4735 
4736         # Show that no sharewriter results from a remote_allocate_buckets
4737hunk ./src/allmydata/test/test_backends.py 169
4738-        # with the same si, until BucketWriter.remote_close() has been called.
4739+        # with the same si and sharenum, until BucketWriter.remote_close()
4740+        # has been called.
4741         self.failIf(bsa)
4742 
4743         # Test allocated size.
4744hunk ./src/allmydata/test/test_backends.py 187
4745         # Postclose: (Omnibus) failUnless written data is in final.
4746         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4747         contents = sharesinfinal[0].read_share_data(0,73)
4748-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4749+        self.failUnlessReallyEqual(contents, client_data)
4750 
4751hunk ./src/allmydata/test/test_backends.py 189
4752-        # Cover interior of for share in get_shares loop.
4753-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4754+        # Exercise the case that the share we're asking to allocate is
4755+        # already (completely) uploaded.
4756+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4757         
4758     @mock.patch('time.time')
4759     @mock.patch('allmydata.util.fileutil.get_available_space')
4760hunk ./src/allmydata/test/test_backends.py 210
4761     @mock.patch('os.path.getsize')
4762     @mock.patch('__builtin__.open')
4763     @mock.patch('os.listdir')
4764-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4765+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4766         """ This tests whether the code correctly finds and reads
4767         shares written out by old (Tahoe-LAFS <= v1.8.2)
4768         servers. There is a similar test in test_download, but that one
4769hunk ./src/allmydata/test/test_backends.py 219
4770         StorageServer object. """
4771 
4772         def call_listdir(dirname):
4773-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4774+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4775             return ['0']
4776 
4777         mocklistdir.side_effect = call_listdir
4778hunk ./src/allmydata/test/test_backends.py 226
4779 
4780         def call_open(fname, mode):
4781             self.failUnlessReallyEqual(fname, sharefname)
4782-            self.failUnless('r' in mode, mode)
4783+            self.failUnlessEqual(mode[0], 'r', mode)
4784             self.failUnless('b' in mode, mode)
4785 
4786             return StringIO(share_data)
4787hunk ./src/allmydata/test/test_backends.py 268
4788         filesystem in only the prescribed ways. """
4789 
4790         def call_open(fname, mode):
4791-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4792-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4793-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4794-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4795-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4796+            if fname == os.path.join(storedir,'bucket_counter.state'):
4797+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4798+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4799+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4800+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4801                 return StringIO()
4802             else:
4803                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4804hunk ./src/allmydata/test/test_backends.py 279
4805         mockopen.side_effect = call_open
4806 
4807         def call_isdir(fname):
4808-            if fname == os.path.join(tempdir,'shares'):
4809+            if fname == os.path.join(storedir,'shares'):
4810                 return True
4811hunk ./src/allmydata/test/test_backends.py 281
4812-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4813+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4814                 return True
4815             else:
4816                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4817hunk ./src/allmydata/test/test_backends.py 290
4818         def call_mkdir(fname, mode):
4819             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4820             self.failUnlessEqual(0777, mode)
4821-            if fname == tempdir:
4822+            if fname == storedir:
4823                 return None
4824hunk ./src/allmydata/test/test_backends.py 292
4825-            elif fname == os.path.join(tempdir,'shares'):
4826+            elif fname == os.path.join(storedir,'shares'):
4827                 return None
4828hunk ./src/allmydata/test/test_backends.py 294
4829-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4830+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4831                 return None
4832             else:
4833                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4834hunk ./src/allmydata/util/fileutil.py 5
4835 Futz with files like a pro.
4836 """
4837 
4838-import sys, exceptions, os, stat, tempfile, time, binascii
4839+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4840 
4841 from twisted.python import log
4842 
4843hunk ./src/allmydata/util/fileutil.py 186
4844             raise tx
4845         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4846 
4847-def rm_dir(dirname):
4848+def rmtree(dirname):
4849     """
4850     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4851     already gone, do nothing and return without raising an exception.  If this
4852hunk ./src/allmydata/util/fileutil.py 205
4853             else:
4854                 remove(fullname)
4855         os.rmdir(dirname)
4856-    except Exception, le:
4857-        # Ignore "No such file or directory"
4858-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4859+    except EnvironmentError, le:
4860+        # Ignore "No such file or directory", collect any other exception.
4861+        if le.args[0] != errno.ENOENT and le.args[0] != 3:
4862             excs.append(le)
4863hunk ./src/allmydata/util/fileutil.py 209
4864+    except Exception, le:
4865+        excs.append(le)
4866 
4867     # Okay, now we've recursively removed everything, ignoring any "No
4868     # such file or directory" errors, and collecting any other errors.
4869hunk ./src/allmydata/util/fileutil.py 222
4870             raise OSError, "Failed to remove dir for unknown reason."
4871         raise OSError, excs
4872 
4873+def rm_dir(dirname):
4874+    # Renamed to be like shutil.rmtree and unlike rmdir.
4875+    return rmtree(dirname)
4876 
4877 def remove_if_possible(f):
4878     try:
4879}
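
The fileutil.py hunk at the end of the patch above renames rm_dir() to rmtree() and moves toward filtering errors by errno rather than matching OSError args by hand. For comparison, a minimal standalone helper with the same "already gone is success" behaviour (remove_tree_if_present is a made-up name for this sketch, not the patched function):

    import errno, shutil

    def remove_tree_if_present(dirname):
        # Recursively delete dirname; a directory that is already absent
        # counts as success, any other failure propagates.
        try:
            shutil.rmtree(dirname)
        except OSError, e:
            if e.errno != errno.ENOENT:
                raise
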
4880[work in progress intended to be unrecorded and never committed to trunk
4881zooko@zooko.com**20110714212139
4882 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4883 switch from os.path.join to filepath
4884 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4885 
4886] {
4887hunk ./src/allmydata/test/test_backends.py 3
4888 from twisted.trial import unittest
4889 
4890-from twisted.path.filepath import FilePath
4891+from twisted.python.filepath import FilePath
4892 
4893 from StringIO import StringIO
4894 
4895hunk ./src/allmydata/test/test_backends.py 10
4896 from allmydata.test.common_util import ReallyEqualMixin
4897 from allmydata.util.assertutil import _assert
4898 
4899-import mock, os
4900+import mock
4901 
4902 # This is the code that we're going to be testing.
4903 from allmydata.storage.server import StorageServer
4904hunk ./src/allmydata/test/test_backends.py 25
4905 shareversionnumber = '\x00\x00\x00\x01'
4906 sharedatalength = '\x00\x00\x00\x01'
4907 numberofleases = '\x00\x00\x00\x01'
4908+
4909 shareinputdata = 'a'
4910 ownernumber = '\x00\x00\x00\x00'
4911 renewsecret  = 'x'*32
4912hunk ./src/allmydata/test/test_backends.py 39
4913 
4914 
4915 testnodeid = 'testnodeidxxxxxxxxxx'
4916-storedir = 'teststoredir'
4917-storedirfp = FilePath(storedir)
4918-basedir = os.path.join(storedir, 'shares')
4919-baseincdir = os.path.join(basedir, 'incoming')
4920-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4921-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4922-shareincomingname = os.path.join(sharedirincomingname, '0')
4923-sharefname = os.path.join(sharedirfinalname, '0')
4924+
4925+class TestFilesMixin(unittest.TestCase):
4926+    def setUp(self):
4927+        self.storedir = FilePath('teststoredir')
4928+        self.basedir = self.storedir.child('shares')
4929+        self.baseincdir = self.basedir.child('incoming')
4930+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4931+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4932+        self.shareincomingname = self.sharedirincomingname.child('0')
4933+        self.sharefname = self.sharedirfinalname.child('0')
4934+
4935+    def call_open(self, fname, mode):
4936+        fnamefp = FilePath(fname)
4937+        if fnamefp == self.storedir.child('bucket_counter.state'):
4938+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4939+        elif fnamefp == self.storedir.child('lease_checker.state'):
4940+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4941+        elif fnamefp == self.storedir.child('lease_checker.history'):
4942+            return StringIO()
4943+        else:
4944+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4945+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4946+
4947+    def call_isdir(self, fname):
4948+        fnamefp = FilePath(fname)
4949+        if fnamefp == self.storedir.child('shares'):
4950+            return True
4951+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4952+            return True
4953+        else:
4954+            self.failUnless(self.storedir in fnamefp.parents(),
4955+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4956+
4957+    def call_mkdir(self, fname, mode):
4958+        self.failUnlessEqual(0777, mode)
4959+        fnamefp = FilePath(fname)
4960+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4961+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
4962+
4963+
4964+    @mock.patch('os.mkdir')
4965+    @mock.patch('__builtin__.open')
4966+    @mock.patch('os.listdir')
4967+    @mock.patch('os.path.isdir')
4968+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4969+        mocklistdir.return_value = []
4970+        mockmkdir.side_effect = self.call_mkdir
4971+        mockisdir.side_effect = self.call_isdir
4972+        mockopen.side_effect = self.call_open
4973+        mocklistdir.return_value = []
4974+       
4975+        test_func()
4976+       
4977+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4978 
4979 expiration_policy = {'enabled' : False,
4980                      'mode' : 'age',
4981hunk ./src/allmydata/test/test_backends.py 123
4982         self.failIf(mockopen.called)
4983         self.failIf(mockmkdir.called)
4984 
4985-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4986-    @mock.patch('time.time')
4987-    @mock.patch('os.mkdir')
4988-    @mock.patch('__builtin__.open')
4989-    @mock.patch('os.listdir')
4990-    @mock.patch('os.path.isdir')
4991-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4992+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4993+    def test_create_server_fs_backend(self):
4994         """ This tests whether a server instance can be constructed with a
4995         filesystem backend. To pass the test, it mustn't use the filesystem
4996         outside of its configured storedir. """
4997hunk ./src/allmydata/test/test_backends.py 129
4998 
4999-        def call_open(fname, mode):
5000-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5001-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5002-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5003-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5004-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5005-                return StringIO()
5006-            else:
5007-                fnamefp = FilePath(fname)
5008-                self.failUnless(storedirfp in fnamefp.parents(),
5009-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5010-        mockopen.side_effect = call_open
5011+        def _f():
5012+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5013 
5014hunk ./src/allmydata/test/test_backends.py 132
5015-        def call_isdir(fname):
5016-            if fname == os.path.join(storedir, 'shares'):
5017-                return True
5018-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5019-                return True
5020-            else:
5021-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5022-        mockisdir.side_effect = call_isdir
5023-
5024-        mocklistdir.return_value = []
5025-
5026-        def call_mkdir(fname, mode):
5027-            self.failUnlessEqual(0777, mode)
5028-            self.failUnlessIn(fname,
5029-                              [storedir,
5030-                               os.path.join(storedir, 'shares'),
5031-                               os.path.join(storedir, 'shares', 'incoming')],
5032-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5033-        mockmkdir.side_effect = call_mkdir
5034-
5035-        # Now begin the test.
5036-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5037-
5038-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5039+        self._help_test_stay_in_your_subtree(_f)
5040 
5041 
5042 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5043}
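
The TestFilesMixin hunks in the patch above replace per-path whitelists with a single containment rule: every path the code under test touches must be the storage directory itself or live beneath it. A rough standalone sketch of that rule using twisted.python.filepath (STOREDIR, guarded_mkdir and run_with_guarded_mkdir are invented for the example):

    import mock
    from twisted.python.filepath import FilePath

    STOREDIR = FilePath('teststoredir')

    def guarded_mkdir(fname, mode):
        # Fail loudly if the code under test strays outside its subtree.
        fp = FilePath(fname)
        assert STOREDIR == fp or STOREDIR in fp.parents(), \
            "tried to mkdir %r outside of %r" % (fname, STOREDIR.path)

    def run_with_guarded_mkdir(func):
        # Only os.mkdir is intercepted here; the patch applies the same
        # guard to open, listdir, isdir and stat as well.
        with mock.patch('os.mkdir', side_effect=guarded_mkdir):
            return func()
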
5044[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5045zooko@zooko.com**20110715191500
5046 Ignore-this: af33336789041800761e80510ea2f583
5047 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete converstion to filepath. The latter has still a lot of work to go.
5048] {
5049hunk ./src/allmydata/storage/backends/das/core.py 59
5050                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5051                         umid="0wZ27w", level=log.UNUSUAL)
5052 
5053-        self.sharedir = os.path.join(self.storedir, "shares")
5054-        fileutil.make_dirs(self.sharedir)
5055-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5056+        self.sharedir = self.storedir.child("shares")
5057+        fileutil.fp_make_dirs(self.sharedir)
5058+        self.incomingdir = self.sharedir.child('incoming')
5059         self._clean_incomplete()
5060 
5061     def _clean_incomplete(self):
5062hunk ./src/allmydata/storage/backends/das/core.py 65
5063-        fileutil.rmtree(self.incomingdir)
5064-        fileutil.make_dirs(self.incomingdir)
5065+        fileutil.fp_remove(self.incomingdir)
5066+        fileutil.fp_make_dirs(self.incomingdir)
5067 
5068     def _setup_corruption_advisory(self):
5069         # we don't actually create the corruption-advisory dir until necessary
5070hunk ./src/allmydata/storage/backends/das/core.py 70
5071-        self.corruption_advisory_dir = os.path.join(self.storedir,
5072-                                                    "corruption-advisories")
5073+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5074 
5075     def _setup_bucket_counter(self):
5076hunk ./src/allmydata/storage/backends/das/core.py 73
5077-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5078+        statefname = self.storedir.child("bucket_counter.state")
5079         self.bucket_counter = FSBucketCountingCrawler(statefname)
5080         self.bucket_counter.setServiceParent(self)
5081 
5082hunk ./src/allmydata/storage/backends/das/core.py 78
5083     def _setup_lease_checkerf(self, expiration_policy):
5084-        statefile = os.path.join(self.storedir, "lease_checker.state")
5085-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5086+        statefile = self.storedir.child("lease_checker.state")
5087+        historyfile = self.storedir.child("lease_checker.history")
5088         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5089         self.lease_checker.setServiceParent(self)
5090 
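
These core.py hunks are the mechanical part of the os.path.join-to-FilePath conversion named in this patch's description. Both styles build the same path; a tiny illustration (not code from the patch):

    import os.path
    from twisted.python.filepath import FilePath

    old_style = os.path.join('teststoredir', 'shares', 'incoming')
    new_style = FilePath('teststoredir').child('shares').child('incoming')

    # FilePath normalizes to an absolute path internally, so compare on
    # equal terms.
    assert new_style.path == os.path.abspath(old_style)
    # child() also rejects segments containing path separators, which is
    # part of its appeal over bare string joining.
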
5091hunk ./src/allmydata/storage/backends/das/core.py 83
5092-    def get_incoming(self, storageindex):
5093+    def get_incoming_shnums(self, storageindex):
5094         """Return the set of incoming shnums."""
5095         try:
5096hunk ./src/allmydata/storage/backends/das/core.py 86
5097-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5098-            incominglist = os.listdir(incomingsharesdir)
5099-            incomingshnums = [int(x) for x in incominglist]
5100-            return set(incomingshnums)
5101-        except OSError:
5102-            # XXX I'd like to make this more specific. If there are no shares at all.
5103-            return set()
5104+           
5105+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5106+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5107+            return frozenset(incomingshnums)
5108+        except UnlistableError:
5109+            # There is no shares directory at all.
5110+            return frozenset()
5111             
5112     def get_shares(self, storageindex):
5113         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5114hunk ./src/allmydata/storage/backends/das/core.py 96
5115-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5116+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5117         try:
5118hunk ./src/allmydata/storage/backends/das/core.py 98
5119-            for f in os.listdir(finalstoragedir):
5120-                if NUM_RE.match(f):
5121-                    filename = os.path.join(finalstoragedir, f)
5122-                    yield ImmutableShare(filename, storageindex, int(f))
5123-        except OSError:
5124-            # Commonly caused by there being no shares at all.
5125+            for f in finalstoragedir.listdir():
5126+                if NUM_RE.match(f.basename):
5127+                    yield ImmutableShare(f, storageindex, int(f))
5128+        except UnlistableError:
5129+            # There is no shares directory at all.
5130             pass
5131         
5132     def get_available_space(self):
5133hunk ./src/allmydata/storage/backends/das/core.py 149
5134 # then the value stored in this field will be the actual share data length
5135 # modulo 2**32.
5136 
5137-class ImmutableShare:
5138+class ImmutableShare(object):
5139     LEASE_SIZE = struct.calcsize(">L32s32sL")
5140     sharetype = "immutable"
5141 
5142hunk ./src/allmydata/storage/backends/das/core.py 166
5143         if create:
5144             # touch the file, so later callers will see that we're working on
5145             # it. Also construct the metadata.
5146-            assert not os.path.exists(self.finalhome)
5147-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5148+            assert not finalhome.exists()
5149+            fp_make_dirs(self.incominghome)
5150             f = open(self.incominghome, 'wb')
5151             # The second field -- the four-byte share data length -- is no
5152             # longer used as of Tahoe v1.3.0, but we continue to write it in
5153hunk ./src/allmydata/storage/backends/das/core.py 316
5154         except IndexError:
5155             self.add_lease(lease_info)
5156 
5157-
5158     def cancel_lease(self, cancel_secret):
5159         """Remove a lease with the given cancel_secret. If the last lease is
5160         cancelled, the file will be removed. Return the number of bytes that
5161hunk ./src/allmydata/storage/common.py 19
5162 def si_a2b(ascii_storageindex):
5163     return base32.a2b(ascii_storageindex)
5164 
5165-def storage_index_to_dir(storageindex):
5166+def storage_index_to_dir(startfp, storageindex):
5167     sia = si_b2a(storageindex)
5168     return os.path.join(sia[:2], sia)
5169hunk ./src/allmydata/storage/server.py 210
5170 
5171         # fill incoming with all shares that are incoming use a set operation
5172         # since there's no need to operate on individual pieces
5173-        incoming = self.backend.get_incoming(storageindex)
5174+        incoming = self.backend.get_incoming_shnums(storageindex)
5175 
5176         for shnum in ((sharenums - alreadygot) - incoming):
5177             if (not limited) or (remaining_space >= max_space_per_bucket):
5178hunk ./src/allmydata/test/test_backends.py 5
5179 
5180 from twisted.python.filepath import FilePath
5181 
5182+from allmydata.util.log import msg
5183+
5184 from StringIO import StringIO
5185 
5186 from allmydata.test.common_util import ReallyEqualMixin
5187hunk ./src/allmydata/test/test_backends.py 42
5188 
5189 testnodeid = 'testnodeidxxxxxxxxxx'
5190 
5191-class TestFilesMixin(unittest.TestCase):
5192-    def setUp(self):
5193-        self.storedir = FilePath('teststoredir')
5194-        self.basedir = self.storedir.child('shares')
5195-        self.baseincdir = self.basedir.child('incoming')
5196-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5197-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5198-        self.shareincomingname = self.sharedirincomingname.child('0')
5199-        self.sharefname = self.sharedirfinalname.child('0')
5200+class MockStat:
5201+    def __init__(self):
5202+        self.st_mode = None
5203 
5204hunk ./src/allmydata/test/test_backends.py 46
5205+class MockFiles(unittest.TestCase):
5206+    """ I simulate a filesystem that the code under test can use. I flag the
5207+    code under test if it reads or writes outside of its prescribed
5208+    subtree. I simulate just the parts of the filesystem that the current
5209+    implementation of DAS backend needs. """
5210     def call_open(self, fname, mode):
5211         fnamefp = FilePath(fname)
5212hunk ./src/allmydata/test/test_backends.py 53
5213+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5214+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5215+
5216         if fnamefp == self.storedir.child('bucket_counter.state'):
5217             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5218         elif fnamefp == self.storedir.child('lease_checker.state'):
5219hunk ./src/allmydata/test/test_backends.py 61
5220             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5221         elif fnamefp == self.storedir.child('lease_checker.history'):
5222+            # This is separated out from the else clause below just because
5223+            # we know this particular file is going to be used by the
5224+            # current implementation of DAS backend, and we might want to
5225+            # use this information in this test in the future...
5226             return StringIO()
5227         else:
5228hunk ./src/allmydata/test/test_backends.py 67
5229-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5230-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5231+            # Anything else you open inside your subtree appears to be an
5232+            # empty file.
5233+            return StringIO()
5234 
5235     def call_isdir(self, fname):
5236         fnamefp = FilePath(fname)
5237hunk ./src/allmydata/test/test_backends.py 73
5238-        if fnamefp == self.storedir.child('shares'):
5239+        return fnamefp.isdir()
5240+
5241+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5242+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5243+
5244+        # The first two cases are separate from the else clause below just
5245+        # because we know that the current implementation of the DAS backend
5246+        # inspects these two directories and we might want to make use of
5247+        # that information in the tests in the future...
5248+        if fnamefp == self.storedir.child('shares'):
5249             return True
5250hunk ./src/allmydata/test/test_backends.py 84
5251-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5252+        elif fnamefp == self.storedir.child('shares').child('incoming'):
5253             return True
5254         else:
5255hunk ./src/allmydata/test/test_backends.py 87
5256-            self.failUnless(self.storedir in fnamefp.parents(),
5257-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5258+            # Anything else you open inside your subtree appears to be a
5259+            # directory.
5260+            return True
5261 
5262     def call_mkdir(self, fname, mode):
5263hunk ./src/allmydata/test/test_backends.py 92
5264-        self.failUnlessEqual(0777, mode)
5265         fnamefp = FilePath(fname)
5266         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5267                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5268hunk ./src/allmydata/test/test_backends.py 95
5269+        self.failUnlessEqual(0777, mode)
5270 
5271hunk ./src/allmydata/test/test_backends.py 97
5272+    def call_listdir(self, fname):
5273+        fnamefp = FilePath(fname)
5274+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5275+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5276 
5277hunk ./src/allmydata/test/test_backends.py 102
5278-    @mock.patch('os.mkdir')
5279-    @mock.patch('__builtin__.open')
5280-    @mock.patch('os.listdir')
5281-    @mock.patch('os.path.isdir')
5282-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5283-        mocklistdir.return_value = []
5284+    def call_stat(self, fname):
5285+        fnamefp = FilePath(fname)
5286+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5287+                        "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5288+
5289+        msg("%s.call_stat(%s)" % (self, fname,))
5290+        mstat = MockStat()
5291+        mstat.st_mode = 16893 # a directory
5292+        return mstat
5293+
5294+    def setUp(self):
5295+        msg( "%s.setUp()" % (self,))
5296+        self.storedir = FilePath('teststoredir')
5297+        self.basedir = self.storedir.child('shares')
5298+        self.baseincdir = self.basedir.child('incoming')
5299+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5300+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5301+        self.shareincomingname = self.sharedirincomingname.child('0')
5302+        self.sharefname = self.sharedirfinalname.child('0')
5303+
5304+        self.mocklistdirp = mock.patch('os.listdir')
5305+        mocklistdir = self.mocklistdirp.__enter__()
5306+        mocklistdir.side_effect = self.call_listdir
5307+
5308+        self.mockmkdirp = mock.patch('os.mkdir')
5309+        mockmkdir = self.mockmkdirp.__enter__()
5310         mockmkdir.side_effect = self.call_mkdir
5311hunk ./src/allmydata/test/test_backends.py 129
5312+
5313+        self.mockisdirp = mock.patch('os.path.isdir')
5314+        mockisdir = self.mockisdirp.__enter__()
5315         mockisdir.side_effect = self.call_isdir
5316hunk ./src/allmydata/test/test_backends.py 133
5317+
5318+        self.mockopenp = mock.patch('__builtin__.open')
5319+        mockopen = self.mockopenp.__enter__()
5320         mockopen.side_effect = self.call_open
5321hunk ./src/allmydata/test/test_backends.py 137
5322-        mocklistdir.return_value = []
5323-       
5324-        test_func()
5325-       
5326-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5327+
5328+        self.mockstatp = mock.patch('os.stat')
5329+        mockstat = self.mockstatp.__enter__()
5330+        mockstat.side_effect = self.call_stat
5331+
5332+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5333+        mockfpstat = self.mockfpstatp.__enter__()
5334+        mockfpstat.side_effect = self.call_stat
5335+
5336+    def tearDown(self):
5337+        msg( "%s.tearDown()" % (self,))
5338+        self.mockfpstatp.__exit__()
5339+        self.mockstatp.__exit__()
5340+        self.mockopenp.__exit__()
5341+        self.mockisdirp.__exit__()
5342+        self.mockmkdirp.__exit__()
5343+        self.mocklistdirp.__exit__()
5344 
5345 expiration_policy = {'enabled' : False,
5346                      'mode' : 'age',
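
The setUp()/tearDown() pair above drives its mock.patch objects by calling __enter__() and __exit__() directly. The patcher start()/stop() API gives the same effect; a compact sketch of that variant (the class name and the exact target list are illustrative):

    import mock

    class PatchingFixtureSketch(object):
        def setUp(self):
            self._patchers = [mock.patch(t) for t in
                              ('os.listdir', 'os.mkdir', 'os.path.isdir',
                               '__builtin__.open', 'os.stat')]
            self.mocks = [p.start() for p in self._patchers]

        def tearDown(self):
            # Stop in reverse order so nested patches unwind cleanly.
            for p in reversed(self._patchers):
                p.stop()
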
5347hunk ./src/allmydata/test/test_backends.py 184
5348         self.failIf(mockopen.called)
5349         self.failIf(mockmkdir.called)
5350 
5351-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5352+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5353     def test_create_server_fs_backend(self):
5354         """ This tests whether a server instance can be constructed with a
5355         filesystem backend. To pass the test, it mustn't use the filesystem
5356hunk ./src/allmydata/test/test_backends.py 190
5357         outside of its configured storedir. """
5358 
5359-        def _f():
5360-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5361+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5362 
5363hunk ./src/allmydata/test/test_backends.py 192
5364-        self._help_test_stay_in_your_subtree(_f)
5365-
5366-
5367-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5368-    """ This tests both the StorageServer xyz """
5369-    @mock.patch('__builtin__.open')
5370-    def setUp(self, mockopen):
5371-        def call_open(fname, mode):
5372-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5373-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5374-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5375-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5376-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5377-                return StringIO()
5378-            else:
5379-                _assert(False, "The tester code doesn't recognize this case.") 
5380-
5381-        mockopen.side_effect = call_open
5382-        self.backend = DASCore(storedir, expiration_policy)
5383-        self.ss = StorageServer(testnodeid, self.backend)
5384-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5385-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5386+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5387+    """ This tests both the StorageServer and the DAS backend together. """
5388+    def setUp(self):
5389+        MockFiles.setUp(self)
5390+        try:
5391+            self.backend = DASCore(self.storedir, expiration_policy)
5392+            self.ss = StorageServer(testnodeid, self.backend)
5393+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5394+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5395+        except:
5396+            MockFiles.tearDown(self)
5397+            raise
5398 
5399     @mock.patch('time.time')
5400     def test_write_and_read_share(self, mocktime):
5401hunk ./src/allmydata/util/fileutil.py 8
5402 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5403 
5404 from twisted.python import log
5405+from twisted.python.filepath import UnlistableError
5406 
5407 from pycryptopp.cipher.aes import AES
5408 
5409hunk ./src/allmydata/util/fileutil.py 187
5410             raise tx
5411         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5412 
5413+def fp_make_dirs(dirfp):
5414+    """
5415+    An idempotent version of FilePath.makedirs().  If the dir already
5416+    exists, do nothing and return without raising an exception.  If this
5417+    call creates the dir, return without raising an exception.  If there is
5418+    an error that prevents creation or if the directory gets deleted after
5419+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5420+    exists, raise an exception.
5421+    """
5422+    log.msg( "xxx 0 %s" % (dirfp,))
5423+    tx = None
5424+    try:
5425+        dirfp.makedirs()
5426+    except OSError, x:
5427+        tx = x
5428+
5429+    if not dirfp.isdir():
5430+        if tx:
5431+            raise tx
5432+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5433+
5434 def rmtree(dirname):
5435     """
5436     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5437hunk ./src/allmydata/util/fileutil.py 244
5438             raise OSError, "Failed to remove dir for unknown reason."
5439         raise OSError, excs
5440 
5441+def fp_remove(dirfp):
5442+    try:
5443+        dirfp.remove()
5444+    except UnlistableError, e:
5445+        if e.originalException.errno != errno.ENOENT:
5446+            raise
5447+
5448 def rm_dir(dirname):
5449     # Renamed to be like shutil.rmtree and unlike rmdir.
5450     return rmtree(dirname)
5451}
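
fp_make_dirs() and fp_remove() in the fileutil.py hunks above follow the same idempotence convention as the existing string-path helpers: "already in the desired state" is not an error. A standalone sketch of that convention on FilePath objects (the helper names are invented, and the error handling is deliberately simpler than the patched versions):

    import errno
    from twisted.python.filepath import FilePath

    def ensure_dir(dirfp):
        # Create dirfp and any missing parents; an existing directory is fine.
        try:
            dirfp.makedirs()
        except OSError, e:
            if e.errno != errno.EEXIST:
                raise

    def remove_if_present(dirfp):
        # Delete dirfp, recursively if it is a directory; absence is fine.
        try:
            dirfp.remove()
        except OSError, e:
            if e.errno != errno.ENOENT:
                raise

    # e.g. ensure_dir(FilePath('teststoredir').child('shares'))
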
5452[another temporary patch for sharing work-in-progress
5453zooko@zooko.com**20110720055918
5454 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5455 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5456 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5457 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
5458 
5459] {
5460hunk ./src/allmydata/storage/backends/das/core.py 5
5461 
5462 from allmydata.interfaces import IStorageBackend
5463 from allmydata.storage.backends.base import Backend
5464-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5465+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5466 from allmydata.util.assertutil import precondition
5467 
5468 #from foolscap.api import Referenceable
5469hunk ./src/allmydata/storage/backends/das/core.py 10
5470 from twisted.application import service
5471+from twisted.python.filepath import UnlistableError
5472 
5473 from zope.interface import implements
5474 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5475hunk ./src/allmydata/storage/backends/das/core.py 17
5476 from allmydata.util import fileutil, idlib, log, time_format
5477 import allmydata # for __full_version__
5478 
5479-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5480-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5481+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5482+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5483 from allmydata.storage.lease import LeaseInfo
5484 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5485      create_mutable_sharefile
5486hunk ./src/allmydata/storage/backends/das/core.py 41
5487 # $SHARENUM matches this regex:
5488 NUM_RE=re.compile("^[0-9]+$")
5489 
5490+def is_num(fp):
5491+    return NUM_RE.match(fp.basename())
5492+
5493 class DASCore(Backend):
5494     implements(IStorageBackend)
5495     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5496hunk ./src/allmydata/storage/backends/das/core.py 58
5497         self.storedir = storedir
5498         self.readonly = readonly
5499         self.reserved_space = int(reserved_space)
5500-        if self.reserved_space:
5501-            if self.get_available_space() is None:
5502-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5503-                        umid="0wZ27w", level=log.UNUSUAL)
5504-
5505         self.sharedir = self.storedir.child("shares")
5506         fileutil.fp_make_dirs(self.sharedir)
5507         self.incomingdir = self.sharedir.child('incoming')
5508hunk ./src/allmydata/storage/backends/das/core.py 62
5509         self._clean_incomplete()
5510+        if self.reserved_space and (self.get_available_space() is None):
5511+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5512+                    umid="0wZ27w", level=log.UNUSUAL)
5513+
5514 
5515     def _clean_incomplete(self):
5516         fileutil.fp_remove(self.incomingdir)
5517hunk ./src/allmydata/storage/backends/das/core.py 87
5518         self.lease_checker.setServiceParent(self)
5519 
5520     def get_incoming_shnums(self, storageindex):
5521-        """Return the set of incoming shnums."""
5522+        """ Return a frozenset of the shnums (as ints) of incoming shares. """
5523+        incomingdir = si_dir(self.incomingdir, storageindex)
5524         try:
5525hunk ./src/allmydata/storage/backends/das/core.py 90
5526-           
5527-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5528-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5529-            return frozenset(incomingshnums)
5530+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5531+            shnums = [ int(fp.basename) for fp in childfps ]
5532+            return frozenset(shnums)
5533         except UnlistableError:
5534             # There is no shares directory at all.
5535             return frozenset()
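
get_incoming_shnums() above now derives the share numbers from the FilePath children of the per-storage-index incoming directory, and treats a missing directory as "no incoming shares". A self-contained sketch of that pattern (numeric_children is an invented name):

    import re
    from twisted.python.filepath import FilePath, UnlistableError

    NUM_RE = re.compile("^[0-9]+$")

    def numeric_children(dirfp):
        # frozenset of the integer-named children of dirfp; an absent or
        # unlistable directory counts as empty.
        try:
            return frozenset(int(fp.basename())
                             for fp in dirfp.children()
                             if NUM_RE.match(fp.basename()))
        except UnlistableError:
            return frozenset()

    # e.g. numeric_children(FilePath('incomingdir'))
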
5536hunk ./src/allmydata/storage/backends/das/core.py 98
5537             
5538     def get_shares(self, storageindex):
5539-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5540+        """ Generate ImmutableShare objects for shares we have for this
5541+        storageindex. ("Shares we have" means completed ones, excluding
5542+        incoming ones.)"""
5543         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5544         try:
5545hunk ./src/allmydata/storage/backends/das/core.py 103
5546-            for f in finalstoragedir.listdir():
5547-                if NUM_RE.match(f.basename):
5548-                    yield ImmutableShare(f, storageindex, int(f))
5549+            for fp in finalstoragedir.children():
5550+                if is_num(fp):
5551+                    yield ImmutableShare(fp, storageindex)
5552         except UnlistableError:
5553             # There is no shares directory at all.
5554             pass
5555hunk ./src/allmydata/storage/backends/das/core.py 116
5556         return fileutil.get_available_space(self.storedir, self.reserved_space)
5557 
5558     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5559-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5560-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5561+        finalhome = si_dir(self.sharedir, storageindex).child(str(shnum))
5562+        incominghome = si_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5563         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5564         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5565         return bw
5566hunk ./src/allmydata/storage/backends/das/expirer.py 50
5567     slow_start = 360 # wait 6 minutes after startup
5568     minimum_cycle_time = 12*60*60 # not more than twice per day
5569 
5570-    def __init__(self, statefile, historyfile, expiration_policy):
5571-        self.historyfile = historyfile
5572+    def __init__(self, statefile, historyfp, expiration_policy):
5573+        self.historyfp = historyfp
5574         self.expiration_enabled = expiration_policy['enabled']
5575         self.mode = expiration_policy['mode']
5576         self.override_lease_duration = None
5577hunk ./src/allmydata/storage/backends/das/expirer.py 80
5578             self.state["cycle-to-date"].setdefault(k, so_far[k])
5579 
5580         # initialize history
5581-        if not os.path.exists(self.historyfile):
5582+        if not self.historyfp.exists():
5583             history = {} # cyclenum -> dict
5584hunk ./src/allmydata/storage/backends/das/expirer.py 82
5585-            f = open(self.historyfile, "wb")
5586-            pickle.dump(history, f)
5587-            f.close()
5588+            self.historyfp.setContent(pickle.dumps(history))
5589 
5590     def create_empty_cycle_dict(self):
5591         recovered = self.create_empty_recovered_dict()
5592hunk ./src/allmydata/storage/backends/das/expirer.py 305
5593         # copy() needs to become a deepcopy
5594         h["space-recovered"] = s["space-recovered"].copy()
5595 
5596-        history = pickle.load(open(self.historyfile, "rb"))
5597+        history = pickle.load(self.historyfp.getContent())
5598         history[cycle] = h
5599         while len(history) > 10:
5600             oldcycles = sorted(history.keys())
5601hunk ./src/allmydata/storage/backends/das/expirer.py 310
5602             del history[oldcycles[0]]
5603-        f = open(self.historyfile, "wb")
5604-        pickle.dump(history, f)
5605-        f.close()
5606+        self.historyfp.setContent(pickle.dumps(history))
5607 
5608     def get_state(self):
5609         """In addition to the crawler state described in
5610hunk ./src/allmydata/storage/backends/das/expirer.py 379
5611         progress = self.get_progress()
5612 
5613         state = ShareCrawler.get_state(self) # does a shallow copy
5614-        history = pickle.load(open(self.historyfile, "rb"))
5615+        history = pickle.loads(self.historyfp.getContent())
5616         state["history"] = history
5617 
5618         if not progress["cycle-in-progress"]:
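
Several hunks above replace the open/pickle.dump/close and tmpfile-then-move choreography with FilePath.setContent() and getContent(); setContent() writes via a temporary sibling file and then renames it into place, which is what makes the explicit tmpfile dance removable. A small sketch of the resulting save/load pair (save_state and load_state are illustrative names):

    import pickle
    from twisted.python.filepath import FilePath

    def save_state(statefp, state):
        # Serialize and write the whole state file in one call.
        statefp.setContent(pickle.dumps(state))

    def load_state(statefp, default):
        # getContent() returns the raw bytes, so the matching call is
        # pickle.loads(), not pickle.load().
        try:
            return pickle.loads(statefp.getContent())
        except (EnvironmentError, EOFError):
            return default

    # e.g. load_state(FilePath('lease_checker.state'), {})
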
5619hunk ./src/allmydata/storage/common.py 19
5620 def si_a2b(ascii_storageindex):
5621     return base32.a2b(ascii_storageindex)
5622 
5623-def storage_index_to_dir(startfp, storageindex):
5624+def si_dir(startfp, storageindex):
5625     sia = si_b2a(storageindex)
5626hunk ./src/allmydata/storage/common.py 21
5627-    return os.path.join(sia[:2], sia)
5628+    return startfp.child(sia[:2]).child(sia)
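
si_dir() above keeps the on-disk layout unchanged, a two-character prefix directory containing the full base32 storage-index name, but now returns a FilePath rooted at whatever directory it is handed. Roughly (the all-zero 16-byte index is an arbitrary example):

    from twisted.python.filepath import FilePath
    from allmydata.storage.common import si_b2a

    def si_dir_sketch(startfp, storageindex):
        sia = si_b2a(storageindex)   # base32-encoded storage index
        return startfp.child(sia[:2]).child(sia)

    d = si_dir_sketch(FilePath('storage/shares'), '\x00' * 16)
    # d.path ends with storage/shares/aa/aaaaaaaaaaaaaaaaaaaaaaaaaa
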
5629hunk ./src/allmydata/storage/crawler.py 68
5630     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5631     minimum_cycle_time = 300 # don't run a cycle faster than this
5632 
5633-    def __init__(self, statefname, allowed_cpu_percentage=None):
5634+    def __init__(self, statefp, allowed_cpu_percentage=None):
5635         service.MultiService.__init__(self)
5636         if allowed_cpu_percentage is not None:
5637             self.allowed_cpu_percentage = allowed_cpu_percentage
5638hunk ./src/allmydata/storage/crawler.py 72
5639-        self.statefname = statefname
5640+        self.statefp = statefp
5641         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5642                          for i in range(2**10)]
5643         self.prefixes.sort()
5644hunk ./src/allmydata/storage/crawler.py 192
5645         #                            of the last bucket to be processed, or
5646         #                            None if we are sleeping between cycles
5647         try:
5648-            f = open(self.statefname, "rb")
5649-            state = pickle.load(f)
5650-            f.close()
5651+            state = pickle.loads(self.statefp.getContent())
5652         except EnvironmentError:
5653             state = {"version": 1,
5654                      "last-cycle-finished": None,
5655hunk ./src/allmydata/storage/crawler.py 228
5656         else:
5657             last_complete_prefix = self.prefixes[lcpi]
5658         self.state["last-complete-prefix"] = last_complete_prefix
5659-        tmpfile = self.statefname + ".tmp"
5660-        f = open(tmpfile, "wb")
5661-        pickle.dump(self.state, f)
5662-        f.close()
5663-        fileutil.move_into_place(tmpfile, self.statefname)
5664+        self.statefp.setContent(pickle.dumps(self.state))
5665 
5666     def startService(self):
5667         # arrange things to look like we were just sleeping, so
5668hunk ./src/allmydata/storage/crawler.py 440
5669 
5670     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5671 
5672-    def __init__(self, statefname, num_sample_prefixes=1):
5673-        FSShareCrawler.__init__(self, statefname)
5674+    def __init__(self, statefp, num_sample_prefixes=1):
5675+        FSShareCrawler.__init__(self, statefp)
5676         self.num_sample_prefixes = num_sample_prefixes
5677 
5678     def add_initial_state(self):
5679hunk ./src/allmydata/storage/server.py 11
5680 from allmydata.util import fileutil, idlib, log, time_format
5681 import allmydata # for __full_version__
5682 
5683-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5684-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5685+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5686+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5687 from allmydata.storage.lease import LeaseInfo
5688 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5689      create_mutable_sharefile
5690hunk ./src/allmydata/storage/server.py 173
5691         # to a particular owner.
5692         start = time.time()
5693         self.count("allocate")
5694-        alreadygot = set()
5695         incoming = set()
5696         bucketwriters = {} # k: shnum, v: BucketWriter
5697 
5698hunk ./src/allmydata/storage/server.py 199
5699             remaining_space -= self.allocated_size()
5700         # self.readonly_storage causes remaining_space <= 0
5701 
5702-        # fill alreadygot with all shares that we have, not just the ones
5703+        # Fill alreadygot with all shares that we have, not just the ones
5704         # they asked about: this will save them a lot of work. Add or update
5705         # leases for all of them: if they want us to hold shares for this
5706hunk ./src/allmydata/storage/server.py 202
5707-        # file, they'll want us to hold leases for this file.
5708+        # file, they'll want us to hold leases for all the shares of it.
5709+        alreadygot = set()
5710         for share in self.backend.get_shares(storageindex):
5711hunk ./src/allmydata/storage/server.py 205
5712-            alreadygot.add(share.shnum)
5713             share.add_or_renew_lease(lease_info)
5714hunk ./src/allmydata/storage/server.py 206
5715+            alreadygot.add(share.shnum)
5716 
5717hunk ./src/allmydata/storage/server.py 208
5718-        # fill incoming with all shares that are incoming use a set operation
5719-        # since there's no need to operate on individual pieces
5720+        # all share numbers that are incoming
5721         incoming = self.backend.get_incoming_shnums(storageindex)
5722 
5723         for shnum in ((sharenums - alreadygot) - incoming):
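
The allocation hunks above rebuild remote_allocate_buckets() so that alreadygot holds the shares already complete on disk and incoming holds the shares still being uploaded; only the remainder gets new BucketWriters. The set arithmetic itself, with made-up share numbers:

    sharenums  = set([0, 1, 2, 3])     # shares the client asked us to hold
    alreadygot = set([1])              # already complete on disk
    incoming   = frozenset([2])        # currently being uploaded

    to_allocate = (sharenums - alreadygot) - incoming
    assert to_allocate == set([0, 3])  # only these get a new BucketWriter
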
5724hunk ./src/allmydata/storage/server.py 282
5725             total_space_freed += sf.cancel_lease(cancel_secret)
5726 
5727         if found_buckets:
5728-            storagedir = os.path.join(self.sharedir,
5729-                                      storage_index_to_dir(storageindex))
5730-            if not os.listdir(storagedir):
5731-                os.rmdir(storagedir)
5732+            storagedir = si_dir(self.sharedir, storageindex)
5733+            fp_rmdir_if_empty(storagedir)
5734 
5735         if self.stats_provider:
5736             self.stats_provider.count('storage_server.bytes_freed',
5737hunk ./src/allmydata/test/test_backends.py 52
5738     subtree. I simulate just the parts of the filesystem that the current
5739     implementation of DAS backend needs. """
5740     def call_open(self, fname, mode):
5741+        assert isinstance(fname, basestring), fname
5742         fnamefp = FilePath(fname)
5743         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5744                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5745hunk ./src/allmydata/test/test_backends.py 104
5746                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5747 
5748     def call_stat(self, fname):
5749+        assert isinstance(fname, basestring), fname
5750         fnamefp = FilePath(fname)
5751         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5752                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5753hunk ./src/allmydata/test/test_backends.py 217
5754 
5755         mocktime.return_value = 0
5756         # Inspect incoming and fail unless it's empty.
5757-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5758-        self.failUnlessReallyEqual(incomingset, set())
5759+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5760+        self.failUnlessReallyEqual(incomingset, frozenset())
5761         
5762         # Populate incoming with the sharenum: 0.
5763hunk ./src/allmydata/test/test_backends.py 221
5764-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5765+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5766 
5767         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5768hunk ./src/allmydata/test/test_backends.py 224
5769-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5770+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5771         
5772         # Attempt to create a second share writer with the same sharenum.
5773hunk ./src/allmydata/test/test_backends.py 227
5774-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5775+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5776 
5777         # Show that no sharewriter results from a remote_allocate_buckets
5778         # with the same si and sharenum, until BucketWriter.remote_close()
5779hunk ./src/allmydata/test/test_backends.py 280
5780         StorageServer object. """
5781 
5782         def call_listdir(dirname):
5783+            precondition(isinstance(dirname, basestring), dirname)
5784             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5785             return ['0']
5786 
5787hunk ./src/allmydata/test/test_backends.py 287
5788         mocklistdir.side_effect = call_listdir
5789 
5790         def call_open(fname, mode):
5791+            precondition(isinstance(fname, basestring), fname)
5792             self.failUnlessReallyEqual(fname, sharefname)
5793             self.failUnlessEqual(mode[0], 'r', mode)
5794             self.failUnless('b' in mode, mode)
5795hunk ./src/allmydata/test/test_backends.py 297
5796 
5797         datalen = len(share_data)
5798         def call_getsize(fname):
5799+            precondition(isinstance(fname, basestring), fname)
5800             self.failUnlessReallyEqual(fname, sharefname)
5801             return datalen
5802         mockgetsize.side_effect = call_getsize
5803hunk ./src/allmydata/test/test_backends.py 303
5804 
5805         def call_exists(fname):
5806+            precondition(isinstance(fname, basestring), fname)
5807             self.failUnlessReallyEqual(fname, sharefname)
5808             return True
5809         mockexists.side_effect = call_exists
5810hunk ./src/allmydata/test/test_backends.py 321
5811         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5812 
5813 
5814-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5815-    @mock.patch('time.time')
5816-    @mock.patch('os.mkdir')
5817-    @mock.patch('__builtin__.open')
5818-    @mock.patch('os.listdir')
5819-    @mock.patch('os.path.isdir')
5820-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5821+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5822+    def test_create_fs_backend(self):
5823         """ This tests whether a file system backend instance can be
5824         constructed. To pass the test, it has to use the
5825         filesystem in only the prescribed ways. """
5826hunk ./src/allmydata/test/test_backends.py 327
5827 
5828-        def call_open(fname, mode):
5829-            if fname == os.path.join(storedir,'bucket_counter.state'):
5830-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5831-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5832-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5833-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5834-                return StringIO()
5835-            else:
5836-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5837-        mockopen.side_effect = call_open
5838-
5839-        def call_isdir(fname):
5840-            if fname == os.path.join(storedir,'shares'):
5841-                return True
5842-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5843-                return True
5844-            else:
5845-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5846-        mockisdir.side_effect = call_isdir
5847-
5848-        def call_mkdir(fname, mode):
5849-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5850-            self.failUnlessEqual(0777, mode)
5851-            if fname == storedir:
5852-                return None
5853-            elif fname == os.path.join(storedir,'shares'):
5854-                return None
5855-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5856-                return None
5857-            else:
5858-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5859-        mockmkdir.side_effect = call_mkdir
5860-
5861         # Now begin the test.
5862hunk ./src/allmydata/test/test_backends.py 328
5863-        DASCore('teststoredir', expiration_policy)
5864-
5865-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5866-
5867+        DASCore(self.storedir, expiration_policy)
5868hunk ./src/allmydata/util/fileutil.py 7
5869 
5870 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5871 
5872+from allmydata.util.assertutil import precondition
5873+
5874 from twisted.python import log
5875hunk ./src/allmydata/util/fileutil.py 10
5876-from twisted.python.filepath import UnlistableError
5877+from twisted.python.filepath import FilePath, UnlistableError
5878 
5879 from pycryptopp.cipher.aes import AES
5880 
5881hunk ./src/allmydata/util/fileutil.py 210
5882             raise tx
5883         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5884 
5885+def fp_rmdir_if_empty(dirfp):
5886+    """ Remove the directory if it is empty. """
5887+    try:
5888+        os.rmdir(dirfp.path)
5889+    except OSError, e:
5890+        if e.errno != errno.ENOTEMPTY:
5891+            raise
5892+    else:
5893+        dirfp.changed()
5894+
5895 def rmtree(dirname):
5896     """
5897     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5898hunk ./src/allmydata/util/fileutil.py 257
5899         raise OSError, excs
5900 
5901 def fp_remove(dirfp):
5902+    """
5903+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5904+    do nothing and return without raising an exception.  If this call
5905+    removes the dir, return without raising an exception.  If there is an
5906+    error that prevents removal or if the directory gets created again by
5907+    someone else after this deletes it and before this checks that it is
5908+    gone, raise an exception.
5909+    """
5910     try:
5911         dirfp.remove()
5912     except UnlistableError, e:
5913hunk ./src/allmydata/util/fileutil.py 270
5914         if e.originalException.errno != errno.ENOENT:
5915             raise
5916+    except OSError, e:
5917+        if e.errno != errno.ENOENT:
5918+            raise
5919 
5920 def rm_dir(dirname):
5921     # Renamed to be like shutil.rmtree and unlike rmdir.
5922hunk ./src/allmydata/util/fileutil.py 387
5923         import traceback
5924         traceback.print_exc()
5925 
5926-def get_disk_stats(whichdir, reserved_space=0):
5927+def get_disk_stats(whichdirfp, reserved_space=0):
5928     """Return disk statistics for the storage disk, in the form of a dict
5929     with the following fields.
5930       total:            total bytes on disk
5931hunk ./src/allmydata/util/fileutil.py 408
5932     you can pass how many bytes you would like to leave unused on this
5933     filesystem as reserved_space.
5934     """
5935+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5936 
5937     if have_GetDiskFreeSpaceExW:
5938         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5939hunk ./src/allmydata/util/fileutil.py 419
5940         n_free_for_nonroot = c_ulonglong(0)
5941         n_total            = c_ulonglong(0)
5942         n_free_for_root    = c_ulonglong(0)
5943-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5944+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5945                                                byref(n_total),
5946                                                byref(n_free_for_root))
5947         if retval == 0:
5948hunk ./src/allmydata/util/fileutil.py 424
5949             raise OSError("Windows error %d attempting to get disk statistics for %r"
5950-                          % (GetLastError(), whichdir))
5951+                          % (GetLastError(), whichdirfp.path))
5952         free_for_nonroot = n_free_for_nonroot.value
5953         total            = n_total.value
5954         free_for_root    = n_free_for_root.value
5955hunk ./src/allmydata/util/fileutil.py 433
5956         # <http://docs.python.org/library/os.html#os.statvfs>
5957         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5958         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5959-        s = os.statvfs(whichdir)
5960+        s = os.statvfs(whichdirfp.path)
5961 
5962         # on my mac laptop:
5963         #  statvfs(2) is a wrapper around statfs(2).
5964hunk ./src/allmydata/util/fileutil.py 460
5965              'avail': avail,
5966            }
5967 
5968-def get_available_space(whichdir, reserved_space):
5969+def get_available_space(whichdirfp, reserved_space):
5970     """Returns available space for share storage in bytes, or None if no
5971     API to get this information is available.
5972 
5973hunk ./src/allmydata/util/fileutil.py 472
5974     you can pass how many bytes you would like to leave unused on this
5975     filesystem as reserved_space.
5976     """
5977+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5978     try:
5979hunk ./src/allmydata/util/fileutil.py 474
5980-        return get_disk_stats(whichdir, reserved_space)['avail']
5981+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5982     except AttributeError:
5983         return None
5984hunk ./src/allmydata/util/fileutil.py 477
5985-    except EnvironmentError:
5986-        log.msg("OS call to get disk statistics failed")
5987-        return 0
5988}
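
The fileutil hunks above switch get_disk_stats and get_available_space over to twisted.python.filepath.FilePath arguments and add fp_rmdir_if_empty. A minimal usage sketch of that API, assuming only what the hunks themselves show ('teststoredir' and the reserved_space value are illustrative):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedir = FilePath('teststoredir')   # illustrative storage root
    # get_available_space now expects a FilePath and returns available bytes,
    # or None if no platform API exists; it no longer swallows EnvironmentError.
    free = fileutil.get_available_space(storedir, reserved_space=0)
    # fp_rmdir_if_empty removes a directory but quietly ignores ENOTEMPTY.
    fileutil.fp_rmdir_if_empty(storedir.child('shares').child('incoming'))
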
5989[jacp16 or so
5990wilcoxjg@gmail.com**20110722070036
5991 Ignore-this: 7548785cad146056eede9a16b93b569f
5992] {
5993merger 0.0 (
5994hunk ./src/allmydata/_auto_deps.py 21
5995-    "Twisted >= 2.4.0",
5996+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
5997+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
5998+    # support asynchronous close.
5999+    "Twisted >= 10.1.0",
6000hunk ./src/allmydata/_auto_deps.py 21
6001-    "Twisted >= 2.4.0",
6002+    "Twisted >= 11.0",
6003)
6004hunk ./src/allmydata/storage/backends/das/core.py 2
6005 import os, re, weakref, struct, time, stat
6006+from twisted.application import service
6007+from twisted.python.filepath import UnlistableError
6008+from twisted.python.filepath import FilePath
6009+from zope.interface import implements
6010 
6011hunk ./src/allmydata/storage/backends/das/core.py 7
6012+import allmydata # for __full_version__
6013 from allmydata.interfaces import IStorageBackend
6014 from allmydata.storage.backends.base import Backend
6015hunk ./src/allmydata/storage/backends/das/core.py 10
6016-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6017+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6018 from allmydata.util.assertutil import precondition
6019hunk ./src/allmydata/storage/backends/das/core.py 12
6020-
6021-#from foolscap.api import Referenceable
6022-from twisted.application import service
6023-from twisted.python.filepath import UnlistableError
6024-
6025-from zope.interface import implements
6026 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6027 from allmydata.util import fileutil, idlib, log, time_format
6028hunk ./src/allmydata/storage/backends/das/core.py 14
6029-import allmydata # for __full_version__
6030-
6031-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6032-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6033 from allmydata.storage.lease import LeaseInfo
6034 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6035      create_mutable_sharefile
6036hunk ./src/allmydata/storage/backends/das/core.py 21
6037 from allmydata.storage.crawler import FSBucketCountingCrawler
6038 from allmydata.util.hashutil import constant_time_compare
6039 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6040-
6041-from zope.interface import implements
6042+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6043 
6044 # storage/
6045 # storage/shares/incoming
6046hunk ./src/allmydata/storage/backends/das/core.py 49
6047         self._setup_lease_checkerf(expiration_policy)
6048 
6049     def _setup_storage(self, storedir, readonly, reserved_space):
6050+        precondition(isinstance(storedir, FilePath)) 
6051         self.storedir = storedir
6052         self.readonly = readonly
6053         self.reserved_space = int(reserved_space)
6054hunk ./src/allmydata/storage/backends/das/core.py 83
6055 
6056     def get_incoming_shnums(self, storageindex):
6057         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6058-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6059+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6060         try:
6061             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6062             shnums = [ int(fp.basename) for fp in childfps ]
6063hunk ./src/allmydata/storage/backends/das/core.py 96
6064         """ Generate ImmutableShare objects for shares we have for this
6065         storageindex. ("Shares we have" means completed ones, excluding
6066         incoming ones.)"""
6067-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6068+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6069         try:
6070             for fp in finalstoragedir.children():
6071                 if is_num(fp):
6072hunk ./src/allmydata/storage/backends/das/core.py 111
6073         return fileutil.get_available_space(self.storedir, self.reserved_space)
6074 
6075     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6076-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6077-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6078+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6079+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6080         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6081         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6082         return bw
6083hunk ./src/allmydata/storage/backends/null/core.py 18
6084         return None
6085 
6086     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6087-       
6088-        immutableshare = ImmutableShare()
6089+        immutableshare = ImmutableShare()
6090         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6091 
6092     def set_storage_server(self, ss):
6093hunk ./src/allmydata/storage/backends/null/core.py 24
6094         self.ss = ss
6095 
6096-    def get_incoming(self, storageindex):
6097-        return set()
6098+    def get_incoming_shnums(self, storageindex):
6099+        return frozenset()
6100 
6101 class ImmutableShare:
6102     sharetype = "immutable"
6103hunk ./src/allmydata/storage/common.py 19
6104 def si_a2b(ascii_storageindex):
6105     return base32.a2b(ascii_storageindex)
6106 
6107-def si_dir(startfp, storageindex):
6108+def si_si2dir(startfp, storageindex):
6109     sia = si_b2a(storageindex)
6110     return startfp.child(sia[:2]).child(sia)
6111hunk ./src/allmydata/storage/immutable.py 20
6112     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6113         self.ss = ss
6114         self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
6115-
6116         self._canary = canary
6117         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6118         self.closed = False
6119hunk ./src/allmydata/storage/lease.py 17
6120 
6121     def get_expiration_time(self):
6122         return self.expiration_time
6123+
6124     def get_grant_renew_time_time(self):
6125         # hack, based upon fixed 31day expiration period
6126         return self.expiration_time - 31*24*60*60
6127hunk ./src/allmydata/storage/lease.py 21
6128+
6129     def get_age(self):
6130         return time.time() - self.get_grant_renew_time_time()
6131 
6132hunk ./src/allmydata/storage/lease.py 32
6133          self.expiration_time) = struct.unpack(">L32s32sL", data)
6134         self.nodeid = None
6135         return self
6136+
6137     def to_immutable_data(self):
6138         return struct.pack(">L32s32sL",
6139                            self.owner_num,
6140hunk ./src/allmydata/storage/lease.py 45
6141                            int(self.expiration_time),
6142                            self.renew_secret, self.cancel_secret,
6143                            self.nodeid)
6144+
6145     def from_mutable_data(self, data):
6146         (self.owner_num,
6147          self.expiration_time,
6148hunk ./src/allmydata/storage/server.py 11
6149 from allmydata.util import fileutil, idlib, log, time_format
6150 import allmydata # for __full_version__
6151 
6152-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6153-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6154+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6155+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6156 from allmydata.storage.lease import LeaseInfo
6157 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6158      create_mutable_sharefile
6159hunk ./src/allmydata/storage/server.py 88
6160             else:
6161                 stats["mean"] = None
6162 
6163-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6164-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6165-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6166+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6167+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6168+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6169                              (0.999, "99_9_percentile", 1000)]
6170 
6171             for percentile, percentilestring, minnumtoobserve in orderstatlist:
6172hunk ./src/allmydata/storage/server.py 231
6173             header = f.read(32)
6174             f.close()
6175             if header[:32] == MutableShareFile.MAGIC:
6176+                # XXX  Can I exploit this code?
6177                 sf = MutableShareFile(filename, self)
6178                 # note: if the share has been migrated, the renew_lease()
6179                 # call will throw an exception, with information to help the
6180hunk ./src/allmydata/storage/server.py 237
6181                 # client update the lease.
6182             elif header[:4] == struct.pack(">L", 1):
6183+                # Check if version number is "1".
6184+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6185                 sf = ShareFile(filename)
6186             else:
6187                 continue # non-sharefile
6188hunk ./src/allmydata/storage/server.py 285
6189             total_space_freed += sf.cancel_lease(cancel_secret)
6190 
6191         if found_buckets:
6192-            storagedir = si_dir(self.sharedir, storageindex)
6193+            # XXX  Yikes looks like code that shouldn't be in the server!
6194+            storagedir = si_si2dir(self.sharedir, storageindex)
6195             fp_rmdir_if_empty(storagedir)
6196 
6197         if self.stats_provider:
6198hunk ./src/allmydata/storage/server.py 301
6199             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6200         del self._active_writers[bw]
6201 
6202-
6203     def remote_get_buckets(self, storageindex):
6204         start = time.time()
6205         self.count("get")
6206hunk ./src/allmydata/storage/server.py 329
6207         except StopIteration:
6208             return iter([])
6209 
6210+    #  XXX  As far as Zancas' grockery has gotten.
6211     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6212                                                secrets,
6213                                                test_and_write_vectors,
6214hunk ./src/allmydata/storage/server.py 338
6215         self.count("writev")
6216         si_s = si_b2a(storageindex)
6217         log.msg("storage: slot_writev %s" % si_s)
6218-        si_dir = storage_index_to_dir(storageindex)
6219+       
6220         (write_enabler, renew_secret, cancel_secret) = secrets
6221         # shares exist if there is a file for them
6222hunk ./src/allmydata/storage/server.py 341
6223-        bucketdir = os.path.join(self.sharedir, si_dir)
6224+        bucketdir = si_si2dir(self.sharedir, storageindex)
6225         shares = {}
6226         if os.path.isdir(bucketdir):
6227             for sharenum_s in os.listdir(bucketdir):
6228hunk ./src/allmydata/storage/server.py 430
6229         si_s = si_b2a(storageindex)
6230         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6231                      facility="tahoe.storage", level=log.OPERATIONAL)
6232-        si_dir = storage_index_to_dir(storageindex)
6233         # shares exist if there is a file for them
6234hunk ./src/allmydata/storage/server.py 431
6235-        bucketdir = os.path.join(self.sharedir, si_dir)
6236+        bucketdir = si_si2dir(self.sharedir, storageindex)
6237         if not os.path.isdir(bucketdir):
6238             self.add_latency("readv", time.time() - start)
6239             return {}
6240hunk ./src/allmydata/test/test_backends.py 2
6241 from twisted.trial import unittest
6242-
6243 from twisted.python.filepath import FilePath
6244hunk ./src/allmydata/test/test_backends.py 3
6245-
6246 from allmydata.util.log import msg
6247hunk ./src/allmydata/test/test_backends.py 4
6248-
6249 from StringIO import StringIO
6250hunk ./src/allmydata/test/test_backends.py 5
6251-
6252 from allmydata.test.common_util import ReallyEqualMixin
6253 from allmydata.util.assertutil import _assert
6254hunk ./src/allmydata/test/test_backends.py 7
6255-
6256 import mock
6257 
6258 # This is the code that we're going to be testing.
6259hunk ./src/allmydata/test/test_backends.py 11
6260 from allmydata.storage.server import StorageServer
6261-
6262 from allmydata.storage.backends.das.core import DASCore
6263 from allmydata.storage.backends.null.core import NullCore
6264 
6265hunk ./src/allmydata/test/test_backends.py 14
6266-
6267-# The following share file contents was generated with
6268+# The following share file content was generated with
6269 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6270hunk ./src/allmydata/test/test_backends.py 16
6271-# with share data == 'a'.
6272+# with share data == 'a'. The total size of this input
6273+# is 85 bytes.
6274 shareversionnumber = '\x00\x00\x00\x01'
6275 sharedatalength = '\x00\x00\x00\x01'
6276 numberofleases = '\x00\x00\x00\x01'
6277hunk ./src/allmydata/test/test_backends.py 21
6278-
6279 shareinputdata = 'a'
6280 ownernumber = '\x00\x00\x00\x00'
6281 renewsecret  = 'x'*32
6282hunk ./src/allmydata/test/test_backends.py 31
6283 client_data = shareinputdata + ownernumber + renewsecret + \
6284     cancelsecret + expirationtime + nextlease
6285 share_data = containerdata + client_data
6286-
6287-
6288 testnodeid = 'testnodeidxxxxxxxxxx'
6289 
6290 class MockStat:
6291hunk ./src/allmydata/test/test_backends.py 105
6292         mstat.st_mode = 16893 # a directory
6293         return mstat
6294 
6295+    def call_get_available_space(self, storedir, reservedspace):
6296+        # The share_data test vector defined above is 85 bytes long.
6297+        return 85 - reservedspace
6298+
6299+    def call_exists(self):
6300+        # I'm only called in the ImmutableShareFile constructor.
6301+        return False
6302+
6303     def setUp(self):
6304         msg( "%s.setUp()" % (self,))
6305         self.storedir = FilePath('teststoredir')
6306hunk ./src/allmydata/test/test_backends.py 147
6307         mockfpstat = self.mockfpstatp.__enter__()
6308         mockfpstat.side_effect = self.call_stat
6309 
6310+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6311+        mockget_available_space = self.mockget_available_space.__enter__()
6312+        mockget_available_space.side_effect = self.call_get_available_space
6313+
6314+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6315+        mockfpexists = self.mockfpexists.__enter__()
6316+        mockfpexists.side_effect = self.call_exists
6317+
6318     def tearDown(self):
6319         msg( "%s.tearDown()" % (self,))
6320hunk ./src/allmydata/test/test_backends.py 157
6321+        self.mockfpexists.__exit__()
6322+        self.mockget_available_space.__exit__()
6323         self.mockfpstatp.__exit__()
6324         self.mockstatp.__exit__()
6325         self.mockopenp.__exit__()
6326hunk ./src/allmydata/test/test_backends.py 166
6327         self.mockmkdirp.__exit__()
6328         self.mocklistdirp.__exit__()
6329 
6330+
6331 expiration_policy = {'enabled' : False,
6332                      'mode' : 'age',
6333                      'override_lease_duration' : None,
6334hunk ./src/allmydata/test/test_backends.py 182
6335         self.ss = StorageServer(testnodeid, backend=NullCore())
6336 
6337     @mock.patch('os.mkdir')
6338-
6339     @mock.patch('__builtin__.open')
6340     @mock.patch('os.listdir')
6341     @mock.patch('os.path.isdir')
6342hunk ./src/allmydata/test/test_backends.py 201
6343         filesystem backend. To pass the test, it mustn't use the filesystem
6344         outside of its configured storedir. """
6345 
6346-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6347+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6348 
6349 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6350     """ This tests both the StorageServer and the DAS backend together. """
6351hunk ./src/allmydata/test/test_backends.py 205
6352+   
6353     def setUp(self):
6354         MockFiles.setUp(self)
6355         try:
6356hunk ./src/allmydata/test/test_backends.py 211
6357             self.backend = DASCore(self.storedir, expiration_policy)
6358             self.ss = StorageServer(testnodeid, self.backend)
6359-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6360-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6361+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6362+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6363         except:
6364             MockFiles.tearDown(self)
6365             raise
6366hunk ./src/allmydata/test/test_backends.py 233
6367         # Populate incoming with the sharenum: 0.
6368         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6369 
6370-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6371-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6372+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6373+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6374         
6375         # Attempt to create a second share writer with the same sharenum.
6376         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6377hunk ./src/allmydata/test/test_backends.py 257
6378 
6379         # Postclose: (Omnibus) failUnless written data is in final.
6380         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6381-        contents = sharesinfinal[0].read_share_data(0,73)
6382+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6383+        contents = sharesinfinal[0].read_share_data(0, 73)
6384         self.failUnlessReallyEqual(contents, client_data)
6385 
6386         # Exercise the case that the share we're asking to allocate is
6387hunk ./src/allmydata/test/test_backends.py 276
6388         mockget_available_space.side_effect = call_get_available_space
6389         
6390         
6391-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6392+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6393 
6394     @mock.patch('os.path.exists')
6395     @mock.patch('os.path.getsize')
6396}
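
jacp16 renames si_dir to si_si2dir and has the backend report incoming shares as a frozenset of share numbers (get_incoming_shnums). A small sketch of how the renamed helper maps a storage index onto the share directory layout, with the storage index value and directory names chosen only for illustration:

    from twisted.python.filepath import FilePath
    from allmydata.storage.common import si_si2dir

    storageindex = 'x' * 16                        # illustrative binary storage index
    sharedir = FilePath('storage').child('shares')
    bucketfp = si_si2dir(sharedir, storageindex)
    # si_si2dir builds <sharedir>/<first two base32 chars>/<full base32 SI>,
    # the same layout the get_shares and make_bucket_writer hunks rely on.
    print bucketfp.path
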
6397[jacp17
6398wilcoxjg@gmail.com**20110722203244
6399 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
6400] {
6401hunk ./src/allmydata/storage/backends/das/core.py 14
6402 from allmydata.util.assertutil import precondition
6403 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6404 from allmydata.util import fileutil, idlib, log, time_format
6405+from allmydata.util.fileutil import fp_make_dirs
6406 from allmydata.storage.lease import LeaseInfo
6407 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6408      create_mutable_sharefile
6409hunk ./src/allmydata/storage/backends/das/core.py 19
6410 from allmydata.storage.immutable import BucketWriter, BucketReader
6411-from allmydata.storage.crawler import FSBucketCountingCrawler
6412+from allmydata.storage.crawler import BucketCountingCrawler
6413 from allmydata.util.hashutil import constant_time_compare
6414hunk ./src/allmydata/storage/backends/das/core.py 21
6415-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6416+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
6417 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6418 
6419 # storage/
6420hunk ./src/allmydata/storage/backends/das/core.py 43
6421     implements(IStorageBackend)
6422     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
6423         Backend.__init__(self)
6424-
6425         self._setup_storage(storedir, readonly, reserved_space)
6426         self._setup_corruption_advisory()
6427         self._setup_bucket_counter()
6428hunk ./src/allmydata/storage/backends/das/core.py 72
6429 
6430     def _setup_bucket_counter(self):
6431         statefname = self.storedir.child("bucket_counter.state")
6432-        self.bucket_counter = FSBucketCountingCrawler(statefname)
6433+        self.bucket_counter = BucketCountingCrawler(statefname)
6434         self.bucket_counter.setServiceParent(self)
6435 
6436     def _setup_lease_checkerf(self, expiration_policy):
6437hunk ./src/allmydata/storage/backends/das/core.py 78
6438         statefile = self.storedir.child("lease_checker.state")
6439         historyfile = self.storedir.child("lease_checker.history")
6440-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6441+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6442         self.lease_checker.setServiceParent(self)
6443 
6444     def get_incoming_shnums(self, storageindex):
6445hunk ./src/allmydata/storage/backends/das/core.py 168
6446             # it. Also construct the metadata.
6447             assert not finalhome.exists()
6448             fp_make_dirs(self.incominghome)
6449-            f = open(self.incominghome, 'wb')
6450+            f = self.incominghome.child(str(self.shnum))
6451             # The second field -- the four-byte share data length -- is no
6452             # longer used as of Tahoe v1.3.0, but we continue to write it in
6453             # there in case someone downgrades a storage server from >=
6454hunk ./src/allmydata/storage/backends/das/core.py 178
6455             # the largest length that can fit into the field. That way, even
6456             # if this does happen, the old < v1.3.0 server will still allow
6457             # clients to read the first part of the share.
6458-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6459-            f.close()
6460+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6461+            #f.close()
6462             self._lease_offset = max_size + 0x0c
6463             self._num_leases = 0
6464         else:
6465hunk ./src/allmydata/storage/backends/das/core.py 261
6466         f.write(data)
6467         f.close()
6468 
6469-    def _write_lease_record(self, f, lease_number, lease_info):
6470+    def _write_lease_record(self, lease_number, lease_info):
6471         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6472         f.seek(offset)
6473         assert f.tell() == offset
6474hunk ./src/allmydata/storage/backends/das/core.py 290
6475                 yield LeaseInfo().from_immutable_data(data)
6476 
6477     def add_lease(self, lease_info):
6478-        f = open(self.incominghome, 'rb+')
6479+        self.incominghome, 'rb+')
6480         num_leases = self._read_num_leases(f)
6481         self._write_lease_record(f, num_leases, lease_info)
6482         self._write_num_leases(f, num_leases+1)
6483hunk ./src/allmydata/storage/backends/das/expirer.py 1
6484-import time, os, pickle, struct
6485-from allmydata.storage.crawler import FSShareCrawler
6486+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
6487+from allmydata.storage.crawler import ShareCrawler
6488 from allmydata.storage.common import UnknownMutableContainerVersionError, \
6489      UnknownImmutableContainerVersionError
6490 from twisted.python import log as twlog
6491hunk ./src/allmydata/storage/backends/das/expirer.py 7
6492 
6493-class FSLeaseCheckingCrawler(FSShareCrawler):
6494+class LeaseCheckingCrawler(ShareCrawler):
6495     """I examine the leases on all shares, determining which are still valid
6496     and which have expired. I can remove the expired leases (if so
6497     configured), and the share will be deleted when the last lease is
6498hunk ./src/allmydata/storage/backends/das/expirer.py 66
6499         else:
6500             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
6501         self.sharetypes_to_expire = expiration_policy['sharetypes']
6502-        FSShareCrawler.__init__(self, statefile)
6503+        ShareCrawler.__init__(self, statefile)
6504 
6505     def add_initial_state(self):
6506         # we fill ["cycle-to-date"] here (even though they will be reset in
6507hunk ./src/allmydata/storage/crawler.py 1
6508-
6509 import os, time, struct
6510 import cPickle as pickle
6511 from twisted.internet import reactor
6512hunk ./src/allmydata/storage/crawler.py 11
6513 class TimeSliceExceeded(Exception):
6514     pass
6515 
6516-class FSShareCrawler(service.MultiService):
6517-    """A subcless of ShareCrawler is attached to a StorageServer, and
6518+class ShareCrawler(service.MultiService):
6519+    """A subclass of ShareCrawler is attached to a StorageServer, and
6520     periodically walks all of its shares, processing each one in some
6521     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
6522     since large servers can easily have a terabyte of shares, in several
6523hunk ./src/allmydata/storage/crawler.py 426
6524         pass
6525 
6526 
6527-class FSBucketCountingCrawler(FSShareCrawler):
6528+class BucketCountingCrawler(ShareCrawler):
6529     """I keep track of how many buckets are being managed by this server.
6530     This is equivalent to the number of distributed files and directories for
6531     which I am providing storage. The actual number of files+directories in
6532hunk ./src/allmydata/storage/crawler.py 440
6533     minimum_cycle_time = 60*60 # we don't need this more than once an hour
6534 
6535     def __init__(self, statefp, num_sample_prefixes=1):
6536-        FSShareCrawler.__init__(self, statefp)
6537+        ShareCrawler.__init__(self, statefp)
6538         self.num_sample_prefixes = num_sample_prefixes
6539 
6540     def add_initial_state(self):
6541hunk ./src/allmydata/test/test_backends.py 113
6542         # I'm only called in the ImmutableShareFile constructor.
6543         return False
6544 
6545+    def call_setContent(self, inputstring):
6546+        # XXX Good enough for expirer, not sure about elsewhere...
6547+        return True
6548+
6549     def setUp(self):
6550         msg( "%s.setUp()" % (self,))
6551         self.storedir = FilePath('teststoredir')
6552hunk ./src/allmydata/test/test_backends.py 159
6553         mockfpexists = self.mockfpexists.__enter__()
6554         mockfpexists.side_effect = self.call_exists
6555 
6556+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6557+        mocksetContent = self.mocksetContent.__enter__()
6558+        mocksetContent.side_effect = self.call_setContent
6559+
6560     def tearDown(self):
6561         msg( "%s.tearDown()" % (self,))
6562hunk ./src/allmydata/test/test_backends.py 165
6563+        self.mocksetContent.__exit__()
6564         self.mockfpexists.__exit__()
6565         self.mockget_available_space.__exit__()
6566         self.mockfpstatp.__exit__()
6567}
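
The MockFiles fixture keeps growing by one recipe: build a patcher with mock.patch, enter it, attach a side_effect function, and exit it in tearDown (jacp17 adds the FilePath.setContent patcher this way). A stripped-down sketch of that lifecycle, reusing only names that appear in the hunks above:

    import mock

    class Example(object):
        def call_get_available_space(self, storedir, reservedspace):
            return 85 - reservedspace      # canned answer, matching the test fixture above

        def setUp(self):
            self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
            mockget_available_space = self.mockget_available_space.__enter__()
            mockget_available_space.side_effect = self.call_get_available_space

        def tearDown(self):
            # __exit__ undoes the patch so later tests see the real function again.
            self.mockget_available_space.__exit__()
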
6568[jacp18
6569wilcoxjg@gmail.com**20110723031915
6570 Ignore-this: 21e7f22ac20e3f8af22ea2e9b755d6a5
6571] {
6572hunk ./src/allmydata/_auto_deps.py 21
6573     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6574     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6575 
6576-    "Twisted >= 2.4.0",
6577+v v v v v v v
6578+    "Twisted >= 11.0",
6579+*************
6580+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6581+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6582+    # support asynchronous close.
6583+    "Twisted >= 10.1.0",
6584+^ ^ ^ ^ ^ ^ ^
6585 
6586     # foolscap < 0.5.1 had a performance bug which spent
6587     # O(N**2) CPU for transferring large mutable files
6588hunk ./src/allmydata/storage/backends/das/core.py 168
6589             # it. Also construct the metadata.
6590             assert not finalhome.exists()
6591             fp_make_dirs(self.incominghome)
6592-            f = self.incominghome.child(str(self.shnum))
6593+            f = self.incominghome
6594             # The second field -- the four-byte share data length -- is no
6595             # longer used as of Tahoe v1.3.0, but we continue to write it in
6596             # there in case someone downgrades a storage server from >=
6597hunk ./src/allmydata/storage/backends/das/core.py 178
6598             # the largest length that can fit into the field. That way, even
6599             # if this does happen, the old < v1.3.0 server will still allow
6600             # clients to read the first part of the share.
6601-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6602-            #f.close()
6603+            print 'f: ',f
6604+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6605             self._lease_offset = max_size + 0x0c
6606             self._num_leases = 0
6607         else:
6608hunk ./src/allmydata/storage/backends/das/core.py 263
6609 
6610     def _write_lease_record(self, lease_number, lease_info):
6611         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6612-        f.seek(offset)
6613-        assert f.tell() == offset
6614-        f.write(lease_info.to_immutable_data())
6615+        fh = f.open()
6616+        try:
6617+            fh.seek(offset)
6618+            assert fh.tell() == offset
6619+            fh.write(lease_info.to_immutable_data())
6620+        finally:
6621+            fh.close()
6622 
6623     def _read_num_leases(self, f):
6624hunk ./src/allmydata/storage/backends/das/core.py 272
6625-        f.seek(0x08)
6626-        (num_leases,) = struct.unpack(">L", f.read(4))
6627+        fh = f.open()
6628+        try:
6629+            fh.seek(0x08)
6630+            ro = fh.read(4)
6631+            print "repr(ro): %s len(ro): %s"  % (repr(ro), len(ro))
6632+            (num_leases,) = struct.unpack(">L", ro)
6633+        finally:
6634+            fh.close()
6635         return num_leases
6636 
6637     def _write_num_leases(self, f, num_leases):
6638hunk ./src/allmydata/storage/backends/das/core.py 283
6639-        f.seek(0x08)
6640-        f.write(struct.pack(">L", num_leases))
6641+        fh = f.open()
6642+        try:
6643+            fh.seek(0x08)
6644+            fh.write(struct.pack(">L", num_leases))
6645+        finally:
6646+            fh.close()
6647 
6648     def _truncate_leases(self, f, num_leases):
6649         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
6650hunk ./src/allmydata/storage/backends/das/core.py 304
6651                 yield LeaseInfo().from_immutable_data(data)
6652 
6653     def add_lease(self, lease_info):
6654-        self.incominghome, 'rb+')
6655-        num_leases = self._read_num_leases(f)
6656+        f = self.incominghome
6657+        num_leases = self._read_num_leases(self.incominghome)
6658         self._write_lease_record(f, num_leases, lease_info)
6659         self._write_num_leases(f, num_leases+1)
6660hunk ./src/allmydata/storage/backends/das/core.py 308
6661-        f.close()
6662-
6663+       
6664     def renew_lease(self, renew_secret, new_expire_time):
6665         for i,lease in enumerate(self.get_leases()):
6666             if constant_time_compare(lease.renew_secret, renew_secret):
6667hunk ./src/allmydata/test/test_backends.py 33
6668 share_data = containerdata + client_data
6669 testnodeid = 'testnodeidxxxxxxxxxx'
6670 
6671+
6672 class MockStat:
6673     def __init__(self):
6674         self.st_mode = None
6675hunk ./src/allmydata/test/test_backends.py 43
6676     code under test if it reads or writes outside of its prescribed
6677     subtree. I simulate just the parts of the filesystem that the current
6678     implementation of DAS backend needs. """
6679+
6680+    def setUp(self):
6681+        msg( "%s.setUp()" % (self,))
6682+        self.storedir = FilePath('teststoredir')
6683+        self.basedir = self.storedir.child('shares')
6684+        self.baseincdir = self.basedir.child('incoming')
6685+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6686+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6687+        self.shareincomingname = self.sharedirincomingname.child('0')
6688+        self.sharefilename = self.sharedirfinalname.child('0')
6689+        self.sharefilecontents = StringIO(share_data)
6690+
6691+        self.mocklistdirp = mock.patch('os.listdir')
6692+        mocklistdir = self.mocklistdirp.__enter__()
6693+        mocklistdir.side_effect = self.call_listdir
6694+
6695+        self.mockmkdirp = mock.patch('os.mkdir')
6696+        mockmkdir = self.mockmkdirp.__enter__()
6697+        mockmkdir.side_effect = self.call_mkdir
6698+
6699+        self.mockisdirp = mock.patch('os.path.isdir')
6700+        mockisdir = self.mockisdirp.__enter__()
6701+        mockisdir.side_effect = self.call_isdir
6702+
6703+        self.mockopenp = mock.patch('__builtin__.open')
6704+        mockopen = self.mockopenp.__enter__()
6705+        mockopen.side_effect = self.call_open
6706+
6707+        self.mockstatp = mock.patch('os.stat')
6708+        mockstat = self.mockstatp.__enter__()
6709+        mockstat.side_effect = self.call_stat
6710+
6711+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6712+        mockfpstat = self.mockfpstatp.__enter__()
6713+        mockfpstat.side_effect = self.call_stat
6714+
6715+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6716+        mockget_available_space = self.mockget_available_space.__enter__()
6717+        mockget_available_space.side_effect = self.call_get_available_space
6718+
6719+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6720+        mockfpexists = self.mockfpexists.__enter__()
6721+        mockfpexists.side_effect = self.call_exists
6722+
6723+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6724+        mocksetContent = self.mocksetContent.__enter__()
6725+        mocksetContent.side_effect = self.call_setContent
6726+
6727     def call_open(self, fname, mode):
6728         assert isinstance(fname, basestring), fname
6729         fnamefp = FilePath(fname)
6730hunk ./src/allmydata/test/test_backends.py 107
6731             # current implementation of DAS backend, and we might want to
6732             # use this information in this test in the future...
6733             return StringIO()
6734+        elif fnamefp == self.shareincomingname:
6735+            print "repr(fnamefp): ", repr(fnamefp)
6736         else:
6737             # Anything else you open inside your subtree appears to be an
6738             # empty file.
6739hunk ./src/allmydata/test/test_backends.py 168
6740         # XXX Good enough for expirer, not sure about elsewhere...
6741         return True
6742 
6743-    def setUp(self):
6744-        msg( "%s.setUp()" % (self,))
6745-        self.storedir = FilePath('teststoredir')
6746-        self.basedir = self.storedir.child('shares')
6747-        self.baseincdir = self.basedir.child('incoming')
6748-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6749-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6750-        self.shareincomingname = self.sharedirincomingname.child('0')
6751-        self.sharefname = self.sharedirfinalname.child('0')
6752-
6753-        self.mocklistdirp = mock.patch('os.listdir')
6754-        mocklistdir = self.mocklistdirp.__enter__()
6755-        mocklistdir.side_effect = self.call_listdir
6756-
6757-        self.mockmkdirp = mock.patch('os.mkdir')
6758-        mockmkdir = self.mockmkdirp.__enter__()
6759-        mockmkdir.side_effect = self.call_mkdir
6760-
6761-        self.mockisdirp = mock.patch('os.path.isdir')
6762-        mockisdir = self.mockisdirp.__enter__()
6763-        mockisdir.side_effect = self.call_isdir
6764-
6765-        self.mockopenp = mock.patch('__builtin__.open')
6766-        mockopen = self.mockopenp.__enter__()
6767-        mockopen.side_effect = self.call_open
6768-
6769-        self.mockstatp = mock.patch('os.stat')
6770-        mockstat = self.mockstatp.__enter__()
6771-        mockstat.side_effect = self.call_stat
6772-
6773-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6774-        mockfpstat = self.mockfpstatp.__enter__()
6775-        mockfpstat.side_effect = self.call_stat
6776-
6777-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6778-        mockget_available_space = self.mockget_available_space.__enter__()
6779-        mockget_available_space.side_effect = self.call_get_available_space
6780-
6781-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6782-        mockfpexists = self.mockfpexists.__enter__()
6783-        mockfpexists.side_effect = self.call_exists
6784-
6785-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6786-        mocksetContent = self.mocksetContent.__enter__()
6787-        mocksetContent.side_effect = self.call_setContent
6788 
6789     def tearDown(self):
6790         msg( "%s.tearDown()" % (self,))
6791hunk ./src/allmydata/test/test_backends.py 239
6792         handling of simultaneous and successive attempts to write the same
6793         share.
6794         """
6795-
6796         mocktime.return_value = 0
6797         # Inspect incoming and fail unless it's empty.
6798         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
6799}
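
jacp18 reworks ImmutableShare's lease bookkeeping to open the share through FilePath.open() inside try/finally instead of holding a plain file handle, keeping the lease count as a big-endian uint32 at offset 0x08. A standalone sketch of the read side, where read_num_leases and sharefp are illustrative names rather than code from the patch:

    import struct
    from twisted.python.filepath import FilePath

    def read_num_leases(sharefp):
        # sharefp is a FilePath pointing at a version-1 immutable share file.
        fh = sharefp.open()
        try:
            fh.seek(0x08)                  # the lease count lives at offset 0x08
            (num_leases,) = struct.unpack(">L", fh.read(4))
        finally:
            fh.close()
        return num_leases
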
6800[jacp19orso
6801wilcoxjg@gmail.com**20110724034230
6802 Ignore-this: f001093c467225c289489636a61935fe
6803] {
6804hunk ./src/allmydata/_auto_deps.py 21
6805     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6806     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6807 
6808-v v v v v v v
6809-    "Twisted >= 11.0",
6810-*************
6811+
6812     # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6813     # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6814     # support asynchronous close.
6815hunk ./src/allmydata/_auto_deps.py 26
6816     "Twisted >= 10.1.0",
6817-^ ^ ^ ^ ^ ^ ^
6818+
6819 
6820     # foolscap < 0.5.1 had a performance bug which spent
6821     # O(N**2) CPU for transferring large mutable files
6822hunk ./src/allmydata/storage/backends/das/core.py 153
6823     LEASE_SIZE = struct.calcsize(">L32s32sL")
6824     sharetype = "immutable"
6825 
6826-    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
6827+    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
6828         """ If max_size is not None then I won't allow more than
6829         max_size to be written to me. If create=True then max_size
6830         must not be None. """
6831hunk ./src/allmydata/storage/backends/das/core.py 167
6832             # touch the file, so later callers will see that we're working on
6833             # it. Also construct the metadata.
6834             assert not finalhome.exists()
6835-            fp_make_dirs(self.incominghome)
6836-            f = self.incominghome
6837+            fp_make_dirs(self.incominghome.parent())
6838             # The second field -- the four-byte share data length -- is no
6839             # longer used as of Tahoe v1.3.0, but we continue to write it in
6840             # there in case someone downgrades a storage server from >=
6841hunk ./src/allmydata/storage/backends/das/core.py 177
6842             # the largest length that can fit into the field. That way, even
6843             # if this does happen, the old < v1.3.0 server will still allow
6844             # clients to read the first part of the share.
6845-            print 'f: ',f
6846-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6847+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6848             self._lease_offset = max_size + 0x0c
6849             self._num_leases = 0
6850         else:
6851hunk ./src/allmydata/storage/backends/das/core.py 182
6852             f = open(self.finalhome, 'rb')
6853-            filesize = os.path.getsize(self.finalhome)
6854             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
6855             f.close()
6856hunk ./src/allmydata/storage/backends/das/core.py 184
6857+            filesize = self.finalhome.getsize()
6858             if version != 1:
6859                 msg = "sharefile %s had version %d but we wanted 1" % \
6860                       (self.finalhome, version)
6861hunk ./src/allmydata/storage/backends/das/core.py 259
6862         f.write(data)
6863         f.close()
6864 
6865-    def _write_lease_record(self, lease_number, lease_info):
6866+    def _write_lease_record(self, f, lease_number, lease_info):
6867         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6868         fh = f.open()
6869hunk ./src/allmydata/storage/backends/das/core.py 262
6870+        print fh
6871         try:
6872             fh.seek(offset)
6873             assert fh.tell() == offset
6874hunk ./src/allmydata/storage/backends/das/core.py 271
6875             fh.close()
6876 
6877     def _read_num_leases(self, f):
6878-        fh = f.open()
6879+        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
6880         try:
6881             fh.seek(0x08)
6882             ro = fh.read(4)
6883hunk ./src/allmydata/storage/backends/das/core.py 275
6884-            print "repr(ro): %s len(ro): %s"  % (repr(ro), len(ro))
6885             (num_leases,) = struct.unpack(">L", ro)
6886         finally:
6887             fh.close()
6888hunk ./src/allmydata/storage/backends/das/core.py 302
6889                 yield LeaseInfo().from_immutable_data(data)
6890 
6891     def add_lease(self, lease_info):
6892-        f = self.incominghome
6893         num_leases = self._read_num_leases(self.incominghome)
6894hunk ./src/allmydata/storage/backends/das/core.py 303
6895-        self._write_lease_record(f, num_leases, lease_info)
6896-        self._write_num_leases(f, num_leases+1)
6897+        self._write_lease_record(self.incominghome, num_leases, lease_info)
6898+        self._write_num_leases(self.incominghome, num_leases+1)
6899         
6900     def renew_lease(self, renew_secret, new_expire_time):
6901         for i,lease in enumerate(self.get_leases()):
6902hunk ./src/allmydata/test/test_backends.py 52
6903         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6904         self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6905         self.shareincomingname = self.sharedirincomingname.child('0')
6906-        self.sharefilename = self.sharedirfinalname.child('0')
6907-        self.sharefilecontents = StringIO(share_data)
6908+        self.sharefinalname = self.sharedirfinalname.child('0')
6909 
6910hunk ./src/allmydata/test/test_backends.py 54
6911-        self.mocklistdirp = mock.patch('os.listdir')
6912-        mocklistdir = self.mocklistdirp.__enter__()
6913-        mocklistdir.side_effect = self.call_listdir
6914+        mocklistdir.side_effect = self.call_listdir  # When the replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir' is called instead.
6915+        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
6916+        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
6917+        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
6918 
6919hunk ./src/allmydata/test/test_backends.py 59
6920-        self.mockmkdirp = mock.patch('os.mkdir')
6921-        mockmkdir = self.mockmkdirp.__enter__()
6922-        mockmkdir.side_effect = self.call_mkdir
6923+        #self.mockmkdirp = mock.patch('os.mkdir')
6924+        #mockmkdir = self.mockmkdirp.__enter__()
6925+        #mockmkdir.side_effect = self.call_mkdir
6926 
6927hunk ./src/allmydata/test/test_backends.py 63
6928-        self.mockisdirp = mock.patch('os.path.isdir')
6929+        self.mockisdirp = mock.patch('FilePath.isdir')
6930         mockisdir = self.mockisdirp.__enter__()
6931         mockisdir.side_effect = self.call_isdir
6932 
6933hunk ./src/allmydata/test/test_backends.py 67
6934-        self.mockopenp = mock.patch('__builtin__.open')
6935+        self.mockopenp = mock.patch('FilePath.open')
6936         mockopen = self.mockopenp.__enter__()
6937         mockopen.side_effect = self.call_open
6938 
6939hunk ./src/allmydata/test/test_backends.py 71
6940-        self.mockstatp = mock.patch('os.stat')
6941+        self.mockstatp = mock.patch('filepath.stat')
6942         mockstat = self.mockstatp.__enter__()
6943         mockstat.side_effect = self.call_stat
6944 
6945hunk ./src/allmydata/test/test_backends.py 91
6946         mocksetContent = self.mocksetContent.__enter__()
6947         mocksetContent.side_effect = self.call_setContent
6948 
6949+    #  The behavior of mocked filesystem using functions
6950     def call_open(self, fname, mode):
6951         assert isinstance(fname, basestring), fname
6952         fnamefp = FilePath(fname)
6953hunk ./src/allmydata/test/test_backends.py 109
6954             # use this information in this test in the future...
6955             return StringIO()
6956         elif fnamefp == self.shareincomingname:
6957-            print "repr(fnamefp): ", repr(fnamefp)
6958+            self.incomingsharefilecontents.closed = False
6959+            return self.incomingsharefilecontents
6960         else:
6961             # Anything else you open inside your subtree appears to be an
6962             # empty file.
6963hunk ./src/allmydata/test/test_backends.py 152
6964         fnamefp = FilePath(fname)
6965         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
6966                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
6967-
6968         msg("%s.call_stat(%s)" % (self, fname,))
6969         mstat = MockStat()
6970         mstat.st_mode = 16893 # a directory
6971hunk ./src/allmydata/test/test_backends.py 166
6972         return False
6973 
6974     def call_setContent(self, inputstring):
6975-        # XXX Good enough for expirer, not sure about elsewhere...
6976-        return True
6977-
6978+        self.incomingsharefilecontents = StringIO(inputstring)
6979 
6980     def tearDown(self):
6981         msg( "%s.tearDown()" % (self,))
6982}
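
In jacp19orso the mocked filesystem starts remembering what the code under test writes: call_setContent captures the bytes handed to FilePath.setContent in a StringIO, and call_open returns that same StringIO when the incoming share file is opened again. A toy illustration of that capture-and-replay idea (MockIncomingFile is an invented name, not part of the patch):

    from StringIO import StringIO

    class MockIncomingFile(object):
        def __init__(self):
            self.incomingsharefilecontents = None

        def call_setContent(self, inputstring):
            # Remember whatever the code under test "wrote".
            self.incomingsharefilecontents = StringIO(inputstring)

        def call_open(self):
            # Hand the captured contents back, as the patched call_open above does.
            self.incomingsharefilecontents.closed = False
            return self.incomingsharefilecontents
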
6983[jacp19
6984wilcoxjg@gmail.com**20110727080553
6985 Ignore-this: 851b1ebdeeee712abfbda557af142726
6986] {
6987hunk ./src/allmydata/storage/backends/das/core.py 1
6988-import os, re, weakref, struct, time, stat
6989+import re, weakref, struct, time, stat
6990 from twisted.application import service
6991 from twisted.python.filepath import UnlistableError
6992hunk ./src/allmydata/storage/backends/das/core.py 4
6993+from twisted.python import filepath
6994 from twisted.python.filepath import FilePath
6995 from zope.interface import implements
6996 
6997hunk ./src/allmydata/storage/backends/das/core.py 50
6998         self._setup_lease_checkerf(expiration_policy)
6999 
7000     def _setup_storage(self, storedir, readonly, reserved_space):
7001-        precondition(isinstance(storedir, FilePath)) 
7002+        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7003         self.storedir = storedir
7004         self.readonly = readonly
7005         self.reserved_space = int(reserved_space)
7006hunk ./src/allmydata/storage/backends/das/core.py 195
7007         self._data_offset = 0xc
7008 
7009     def close(self):
7010-        fileutil.make_dirs(os.path.dirname(self.finalhome))
7011-        fileutil.rename(self.incominghome, self.finalhome)
7012+        fileutil.fp_make_dirs(self.finalhome.parent())
7013+        self.incominghome.moveTo(self.finalhome)
7014         try:
7015             # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
7016             # We try to delete the parent (.../ab/abcde) to avoid leaving
7017hunk ./src/allmydata/storage/backends/das/core.py 209
7018             # their children to know when they should do the rmdir. This
7019             # approach is simpler, but relies on os.rmdir refusing to delete
7020             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
7021-            #print "os.path.dirname(self.incominghome): "
7022-            #print os.path.dirname(self.incominghome)
7023-            os.rmdir(os.path.dirname(self.incominghome))
7024+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
7025             # we also delete the grandparent (prefix) directory, .../ab ,
7026             # again to avoid leaving directories lying around. This might
7027             # fail if there is another bucket open that shares a prefix (like
7028hunk ./src/allmydata/storage/backends/das/core.py 214
7029             # ab/abfff).
7030-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
7031+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
7032             # we leave the great-grandparent (incoming/) directory in place.
7033         except EnvironmentError:
7034             # ignore the "can't rmdir because the directory is not empty"
7035hunk ./src/allmydata/storage/backends/das/core.py 224
7036         pass
7037         
7038     def stat(self):
7039-        return os.stat(self.finalhome)[stat.ST_SIZE]
7040-        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
7041+        return self.finalhome.getsize()
7042 
7043     def get_shnum(self):
7044         return self.shnum
7045hunk ./src/allmydata/storage/backends/das/core.py 230
7046 
7047     def unlink(self):
7048-        os.unlink(self.finalhome)
7049+        self.finalhome.remove()
7050 
7051     def read_share_data(self, offset, length):
7052         precondition(offset >= 0)
7053hunk ./src/allmydata/storage/backends/das/core.py 237
7054         # Reads beyond the end of the data are truncated. Reads that start
7055         # beyond the end of the data return an empty string.
7056         seekpos = self._data_offset+offset
7057-        fsize = os.path.getsize(self.finalhome)
7058+        fsize = self.finalhome.getsize()
7059         actuallength = max(0, min(length, fsize-seekpos))
7060         if actuallength == 0:
7061             return ""
7062hunk ./src/allmydata/storage/backends/das/core.py 241
7063-        f = open(self.finalhome, 'rb')
7064-        f.seek(seekpos)
7065-        return f.read(actuallength)
7066+        fh = self.finalhome.open()  # FilePath.open() with no mode argument opens for binary read
7067+        try:
7068+            fh.seek(seekpos)
7069+            sharedata = fh.read(actuallength)
7070+        finally:
7071+            fh.close()
7072+        return sharedata
7073 
7074     def write_share_data(self, offset, data):
7075         length = len(data)
7076hunk ./src/allmydata/storage/backends/das/core.py 264
7077     def _write_lease_record(self, f, lease_number, lease_info):
7078         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7079         fh = f.open()
7080-        print fh
7081         try:
7082             fh.seek(offset)
7083             assert fh.tell() == offset
7084hunk ./src/allmydata/storage/backends/das/core.py 269
7085             fh.write(lease_info.to_immutable_data())
7086         finally:
7087+            print dir(fh)
7088             fh.close()
7089 
7090     def _read_num_leases(self, f):
7091hunk ./src/allmydata/storage/backends/das/core.py 273
7092-        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
7093+        fh = f.open() #XXX  Should be mocking FilePath.open()
7094         try:
7095             fh.seek(0x08)
7096             ro = fh.read(4)
7097hunk ./src/allmydata/storage/backends/das/core.py 280
7098             (num_leases,) = struct.unpack(">L", ro)
7099         finally:
7100             fh.close()
7101+            print "end of _read_num_leases"
7102         return num_leases
7103 
7104     def _write_num_leases(self, f, num_leases):
7105hunk ./src/allmydata/storage/crawler.py 6
7106 from twisted.internet import reactor
7107 from twisted.application import service
7108 from allmydata.storage.common import si_b2a
7109-from allmydata.util import fileutil
7110 
7111 class TimeSliceExceeded(Exception):
7112     pass
7113hunk ./src/allmydata/storage/crawler.py 478
7114             old_cycle,buckets = self.state["storage-index-samples"][prefix]
7115             if old_cycle != cycle:
7116                 del self.state["storage-index-samples"][prefix]
7117-
7118hunk ./src/allmydata/test/test_backends.py 1
7119+import os
7120 from twisted.trial import unittest
7121 from twisted.python.filepath import FilePath
7122 from allmydata.util.log import msg
7123hunk ./src/allmydata/test/test_backends.py 9
7124 from allmydata.test.common_util import ReallyEqualMixin
7125 from allmydata.util.assertutil import _assert
7126 import mock
7127+from mock import Mock
7128 
7129 # This is the code that we're going to be testing.
7130 from allmydata.storage.server import StorageServer
7131hunk ./src/allmydata/test/test_backends.py 40
7132     def __init__(self):
7133         self.st_mode = None
7134 
7135+class MockFilePath:
7136+    def __init__(self, PathString):
7137+        self.PathName = PathString
7138+    def child(self, ChildString):
7139+        return MockFilePath(os.path.join(self.PathName, ChildString))
7140+    def parent(self):
7141+        return MockFilePath(os.path.dirname(self.PathName))
7142+    def makedirs(self):
7143+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7144+        pass
7145+    def isdir(self):
7146+        return True
7147+    def remove(self):
7148+        pass
7149+    def children(self):
7150+        return []
7151+    def exists(self):
7152+        return False
7153+    def setContent(self, ContentString):
7154+        self.File = MockFile(ContentString)
7155+    def open(self):
7156+        return self.File.open()
7157+
7158+class MockFile:
7159+    def __init__(self, ContentString):
7160+        self.Contents = ContentString
7161+    def open(self):
7162+        return self
7163+    def close(self):
7164+        pass
7165+    def seek(self, position):
7166+        pass
7167+    def read(self, amount):
7168+        pass
7169+
7170+
7171+class MockBCC:
7172+    def setServiceParent(self, Parent):
7173+        pass
7174+
7175+class MockLCC:
7176+    def setServiceParent(self, Parent):
7177+        pass
7178+
7179 class MockFiles(unittest.TestCase):
7180     """ I simulate a filesystem that the code under test can use. I flag the
7181     code under test if it reads or writes outside of its prescribed
7182hunk ./src/allmydata/test/test_backends.py 91
7183     implementation of DAS backend needs. """
7184 
7185     def setUp(self):
7186+        # Make patcher, patch, and make effects for fs using functions.
7187         msg( "%s.setUp()" % (self,))
7188hunk ./src/allmydata/test/test_backends.py 93
7189-        self.storedir = FilePath('teststoredir')
7190+        self.storedir = MockFilePath('teststoredir')
7191         self.basedir = self.storedir.child('shares')
7192         self.baseincdir = self.basedir.child('incoming')
7193         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
7194hunk ./src/allmydata/test/test_backends.py 101
7195         self.shareincomingname = self.sharedirincomingname.child('0')
7196         self.sharefinalname = self.sharedirfinalname.child('0')
7197 
7198-        # Make patcher, patch, and make effects for fs using functions.
7199-        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
7200-        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
7201-        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
7202-
7203-        #self.mockmkdirp = mock.patch('os.mkdir')
7204-        #mockmkdir = self.mockmkdirp.__enter__()
7205-        #mockmkdir.side_effect = self.call_mkdir
7206-
7207-        self.mockisdirp = mock.patch('FilePath.isdir')
7208-        mockisdir = self.mockisdirp.__enter__()
7209-        mockisdir.side_effect = self.call_isdir
7210+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
7211+        FakePath = self.FilePathFake.__enter__()
7212 
7213hunk ./src/allmydata/test/test_backends.py 104
7214-        self.mockopenp = mock.patch('FilePath.open')
7215-        mockopen = self.mockopenp.__enter__()
7216-        mockopen.side_effect = self.call_open
7217+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
7218+        FakeBCC = self.BCountingCrawler.__enter__()
7219+        FakeBCC.side_effect = self.call_FakeBCC
7220 
7221hunk ./src/allmydata/test/test_backends.py 108
7222-        self.mockstatp = mock.patch('filepath.stat')
7223-        mockstat = self.mockstatp.__enter__()
7224-        mockstat.side_effect = self.call_stat
7225+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
7226+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
7227+        FakeLCC.side_effect = self.call_FakeLCC
7228 
7229hunk ./src/allmydata/test/test_backends.py 112
7230-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
7231-        mockfpstat = self.mockfpstatp.__enter__()
7232-        mockfpstat.side_effect = self.call_stat
7233+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7234+        GetSpace = self.get_available_space.__enter__()
7235+        GetSpace.side_effect = self.call_get_available_space
7236 
7237hunk ./src/allmydata/test/test_backends.py 116
7238-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7239-        mockget_available_space = self.mockget_available_space.__enter__()
7240-        mockget_available_space.side_effect = self.call_get_available_space
7241+    def call_FakeBCC(self, StateFile):
7242+        return MockBCC()
7243 
7244hunk ./src/allmydata/test/test_backends.py 119
7245-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
7246-        mockfpexists = self.mockfpexists.__enter__()
7247-        mockfpexists.side_effect = self.call_exists
7248-
7249-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
7250-        mocksetContent = self.mocksetContent.__enter__()
7251-        mocksetContent.side_effect = self.call_setContent
7252-
7253-    #  The behavior of mocked filesystem using functions
7254-    def call_open(self, fname, mode):
7255-        assert isinstance(fname, basestring), fname
7256-        fnamefp = FilePath(fname)
7257-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7258-                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
7259-
7260-        if fnamefp == self.storedir.child('bucket_counter.state'):
7261-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
7262-        elif fnamefp == self.storedir.child('lease_checker.state'):
7263-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
7264-        elif fnamefp == self.storedir.child('lease_checker.history'):
7265-            # This is separated out from the else clause below just because
7266-            # we know this particular file is going to be used by the
7267-            # current implementation of DAS backend, and we might want to
7268-            # use this information in this test in the future...
7269-            return StringIO()
7270-        elif fnamefp == self.shareincomingname:
7271-            self.incomingsharefilecontents.closed = False
7272-            return self.incomingsharefilecontents
7273-        else:
7274-            # Anything else you open inside your subtree appears to be an
7275-            # empty file.
7276-            return StringIO()
7277-
7278-    def call_isdir(self, fname):
7279-        fnamefp = FilePath(fname)
7280-        return fnamefp.isdir()
7281-
7282-        self.failUnless(self.storedir == self or self.storedir in self.parents(),
7283-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
7284-
7285-        # The first two cases are separate from the else clause below just
7286-        # because we know that the current implementation of the DAS backend
7287-        # inspects these two directories and we might want to make use of
7288-        # that information in the tests in the future...
7289-        if self == self.storedir.child('shares'):
7290-            return True
7291-        elif self == self.storedir.child('shares').child('incoming'):
7292-            return True
7293-        else:
7294-            # Anything else you open inside your subtree appears to be a
7295-            # directory.
7296-            return True
7297-
7298-    def call_mkdir(self, fname, mode):
7299-        fnamefp = FilePath(fname)
7300-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7301-                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
7302-        self.failUnlessEqual(0777, mode)
7303+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
7304+        return MockLCC()
7305 
7306     def call_listdir(self, fname):
7307         fnamefp = FilePath(fname)
7308hunk ./src/allmydata/test/test_backends.py 150
7309 
7310     def tearDown(self):
7311         msg( "%s.tearDown()" % (self,))
7312-        self.mocksetContent.__exit__()
7313-        self.mockfpexists.__exit__()
7314-        self.mockget_available_space.__exit__()
7315-        self.mockfpstatp.__exit__()
7316-        self.mockstatp.__exit__()
7317-        self.mockopenp.__exit__()
7318-        self.mockisdirp.__exit__()
7319-        self.mockmkdirp.__exit__()
7320-        self.mocklistdirp.__exit__()
7321-
7322+        for patcher in (self.FilePathFake, self.BCountingCrawler, self.LeaseCheckingCrawler, self.get_available_space):
7323+            patcher.__exit__()
7324 
7325 expiration_policy = {'enabled' : False,
7326                      'mode' : 'age',
7327hunk ./src/allmydata/test/test_backends.py 222
7328         # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7329         
7330         # Attempt to create a second share writer with the same sharenum.
7331-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7332+        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7333 
7334         # Show that no sharewriter results from a remote_allocate_buckets
7335         # with the same si and sharenum, until BucketWriter.remote_close()
7336hunk ./src/allmydata/test/test_backends.py 227
7337         # has been called.
7338-        self.failIf(bsa)
7339+        # self.failIf(bsa)
7340 
7341         # Test allocated size.
7342hunk ./src/allmydata/test/test_backends.py 230
7343-        spaceint = self.ss.allocated_size()
7344-        self.failUnlessReallyEqual(spaceint, 1)
7345+        # spaceint = self.ss.allocated_size()
7346+        # self.failUnlessReallyEqual(spaceint, 1)
7347 
7348         # Write 'a' to shnum 0. Only tested together with close and read.
7349hunk ./src/allmydata/test/test_backends.py 234
7350-        bs[0].remote_write(0, 'a')
7351+        # bs[0].remote_write(0, 'a')
7352         
7353         # Preclose: Inspect final, failUnless nothing there.
7354hunk ./src/allmydata/test/test_backends.py 237
7355-        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7356-        bs[0].remote_close()
7357+        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7358+        # bs[0].remote_close()
7359 
7360         # Postclose: (Omnibus) failUnless written data is in final.
7361hunk ./src/allmydata/test/test_backends.py 241
7362-        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7363-        self.failUnlessReallyEqual(len(sharesinfinal), 1)
7364-        contents = sharesinfinal[0].read_share_data(0, 73)
7365-        self.failUnlessReallyEqual(contents, client_data)
7366+        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7367+        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
7368+        # contents = sharesinfinal[0].read_share_data(0, 73)
7369+        # self.failUnlessReallyEqual(contents, client_data)
7370 
7371         # Exercise the case that the share we're asking to allocate is
7372         # already (completely) uploaded.
7373hunk ./src/allmydata/test/test_backends.py 248
7374-        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7375+        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7376         
7377     @mock.patch('time.time')
7378     @mock.patch('allmydata.util.fileutil.get_available_space')
7379}
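The test half of jacp19 above retires those per-function patchers in favor of whole-object fakes: core.FilePath is replaced outright with MockFilePath via mock.patch(..., new=MockFilePath), while the BucketCountingCrawler and LeaseCheckingCrawler patchers keep MagicMocks whose side_effect factories hand back MockBCC and MockLCC instances. The patch target is the name as core.py imports it ('allmydata.storage.backends.das.core.FilePath'), not 'twisted.python.filepath.FilePath', because mock.patch must replace the name where it is looked up. Here is a self-contained sketch of the new= substitution; tempfile.NamedTemporaryFile and the fake class are illustration-only stand-ins.

    import mock
    import tempfile

    class FakeNamedTemporaryFile(object):
        """ I am a toy whole-class stand-in, analogous to MockFilePath. """
        def __init__(self, *args, **kwargs):
            self.name = '/fake/tmpfile'
            self.written = []
        def write(self, data):
            self.written.append(data)
        def close(self):
            pass

    # new= installs the fake class itself instead of a MagicMock, so every
    # construction done through the patched name builds our fake.
    patcher = mock.patch('tempfile.NamedTemporaryFile', new=FakeNamedTemporaryFile)
    patcher.__enter__()
    try:
        handle = tempfile.NamedTemporaryFile()
        handle.write('hello')
        assert handle.name == '/fake/tmpfile'
        assert handle.written == ['hello']
    finally:
        patcher.__exit__()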
7380[jacp20
7381wilcoxjg@gmail.com**20110728072514
7382 Ignore-this: 6a03289023c3c79b8d09e2711183ea82
7383] {
7384hunk ./src/allmydata/storage/backends/das/core.py 52
7385     def _setup_storage(self, storedir, readonly, reserved_space):
7386         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7387         self.storedir = storedir
7388+        print "self.storedir: ", self.storedir
7389         self.readonly = readonly
7390         self.reserved_space = int(reserved_space)
7391         self.sharedir = self.storedir.child("shares")
7392hunk ./src/allmydata/storage/backends/das/core.py 85
7393 
7394     def get_incoming_shnums(self, storageindex):
7395         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7396-        incomingdir = si_si2dir(self.incomingdir, storageindex)
7397+        print "self.incomingdir.children(): ", self.incomingdir.children()
7398+        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7399+        incomingthissi = si_si2dir(self.incomingdir, storageindex)
7400+        print "incomingthissi.children(): ", incomingthissi.children()
7401         try:
7402hunk ./src/allmydata/storage/backends/das/core.py 90
7403-            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
7404+            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7405             shnums = [ int(fp.basename) for fp in childfps ]
7406             return frozenset(shnums)
7407         except UnlistableError:
7408hunk ./src/allmydata/storage/backends/das/core.py 117
7409 
7410     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7411         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7412-        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
7413+        incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7414         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7415         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7416         return bw
7417hunk ./src/allmydata/storage/backends/das/core.py 183
7418             # if this does happen, the old < v1.3.0 server will still allow
7419             # clients to read the first part of the share.
7420             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7421+            print "ImmutableShare: wrote incoming share header"
7422             self._lease_offset = max_size + 0x0c
7423             self._num_leases = 0
7424         else:
7425hunk ./src/allmydata/storage/backends/das/core.py 274
7426             assert fh.tell() == offset
7427             fh.write(lease_info.to_immutable_data())
7428         finally:
7429-            print dir(fh)
7430             fh.close()
7431 
7432     def _read_num_leases(self, f):
7433hunk ./src/allmydata/storage/backends/das/core.py 284
7434             (num_leases,) = struct.unpack(">L", ro)
7435         finally:
7436             fh.close()
7437-            print "end of _read_num_leases"
7438         return num_leases
7439 
7440     def _write_num_leases(self, f, num_leases):
7441hunk ./src/allmydata/storage/common.py 21
7442 
7443 def si_si2dir(startfp, storageindex):
7444     sia = si_b2a(storageindex)
7445-    return startfp.child(sia[:2]).child(sia)
7446+    print "si_si2dir: sia =", sia
7447+    print "si_si2dir: startfp =", startfp
7448+    print "si_si2dir: startfp.pathname =", startfp.pathname
7449+    newfp = startfp.child(sia[:2])
7450+    print "si_si2dir: newfp =", newfp
7451+    return newfp.child(sia)
7452hunk ./src/allmydata/test/test_backends.py 5
7453 from twisted.trial import unittest
7454 from twisted.python.filepath import FilePath
7455 from allmydata.util.log import msg
7456-from StringIO import StringIO
7457+from tempfile import TemporaryFile
7458 from allmydata.test.common_util import ReallyEqualMixin
7459 from allmydata.util.assertutil import _assert
7460 import mock
7461hunk ./src/allmydata/test/test_backends.py 34
7462     cancelsecret + expirationtime + nextlease
7463 share_data = containerdata + client_data
7464 testnodeid = 'testnodeidxxxxxxxxxx'
7465+fakefilepaths = {}
7466 
7467 
7468 class MockStat:
7469hunk ./src/allmydata/test/test_backends.py 41
7470     def __init__(self):
7471         self.st_mode = None
7472 
7473+
7474 class MockFilePath:
7475hunk ./src/allmydata/test/test_backends.py 43
7476-    def __init__(self, PathString):
7477-        self.PathName = PathString
7478-    def child(self, ChildString):
7479-        return MockFilePath(os.path.join(self.PathName, ChildString))
7480+    def __init__(self, pathstring):
7481+        self.pathname = pathstring
7482+        self.spawn = {}
7483+        self.antecedent = os.path.dirname(self.pathname)
7484+    def child(self, childstring):
7485+        arg2child = os.path.join(self.pathname, childstring)
7486+        print "arg2child: ", arg2child
7487+        if fakefilepaths.has_key(arg2child):
7488+            child = fakefilepaths[arg2child]
7489+            print "MockFilePath.child: reusing existing fake for", arg2child
7490+        else:
7491+            child = MockFilePath(arg2child)
7492+        return child
7493     def parent(self):
7494hunk ./src/allmydata/test/test_backends.py 57
7495-        return MockFilePath(os.path.dirname(self.PathName))
7496+        if fakefilepaths.has_key(self.antecedent):
7497+            parent = fakefilepaths[self.antecedent]
7498+        else:
7499+            parent = MockFilePath(self.antecedent)
7500+        return parent
7501+    def children(self):
7502+        childrenfromffs = [ fp for fp in fakefilepaths.values() if fp.antecedent == self.pathname ]
7503+        return list(frozenset(childrenfromffs) | frozenset(self.spawn.values()))
7504     def makedirs(self):
7505         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7506         pass
7507hunk ./src/allmydata/test/test_backends.py 72
7508         return True
7509     def remove(self):
7510         pass
7511-    def children(self):
7512-        return []
7513     def exists(self):
7514         return False
7515hunk ./src/allmydata/test/test_backends.py 74
7516-    def setContent(self, ContentString):
7517-        self.File = MockFile(ContentString)
7518     def open(self):
7519         return self.File.open()
7520hunk ./src/allmydata/test/test_backends.py 76
7521+    def setparents(self):
7522+        antecedents = []
7523+        def f(fps, antecedents):
7524+            newfps = os.path.split(fps)[0]
7525+            if newfps:
7526+                antecedents.append(newfps)
7527+                f(newfps, antecedents)
7528+        f(self.pathname, antecedents)
7529+        for fps in antecedents:
7530+            if not fakefilepaths.has_key(fps):
7531+                fakefilepaths[fps] = MockFilePath(fps)
7532+    def setContent(self, contentstring):
7533+        print "I am self.pathname: ", self.pathname
7534+        fakefilepaths[self.pathname] = self
7535+        self.File = MockFile(contentstring)
7536+        self.setparents()
7537+    def create(self):
7538+        fakefilepaths[self.pathname] = self
7539+        self.setparents()
7540+           
7541 
7542 class MockFile:
7543hunk ./src/allmydata/test/test_backends.py 98
7544-    def __init__(self, ContentString):
7545-        self.Contents = ContentString
7546+    def __init__(self, contentstring):
7547+        self.buffer = contentstring
7548+        self.pos = 0
7549     def open(self):
7550         return self
7551hunk ./src/allmydata/test/test_backends.py 103
7552+    def write(self, instring):
7553+        begin = self.pos
7554+        padlen = begin - len(self.buffer)
7555+        if padlen > 0:
7556+            self.buffer += '\x00' * padlen
7557+        end = self.pos + len(instring)
7558+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7559+        self.pos = end
7560     def close(self):
7561         pass
7562hunk ./src/allmydata/test/test_backends.py 113
7563-    def seek(self, position):
7564-        pass
7565-    def read(self, amount):
7566-        pass
7567+    def seek(self, pos):
7568+        self.pos = pos
7569+    def read(self, numberbytes):
7570+        return self.buffer[self.pos:self.pos+numberbytes]
7571+    def tell(self):
7572+        return self.pos
7573 
7574 
7575 class MockBCC:
7576hunk ./src/allmydata/test/test_backends.py 125
7577     def setServiceParent(self, Parent):
7578         pass
7579 
7580+
7581 class MockLCC:
7582     def setServiceParent(self, Parent):
7583         pass
7584hunk ./src/allmydata/test/test_backends.py 130
7585 
7586+
7587 class MockFiles(unittest.TestCase):
7588     """ I simulate a filesystem that the code under test can use. I flag the
7589     code under test if it reads or writes outside of its prescribed
7590hunk ./src/allmydata/test/test_backends.py 193
7591         return False
7592 
7593     def call_setContent(self, inputstring):
7594-        self.incomingsharefilecontents = StringIO(inputstring)
7595+        self.incomingsharefilecontents = TemporaryFile(inputstring)
7596 
7597     def tearDown(self):
7598         msg( "%s.tearDown()" % (self,))
7599hunk ./src/allmydata/test/test_backends.py 206
7600                      'cutoff_date' : None,
7601                      'sharetypes' : None}
7602 
7603+
7604 class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
7605     """ NullBackend is just for testing and executable documentation, so
7606     this test is actually a test of StorageServer in which we're using
7607hunk ./src/allmydata/test/test_backends.py 229
7608         self.failIf(mockopen.called)
7609         self.failIf(mockmkdir.called)
7610 
7611+
7612 class TestServerConstruction(MockFiles, ReallyEqualMixin):
7613     def test_create_server_fs_backend(self):
7614         """ This tests whether a server instance can be constructed with a
7615hunk ./src/allmydata/test/test_backends.py 238
7616 
7617         StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
7618 
7619+
7620 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
7621     """ This tests both the StorageServer and the DAS backend together. """
7622     
7623hunk ./src/allmydata/test/test_backends.py 262
7624         """
7625         mocktime.return_value = 0
7626         # Inspect incoming and fail unless it's empty.
7627-        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7628-        self.failUnlessReallyEqual(incomingset, frozenset())
7629+        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7630+        # self.failUnlessReallyEqual(incomingset, frozenset())
7631         
7632         # Populate incoming with the sharenum: 0.
7633         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7634hunk ./src/allmydata/test/test_backends.py 269
7635 
7636         # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
7637-        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7638+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7639         
7640         # Attempt to create a second share writer with the same sharenum.
7641         # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7642hunk ./src/allmydata/test/test_backends.py 274
7643 
7644+        # print bsa
7645         # Show that no sharewriter results from a remote_allocate_buckets
7646         # with the same si and sharenum, until BucketWriter.remote_close()
7647         # has been called.
7648hunk ./src/allmydata/test/test_backends.py 339
7649             self.failUnlessEqual(mode[0], 'r', mode)
7650             self.failUnless('b' in mode, mode)
7651 
7652-            return StringIO(share_data)
7653+            return TemporaryFile(share_data)
7654         mockopen.side_effect = call_open
7655 
7656         datalen = len(share_data)
7657}
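The MockFile class rewritten in jacp20 above is meant to act like a real binary file handle: seek() moves a position, write() overwrites in place and NUL-pads any gap when the position is past the current end, read() returns bytes from the position, and tell() reports it. As a reference for those semantics (this is not part of the patch, just the behavior being imitated), the same sequence exercised against a real temporary file:

    import tempfile

    # Reference behavior that MockFile's seek/write/read/tell aim to copy.
    fh = tempfile.TemporaryFile()
    fh.write('abcdef')
    fh.seek(2)
    fh.write('XY')              # overwrite in place
    fh.seek(0)
    assert fh.read() == 'abXYef'
    fh.seek(10)
    fh.write('Z')               # writing past the end: the gap reads back as NULs
    fh.seek(0)
    assert fh.read() == 'abXYef' + '\x00' * 4 + 'Z'
    assert fh.tell() == 11
    fh.close()

A MockFile that matches this behavior lets read_share_data() and the lease-record helpers in core.py exercise their seek/read/write paths against the fake filesystem without special-casing.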
7658
7659Context:
7660
7661[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
7662david-sarah@jacaranda.org**20110721234941
7663 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
7664]
7665[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
7666david-sarah@jacaranda.org**20110722000320
7667 Ignore-this: 55cd558b791526113db3f83c00ec328a
7668]
7669[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
7670david-sarah@jacaranda.org**20110721233658
7671 Ignore-this: 81b41745477163c9b39c0b59db91cc62
7672]
7673[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
7674david-sarah@jacaranda.org**20110722035402
7675 Ignore-this: 5d03f544c4154f088e26c7107494bf39
7676]
7677[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
7678david-sarah@jacaranda.org**20110722024907
7679 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
7680]
7681[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
7682david-sarah@jacaranda.org**20110718005949
7683 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
7684]
7685[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
7686david-sarah@jacaranda.org**20110717194315
7687 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
7688]
7689[README.txt: say that quickstart.rst is in the docs directory.
7690david-sarah@jacaranda.org**20110717192400
7691 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
7692]
7693[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
7694zooko@zooko.com**20110717114226
7695 Ignore-this: df222120d41447ce4102616921626c82
7696 fixes #1383
7697]
7698[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
7699david-sarah@jacaranda.org**20110716181813
7700 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
7701]
7702[docs: add missing link in NEWS.rst
7703zooko@zooko.com**20110712153307
7704 Ignore-this: be7b7eb81c03700b739daa1027d72b35
7705]
7706[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
7707zooko@zooko.com**20110712153229
7708 Ignore-this: 723c4f9e2211027c79d711715d972c5
7709 Also remove a couple of vestigial references to figleaf, which is long gone.
7710 fixes #1409 (remove contrib/fuse)
7711]
7712[add Protovis.js-based download-status timeline visualization
7713Brian Warner <warner@lothar.com>**20110629222606
7714 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
7715 
7716 provide status overlap info on the webapi t=json output, add decode/decrypt
7717 rate tooltips, add zoomin/zoomout buttons
7718]
7719[add more download-status data, fix tests
7720Brian Warner <warner@lothar.com>**20110629222555
7721 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
7722]
7723[prepare for viz: improve DownloadStatus events
7724Brian Warner <warner@lothar.com>**20110629222542
7725 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
7726 
7727 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
7728]
7729[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
7730zooko@zooko.com**20110629185711
7731 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
7732]
7733[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
7734david-sarah@jacaranda.org**20110130235809
7735 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
7736]
7737[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
7738david-sarah@jacaranda.org**20110626054124
7739 Ignore-this: abb864427a1b91bd10d5132b4589fd90
7740]
7741[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
7742david-sarah@jacaranda.org**20110623205528
7743 Ignore-this: c63e23146c39195de52fb17c7c49b2da
7744]
7745[Rename test_package_initialization.py to (much shorter) test_import.py .
7746Brian Warner <warner@lothar.com>**20110611190234
7747 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
7748 
7749 The former name was making my 'ls' listings hard to read, by forcing them
7750 down to just two columns.
7751]
7752[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
7753zooko@zooko.com**20110611163741
7754 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
7755 Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
7756 fixes #1412
7757]
7758[wui: right-align the size column in the WUI
7759zooko@zooko.com**20110611153758
7760 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
7761 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
7762 fixes #1412
7763]
7764[docs: three minor fixes
7765zooko@zooko.com**20110610121656
7766 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
7767 CREDITS for arc for stats tweak
7768 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
7769 English usage tweak
7770]
7771[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
7772david-sarah@jacaranda.org**20110609223719
7773 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
7774]
7775[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
7776wilcoxjg@gmail.com**20110527120135
7777 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
7778 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
7779 NEWS.rst, stats.py: documentation of change to get_latencies
7780 stats.rst: now documents percentile modification in get_latencies
7781 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
7782 fixes #1392
7783]
7784[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
7785david-sarah@jacaranda.org**20110517011214
7786 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
7787]
7788[docs: convert NEWS to NEWS.rst and change all references to it.
7789david-sarah@jacaranda.org**20110517010255
7790 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
7791]
7792[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
7793david-sarah@jacaranda.org**20110512140559
7794 Ignore-this: 784548fc5367fac5450df1c46890876d
7795]
7796[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
7797david-sarah@jacaranda.org**20110130164923
7798 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
7799]
7800[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
7801zooko@zooko.com**20110128142006
7802 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
7803 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
7804]
7805[M-x whitespace-cleanup
7806zooko@zooko.com**20110510193653
7807 Ignore-this: dea02f831298c0f65ad096960e7df5c7
7808]
7809[docs: fix typo in running.rst, thanks to arch_o_median
7810zooko@zooko.com**20110510193633
7811 Ignore-this: ca06de166a46abbc61140513918e79e8
7812]
7813[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
7814david-sarah@jacaranda.org**20110204204902
7815 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
7816]
7817[relnotes.txt: forseeable -> foreseeable. refs #1342
7818david-sarah@jacaranda.org**20110204204116
7819 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
7820]
7821[replace remaining .html docs with .rst docs
7822zooko@zooko.com**20110510191650
7823 Ignore-this: d557d960a986d4ac8216d1677d236399
7824 Remove install.html (long since deprecated).
7825 Also replace some obsolete references to install.html with references to quickstart.rst.
7826 Fix some broken internal references within docs/historical/historical_known_issues.txt.
7827 Thanks to Ravi Pinjala and Patrick McDonald.
7828 refs #1227
7829]
7830[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
7831zooko@zooko.com**20110428055232
7832 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
7833]
7834[munin tahoe_files plugin: fix incorrect file count
7835francois@ctrlaltdel.ch**20110428055312
7836 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
7837 fixes #1391
7838]
7839[corrected "k must never be smaller than N" to "k must never be greater than N"
7840secorp@allmydata.org**20110425010308
7841 Ignore-this: 233129505d6c70860087f22541805eac
7842]
7843[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
7844david-sarah@jacaranda.org**20110411190738
7845 Ignore-this: 7847d26bc117c328c679f08a7baee519
7846]
7847[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
7848david-sarah@jacaranda.org**20110410155844
7849 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
7850]
7851[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
7852david-sarah@jacaranda.org**20110410155705
7853 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
7854]
7855[remove unused variable detected by pyflakes
7856zooko@zooko.com**20110407172231
7857 Ignore-this: 7344652d5e0720af822070d91f03daf9
7858]
7859[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
7860david-sarah@jacaranda.org**20110401202750
7861 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
7862]
7863[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
7864Brian Warner <warner@lothar.com>**20110325232511
7865 Ignore-this: d5307faa6900f143193bfbe14e0f01a
7866]
7867[control.py: remove all uses of s.get_serverid()
7868warner@lothar.com**20110227011203
7869 Ignore-this: f80a787953bd7fa3d40e828bde00e855
7870]
7871[web: remove some uses of s.get_serverid(), not all
7872warner@lothar.com**20110227011159
7873 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
7874]
7875[immutable/downloader/fetcher.py: remove all get_serverid() calls
7876warner@lothar.com**20110227011156
7877 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
7878]
7879[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
7880warner@lothar.com**20110227011153
7881 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
7882 
7883 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
7884 _shares_from_server dict was being popped incorrectly (using shnum as the
7885 index instead of serverid). I'm still thinking through the consequences of
7886 this bug. It was probably benign and really hard to detect. I think it would
7887 cause us to incorrectly believe that we're pulling too many shares from a
7888 server, and thus prefer a different server rather than asking for a second
7889 share from the first server. The diversity code is intended to spread out the
7890 number of shares simultaneously being requested from each server, but with
7891 this bug, it might be spreading out the total number of shares requested at
7892 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
7893 segment, so the effect doesn't last very long).
7894]
7895[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
7896warner@lothar.com**20110227011150
7897 Ignore-this: d8d56dd8e7b280792b40105e13664554
7898 
7899 test_download.py: create+check MyShare instances better, make sure they share
7900 Server objects, now that finder.py cares
7901]
7902[immutable/downloader/finder.py: reduce use of get_serverid(), one left
7903warner@lothar.com**20110227011146
7904 Ignore-this: 5785be173b491ae8a78faf5142892020
7905]
7906[immutable/offloaded.py: reduce use of get_serverid() a bit more
7907warner@lothar.com**20110227011142
7908 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
7909]
7910[immutable/upload.py: reduce use of get_serverid()
7911warner@lothar.com**20110227011138
7912 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
7913]
7914[immutable/checker.py: remove some uses of s.get_serverid(), not all
7915warner@lothar.com**20110227011134
7916 Ignore-this: e480a37efa9e94e8016d826c492f626e
7917]
7918[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
7919warner@lothar.com**20110227011132
7920 Ignore-this: 6078279ddf42b179996a4b53bee8c421
7921 MockIServer stubs
7922]
7923[upload.py: rearrange _make_trackers a bit, no behavior changes
7924warner@lothar.com**20110227011128
7925 Ignore-this: 296d4819e2af452b107177aef6ebb40f
7926]
7927[happinessutil.py: finally rename merge_peers to merge_servers
7928warner@lothar.com**20110227011124
7929 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
7930]
7931[test_upload.py: factor out FakeServerTracker
7932warner@lothar.com**20110227011120
7933 Ignore-this: 6c182cba90e908221099472cc159325b
7934]
7935[test_upload.py: server-vs-tracker cleanup
7936warner@lothar.com**20110227011115
7937 Ignore-this: 2915133be1a3ba456e8603885437e03
7938]
7939[happinessutil.py: server-vs-tracker cleanup
7940warner@lothar.com**20110227011111
7941 Ignore-this: b856c84033562d7d718cae7cb01085a9
7942]
7943[upload.py: more tracker-vs-server cleanup
7944warner@lothar.com**20110227011107
7945 Ignore-this: bb75ed2afef55e47c085b35def2de315
7946]
7947[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
7948warner@lothar.com**20110227011103
7949 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
7950]
7951[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
7952warner@lothar.com**20110227011100
7953 Ignore-this: 7ea858755cbe5896ac212a925840fe68
7954 
7955 No behavioral changes, just updating variable/method names and log messages.
7956 The effects outside these three files should be minimal: some exception
7957 messages changed (to say "server" instead of "peer"), and some internal class
7958 names were changed. A few things still use "peer" to minimize external
7959 changes, like UploadResults.timings["peer_selection"] and
7960 happinessutil.merge_peers, which can be changed later.
7961]
7962[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
7963warner@lothar.com**20110227011056
7964 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
7965]
7966[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
7967warner@lothar.com**20110227011051
7968 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
7969]
7970[test: increase timeout on a network test because Francois's ARM machine hit that timeout
7971zooko@zooko.com**20110317165909
7972 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
7973 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
7974]
7975[docs/configuration.rst: add a "Frontend Configuration" section
7976Brian Warner <warner@lothar.com>**20110222014323
7977 Ignore-this: 657018aa501fe4f0efef9851628444ca
7978 
7979 this points to docs/frontends/*.rst, which were previously underlinked
7980]
7981[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
7982"Brian Warner <warner@lothar.com>"**20110221061544
7983 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
7984]
7985[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
7986david-sarah@jacaranda.org**20110221015817
7987 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
7988]
7989[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
7990david-sarah@jacaranda.org**20110221020125
7991 Ignore-this: b0744ed58f161bf188e037bad077fc48
7992]
7993[Refactor StorageFarmBroker handling of servers
7994Brian Warner <warner@lothar.com>**20110221015804
7995 Ignore-this: 842144ed92f5717699b8f580eab32a51
7996 
7997 Pass around IServer instance instead of (peerid, rref) tuple. Replace
7998 "descriptor" with "server". Other replacements:
7999 
8000  get_all_servers -> get_connected_servers/get_known_servers
8001  get_servers_for_index -> get_servers_for_psi (now returns IServers)
8002 
8003 This change still needs to be pushed further down: lots of code is now
8004 getting the IServer and then distributing (peerid, rref) internally.
8005 Instead, it ought to distribute the IServer internally and delay
8006 extracting a serverid or rref until the last moment.
8007 
8008 no_network.py was updated to retain parallelism.
8009]
8010[TAG allmydata-tahoe-1.8.2
8011warner@lothar.com**20110131020101]
8012Patch bundle hash:
80135112625929162114ea48588e65726436e5c6a7c0