Ticket #999: readoldshpasses_Zancas20110729.darcs.patch

File readoldshpasses_Zancas20110729.darcs.patch, 371.8 KB (added by Zancas at 2011-07-29T23:54:48Z)

TestServerAndFSBackend.test_read_old_share passes

Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all filesystem calls.

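  (For context, the mocking pattern these tests rely on is sketched below. This is a minimal, illustrative sketch only: it assumes Python 2 and the mock library, and the class and method names are hypothetical, not taken from the patch.)

      import mock
      from StringIO import StringIO
      from twisted.trial import unittest

      class ExampleMockingTest(unittest.TestCase):
          @mock.patch('__builtin__.open')
          def test_construct_without_touching_disk(self, mockopen):
              # Serve open() calls from memory instead of the real filesystem.
              def call_open(fname, mode):
                  if fname.endswith('.state'):
                      raise IOError(2, "No such file or directory: '%s'" % fname)
                  return StringIO()
              mockopen.side_effect = call_open
              # ...construct the object under test here; any open() it performs
              # is answered by call_open() above rather than by a real file.
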
Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  sloppy; not for production

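  (The backend plugin boundary this entry introduces is, condensed from the interface and base-class definitions that appear later in this patch, roughly the following. The exact signatures are still in flux within the patch itself, so treat this as an approximate sketch rather than the final interface.)

      from zope.interface import Interface

      class IStorageBackend(Interface):
          """Server-side pluggable storage backend (approximate sketch)."""
          def get_available_space():
              """Bytes available for shares; None if unknown or unlimited,
              0 if the backend is read-only."""
          def get_bucket_shares(storage_index):
              """Yield the shares currently held for storage_index."""
          def get_share(storage_index, sharenum):
              """Return the share object, or None if it does not exist."""
          def make_bucket_writer(storage_index, shnum, max_space_per_bucket,
                                 lease_info, canary):
              """Return a BucketWriter-like object for uploading this share."""
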
Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint8
    The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.

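  (Condensed sketch of that idea, based on the NullBackend and NullBucketWriter classes added later in this patch. The real classes derive from Backend and foolscap's Referenceable; the bases are dropped here only to keep the sketch self-contained.)

      class NullBucketWriter(object):
          # The real class subclasses Referenceable.
          def remote_write(self, offset, data):
              return                   # discard the data

      class NullBackend(object):
          # The real class subclasses Backend (a MultiService).
          def get_available_space(self):
              return None              # None means "no limit known"
          def get_bucket_shares(self, storage_index):
              return set()             # never holds any shares
          def get_share(self, storage_index, sharenum):
              return None
          def make_bucket_writer(self, storage_index, shnum,
                                 max_space_per_bucket, lease_info, canary):
              return NullBucketWriter()
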
Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * adding comments to clarify what I'm about to do.

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back, no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint12 TestServerFSBackend no longer mocks filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

Tue Jul 12 00:12:11 MDT 2011  wilcoxjg@gmail.com
  * jacp14

Wed Jul 13 00:03:46 MDT 2011  wilcoxjg@gmail.com
  * jacp14 or so

Wed Jul 13 18:30:08 MDT 2011  zooko@zooko.com
  * temporary work-in-progress patch to be unrecorded
  tidy up a few tests, work done in pair-programming with Zancas

Thu Jul 14 15:21:39 MDT 2011  zooko@zooko.com
  * work in progress intended to be unrecorded and never committed to trunk
  switch from os.path.join to filepath
  incomplete refactoring of common "stay in your subtree" tester code into a superclass

Fri Jul 15 13:15:00 MDT 2011  zooko@zooko.com
  * another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
  In this patch (very incomplete) we started two major changes: the first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. The second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete conversion to filepath. The latter still has a lot of work to go.

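  (A minimal illustration of the os.path-to-FilePath conversion being described, assuming Twisted's twisted.python.filepath; the directory name is illustrative, not taken from the patch.)

      import os
      from twisted.python.filepath import FilePath

      storedir = "teststoredir"                      # illustrative path

      # Standard-library style, as in the pre-conversion code:
      sharedir = os.path.join(storedir, "shares")
      incoming = os.path.join(sharedir, "incoming")

      # FilePath style, the direction this work-in-progress is heading:
      storefp = FilePath(storedir)
      sharefp = storefp.child("shares")
      incomingfp = sharefp.child("incoming")
      print incomingfp.path, incomingfp.exists()
      # child() refuses path segments that would escape the parent directory,
      # which is what the "stay in your subtree" tester code mentioned above
      # is meant to verify.
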
Tue Jul 19 23:59:18 MDT 2011  zooko@zooko.com
  * another temporary patch for sharing work-in-progress
  A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
  There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
  (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units insofar as possible...)

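  (A rough before/after illustration of the get_available_space() change described above. This is not the patch's code: os.statvfs() stands in for the real fileutil helper, and the "before" body is reconstructed only from the description of the misfeature.)

      import os

      def get_available_space_old(storedir, reserved_space):
          # The misfeature being removed: any OSError was swallowed and
          # reported as "0 bytes free", hiding problems such as a missing
          # storedir.
          try:
              s = os.statvfs(storedir)
              return max(0, s.f_frsize * s.f_bavail - reserved_space)
          except OSError:
              return 0

      def get_available_space_new(storedir, reserved_space):
          # After the change: an OSError propagates to the caller instead of
          # being silently reported as a full disk.
          s = os.statvfs(storedir)
          return max(0, s.f_frsize * s.f_bavail - reserved_space)
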
Fri Jul 22 01:00:36 MDT 2011  wilcoxjg@gmail.com
  * jacp16 or so

Fri Jul 22 14:32:44 MDT 2011  wilcoxjg@gmail.com
  * jacp17

Fri Jul 22 21:19:15 MDT 2011  wilcoxjg@gmail.com
  * jacp18

Sat Jul 23 21:42:30 MDT 2011  wilcoxjg@gmail.com
  * jacp19orso

Wed Jul 27 02:05:53 MDT 2011  wilcoxjg@gmail.com
  * jacp19

Thu Jul 28 01:25:14 MDT 2011  wilcoxjg@gmail.com
  * jacp20

Thu Jul 28 22:38:30 MDT 2011  wilcoxjg@gmail.com
  * Completed FilePath based test_write_and_read_share

Fri Jul 29 17:53:56 MDT 2011  wilcoxjg@gmail.com
  * TestServerAndFSBackend.test_read_old_share passes

121New patches:
122
123[storage: new mocking tests of storage server read and write
124wilcoxjg@gmail.com**20110325203514
125 Ignore-this: df65c3c4f061dd1516f88662023fdb41
126 There are already tests of read and functionality in test_storage.py, but those tests let the code under test use a real filesystem whereas these tests mock all file system calls.
127] {
128addfile ./src/allmydata/test/test_server.py
129hunk ./src/allmydata/test/test_server.py 1
130+from twisted.trial import unittest
131+
132+from StringIO import StringIO
133+
134+from allmydata.test.common_util import ReallyEqualMixin
135+
136+import mock
137+
138+# This is the code that we're going to be testing.
139+from allmydata.storage.server import StorageServer
140+
141+# The following share file contents was generated with
142+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
143+# with share data == 'a'.
144+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
145+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
146+
147+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
148+
149+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
150+    @mock.patch('__builtin__.open')
151+    def test_create_server(self, mockopen):
152+        """ This tests whether a server instance can be constructed. """
153+
154+        def call_open(fname, mode):
155+            if fname == 'testdir/bucket_counter.state':
156+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
157+            elif fname == 'testdir/lease_checker.state':
158+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
159+            elif fname == 'testdir/lease_checker.history':
160+                return StringIO()
161+        mockopen.side_effect = call_open
162+
163+        # Now begin the test.
164+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
165+
166+        # You passed!
167+
168+class TestServer(unittest.TestCase, ReallyEqualMixin):
169+    @mock.patch('__builtin__.open')
170+    def setUp(self, mockopen):
171+        def call_open(fname, mode):
172+            if fname == 'testdir/bucket_counter.state':
173+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
174+            elif fname == 'testdir/lease_checker.state':
175+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
176+            elif fname == 'testdir/lease_checker.history':
177+                return StringIO()
178+        mockopen.side_effect = call_open
179+
180+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
181+
182+
183+    @mock.patch('time.time')
184+    @mock.patch('os.mkdir')
185+    @mock.patch('__builtin__.open')
186+    @mock.patch('os.listdir')
187+    @mock.patch('os.path.isdir')
188+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
189+        """Handle a report of corruption."""
190+
191+        def call_listdir(dirname):
192+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
193+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
194+
195+        mocklistdir.side_effect = call_listdir
196+
197+        class MockFile:
198+            def __init__(self):
199+                self.buffer = ''
200+                self.pos = 0
201+            def write(self, instring):
202+                begin = self.pos
203+                padlen = begin - len(self.buffer)
204+                if padlen > 0:
205+                    self.buffer += '\x00' * padlen
206+                end = self.pos + len(instring)
207+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
208+                self.pos = end
209+            def close(self):
210+                pass
211+            def seek(self, pos):
212+                self.pos = pos
213+            def read(self, numberbytes):
214+                return self.buffer[self.pos:self.pos+numberbytes]
215+            def tell(self):
216+                return self.pos
217+
218+        mocktime.return_value = 0
219+
220+        sharefile = MockFile()
221+        def call_open(fname, mode):
222+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
223+            return sharefile
224+
225+        mockopen.side_effect = call_open
226+        # Now begin the test.
227+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
228+        print bs
229+        bs[0].remote_write(0, 'a')
230+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
231+
232+
233+    @mock.patch('os.path.exists')
234+    @mock.patch('os.path.getsize')
235+    @mock.patch('__builtin__.open')
236+    @mock.patch('os.listdir')
237+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
238+        """ This tests whether the code correctly finds and reads
239+        shares written out by old (Tahoe-LAFS <= v1.8.2)
240+        servers. There is a similar test in test_download, but that one
241+        is from the perspective of the client and exercises a deeper
242+        stack of code. This one is for exercising just the
243+        StorageServer object. """
244+
245+        def call_listdir(dirname):
246+            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
247+            return ['0']
248+
249+        mocklistdir.side_effect = call_listdir
250+
251+        def call_open(fname, mode):
252+            self.failUnlessReallyEqual(fname, sharefname)
253+            self.failUnless('r' in mode, mode)
254+            self.failUnless('b' in mode, mode)
255+
256+            return StringIO(share_file_data)
257+        mockopen.side_effect = call_open
258+
259+        datalen = len(share_file_data)
260+        def call_getsize(fname):
261+            self.failUnlessReallyEqual(fname, sharefname)
262+            return datalen
263+        mockgetsize.side_effect = call_getsize
264+
265+        def call_exists(fname):
266+            self.failUnlessReallyEqual(fname, sharefname)
267+            return True
268+        mockexists.side_effect = call_exists
269+
270+        # Now begin the test.
271+        bs = self.s.remote_get_buckets('teststorage_index')
272+
273+        self.failUnlessEqual(len(bs), 1)
274+        b = bs[0]
275+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
276+        # If you try to read past the end you get the as much data as is there.
277+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
278+        # If you start reading past the end of the file you get the empty string.
279+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
280}
281[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
282wilcoxjg@gmail.com**20110624202850
283 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
284 sloppy not for production
285] {
286move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
287hunk ./src/allmydata/storage/crawler.py 13
288     pass
289 
290 class ShareCrawler(service.MultiService):
291-    """A ShareCrawler subclass is attached to a StorageServer, and
292+    """A subcless of ShareCrawler is attached to a StorageServer, and
293     periodically walks all of its shares, processing each one in some
294     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
295     since large servers can easily have a terabyte of shares, in several
296hunk ./src/allmydata/storage/crawler.py 31
297     We assume that the normal upload/download/get_buckets traffic of a tahoe
298     grid will cause the prefixdir contents to be mostly cached in the kernel,
299     or that the number of buckets in each prefixdir will be small enough to
300-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
301+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
302     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
303     prefix. On this server, each prefixdir took 130ms-200ms to list the first
304     time, and 17ms to list the second time.
305hunk ./src/allmydata/storage/crawler.py 68
306     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
307     minimum_cycle_time = 300 # don't run a cycle faster than this
308 
309-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
310+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
311         service.MultiService.__init__(self)
312         if allowed_cpu_percentage is not None:
313             self.allowed_cpu_percentage = allowed_cpu_percentage
314hunk ./src/allmydata/storage/crawler.py 72
315-        self.server = server
316-        self.sharedir = server.sharedir
317-        self.statefile = statefile
318+        self.backend = backend
319         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
320                          for i in range(2**10)]
321         self.prefixes.sort()
322hunk ./src/allmydata/storage/crawler.py 446
323 
324     minimum_cycle_time = 60*60 # we don't need this more than once an hour
325 
326-    def __init__(self, server, statefile, num_sample_prefixes=1):
327-        ShareCrawler.__init__(self, server, statefile)
328+    def __init__(self, statefile, num_sample_prefixes=1):
329+        ShareCrawler.__init__(self, statefile)
330         self.num_sample_prefixes = num_sample_prefixes
331 
332     def add_initial_state(self):
333hunk ./src/allmydata/storage/expirer.py 15
334     removed.
335 
336     I collect statistics on the leases and make these available to a web
337-    status page, including::
338+    status page, including:
339 
340     Space recovered during this cycle-so-far:
341      actual (only if expiration_enabled=True):
342hunk ./src/allmydata/storage/expirer.py 51
343     slow_start = 360 # wait 6 minutes after startup
344     minimum_cycle_time = 12*60*60 # not more than twice per day
345 
346-    def __init__(self, server, statefile, historyfile,
347+    def __init__(self, statefile, historyfile,
348                  expiration_enabled, mode,
349                  override_lease_duration, # used if expiration_mode=="age"
350                  cutoff_date, # used if expiration_mode=="cutoff-date"
351hunk ./src/allmydata/storage/expirer.py 71
352         else:
353             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
354         self.sharetypes_to_expire = sharetypes
355-        ShareCrawler.__init__(self, server, statefile)
356+        ShareCrawler.__init__(self, statefile)
357 
358     def add_initial_state(self):
359         # we fill ["cycle-to-date"] here (even though they will be reset in
360hunk ./src/allmydata/storage/immutable.py 44
361     sharetype = "immutable"
362 
363     def __init__(self, filename, max_size=None, create=False):
364-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
365+        """ If max_size is not None then I won't allow more than
366+        max_size to be written to me. If create=True then max_size
367+        must not be None. """
368         precondition((max_size is not None) or (not create), max_size, create)
369         self.home = filename
370         self._max_size = max_size
371hunk ./src/allmydata/storage/immutable.py 87
372 
373     def read_share_data(self, offset, length):
374         precondition(offset >= 0)
375-        # reads beyond the end of the data are truncated. Reads that start
376-        # beyond the end of the data return an empty string. I wonder why
377-        # Python doesn't do the following computation for me?
378+        # Reads beyond the end of the data are truncated. Reads that start
379+        # beyond the end of the data return an empty string.
380         seekpos = self._data_offset+offset
381         fsize = os.path.getsize(self.home)
382         actuallength = max(0, min(length, fsize-seekpos))
383hunk ./src/allmydata/storage/immutable.py 198
384             space_freed += os.stat(self.home)[stat.ST_SIZE]
385             self.unlink()
386         return space_freed
387+class NullBucketWriter(Referenceable):
388+    implements(RIBucketWriter)
389 
390hunk ./src/allmydata/storage/immutable.py 201
391+    def remote_write(self, offset, data):
392+        return
393 
394 class BucketWriter(Referenceable):
395     implements(RIBucketWriter)
396hunk ./src/allmydata/storage/server.py 7
397 from twisted.application import service
398 
399 from zope.interface import implements
400-from allmydata.interfaces import RIStorageServer, IStatsProducer
401+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
402 from allmydata.util import fileutil, idlib, log, time_format
403 import allmydata # for __full_version__
404 
405hunk ./src/allmydata/storage/server.py 16
406 from allmydata.storage.lease import LeaseInfo
407 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
408      create_mutable_sharefile
409-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
410+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
411 from allmydata.storage.crawler import BucketCountingCrawler
412 from allmydata.storage.expirer import LeaseCheckingCrawler
413 
414hunk ./src/allmydata/storage/server.py 20
415+from zope.interface import implements
416+
417+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
418+# be started and stopped.
419+class Backend(service.MultiService):
420+    implements(IStatsProducer)
421+    def __init__(self):
422+        service.MultiService.__init__(self)
423+
424+    def get_bucket_shares(self):
425+        """XXX"""
426+        raise NotImplementedError
427+
428+    def get_share(self):
429+        """XXX"""
430+        raise NotImplementedError
431+
432+    def make_bucket_writer(self):
433+        """XXX"""
434+        raise NotImplementedError
435+
436+class NullBackend(Backend):
437+    def __init__(self):
438+        Backend.__init__(self)
439+
440+    def get_available_space(self):
441+        return None
442+
443+    def get_bucket_shares(self, storage_index):
444+        return set()
445+
446+    def get_share(self, storage_index, sharenum):
447+        return None
448+
449+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
450+        return NullBucketWriter()
451+
452+class FSBackend(Backend):
453+    def __init__(self, storedir, readonly=False, reserved_space=0):
454+        Backend.__init__(self)
455+
456+        self._setup_storage(storedir, readonly, reserved_space)
457+        self._setup_corruption_advisory()
458+        self._setup_bucket_counter()
459+        self._setup_lease_checkerf()
460+
461+    def _setup_storage(self, storedir, readonly, reserved_space):
462+        self.storedir = storedir
463+        self.readonly = readonly
464+        self.reserved_space = int(reserved_space)
465+        if self.reserved_space:
466+            if self.get_available_space() is None:
467+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
468+                        umid="0wZ27w", level=log.UNUSUAL)
469+
470+        self.sharedir = os.path.join(self.storedir, "shares")
471+        fileutil.make_dirs(self.sharedir)
472+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
473+        self._clean_incomplete()
474+
475+    def _clean_incomplete(self):
476+        fileutil.rm_dir(self.incomingdir)
477+        fileutil.make_dirs(self.incomingdir)
478+
479+    def _setup_corruption_advisory(self):
480+        # we don't actually create the corruption-advisory dir until necessary
481+        self.corruption_advisory_dir = os.path.join(self.storedir,
482+                                                    "corruption-advisories")
483+
484+    def _setup_bucket_counter(self):
485+        statefile = os.path.join(self.storedir, "bucket_counter.state")
486+        self.bucket_counter = BucketCountingCrawler(statefile)
487+        self.bucket_counter.setServiceParent(self)
488+
489+    def _setup_lease_checkerf(self):
490+        statefile = os.path.join(self.storedir, "lease_checker.state")
491+        historyfile = os.path.join(self.storedir, "lease_checker.history")
492+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
493+                                   expiration_enabled, expiration_mode,
494+                                   expiration_override_lease_duration,
495+                                   expiration_cutoff_date,
496+                                   expiration_sharetypes)
497+        self.lease_checker.setServiceParent(self)
498+
499+    def get_available_space(self):
500+        if self.readonly:
501+            return 0
502+        return fileutil.get_available_space(self.storedir, self.reserved_space)
503+
504+    def get_bucket_shares(self, storage_index):
505+        """Return a list of (shnum, pathname) tuples for files that hold
506+        shares for this storage_index. In each tuple, 'shnum' will always be
507+        the integer form of the last component of 'pathname'."""
508+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
509+        try:
510+            for f in os.listdir(storagedir):
511+                if NUM_RE.match(f):
512+                    filename = os.path.join(storagedir, f)
513+                    yield (int(f), filename)
514+        except OSError:
515+            # Commonly caused by there being no buckets at all.
516+            pass
517+
518 # storage/
519 # storage/shares/incoming
520 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
521hunk ./src/allmydata/storage/server.py 143
522     name = 'storage'
523     LeaseCheckerClass = LeaseCheckingCrawler
524 
525-    def __init__(self, storedir, nodeid, reserved_space=0,
526-                 discard_storage=False, readonly_storage=False,
527+    def __init__(self, nodeid, backend, reserved_space=0,
528+                 readonly_storage=False,
529                  stats_provider=None,
530                  expiration_enabled=False,
531                  expiration_mode="age",
532hunk ./src/allmydata/storage/server.py 155
533         assert isinstance(nodeid, str)
534         assert len(nodeid) == 20
535         self.my_nodeid = nodeid
536-        self.storedir = storedir
537-        sharedir = os.path.join(storedir, "shares")
538-        fileutil.make_dirs(sharedir)
539-        self.sharedir = sharedir
540-        # we don't actually create the corruption-advisory dir until necessary
541-        self.corruption_advisory_dir = os.path.join(storedir,
542-                                                    "corruption-advisories")
543-        self.reserved_space = int(reserved_space)
544-        self.no_storage = discard_storage
545-        self.readonly_storage = readonly_storage
546         self.stats_provider = stats_provider
547         if self.stats_provider:
548             self.stats_provider.register_producer(self)
549hunk ./src/allmydata/storage/server.py 158
550-        self.incomingdir = os.path.join(sharedir, 'incoming')
551-        self._clean_incomplete()
552-        fileutil.make_dirs(self.incomingdir)
553         self._active_writers = weakref.WeakKeyDictionary()
554hunk ./src/allmydata/storage/server.py 159
555+        self.backend = backend
556+        self.backend.setServiceParent(self)
557         log.msg("StorageServer created", facility="tahoe.storage")
558 
559hunk ./src/allmydata/storage/server.py 163
560-        if reserved_space:
561-            if self.get_available_space() is None:
562-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
563-                        umin="0wZ27w", level=log.UNUSUAL)
564-
565         self.latencies = {"allocate": [], # immutable
566                           "write": [],
567                           "close": [],
568hunk ./src/allmydata/storage/server.py 174
569                           "renew": [],
570                           "cancel": [],
571                           }
572-        self.add_bucket_counter()
573-
574-        statefile = os.path.join(self.storedir, "lease_checker.state")
575-        historyfile = os.path.join(self.storedir, "lease_checker.history")
576-        klass = self.LeaseCheckerClass
577-        self.lease_checker = klass(self, statefile, historyfile,
578-                                   expiration_enabled, expiration_mode,
579-                                   expiration_override_lease_duration,
580-                                   expiration_cutoff_date,
581-                                   expiration_sharetypes)
582-        self.lease_checker.setServiceParent(self)
583 
584     def __repr__(self):
585         return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
586hunk ./src/allmydata/storage/server.py 178
587 
588-    def add_bucket_counter(self):
589-        statefile = os.path.join(self.storedir, "bucket_counter.state")
590-        self.bucket_counter = BucketCountingCrawler(self, statefile)
591-        self.bucket_counter.setServiceParent(self)
592-
593     def count(self, name, delta=1):
594         if self.stats_provider:
595             self.stats_provider.count("storage_server." + name, delta)
596hunk ./src/allmydata/storage/server.py 233
597             kwargs["facility"] = "tahoe.storage"
598         return log.msg(*args, **kwargs)
599 
600-    def _clean_incomplete(self):
601-        fileutil.rm_dir(self.incomingdir)
602-
603     def get_stats(self):
604         # remember: RIStatsProvider requires that our return dict
605         # contains numeric values.
606hunk ./src/allmydata/storage/server.py 269
607             stats['storage_server.total_bucket_count'] = bucket_count
608         return stats
609 
610-    def get_available_space(self):
611-        """Returns available space for share storage in bytes, or None if no
612-        API to get this information is available."""
613-
614-        if self.readonly_storage:
615-            return 0
616-        return fileutil.get_available_space(self.storedir, self.reserved_space)
617-
618     def allocated_size(self):
619         space = 0
620         for bw in self._active_writers:
621hunk ./src/allmydata/storage/server.py 276
622         return space
623 
624     def remote_get_version(self):
625-        remaining_space = self.get_available_space()
626+        remaining_space = self.backend.get_available_space()
627         if remaining_space is None:
628             # We're on a platform that has no API to get disk stats.
629             remaining_space = 2**64
630hunk ./src/allmydata/storage/server.py 301
631         self.count("allocate")
632         alreadygot = set()
633         bucketwriters = {} # k: shnum, v: BucketWriter
634-        si_dir = storage_index_to_dir(storage_index)
635-        si_s = si_b2a(storage_index)
636 
637hunk ./src/allmydata/storage/server.py 302
638+        si_s = si_b2a(storage_index)
639         log.msg("storage: allocate_buckets %s" % si_s)
640 
641         # in this implementation, the lease information (including secrets)
642hunk ./src/allmydata/storage/server.py 316
643 
644         max_space_per_bucket = allocated_size
645 
646-        remaining_space = self.get_available_space()
647+        remaining_space = self.backend.get_available_space()
648         limited = remaining_space is not None
649         if limited:
650             # this is a bit conservative, since some of this allocated_size()
651hunk ./src/allmydata/storage/server.py 329
652         # they asked about: this will save them a lot of work. Add or update
653         # leases for all of them: if they want us to hold shares for this
654         # file, they'll want us to hold leases for this file.
655-        for (shnum, fn) in self._get_bucket_shares(storage_index):
656+        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
657             alreadygot.add(shnum)
658             sf = ShareFile(fn)
659             sf.add_or_renew_lease(lease_info)
660hunk ./src/allmydata/storage/server.py 335
661 
662         for shnum in sharenums:
663-            incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
664-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
665-            if os.path.exists(finalhome):
666+            share = self.backend.get_share(storage_index, shnum)
667+
668+            if not share:
669+                if (not limited) or (remaining_space >= max_space_per_bucket):
670+                    # ok! we need to create the new share file.
671+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
672+                                      max_space_per_bucket, lease_info, canary)
673+                    bucketwriters[shnum] = bw
674+                    self._active_writers[bw] = 1
675+                    if limited:
676+                        remaining_space -= max_space_per_bucket
677+                else:
678+                    # bummer! not enough space to accept this bucket
679+                    pass
680+
681+            elif share.is_complete():
682                 # great! we already have it. easy.
683                 pass
684hunk ./src/allmydata/storage/server.py 353
685-            elif os.path.exists(incominghome):
686+            elif not share.is_complete():
687                 # Note that we don't create BucketWriters for shnums that
688                 # have a partial share (in incoming/), so if a second upload
689                 # occurs while the first is still in progress, the second
690hunk ./src/allmydata/storage/server.py 359
691                 # uploader will use different storage servers.
692                 pass
693-            elif (not limited) or (remaining_space >= max_space_per_bucket):
694-                # ok! we need to create the new share file.
695-                bw = BucketWriter(self, incominghome, finalhome,
696-                                  max_space_per_bucket, lease_info, canary)
697-                if self.no_storage:
698-                    bw.throw_out_all_data = True
699-                bucketwriters[shnum] = bw
700-                self._active_writers[bw] = 1
701-                if limited:
702-                    remaining_space -= max_space_per_bucket
703-            else:
704-                # bummer! not enough space to accept this bucket
705-                pass
706-
707-        if bucketwriters:
708-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))
709 
710         self.add_latency("allocate", time.time() - start)
711         return alreadygot, bucketwriters
712hunk ./src/allmydata/storage/server.py 437
713             self.stats_provider.count('storage_server.bytes_added', consumed_size)
714         del self._active_writers[bw]
715 
716-    def _get_bucket_shares(self, storage_index):
717-        """Return a list of (shnum, pathname) tuples for files that hold
718-        shares for this storage_index. In each tuple, 'shnum' will always be
719-        the integer form of the last component of 'pathname'."""
720-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
721-        try:
722-            for f in os.listdir(storagedir):
723-                if NUM_RE.match(f):
724-                    filename = os.path.join(storagedir, f)
725-                    yield (int(f), filename)
726-        except OSError:
727-            # Commonly caused by there being no buckets at all.
728-            pass
729 
730     def remote_get_buckets(self, storage_index):
731         start = time.time()
732hunk ./src/allmydata/storage/server.py 444
733         si_s = si_b2a(storage_index)
734         log.msg("storage: get_buckets %s" % si_s)
735         bucketreaders = {} # k: sharenum, v: BucketReader
736-        for shnum, filename in self._get_bucket_shares(storage_index):
737+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
738             bucketreaders[shnum] = BucketReader(self, filename,
739                                                 storage_index, shnum)
740         self.add_latency("get", time.time() - start)
741hunk ./src/allmydata/test/test_backends.py 10
742 import mock
743 
744 # This is the code that we're going to be testing.
745-from allmydata.storage.server import StorageServer
746+from allmydata.storage.server import StorageServer, FSBackend, NullBackend
747 
748 # The following share file contents was generated with
749 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
750hunk ./src/allmydata/test/test_backends.py 21
751 sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
752 
753 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
754+    @mock.patch('time.time')
755+    @mock.patch('os.mkdir')
756+    @mock.patch('__builtin__.open')
757+    @mock.patch('os.listdir')
758+    @mock.patch('os.path.isdir')
759+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
760+        """ This tests whether a server instance can be constructed
761+        with a null backend. The server instance fails the test if it
762+        tries to read or write to the file system. """
763+
764+        # Now begin the test.
765+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
766+
767+        self.failIf(mockisdir.called)
768+        self.failIf(mocklistdir.called)
769+        self.failIf(mockopen.called)
770+        self.failIf(mockmkdir.called)
771+
772+        # You passed!
773+
774+    @mock.patch('time.time')
775+    @mock.patch('os.mkdir')
776     @mock.patch('__builtin__.open')
777hunk ./src/allmydata/test/test_backends.py 44
778-    def test_create_server(self, mockopen):
779-        """ This tests whether a server instance can be constructed. """
780+    @mock.patch('os.listdir')
781+    @mock.patch('os.path.isdir')
782+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
783+        """ This tests whether a server instance can be constructed
784+        with a filesystem backend. To pass the test, it has to use the
785+        filesystem in only the prescribed ways. """
786 
787         def call_open(fname, mode):
788             if fname == 'testdir/bucket_counter.state':
789hunk ./src/allmydata/test/test_backends.py 58
790                 raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
791             elif fname == 'testdir/lease_checker.history':
792                 return StringIO()
793+            else:
794+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
795         mockopen.side_effect = call_open
796 
797         # Now begin the test.
798hunk ./src/allmydata/test/test_backends.py 63
799-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
800+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
801+
802+        self.failIf(mockisdir.called)
803+        self.failIf(mocklistdir.called)
804+        self.failIf(mockopen.called)
805+        self.failIf(mockmkdir.called)
806+        self.failIf(mocktime.called)
807 
808         # You passed!
809 
810hunk ./src/allmydata/test/test_backends.py 73
811-class TestServer(unittest.TestCase, ReallyEqualMixin):
812+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
813+    def setUp(self):
814+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
815+
816+    @mock.patch('os.mkdir')
817+    @mock.patch('__builtin__.open')
818+    @mock.patch('os.listdir')
819+    @mock.patch('os.path.isdir')
820+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
821+        """ Write a new share. """
822+
823+        # Now begin the test.
824+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
825+        bs[0].remote_write(0, 'a')
826+        self.failIf(mockisdir.called)
827+        self.failIf(mocklistdir.called)
828+        self.failIf(mockopen.called)
829+        self.failIf(mockmkdir.called)
830+
831+    @mock.patch('os.path.exists')
832+    @mock.patch('os.path.getsize')
833+    @mock.patch('__builtin__.open')
834+    @mock.patch('os.listdir')
835+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
836+        """ This tests whether the code correctly finds and reads
837+        shares written out by old (Tahoe-LAFS <= v1.8.2)
838+        servers. There is a similar test in test_download, but that one
839+        is from the perspective of the client and exercises a deeper
840+        stack of code. This one is for exercising just the
841+        StorageServer object. """
842+
843+        # Now begin the test.
844+        bs = self.s.remote_get_buckets('teststorage_index')
845+
846+        self.failUnlessEqual(len(bs), 0)
847+        self.failIf(mocklistdir.called)
848+        self.failIf(mockopen.called)
849+        self.failIf(mockgetsize.called)
850+        self.failIf(mockexists.called)
851+
852+
853+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
854     @mock.patch('__builtin__.open')
855     def setUp(self, mockopen):
856         def call_open(fname, mode):
857hunk ./src/allmydata/test/test_backends.py 126
858                 return StringIO()
859         mockopen.side_effect = call_open
860 
861-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
862-
863+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
864 
865     @mock.patch('time.time')
866     @mock.patch('os.mkdir')
867hunk ./src/allmydata/test/test_backends.py 134
868     @mock.patch('os.listdir')
869     @mock.patch('os.path.isdir')
870     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
871-        """Handle a report of corruption."""
872+        """ Write a new share. """
873 
874         def call_listdir(dirname):
875             self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
876hunk ./src/allmydata/test/test_backends.py 173
877         mockopen.side_effect = call_open
878         # Now begin the test.
879         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
880-        print bs
881         bs[0].remote_write(0, 'a')
882         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
883 
884hunk ./src/allmydata/test/test_backends.py 176
885-
886     @mock.patch('os.path.exists')
887     @mock.patch('os.path.getsize')
888     @mock.patch('__builtin__.open')
889hunk ./src/allmydata/test/test_backends.py 218
890 
891         self.failUnlessEqual(len(bs), 1)
892         b = bs[0]
893+        # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
894         self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
895         # If you try to read past the end you get the as much data as is there.
896         self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
897hunk ./src/allmydata/test/test_backends.py 224
898         # If you start reading past the end of the file you get the empty string.
899         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
900+
901+
902}
903[a temp patch used as a snapshot
904wilcoxjg@gmail.com**20110626052732
905 Ignore-this: 95f05e314eaec870afa04c76d979aa44
906] {
907hunk ./docs/configuration.rst 637
908   [storage]
909   enabled = True
910   readonly = True
911-  sizelimit = 10000000000
912 
913 
914   [helper]
915hunk ./docs/garbage-collection.rst 16
916 
917 When a file or directory in the virtual filesystem is no longer referenced,
918 the space that its shares occupied on each storage server can be freed,
919-making room for other shares. Tahoe currently uses a garbage collection
920+making room for other shares. Tahoe uses a garbage collection
921 ("GC") mechanism to implement this space-reclamation process. Each share has
922 one or more "leases", which are managed by clients who want the
923 file/directory to be retained. The storage server accepts each share for a
924hunk ./docs/garbage-collection.rst 34
925 the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
926 If lease renewal occurs quickly and with 100% reliability, than any renewal
927 time that is shorter than the lease duration will suffice, but a larger ratio
928-of duration-over-renewal-time will be more robust in the face of occasional
929+of lease duration to renewal time will be more robust in the face of occasional
930 delays or failures.
931 
932 The current recommended values for a small Tahoe grid are to renew the leases
933replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
934hunk ./src/allmydata/client.py 260
935             sharetypes.append("mutable")
936         expiration_sharetypes = tuple(sharetypes)
937 
938+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
939+            xyz
940+        xyz
941         ss = StorageServer(storedir, self.nodeid,
942                            reserved_space=reserved,
943                            discard_storage=discard,
944hunk ./src/allmydata/storage/crawler.py 234
945         f = open(tmpfile, "wb")
946         pickle.dump(self.state, f)
947         f.close()
948-        fileutil.move_into_place(tmpfile, self.statefile)
949+        fileutil.move_into_place(tmpfile, self.statefname)
950 
951     def startService(self):
952         # arrange things to look like we were just sleeping, so
953}
954[snapshot of progress on backend implementation (not suitable for trunk)
955wilcoxjg@gmail.com**20110626053244
956 Ignore-this: 50c764af791c2b99ada8289546806a0a
957] {
958adddir ./src/allmydata/storage/backends
959adddir ./src/allmydata/storage/backends/das
960move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
961adddir ./src/allmydata/storage/backends/null
962hunk ./src/allmydata/interfaces.py 270
963         store that on disk.
964         """
965 
966+class IStorageBackend(Interface):
967+    """
968+    Objects of this kind live on the server side and are used by the
969+    storage server object.
970+    """
971+    def get_available_space(self, reserved_space):
972+        """ Returns available space for share storage in bytes, or
973+        None if this information is not available or if the available
974+        space is unlimited.
975+
976+        If the backend is configured for read-only mode then this will
977+        return 0.
978+
979+        reserved_space is how many bytes to subtract from the answer, so
980+        you can pass how many bytes you would like to leave unused on this
981+        filesystem as reserved_space. """
982+
983+    def get_bucket_shares(self):
984+        """XXX"""
985+
986+    def get_share(self):
987+        """XXX"""
988+
989+    def make_bucket_writer(self):
990+        """XXX"""
991+
992+class IStorageBackendShare(Interface):
993+    """
994+    This object contains as much as all of the share data.  It is intended
995+    for lazy evaluation such that in many use cases substantially less than
996+    all of the share data will be accessed.
997+    """
998+    def is_complete(self):
999+        """
1000+        Returns the share state, or None if the share does not exist.
1001+        """
1002+
1003 class IStorageBucketWriter(Interface):
1004     """
1005     Objects of this kind live on the client side.
1006hunk ./src/allmydata/interfaces.py 2492
1007 
1008 class EmptyPathnameComponentError(Exception):
1009     """The webapi disallows empty pathname components."""
1010+
1011+class IShareStore(Interface):
1012+    pass
1013+
1014addfile ./src/allmydata/storage/backends/__init__.py
1015addfile ./src/allmydata/storage/backends/das/__init__.py
1016addfile ./src/allmydata/storage/backends/das/core.py
1017hunk ./src/allmydata/storage/backends/das/core.py 1
1018+from allmydata.interfaces import IStorageBackend
1019+from allmydata.storage.backends.base import Backend
1020+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1021+from allmydata.util.assertutil import precondition
1022+
1023+import os, re, weakref, struct, time
1024+
1025+from foolscap.api import Referenceable
1026+from twisted.application import service
1027+
1028+from zope.interface import implements
1029+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
1030+from allmydata.util import fileutil, idlib, log, time_format
1031+import allmydata # for __full_version__
1032+
1033+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
1034+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
1035+from allmydata.storage.lease import LeaseInfo
1036+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1037+     create_mutable_sharefile
1038+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
1039+from allmydata.storage.crawler import FSBucketCountingCrawler
1040+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
1041+
1042+from zope.interface import implements
1043+
1044+class DASCore(Backend):
1045+    implements(IStorageBackend)
1046+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
1047+        Backend.__init__(self)
1048+
1049+        self._setup_storage(storedir, readonly, reserved_space)
1050+        self._setup_corruption_advisory()
1051+        self._setup_bucket_counter()
1052+        self._setup_lease_checkerf(expiration_policy)
1053+
1054+    def _setup_storage(self, storedir, readonly, reserved_space):
1055+        self.storedir = storedir
1056+        self.readonly = readonly
1057+        self.reserved_space = int(reserved_space)
1058+        if self.reserved_space:
1059+            if self.get_available_space() is None:
1060+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1061+                        umid="0wZ27w", level=log.UNUSUAL)
1062+
1063+        self.sharedir = os.path.join(self.storedir, "shares")
1064+        fileutil.make_dirs(self.sharedir)
1065+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1066+        self._clean_incomplete()
1067+
1068+    def _clean_incomplete(self):
1069+        fileutil.rm_dir(self.incomingdir)
1070+        fileutil.make_dirs(self.incomingdir)
1071+
1072+    def _setup_corruption_advisory(self):
1073+        # we don't actually create the corruption-advisory dir until necessary
1074+        self.corruption_advisory_dir = os.path.join(self.storedir,
1075+                                                    "corruption-advisories")
1076+
1077+    def _setup_bucket_counter(self):
1078+        statefname = os.path.join(self.storedir, "bucket_counter.state")
1079+        self.bucket_counter = FSBucketCountingCrawler(statefname)
1080+        self.bucket_counter.setServiceParent(self)
1081+
1082+    def _setup_lease_checkerf(self, expiration_policy):
1083+        statefile = os.path.join(self.storedir, "lease_checker.state")
1084+        historyfile = os.path.join(self.storedir, "lease_checker.history")
1085+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
1086+        self.lease_checker.setServiceParent(self)
1087+
1088+    def get_available_space(self):
1089+        if self.readonly:
1090+            return 0
1091+        return fileutil.get_available_space(self.storedir, self.reserved_space)
1092+
1093+    def get_shares(self, storage_index):
1094+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
1095+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1096+        try:
1097+            for f in os.listdir(finalstoragedir):
1098+                if NUM_RE.match(f):
1099+                    filename = os.path.join(finalstoragedir, f)
1100+                    yield FSBShare(filename, int(f))
1101+        except OSError:
1102+            # Commonly caused by there being no buckets at all.
1103+            pass
1104+       
1105+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1106+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
1107+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
1108+        return bw
1109+       
1110+
1111+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1112+# and share data. The share data is accessed by RIBucketWriter.write and
1113+# RIBucketReader.read . The lease information is not accessible through these
1114+# interfaces.
1115+
1116+# The share file has the following layout:
1117+#  0x00: share file version number, four bytes, current version is 1
1118+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1119+#  0x08: number of leases, four bytes big-endian
1120+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1121+#  A+0x0c = B: first lease. Lease format is:
1122+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1123+#   B+0x04: renew secret, 32 bytes (SHA256)
1124+#   B+0x24: cancel secret, 32 bytes (SHA256)
1125+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1126+#   B+0x48: next lease, or end of record
1127+
1128+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1129+# but it is still filled in by storage servers in case the storage server
1130+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1131+# share file is moved from one storage server to another. The value stored in
1132+# this field is truncated, so if the actual share data length is >= 2**32,
1133+# then the value stored in this field will be the actual share data length
1134+# modulo 2**32.
1135+
1136+class ImmutableShare:
1137+    LEASE_SIZE = struct.calcsize(">L32s32sL")
1138+    sharetype = "immutable"
1139+
1140+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
1141+        """ If max_size is not None then I won't allow more than
1142+        max_size to be written to me. If create=True then max_size
1143+        must not be None. """
1144+        precondition((max_size is not None) or (not create), max_size, create)
1145+        self.shnum = shnum
1146+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
1147+        self._max_size = max_size
1148+        if create:
1149+            # touch the file, so later callers will see that we're working on
1150+            # it. Also construct the metadata.
1151+            assert not os.path.exists(self.fname)
1152+            fileutil.make_dirs(os.path.dirname(self.fname))
1153+            f = open(self.fname, 'wb')
1154+            # The second field -- the four-byte share data length -- is no
1155+            # longer used as of Tahoe v1.3.0, but we continue to write it in
1156+            # there in case someone downgrades a storage server from >=
1157+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1158+            # server to another, etc. We do saturation -- a share data length
1159+            # larger than 2**32-1 (what can fit into the field) is marked as
1160+            # the largest length that can fit into the field. That way, even
1161+            # if this does happen, the old < v1.3.0 server will still allow
1162+            # clients to read the first part of the share.
1163+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1164+            f.close()
1165+            self._lease_offset = max_size + 0x0c
1166+            self._num_leases = 0
1167+        else:
1168+            f = open(self.fname, 'rb')
1169+            filesize = os.path.getsize(self.fname)
1170+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1171+            f.close()
1172+            if version != 1:
1173+                msg = "sharefile %s had version %d but we wanted 1" % \
1174+                      (self.fname, version)
1175+                raise UnknownImmutableContainerVersionError(msg)
1176+            self._num_leases = num_leases
1177+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1178+        self._data_offset = 0xc
1179+
1180+    def unlink(self):
1181+        os.unlink(self.fname)
1182+
1183+    def read_share_data(self, offset, length):
1184+        precondition(offset >= 0)
1185+        # Reads beyond the end of the data are truncated. Reads that start
1186+        # beyond the end of the data return an empty string.
1187+        seekpos = self._data_offset+offset
1188+        fsize = os.path.getsize(self.fname)
1189+        actuallength = max(0, min(length, fsize-seekpos))
1190+        if actuallength == 0:
1191+            return ""
1192+        f = open(self.fname, 'rb')
1193+        f.seek(seekpos)
1194+        return f.read(actuallength)
1195+
1196+    def write_share_data(self, offset, data):
1197+        length = len(data)
1198+        precondition(offset >= 0, offset)
1199+        if self._max_size is not None and offset+length > self._max_size:
1200+            raise DataTooLargeError(self._max_size, offset, length)
1201+        f = open(self.fname, 'rb+')
1202+        real_offset = self._data_offset+offset
1203+        f.seek(real_offset)
1204+        assert f.tell() == real_offset
1205+        f.write(data)
1206+        f.close()
1207+
1208+    def _write_lease_record(self, f, lease_number, lease_info):
1209+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1210+        f.seek(offset)
1211+        assert f.tell() == offset
1212+        f.write(lease_info.to_immutable_data())
1213+
1214+    def _read_num_leases(self, f):
1215+        f.seek(0x08)
1216+        (num_leases,) = struct.unpack(">L", f.read(4))
1217+        return num_leases
1218+
1219+    def _write_num_leases(self, f, num_leases):
1220+        f.seek(0x08)
1221+        f.write(struct.pack(">L", num_leases))
1222+
1223+    def _truncate_leases(self, f, num_leases):
1224+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1225+
1226+    def get_leases(self):
1227+        """Yields a LeaseInfo instance for all leases."""
1228+        f = open(self.fname, 'rb')
1229+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1230+        f.seek(self._lease_offset)
1231+        for i in range(num_leases):
1232+            data = f.read(self.LEASE_SIZE)
1233+            if data:
1234+                yield LeaseInfo().from_immutable_data(data)
1235+
1236+    def add_lease(self, lease_info):
1237+        f = open(self.fname, 'rb+')
1238+        num_leases = self._read_num_leases(f)
1239+        self._write_lease_record(f, num_leases, lease_info)
1240+        self._write_num_leases(f, num_leases+1)
1241+        f.close()
1242+
1243+    def renew_lease(self, renew_secret, new_expire_time):
1244+        for i,lease in enumerate(self.get_leases()):
1245+            if constant_time_compare(lease.renew_secret, renew_secret):
1246+                # yup. See if we need to update the owner time.
1247+                if new_expire_time > lease.expiration_time:
1248+                    # yes
1249+                    lease.expiration_time = new_expire_time
1250+                    f = open(self.fname, 'rb+')
1251+                    self._write_lease_record(f, i, lease)
1252+                    f.close()
1253+                return
1254+        raise IndexError("unable to renew non-existent lease")
1255+
1256+    def add_or_renew_lease(self, lease_info):
1257+        try:
1258+            self.renew_lease(lease_info.renew_secret,
1259+                             lease_info.expiration_time)
1260+        except IndexError:
1261+            self.add_lease(lease_info)
1262+
1263+
1264+    def cancel_lease(self, cancel_secret):
1265+        """Remove a lease with the given cancel_secret. If the last lease is
1266+        cancelled, the file will be removed. Return the number of bytes that
1267+        were freed (by truncating the list of leases, and possibly by
1268+        deleting the file). Raise IndexError if there was no lease with the
1269+        given cancel_secret.
1270+        """
1271+
1272+        leases = list(self.get_leases())
1273+        num_leases_removed = 0
1274+        for i,lease in enumerate(leases):
1275+            if constant_time_compare(lease.cancel_secret, cancel_secret):
1276+                leases[i] = None
1277+                num_leases_removed += 1
1278+        if not num_leases_removed:
1279+            raise IndexError("unable to find matching lease to cancel")
1280+        if num_leases_removed:
1281+            # pack and write out the remaining leases. We write these out in
1282+            # the same order as they were added, so that if we crash while
1283+            # doing this, we won't lose any non-cancelled leases.
1284+            leases = [l for l in leases if l] # remove the cancelled leases
1285+            f = open(self.fname, 'rb+')
1286+            for i,lease in enumerate(leases):
1287+                self._write_lease_record(f, i, lease)
1288+            self._write_num_leases(f, len(leases))
1289+            self._truncate_leases(f, len(leases))
1290+            f.close()
1291+        space_freed = self.LEASE_SIZE * num_leases_removed
1292+        if not len(leases):
1293+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
1294+            self.unlink()
1295+        return space_freed
1296hunk ./src/allmydata/storage/backends/das/expirer.py 2
1297 import time, os, pickle, struct
1298-from allmydata.storage.crawler import ShareCrawler
1299-from allmydata.storage.shares import get_share_file
1300+from allmydata.storage.crawler import FSShareCrawler
1301 from allmydata.storage.common import UnknownMutableContainerVersionError, \
1302      UnknownImmutableContainerVersionError
1303 from twisted.python import log as twlog
1304hunk ./src/allmydata/storage/backends/das/expirer.py 7
1305 
1306-class LeaseCheckingCrawler(ShareCrawler):
1307+class FSLeaseCheckingCrawler(FSShareCrawler):
1308     """I examine the leases on all shares, determining which are still valid
1309     and which have expired. I can remove the expired leases (if so
1310     configured), and the share will be deleted when the last lease is
1311hunk ./src/allmydata/storage/backends/das/expirer.py 50
1312     slow_start = 360 # wait 6 minutes after startup
1313     minimum_cycle_time = 12*60*60 # not more than twice per day
1314 
1315-    def __init__(self, statefile, historyfile,
1316-                 expiration_enabled, mode,
1317-                 override_lease_duration, # used if expiration_mode=="age"
1318-                 cutoff_date, # used if expiration_mode=="cutoff-date"
1319-                 sharetypes):
1320+    def __init__(self, statefile, historyfile, expiration_policy):
1321         self.historyfile = historyfile
1322hunk ./src/allmydata/storage/backends/das/expirer.py 52
1323-        self.expiration_enabled = expiration_enabled
1324-        self.mode = mode
1325+        self.expiration_enabled = expiration_policy['enabled']
1326+        self.mode = expiration_policy['mode']
1327         self.override_lease_duration = None
1328         self.cutoff_date = None
1329         if self.mode == "age":
1330hunk ./src/allmydata/storage/backends/das/expirer.py 57
1331-            assert isinstance(override_lease_duration, (int, type(None)))
1332-            self.override_lease_duration = override_lease_duration # seconds
1333+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
1334+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
1335         elif self.mode == "cutoff-date":
1336hunk ./src/allmydata/storage/backends/das/expirer.py 60
1337-            assert isinstance(cutoff_date, int) # seconds-since-epoch
1338+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
1339             assert cutoff_date is not None
1340hunk ./src/allmydata/storage/backends/das/expirer.py 62
1341-            self.cutoff_date = cutoff_date
1342+            self.cutoff_date = expiration_policy['cutoff_date']
1343         else:
1344hunk ./src/allmydata/storage/backends/das/expirer.py 64
1345-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
1346-        self.sharetypes_to_expire = sharetypes
1347-        ShareCrawler.__init__(self, statefile)
1348+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
1349+        self.sharetypes_to_expire = expiration_policy['sharetypes']
1350+        FSShareCrawler.__init__(self, statefile)
1351 
1352     def add_initial_state(self):
1353         # we fill ["cycle-to-date"] here (even though they will be reset in
1354hunk ./src/allmydata/storage/backends/das/expirer.py 156
1355 
1356     def process_share(self, sharefilename):
1357         # first, find out what kind of a share it is
1358-        sf = get_share_file(sharefilename)
1359+        f = open(sharefilename, "rb")
1360+        prefix = f.read(32)
1361+        f.close()
1362+        if prefix == MutableShareFile.MAGIC:
1363+            sf = MutableShareFile(sharefilename)
1364+        else:
1365+            # otherwise assume it's immutable
1366+            sf = FSBShare(sharefilename)
1367         sharetype = sf.sharetype
1368         now = time.time()
1369         s = self.stat(sharefilename)
1370addfile ./src/allmydata/storage/backends/null/__init__.py
1371addfile ./src/allmydata/storage/backends/null/core.py
1372hunk ./src/allmydata/storage/backends/null/core.py 1
1373+from allmydata.storage.backends.base import Backend
1374+
1375+class NullCore(Backend):
1376+    def __init__(self):
1377+        Backend.__init__(self)
1378+
1379+    def get_available_space(self):
1380+        return None
1381+
1382+    def get_shares(self, storage_index):
1383+        return set()
1384+
1385+    def get_share(self, storage_index, sharenum):
1386+        return None
1387+
1388+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1389+        return NullBucketWriter()
1390hunk ./src/allmydata/storage/crawler.py 12
1391 class TimeSliceExceeded(Exception):
1392     pass
1393 
1394-class ShareCrawler(service.MultiService):
1395+class FSShareCrawler(service.MultiService):
1396     """A subcless of ShareCrawler is attached to a StorageServer, and
1397     periodically walks all of its shares, processing each one in some
1398     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
1399hunk ./src/allmydata/storage/crawler.py 68
1400     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
1401     minimum_cycle_time = 300 # don't run a cycle faster than this
1402 
1403-    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
1404+    def __init__(self, statefname, allowed_cpu_percentage=None):
1405         service.MultiService.__init__(self)
1406         if allowed_cpu_percentage is not None:
1407             self.allowed_cpu_percentage = allowed_cpu_percentage
1408hunk ./src/allmydata/storage/crawler.py 72
1409-        self.backend = backend
1410+        self.statefname = statefname
1411         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
1412                          for i in range(2**10)]
1413         self.prefixes.sort()
1414hunk ./src/allmydata/storage/crawler.py 192
1415         #                            of the last bucket to be processed, or
1416         #                            None if we are sleeping between cycles
1417         try:
1418-            f = open(self.statefile, "rb")
1419+            f = open(self.statefname, "rb")
1420             state = pickle.load(f)
1421             f.close()
1422         except EnvironmentError:
1423hunk ./src/allmydata/storage/crawler.py 230
1424         else:
1425             last_complete_prefix = self.prefixes[lcpi]
1426         self.state["last-complete-prefix"] = last_complete_prefix
1427-        tmpfile = self.statefile + ".tmp"
1428+        tmpfile = self.statefname + ".tmp"
1429         f = open(tmpfile, "wb")
1430         pickle.dump(self.state, f)
1431         f.close()
1432hunk ./src/allmydata/storage/crawler.py 433
1433         pass
1434 
1435 
1436-class BucketCountingCrawler(ShareCrawler):
1437+class FSBucketCountingCrawler(FSShareCrawler):
1438     """I keep track of how many buckets are being managed by this server.
1439     This is equivalent to the number of distributed files and directories for
1440     which I am providing storage. The actual number of files+directories in
1441hunk ./src/allmydata/storage/crawler.py 446
1442 
1443     minimum_cycle_time = 60*60 # we don't need this more than once an hour
1444 
1445-    def __init__(self, statefile, num_sample_prefixes=1):
1446-        ShareCrawler.__init__(self, statefile)
1447+    def __init__(self, statefname, num_sample_prefixes=1):
1448+        FSShareCrawler.__init__(self, statefname)
1449         self.num_sample_prefixes = num_sample_prefixes
1450 
1451     def add_initial_state(self):
1452hunk ./src/allmydata/storage/immutable.py 14
1453 from allmydata.storage.common import UnknownImmutableContainerVersionError, \
1454      DataTooLargeError
1455 
1456-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
1457-# and share data. The share data is accessed by RIBucketWriter.write and
1458-# RIBucketReader.read . The lease information is not accessible through these
1459-# interfaces.
1460-
1461-# The share file has the following layout:
1462-#  0x00: share file version number, four bytes, current version is 1
1463-#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
1464-#  0x08: number of leases, four bytes big-endian
1465-#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
1466-#  A+0x0c = B: first lease. Lease format is:
1467-#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
1468-#   B+0x04: renew secret, 32 bytes (SHA256)
1469-#   B+0x24: cancel secret, 32 bytes (SHA256)
1470-#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
1471-#   B+0x48: next lease, or end of record
1472-
1473-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
1474-# but it is still filled in by storage servers in case the storage server
1475-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
1476-# share file is moved from one storage server to another. The value stored in
1477-# this field is truncated, so if the actual share data length is >= 2**32,
1478-# then the value stored in this field will be the actual share data length
1479-# modulo 2**32.
1480-
1481-class ShareFile:
1482-    LEASE_SIZE = struct.calcsize(">L32s32sL")
1483-    sharetype = "immutable"
1484-
1485-    def __init__(self, filename, max_size=None, create=False):
1486-        """ If max_size is not None then I won't allow more than
1487-        max_size to be written to me. If create=True then max_size
1488-        must not be None. """
1489-        precondition((max_size is not None) or (not create), max_size, create)
1490-        self.home = filename
1491-        self._max_size = max_size
1492-        if create:
1493-            # touch the file, so later callers will see that we're working on
1494-            # it. Also construct the metadata.
1495-            assert not os.path.exists(self.home)
1496-            fileutil.make_dirs(os.path.dirname(self.home))
1497-            f = open(self.home, 'wb')
1498-            # The second field -- the four-byte share data length -- is no
1499-            # longer used as of Tahoe v1.3.0, but we continue to write it in
1500-            # there in case someone downgrades a storage server from >=
1501-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
1502-            # server to another, etc. We do saturation -- a share data length
1503-            # larger than 2**32-1 (what can fit into the field) is marked as
1504-            # the largest length that can fit into the field. That way, even
1505-            # if this does happen, the old < v1.3.0 server will still allow
1506-            # clients to read the first part of the share.
1507-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
1508-            f.close()
1509-            self._lease_offset = max_size + 0x0c
1510-            self._num_leases = 0
1511-        else:
1512-            f = open(self.home, 'rb')
1513-            filesize = os.path.getsize(self.home)
1514-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1515-            f.close()
1516-            if version != 1:
1517-                msg = "sharefile %s had version %d but we wanted 1" % \
1518-                      (filename, version)
1519-                raise UnknownImmutableContainerVersionError(msg)
1520-            self._num_leases = num_leases
1521-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
1522-        self._data_offset = 0xc
1523-
1524-    def unlink(self):
1525-        os.unlink(self.home)
1526-
1527-    def read_share_data(self, offset, length):
1528-        precondition(offset >= 0)
1529-        # Reads beyond the end of the data are truncated. Reads that start
1530-        # beyond the end of the data return an empty string.
1531-        seekpos = self._data_offset+offset
1532-        fsize = os.path.getsize(self.home)
1533-        actuallength = max(0, min(length, fsize-seekpos))
1534-        if actuallength == 0:
1535-            return ""
1536-        f = open(self.home, 'rb')
1537-        f.seek(seekpos)
1538-        return f.read(actuallength)
1539-
1540-    def write_share_data(self, offset, data):
1541-        length = len(data)
1542-        precondition(offset >= 0, offset)
1543-        if self._max_size is not None and offset+length > self._max_size:
1544-            raise DataTooLargeError(self._max_size, offset, length)
1545-        f = open(self.home, 'rb+')
1546-        real_offset = self._data_offset+offset
1547-        f.seek(real_offset)
1548-        assert f.tell() == real_offset
1549-        f.write(data)
1550-        f.close()
1551-
1552-    def _write_lease_record(self, f, lease_number, lease_info):
1553-        offset = self._lease_offset + lease_number * self.LEASE_SIZE
1554-        f.seek(offset)
1555-        assert f.tell() == offset
1556-        f.write(lease_info.to_immutable_data())
1557-
1558-    def _read_num_leases(self, f):
1559-        f.seek(0x08)
1560-        (num_leases,) = struct.unpack(">L", f.read(4))
1561-        return num_leases
1562-
1563-    def _write_num_leases(self, f, num_leases):
1564-        f.seek(0x08)
1565-        f.write(struct.pack(">L", num_leases))
1566-
1567-    def _truncate_leases(self, f, num_leases):
1568-        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
1569-
1570-    def get_leases(self):
1571-        """Yields a LeaseInfo instance for all leases."""
1572-        f = open(self.home, 'rb')
1573-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
1574-        f.seek(self._lease_offset)
1575-        for i in range(num_leases):
1576-            data = f.read(self.LEASE_SIZE)
1577-            if data:
1578-                yield LeaseInfo().from_immutable_data(data)
1579-
1580-    def add_lease(self, lease_info):
1581-        f = open(self.home, 'rb+')
1582-        num_leases = self._read_num_leases(f)
1583-        self._write_lease_record(f, num_leases, lease_info)
1584-        self._write_num_leases(f, num_leases+1)
1585-        f.close()
1586-
1587-    def renew_lease(self, renew_secret, new_expire_time):
1588-        for i,lease in enumerate(self.get_leases()):
1589-            if constant_time_compare(lease.renew_secret, renew_secret):
1590-                # yup. See if we need to update the owner time.
1591-                if new_expire_time > lease.expiration_time:
1592-                    # yes
1593-                    lease.expiration_time = new_expire_time
1594-                    f = open(self.home, 'rb+')
1595-                    self._write_lease_record(f, i, lease)
1596-                    f.close()
1597-                return
1598-        raise IndexError("unable to renew non-existent lease")
1599-
1600-    def add_or_renew_lease(self, lease_info):
1601-        try:
1602-            self.renew_lease(lease_info.renew_secret,
1603-                             lease_info.expiration_time)
1604-        except IndexError:
1605-            self.add_lease(lease_info)
1606-
1607-
1608-    def cancel_lease(self, cancel_secret):
1609-        """Remove a lease with the given cancel_secret. If the last lease is
1610-        cancelled, the file will be removed. Return the number of bytes that
1611-        were freed (by truncating the list of leases, and possibly by
1612-        deleting the file. Raise IndexError if there was no lease with the
1613-        given cancel_secret.
1614-        """
1615-
1616-        leases = list(self.get_leases())
1617-        num_leases_removed = 0
1618-        for i,lease in enumerate(leases):
1619-            if constant_time_compare(lease.cancel_secret, cancel_secret):
1620-                leases[i] = None
1621-                num_leases_removed += 1
1622-        if not num_leases_removed:
1623-            raise IndexError("unable to find matching lease to cancel")
1624-        if num_leases_removed:
1625-            # pack and write out the remaining leases. We write these out in
1626-            # the same order as they were added, so that if we crash while
1627-            # doing this, we won't lose any non-cancelled leases.
1628-            leases = [l for l in leases if l] # remove the cancelled leases
1629-            f = open(self.home, 'rb+')
1630-            for i,lease in enumerate(leases):
1631-                self._write_lease_record(f, i, lease)
1632-            self._write_num_leases(f, len(leases))
1633-            self._truncate_leases(f, len(leases))
1634-            f.close()
1635-        space_freed = self.LEASE_SIZE * num_leases_removed
1636-        if not len(leases):
1637-            space_freed += os.stat(self.home)[stat.ST_SIZE]
1638-            self.unlink()
1639-        return space_freed
1640-class NullBucketWriter(Referenceable):
1641-    implements(RIBucketWriter)
1642-
1643-    def remote_write(self, offset, data):
1644-        return
1645-
1646 class BucketWriter(Referenceable):
1647     implements(RIBucketWriter)
1648 
1649hunk ./src/allmydata/storage/immutable.py 17
1650-    def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
1651+    def __init__(self, ss, immutableshare, max_size, lease_info, canary):
1652         self.ss = ss
1653hunk ./src/allmydata/storage/immutable.py 19
1654-        self.incominghome = incominghome
1655-        self.finalhome = finalhome
1656         self._max_size = max_size # don't allow the client to write more than this
1657         self._canary = canary
1658         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
1659hunk ./src/allmydata/storage/immutable.py 24
1660         self.closed = False
1661         self.throw_out_all_data = False
1662-        self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
1663+        self._sharefile = immutableshare
1664         # also, add our lease to the file now, so that other ones can be
1665         # added by simultaneous uploaders
1666         self._sharefile.add_lease(lease_info)
1667hunk ./src/allmydata/storage/server.py 16
1668 from allmydata.storage.lease import LeaseInfo
1669 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
1670      create_mutable_sharefile
1671-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
1672-from allmydata.storage.crawler import BucketCountingCrawler
1673-from allmydata.storage.expirer import LeaseCheckingCrawler
1674 
1675 from zope.interface import implements
1676 
1677hunk ./src/allmydata/storage/server.py 19
1678-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
1679-# be started and stopped.
1680-class Backend(service.MultiService):
1681-    implements(IStatsProducer)
1682-    def __init__(self):
1683-        service.MultiService.__init__(self)
1684-
1685-    def get_bucket_shares(self):
1686-        """XXX"""
1687-        raise NotImplementedError
1688-
1689-    def get_share(self):
1690-        """XXX"""
1691-        raise NotImplementedError
1692-
1693-    def make_bucket_writer(self):
1694-        """XXX"""
1695-        raise NotImplementedError
1696-
1697-class NullBackend(Backend):
1698-    def __init__(self):
1699-        Backend.__init__(self)
1700-
1701-    def get_available_space(self):
1702-        return None
1703-
1704-    def get_bucket_shares(self, storage_index):
1705-        return set()
1706-
1707-    def get_share(self, storage_index, sharenum):
1708-        return None
1709-
1710-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
1711-        return NullBucketWriter()
1712-
1713-class FSBackend(Backend):
1714-    def __init__(self, storedir, readonly=False, reserved_space=0):
1715-        Backend.__init__(self)
1716-
1717-        self._setup_storage(storedir, readonly, reserved_space)
1718-        self._setup_corruption_advisory()
1719-        self._setup_bucket_counter()
1720-        self._setup_lease_checkerf()
1721-
1722-    def _setup_storage(self, storedir, readonly, reserved_space):
1723-        self.storedir = storedir
1724-        self.readonly = readonly
1725-        self.reserved_space = int(reserved_space)
1726-        if self.reserved_space:
1727-            if self.get_available_space() is None:
1728-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
1729-                        umid="0wZ27w", level=log.UNUSUAL)
1730-
1731-        self.sharedir = os.path.join(self.storedir, "shares")
1732-        fileutil.make_dirs(self.sharedir)
1733-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
1734-        self._clean_incomplete()
1735-
1736-    def _clean_incomplete(self):
1737-        fileutil.rm_dir(self.incomingdir)
1738-        fileutil.make_dirs(self.incomingdir)
1739-
1740-    def _setup_corruption_advisory(self):
1741-        # we don't actually create the corruption-advisory dir until necessary
1742-        self.corruption_advisory_dir = os.path.join(self.storedir,
1743-                                                    "corruption-advisories")
1744-
1745-    def _setup_bucket_counter(self):
1746-        statefile = os.path.join(self.storedir, "bucket_counter.state")
1747-        self.bucket_counter = BucketCountingCrawler(statefile)
1748-        self.bucket_counter.setServiceParent(self)
1749-
1750-    def _setup_lease_checkerf(self):
1751-        statefile = os.path.join(self.storedir, "lease_checker.state")
1752-        historyfile = os.path.join(self.storedir, "lease_checker.history")
1753-        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
1754-                                   expiration_enabled, expiration_mode,
1755-                                   expiration_override_lease_duration,
1756-                                   expiration_cutoff_date,
1757-                                   expiration_sharetypes)
1758-        self.lease_checker.setServiceParent(self)
1759-
1760-    def get_available_space(self):
1761-        if self.readonly:
1762-            return 0
1763-        return fileutil.get_available_space(self.storedir, self.reserved_space)
1764-
1765-    def get_bucket_shares(self, storage_index):
1766-        """Return a list of (shnum, pathname) tuples for files that hold
1767-        shares for this storage_index. In each tuple, 'shnum' will always be
1768-        the integer form of the last component of 'pathname'."""
1769-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
1770-        try:
1771-            for f in os.listdir(storagedir):
1772-                if NUM_RE.match(f):
1773-                    filename = os.path.join(storagedir, f)
1774-                    yield (int(f), filename)
1775-        except OSError:
1776-            # Commonly caused by there being no buckets at all.
1777-            pass
1778-
1779 # storage/
1780 # storage/shares/incoming
1781 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
1782hunk ./src/allmydata/storage/server.py 32
1783 # $SHARENUM matches this regex:
1784 NUM_RE=re.compile("^[0-9]+$")
1785 
1786-
1787-
1788 class StorageServer(service.MultiService, Referenceable):
1789     implements(RIStorageServer, IStatsProducer)
1790     name = 'storage'
1791hunk ./src/allmydata/storage/server.py 35
1792-    LeaseCheckerClass = LeaseCheckingCrawler
1793 
1794     def __init__(self, nodeid, backend, reserved_space=0,
1795                  readonly_storage=False,
1796hunk ./src/allmydata/storage/server.py 38
1797-                 stats_provider=None,
1798-                 expiration_enabled=False,
1799-                 expiration_mode="age",
1800-                 expiration_override_lease_duration=None,
1801-                 expiration_cutoff_date=None,
1802-                 expiration_sharetypes=("mutable", "immutable")):
1803+                 stats_provider=None ):
1804         service.MultiService.__init__(self)
1805         assert isinstance(nodeid, str)
1806         assert len(nodeid) == 20
1807hunk ./src/allmydata/storage/server.py 217
1808         # they asked about: this will save them a lot of work. Add or update
1809         # leases for all of them: if they want us to hold shares for this
1810         # file, they'll want us to hold leases for this file.
1811-        for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
1812-            alreadygot.add(shnum)
1813-            sf = ShareFile(fn)
1814-            sf.add_or_renew_lease(lease_info)
1815-
1816-        for shnum in sharenums:
1817-            share = self.backend.get_share(storage_index, shnum)
1818+        for share in self.backend.get_shares(storage_index):
1819+            alreadygot.add(share.shnum)
1820+            share.add_or_renew_lease(lease_info)
1821 
1822hunk ./src/allmydata/storage/server.py 221
1823-            if not share:
1824-                if (not limited) or (remaining_space >= max_space_per_bucket):
1825-                    # ok! we need to create the new share file.
1826-                    bw = self.backend.make_bucket_writer(storage_index, shnum,
1827-                                      max_space_per_bucket, lease_info, canary)
1828-                    bucketwriters[shnum] = bw
1829-                    self._active_writers[bw] = 1
1830-                    if limited:
1831-                        remaining_space -= max_space_per_bucket
1832-                else:
1833-                    # bummer! not enough space to accept this bucket
1834-                    pass
1835+        for shnum in (sharenums - alreadygot):
1836+            if (not limited) or (remaining_space >= max_space_per_bucket):
1837+                #XXX or should the following line occur in the storage server constructor? ok! we need to create the new share file.
1838+                self.backend.set_storage_server(self)
1839+                bw = self.backend.make_bucket_writer(storage_index, shnum,
1840+                                                     max_space_per_bucket, lease_info, canary)
1841+                bucketwriters[shnum] = bw
1842+                self._active_writers[bw] = 1
1843+                if limited:
1844+                    remaining_space -= max_space_per_bucket
1845 
1846hunk ./src/allmydata/storage/server.py 232
1847-            elif share.is_complete():
1848-                # great! we already have it. easy.
1849-                pass
1850-            elif not share.is_complete():
1851-                # Note that we don't create BucketWriters for shnums that
1852-                # have a partial share (in incoming/), so if a second upload
1853-                # occurs while the first is still in progress, the second
1854-                # uploader will use different storage servers.
1855-                pass
1856+        #XXX We should document later how existing complete and partial shares are handled here.
1857 
1858         self.add_latency("allocate", time.time() - start)
1859         return alreadygot, bucketwriters
1860hunk ./src/allmydata/storage/server.py 238
1861 
1862     def _iter_share_files(self, storage_index):
1863-        for shnum, filename in self._get_bucket_shares(storage_index):
1864+        for shnum, filename in self._get_shares(storage_index):
1865             f = open(filename, 'rb')
1866             header = f.read(32)
1867             f.close()
1868hunk ./src/allmydata/storage/server.py 318
1869         si_s = si_b2a(storage_index)
1870         log.msg("storage: get_buckets %s" % si_s)
1871         bucketreaders = {} # k: sharenum, v: BucketReader
1872-        for shnum, filename in self.backend.get_bucket_shares(storage_index):
1873+        for shnum, filename in self.backend.get_shares(storage_index):
1874             bucketreaders[shnum] = BucketReader(self, filename,
1875                                                 storage_index, shnum)
1876         self.add_latency("get", time.time() - start)
1877hunk ./src/allmydata/storage/server.py 334
1878         # since all shares get the same lease data, we just grab the leases
1879         # from the first share
1880         try:
1881-            shnum, filename = self._get_bucket_shares(storage_index).next()
1882+            shnum, filename = self._get_shares(storage_index).next()
1883             sf = ShareFile(filename)
1884             return sf.get_leases()
1885         except StopIteration:
1886hunk ./src/allmydata/storage/shares.py 1
1887-#! /usr/bin/python
1888-
1889-from allmydata.storage.mutable import MutableShareFile
1890-from allmydata.storage.immutable import ShareFile
1891-
1892-def get_share_file(filename):
1893-    f = open(filename, "rb")
1894-    prefix = f.read(32)
1895-    f.close()
1896-    if prefix == MutableShareFile.MAGIC:
1897-        return MutableShareFile(filename)
1898-    # otherwise assume it's immutable
1899-    return ShareFile(filename)
1900-
1901rmfile ./src/allmydata/storage/shares.py
1902hunk ./src/allmydata/test/common_util.py 20
1903 
1904 def flip_one_bit(s, offset=0, size=None):
1905     """ flip one random bit of the string s, in a byte greater than or equal to offset and less
1906-    than offset+size. """
1907+    than offset+size. Return the new string. """
1908     if size is None:
1909         size=len(s)-offset
1910     i = randrange(offset, offset+size)
1911hunk ./src/allmydata/test/test_backends.py 7
1912 
1913 from allmydata.test.common_util import ReallyEqualMixin
1914 
1915-import mock
1916+import mock, os
1917 
1918 # This is the code that we're going to be testing.
1919hunk ./src/allmydata/test/test_backends.py 10
1920-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
1921+from allmydata.storage.server import StorageServer
1922+
1923+from allmydata.storage.backends.das.core import DASCore
1924+from allmydata.storage.backends.null.core import NullCore
1925+
1926 
1927 # The following share file contents was generated with
1928 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
1929hunk ./src/allmydata/test/test_backends.py 22
1930 share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
1931 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
1932 
1933-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
1934+tempdir = 'teststoredir'
1935+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
1936+sharefname = os.path.join(sharedirname, '0')
1937 
1938 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
1939     @mock.patch('time.time')
1940hunk ./src/allmydata/test/test_backends.py 58
1941         filesystem in only the prescribed ways. """
1942 
1943         def call_open(fname, mode):
1944-            if fname == 'testdir/bucket_counter.state':
1945-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1946-            elif fname == 'testdir/lease_checker.state':
1947-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1948-            elif fname == 'testdir/lease_checker.history':
1949+            if fname == os.path.join(tempdir,'bucket_counter.state'):
1950+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1951+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1952+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1953+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1954                 return StringIO()
1955             else:
1956                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
1957hunk ./src/allmydata/test/test_backends.py 124
1958     @mock.patch('__builtin__.open')
1959     def setUp(self, mockopen):
1960         def call_open(fname, mode):
1961-            if fname == 'testdir/bucket_counter.state':
1962-                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
1963-            elif fname == 'testdir/lease_checker.state':
1964-                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
1965-            elif fname == 'testdir/lease_checker.history':
1966+            if fname == os.path.join(tempdir, 'bucket_counter.state'):
1967+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
1968+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
1969+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
1970+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
1971                 return StringIO()
1972         mockopen.side_effect = call_open
1973hunk ./src/allmydata/test/test_backends.py 131
1974-
1975-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
1976+        expiration_policy = {'enabled' : False,
1977+                             'mode' : 'age',
1978+                             'override_lease_duration' : None,
1979+                             'cutoff_date' : None,
1980+                             'sharetypes' : None}
1981+        testbackend = DASCore(tempdir, expiration_policy)
1982+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
1983 
1984     @mock.patch('time.time')
1985     @mock.patch('os.mkdir')
1986hunk ./src/allmydata/test/test_backends.py 148
1987         """ Write a new share. """
1988 
1989         def call_listdir(dirname):
1990-            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
1991-            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
1992+            self.failUnlessReallyEqual(dirname, sharedirname)
1993+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
1994 
1995         mocklistdir.side_effect = call_listdir
1996 
1997hunk ./src/allmydata/test/test_backends.py 178
1998 
1999         sharefile = MockFile()
2000         def call_open(fname, mode):
2001-            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' )
2002+            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
2003             return sharefile
2004 
2005         mockopen.side_effect = call_open
2006hunk ./src/allmydata/test/test_backends.py 200
2007         StorageServer object. """
2008 
2009         def call_listdir(dirname):
2010-            self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
2011+            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2012             return ['0']
2013 
2014         mocklistdir.side_effect = call_listdir
2015}
2016[checkpoint patch
2017wilcoxjg@gmail.com**20110626165715
2018 Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5
2019] {
2020hunk ./src/allmydata/storage/backends/das/core.py 21
2021 from allmydata.storage.lease import LeaseInfo
2022 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2023      create_mutable_sharefile
2024-from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
2025+from allmydata.storage.immutable import BucketWriter, BucketReader
2026 from allmydata.storage.crawler import FSBucketCountingCrawler
2027 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
2028 
2029hunk ./src/allmydata/storage/backends/das/core.py 27
2030 from zope.interface import implements
2031 
2032+# $SHARENUM matches this regex:
2033+NUM_RE=re.compile("^[0-9]+$")
2034+
2035 class DASCore(Backend):
2036     implements(IStorageBackend)
2037     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
2038hunk ./src/allmydata/storage/backends/das/core.py 80
2039         return fileutil.get_available_space(self.storedir, self.reserved_space)
2040 
2041     def get_shares(self, storage_index):
2042-        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
2043+        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
2044         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
2045         try:
2046             for f in os.listdir(finalstoragedir):
2047hunk ./src/allmydata/storage/backends/das/core.py 86
2048                 if NUM_RE.match(f):
2049                     filename = os.path.join(finalstoragedir, f)
2050-                    yield FSBShare(filename, int(f))
2051+                    yield ImmutableShare(self.sharedir, storage_index, int(f))
2052         except OSError:
2053             # Commonly caused by there being no buckets at all.
2054             pass
2055hunk ./src/allmydata/storage/backends/das/core.py 95
2056         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
2057         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2058         return bw
2059+
2060+    def set_storage_server(self, ss):
2061+        self.ss = ss
2062         
2063 
2064 # each share file (in storage/shares/$SI/$SHNUM) contains lease information
2065hunk ./src/allmydata/storage/server.py 29
2066 # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
2067 # base-32 chars).
2068 
2069-# $SHARENUM matches this regex:
2070-NUM_RE=re.compile("^[0-9]+$")
2071 
2072 class StorageServer(service.MultiService, Referenceable):
2073     implements(RIStorageServer, IStatsProducer)
2074}
2075[checkpoint4
2076wilcoxjg@gmail.com**20110628202202
2077 Ignore-this: 9778596c10bb066b58fc211f8c1707b7
2078] {
2079hunk ./src/allmydata/storage/backends/das/core.py 96
2080         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
2081         return bw
2082 
2083+    def make_bucket_reader(self, share):
2084+        return BucketReader(self.ss, share)
2085+
2086     def set_storage_server(self, ss):
2087         self.ss = ss
2088         
2089hunk ./src/allmydata/storage/backends/das/core.py 138
2090         must not be None. """
2091         precondition((max_size is not None) or (not create), max_size, create)
2092         self.shnum = shnum
2093+        self.storage_index = storageindex
2094         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2095         self._max_size = max_size
2096         if create:
2097hunk ./src/allmydata/storage/backends/das/core.py 173
2098             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2099         self._data_offset = 0xc
2100 
2101+    def get_shnum(self):
2102+        return self.shnum
2103+
2104     def unlink(self):
2105         os.unlink(self.fname)
2106 
2107hunk ./src/allmydata/storage/backends/null/core.py 2
2108 from allmydata.storage.backends.base import Backend
2109+from allmydata.storage.immutable import BucketWriter, BucketReader
2110 
2111 class NullCore(Backend):
2112     def __init__(self):
2113hunk ./src/allmydata/storage/backends/null/core.py 17
2114     def get_share(self, storage_index, sharenum):
2115         return None
2116 
2117-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2118-        return NullBucketWriter()
2119+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2120+       
2121+        return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2122+
2123+    def set_storage_server(self, ss):
2124+        self.ss = ss
2125+
2126+class ImmutableShare:
2127+    sharetype = "immutable"
2128+
2129+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2130+        """ If max_size is not None then I won't allow more than
2131+        max_size to be written to me. If create=True then max_size
2132+        must not be None. """
2133+        precondition((max_size is not None) or (not create), max_size, create)
2134+        self.shnum = shnum
2135+        self.storage_index = storageindex
2136+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2137+        self._max_size = max_size
2138+        if create:
2139+            # touch the file, so later callers will see that we're working on
2140+            # it. Also construct the metadata.
2141+            assert not os.path.exists(self.fname)
2142+            fileutil.make_dirs(os.path.dirname(self.fname))
2143+            f = open(self.fname, 'wb')
2144+            # The second field -- the four-byte share data length -- is no
2145+            # longer used as of Tahoe v1.3.0, but we continue to write it in
2146+            # there in case someone downgrades a storage server from >=
2147+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2148+            # server to another, etc. We do saturation -- a share data length
2149+            # larger than 2**32-1 (what can fit into the field) is marked as
2150+            # the largest length that can fit into the field. That way, even
2151+            # if this does happen, the old < v1.3.0 server will still allow
2152+            # clients to read the first part of the share.
2153+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2154+            f.close()
2155+            self._lease_offset = max_size + 0x0c
2156+            self._num_leases = 0
2157+        else:
2158+            f = open(self.fname, 'rb')
2159+            filesize = os.path.getsize(self.fname)
2160+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2161+            f.close()
2162+            if version != 1:
2163+                msg = "sharefile %s had version %d but we wanted 1" % \
2164+                      (self.fname, version)
2165+                raise UnknownImmutableContainerVersionError(msg)
2166+            self._num_leases = num_leases
2167+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2168+        self._data_offset = 0xc
2169+
2170+    def get_shnum(self):
2171+        return self.shnum
2172+
2173+    def unlink(self):
2174+        os.unlink(self.fname)
2175+
2176+    def read_share_data(self, offset, length):
2177+        precondition(offset >= 0)
2178+        # Reads beyond the end of the data are truncated. Reads that start
2179+        # beyond the end of the data return an empty string.
2180+        seekpos = self._data_offset+offset
2181+        fsize = os.path.getsize(self.fname)
2182+        actuallength = max(0, min(length, fsize-seekpos))
2183+        if actuallength == 0:
2184+            return ""
2185+        f = open(self.fname, 'rb')
2186+        f.seek(seekpos)
2187+        return f.read(actuallength)
2188+
2189+    def write_share_data(self, offset, data):
2190+        length = len(data)
2191+        precondition(offset >= 0, offset)
2192+        if self._max_size is not None and offset+length > self._max_size:
2193+            raise DataTooLargeError(self._max_size, offset, length)
2194+        f = open(self.fname, 'rb+')
2195+        real_offset = self._data_offset+offset
2196+        f.seek(real_offset)
2197+        assert f.tell() == real_offset
2198+        f.write(data)
2199+        f.close()
2200+
2201+    def _write_lease_record(self, f, lease_number, lease_info):
2202+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
2203+        f.seek(offset)
2204+        assert f.tell() == offset
2205+        f.write(lease_info.to_immutable_data())
2206+
2207+    def _read_num_leases(self, f):
2208+        f.seek(0x08)
2209+        (num_leases,) = struct.unpack(">L", f.read(4))
2210+        return num_leases
2211+
2212+    def _write_num_leases(self, f, num_leases):
2213+        f.seek(0x08)
2214+        f.write(struct.pack(">L", num_leases))
2215+
2216+    def _truncate_leases(self, f, num_leases):
2217+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
2218+
2219+    def get_leases(self):
2220+        """Yields a LeaseInfo instance for all leases."""
2221+        f = open(self.fname, 'rb')
2222+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2223+        f.seek(self._lease_offset)
2224+        for i in range(num_leases):
2225+            data = f.read(self.LEASE_SIZE)
2226+            if data:
2227+                yield LeaseInfo().from_immutable_data(data)
2228+
2229+    def add_lease(self, lease_info):
2230+        f = open(self.fname, 'rb+')
2231+        num_leases = self._read_num_leases(f)
2232+        self._write_lease_record(f, num_leases, lease_info)
2233+        self._write_num_leases(f, num_leases+1)
2234+        f.close()
2235+
2236+    def renew_lease(self, renew_secret, new_expire_time):
2237+        for i,lease in enumerate(self.get_leases()):
2238+            if constant_time_compare(lease.renew_secret, renew_secret):
2239+                # yup. See if we need to update the owner time.
2240+                if new_expire_time > lease.expiration_time:
2241+                    # yes
2242+                    lease.expiration_time = new_expire_time
2243+                    f = open(self.fname, 'rb+')
2244+                    self._write_lease_record(f, i, lease)
2245+                    f.close()
2246+                return
2247+        raise IndexError("unable to renew non-existent lease")
2248+
2249+    def add_or_renew_lease(self, lease_info):
2250+        try:
2251+            self.renew_lease(lease_info.renew_secret,
2252+                             lease_info.expiration_time)
2253+        except IndexError:
2254+            self.add_lease(lease_info)
2255+
2256+
2257+    def cancel_lease(self, cancel_secret):
2258+        """Remove a lease with the given cancel_secret. If the last lease is
2259+        cancelled, the file will be removed. Return the number of bytes that
2260+        were freed (by truncating the list of leases, and possibly by
2261+        deleting the file). Raise IndexError if there was no lease with the
2262+        given cancel_secret.
2263+        """
2264+
2265+        leases = list(self.get_leases())
2266+        num_leases_removed = 0
2267+        for i,lease in enumerate(leases):
2268+            if constant_time_compare(lease.cancel_secret, cancel_secret):
2269+                leases[i] = None
2270+                num_leases_removed += 1
2271+        if not num_leases_removed:
2272+            raise IndexError("unable to find matching lease to cancel")
2273+        if num_leases_removed:
2274+            # pack and write out the remaining leases. We write these out in
2275+            # the same order as they were added, so that if we crash while
2276+            # doing this, we won't lose any non-cancelled leases.
2277+            leases = [l for l in leases if l] # remove the cancelled leases
2278+            f = open(self.fname, 'rb+')
2279+            for i,lease in enumerate(leases):
2280+                self._write_lease_record(f, i, lease)
2281+            self._write_num_leases(f, len(leases))
2282+            self._truncate_leases(f, len(leases))
2283+            f.close()
2284+        space_freed = self.LEASE_SIZE * num_leases_removed
2285+        if not len(leases):
2286+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
2287+            self.unlink()
2288+        return space_freed
2289hunk ./src/allmydata/storage/immutable.py 114
2290 class BucketReader(Referenceable):
2291     implements(RIBucketReader)
2292 
2293-    def __init__(self, ss, sharefname, storage_index=None, shnum=None):
2294+    def __init__(self, ss, share):
2295         self.ss = ss
2296hunk ./src/allmydata/storage/immutable.py 116
2297-        self._share_file = ShareFile(sharefname)
2298-        self.storage_index = storage_index
2299-        self.shnum = shnum
2300+        self._share_file = share
2301+        self.storage_index = share.storage_index
2302+        self.shnum = share.shnum
2303 
2304     def __repr__(self):
2305         return "<%s %s %s>" % (self.__class__.__name__,
2306hunk ./src/allmydata/storage/server.py 316
2307         si_s = si_b2a(storage_index)
2308         log.msg("storage: get_buckets %s" % si_s)
2309         bucketreaders = {} # k: sharenum, v: BucketReader
2310-        for shnum, filename in self.backend.get_shares(storage_index):
2311-            bucketreaders[shnum] = BucketReader(self, filename,
2312-                                                storage_index, shnum)
2313+        self.backend.set_storage_server(self)
2314+        for share in self.backend.get_shares(storage_index):
2315+            bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
2316         self.add_latency("get", time.time() - start)
2317         return bucketreaders
2318 
2319hunk ./src/allmydata/test/test_backends.py 25
2320 tempdir = 'teststoredir'
2321 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2322 sharefname = os.path.join(sharedirname, '0')
2323+expiration_policy = {'enabled' : False,
2324+                     'mode' : 'age',
2325+                     'override_lease_duration' : None,
2326+                     'cutoff_date' : None,
2327+                     'sharetypes' : None}
2328 
2329 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2330     @mock.patch('time.time')
2331hunk ./src/allmydata/test/test_backends.py 43
2332         tries to read or write to the file system. """
2333 
2334         # Now begin the test.
2335-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2336+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2337 
2338         self.failIf(mockisdir.called)
2339         self.failIf(mocklistdir.called)
2340hunk ./src/allmydata/test/test_backends.py 74
2341         mockopen.side_effect = call_open
2342 
2343         # Now begin the test.
2344-        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
2345+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2346 
2347         self.failIf(mockisdir.called)
2348         self.failIf(mocklistdir.called)
2349hunk ./src/allmydata/test/test_backends.py 86
2350 
2351 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2352     def setUp(self):
2353-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
2354+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2355 
2356     @mock.patch('os.mkdir')
2357     @mock.patch('__builtin__.open')
2358hunk ./src/allmydata/test/test_backends.py 136
2359             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2360                 return StringIO()
2361         mockopen.side_effect = call_open
2362-        expiration_policy = {'enabled' : False,
2363-                             'mode' : 'age',
2364-                             'override_lease_duration' : None,
2365-                             'cutoff_date' : None,
2366-                             'sharetypes' : None}
2367         testbackend = DASCore(tempdir, expiration_policy)
2368         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2369 
2370}
2371[checkpoint5
2372wilcoxjg@gmail.com**20110705034626
2373 Ignore-this: 255780bd58299b0aa33c027e9d008262
2374] {
2375addfile ./src/allmydata/storage/backends/base.py
2376hunk ./src/allmydata/storage/backends/base.py 1
2377+from twisted.application import service
2378+
2379+class Backend(service.MultiService):
2380+    def __init__(self):
2381+        service.MultiService.__init__(self)
2382hunk ./src/allmydata/storage/backends/null/core.py 19
2383 
2384     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
2385         
2386+        immutableshare = ImmutableShare()
2387         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
2388 
2389     def set_storage_server(self, ss):
2390hunk ./src/allmydata/storage/backends/null/core.py 28
2391 class ImmutableShare:
2392     sharetype = "immutable"
2393 
2394-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
2395+    def __init__(self):
2396         """ If max_size is not None then I won't allow more than
2397         max_size to be written to me. If create=True then max_size
2398         must not be None. """
2399hunk ./src/allmydata/storage/backends/null/core.py 32
2400-        precondition((max_size is not None) or (not create), max_size, create)
2401-        self.shnum = shnum
2402-        self.storage_index = storageindex
2403-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2404-        self._max_size = max_size
2405-        if create:
2406-            # touch the file, so later callers will see that we're working on
2407-            # it. Also construct the metadata.
2408-            assert not os.path.exists(self.fname)
2409-            fileutil.make_dirs(os.path.dirname(self.fname))
2410-            f = open(self.fname, 'wb')
2411-            # The second field -- the four-byte share data length -- is no
2412-            # longer used as of Tahoe v1.3.0, but we continue to write it in
2413-            # there in case someone downgrades a storage server from >=
2414-            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
2415-            # server to another, etc. We do saturation -- a share data length
2416-            # larger than 2**32-1 (what can fit into the field) is marked as
2417-            # the largest length that can fit into the field. That way, even
2418-            # if this does happen, the old < v1.3.0 server will still allow
2419-            # clients to read the first part of the share.
2420-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
2421-            f.close()
2422-            self._lease_offset = max_size + 0x0c
2423-            self._num_leases = 0
2424-        else:
2425-            f = open(self.fname, 'rb')
2426-            filesize = os.path.getsize(self.fname)
2427-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
2428-            f.close()
2429-            if version != 1:
2430-                msg = "sharefile %s had version %d but we wanted 1" % \
2431-                      (self.fname, version)
2432-                raise UnknownImmutableContainerVersionError(msg)
2433-            self._num_leases = num_leases
2434-            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2435-        self._data_offset = 0xc
2436+        pass
2437 
2438     def get_shnum(self):
2439         return self.shnum
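
For readers skimming the code being deleted above: the on-disk layout it documents is a twelve-byte big-endian header (version, share-data length saturated at 2**32-1, lease count), share data starting at offset 0xc, and lease records appended after max_size bytes. A stand-alone sketch of just that header handling, with LEASE_SIZE as an assumed placeholder (the real value comes from the lease record struct, not from this patch):

import struct

LEASE_SIZE = 72   # assumed placeholder; not taken from this patch

def write_v1_header(f, max_size):
    # version 1, saturated share-data length, zero leases -- as in the deleted code above
    f.write(struct.pack(">LLL", 1, min(2**32 - 1, max_size), 0))

def read_v1_header(f, filesize):
    (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
    if version != 1:
        raise ValueError("sharefile had version %d but we wanted 1" % version)
    data_offset = 0xc
    lease_offset = filesize - (num_leases * LEASE_SIZE)
    return (data_offset, lease_offset, num_leases)
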
2440hunk ./src/allmydata/storage/backends/null/core.py 54
2441         return f.read(actuallength)
2442 
2443     def write_share_data(self, offset, data):
2444-        length = len(data)
2445-        precondition(offset >= 0, offset)
2446-        if self._max_size is not None and offset+length > self._max_size:
2447-            raise DataTooLargeError(self._max_size, offset, length)
2448-        f = open(self.fname, 'rb+')
2449-        real_offset = self._data_offset+offset
2450-        f.seek(real_offset)
2451-        assert f.tell() == real_offset
2452-        f.write(data)
2453-        f.close()
2454+        pass
2455 
2456     def _write_lease_record(self, f, lease_number, lease_info):
2457         offset = self._lease_offset + lease_number * self.LEASE_SIZE
2458hunk ./src/allmydata/storage/backends/null/core.py 84
2459             if data:
2460                 yield LeaseInfo().from_immutable_data(data)
2461 
2462-    def add_lease(self, lease_info):
2463-        f = open(self.fname, 'rb+')
2464-        num_leases = self._read_num_leases(f)
2465-        self._write_lease_record(f, num_leases, lease_info)
2466-        self._write_num_leases(f, num_leases+1)
2467-        f.close()
2468+    def add_lease(self, lease):
2469+        pass
2470 
2471     def renew_lease(self, renew_secret, new_expire_time):
2472         for i,lease in enumerate(self.get_leases()):
2473hunk ./src/allmydata/test/test_backends.py 32
2474                      'sharetypes' : None}
2475 
2476 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2477-    @mock.patch('time.time')
2478-    @mock.patch('os.mkdir')
2479-    @mock.patch('__builtin__.open')
2480-    @mock.patch('os.listdir')
2481-    @mock.patch('os.path.isdir')
2482-    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2483-        """ This tests whether a server instance can be constructed
2484-        with a null backend. The server instance fails the test if it
2485-        tries to read or write to the file system. """
2486-
2487-        # Now begin the test.
2488-        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2489-
2490-        self.failIf(mockisdir.called)
2491-        self.failIf(mocklistdir.called)
2492-        self.failIf(mockopen.called)
2493-        self.failIf(mockmkdir.called)
2494-
2495-        # You passed!
2496-
2497     @mock.patch('time.time')
2498     @mock.patch('os.mkdir')
2499     @mock.patch('__builtin__.open')
2500hunk ./src/allmydata/test/test_backends.py 53
2501                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2502         mockopen.side_effect = call_open
2503 
2504-        # Now begin the test.
2505-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2506-
2507-        self.failIf(mockisdir.called)
2508-        self.failIf(mocklistdir.called)
2509-        self.failIf(mockopen.called)
2510-        self.failIf(mockmkdir.called)
2511-        self.failIf(mocktime.called)
2512-
2513-        # You passed!
2514-
2515-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2516-    def setUp(self):
2517-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2518-
2519-    @mock.patch('os.mkdir')
2520-    @mock.patch('__builtin__.open')
2521-    @mock.patch('os.listdir')
2522-    @mock.patch('os.path.isdir')
2523-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2524-        """ Write a new share. """
2525-
2526-        # Now begin the test.
2527-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2528-        bs[0].remote_write(0, 'a')
2529-        self.failIf(mockisdir.called)
2530-        self.failIf(mocklistdir.called)
2531-        self.failIf(mockopen.called)
2532-        self.failIf(mockmkdir.called)
2533+        def call_isdir(fname):
2534+            if fname == os.path.join(tempdir,'shares'):
2535+                return True
2536+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2537+                return True
2538+            else:
2539+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2540+        mockisdir.side_effect = call_isdir
2541 
2542hunk ./src/allmydata/test/test_backends.py 62
2543-    @mock.patch('os.path.exists')
2544-    @mock.patch('os.path.getsize')
2545-    @mock.patch('__builtin__.open')
2546-    @mock.patch('os.listdir')
2547-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
2548-        """ This tests whether the code correctly finds and reads
2549-        shares written out by old (Tahoe-LAFS <= v1.8.2)
2550-        servers. There is a similar test in test_download, but that one
2551-        is from the perspective of the client and exercises a deeper
2552-        stack of code. This one is for exercising just the
2553-        StorageServer object. """
2554+        def call_mkdir(fname, mode):
2555+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2556+            self.failUnlessEqual(0777, mode)
2557+            if fname == tempdir:
2558+                return None
2559+            elif fname == os.path.join(tempdir,'shares'):
2560+                return None
2561+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2562+                return None
2563+            else:
2564+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2565+        mockmkdir.side_effect = call_mkdir
2566 
2567         # Now begin the test.
2568hunk ./src/allmydata/test/test_backends.py 76
2569-        bs = self.s.remote_get_buckets('teststorage_index')
2570+        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
2571 
2572hunk ./src/allmydata/test/test_backends.py 78
2573-        self.failUnlessEqual(len(bs), 0)
2574-        self.failIf(mocklistdir.called)
2575-        self.failIf(mockopen.called)
2576-        self.failIf(mockgetsize.called)
2577-        self.failIf(mockexists.called)
2578+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2579 
2580 
2581 class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
2582hunk ./src/allmydata/test/test_backends.py 193
2583         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
2584 
2585 
2586+
2587+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
2588+    @mock.patch('time.time')
2589+    @mock.patch('os.mkdir')
2590+    @mock.patch('__builtin__.open')
2591+    @mock.patch('os.listdir')
2592+    @mock.patch('os.path.isdir')
2593+    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2594+        """ This tests whether a file system backend instance can be
2595+        constructed. To pass the test, it has to use the
2596+        filesystem in only the prescribed ways. """
2597+
2598+        def call_open(fname, mode):
2599+            if fname == os.path.join(tempdir,'bucket_counter.state'):
2600+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
2601+            elif fname == os.path.join(tempdir, 'lease_checker.state'):
2602+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2603+            elif fname == os.path.join(tempdir, 'lease_checker.history'):
2604+                return StringIO()
2605+            else:
2606+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
2607+        mockopen.side_effect = call_open
2608+
2609+        def call_isdir(fname):
2610+            if fname == os.path.join(tempdir,'shares'):
2611+                return True
2612+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2613+                return True
2614+            else:
2615+                self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
2616+        mockisdir.side_effect = call_isdir
2617+
2618+        def call_mkdir(fname, mode):
2619+            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
2620+            self.failUnlessEqual(0777, mode)
2621+            if fname == tempdir:
2622+                return None
2623+            elif fname == os.path.join(tempdir,'shares'):
2624+                return None
2625+            elif fname == os.path.join(tempdir,'shares', 'incoming'):
2626+                return None
2627+            else:
2628+                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
2629+        mockmkdir.side_effect = call_mkdir
2630+
2631+        # Now begin the test.
2632+        DASCore('teststoredir', expiration_policy)
2633+
2634+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
2635}
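
The test above is the clearest statement of the mocking discipline used throughout this patch series: every filesystem entry point is patched, each side_effect callback whitelists exactly the paths the backend is allowed to touch and fails the test on anything else, and untouched mocks are checked with failIf(...called). A condensed sketch of the idiom with hypothetical names, not tied to any particular backend:

import mock, unittest

class WhitelistExample(unittest.TestCase):
    """Hypothetical, minimal rendition of the whitelisting idiom above."""
    @mock.patch('os.listdir')
    @mock.patch('__builtin__.open')
    def test_only_prescribed_calls(self, mockopen, mocklistdir):
        allowed = set(['teststoredir/lease_checker.history'])
        def call_open(fname, mode):
            if fname not in allowed:
                self.fail("tried to open '%s' in mode '%s'" % (fname, mode))
            return mock.Mock()
        mockopen.side_effect = call_open
        # ... construct the backend under test here ...
        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
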
2636[checkpoint 6
2637wilcoxjg@gmail.com**20110706190824
2638 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
2639] {
2640hunk ./src/allmydata/interfaces.py 100
2641                          renew_secret=LeaseRenewSecret,
2642                          cancel_secret=LeaseCancelSecret,
2643                          sharenums=SetOf(int, maxLength=MAX_BUCKETS),
2644-                         allocated_size=Offset, canary=Referenceable):
2645+                         allocated_size=Offset,
2646+                         canary=Referenceable):
2647         """
2648hunk ./src/allmydata/interfaces.py 103
2649-        @param storage_index: the index of the bucket to be created or
2650+        @param storage_index: the index of the shares to be created or
2651                               increfed.
2652hunk ./src/allmydata/interfaces.py 105
2653-        @param sharenums: these are the share numbers (probably between 0 and
2654-                          99) that the sender is proposing to store on this
2655-                          server.
2656-        @param renew_secret: This is the secret used to protect bucket refresh
2657+        @param renew_secret: This is the secret used to protect share refresh
2658                              This secret is generated by the client and
2659                              stored for later comparison by the server. Each
2660                              server is given a different secret.
2661hunk ./src/allmydata/interfaces.py 109
2662-        @param cancel_secret: Like renew_secret, but protects bucket decref.
2663-        @param canary: If the canary is lost before close(), the bucket is
2664+        @param cancel_secret: Like renew_secret, but protects share decref.
2665+        @param sharenums: these are the share numbers (probably between 0 and
2666+                          99) that the sender is proposing to store on this
2667+                          server.
2668+        @param allocated_size: XXX The size of the shares the client wishes to store.
2669+        @param canary: If the canary is lost before close(), the shares are
2670                        deleted.
2671hunk ./src/allmydata/interfaces.py 116
2672+
2673         @return: tuple of (alreadygot, allocated), where alreadygot is what we
2674                  already have and allocated is what we hereby agree to accept.
2675                  New leases are added for shares in both lists.
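
Read alongside the tests later in this patch series, the interface above is exercised roughly as follows. This is a hedged sketch rather than code from the patch; the import paths and the NullCore construction are assumptions copied from the test setUp methods elsewhere in these hunks:

import mock
# import paths below are assumptions inferred from the file layout in this patch series
from allmydata.storage.server import StorageServer
from allmydata.storage.backends.null.core import NullCore

s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
alreadygot, bucketwriters = s.remote_allocate_buckets(
    'teststorage_index',   # storage_index
    'x' * 32,              # renew_secret (32-byte string)
    'y' * 32,              # cancel_secret (32-byte string)
    set((0,)),             # sharenums the client proposes to store
    1,                     # allocated_size per share, in bytes
    mock.Mock())           # canary Referenceable
bucketwriters[0].remote_write(0, 'a')   # write the single byte of share data
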
2676hunk ./src/allmydata/interfaces.py 128
2677                   renew_secret=LeaseRenewSecret,
2678                   cancel_secret=LeaseCancelSecret):
2679         """
2680-        Add a new lease on the given bucket. If the renew_secret matches an
2681+        Add a new lease on the given shares. If the renew_secret matches an
2682         existing lease, that lease will be renewed instead. If there is no
2683         bucket for the given storage_index, return silently. (note that in
2684         tahoe-1.3.0 and earlier, IndexError was raised if there was no
2685hunk ./src/allmydata/storage/server.py 17
2686 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
2687      create_mutable_sharefile
2688 
2689-from zope.interface import implements
2690-
2691 # storage/
2692 # storage/shares/incoming
2693 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
2694hunk ./src/allmydata/test/test_backends.py 6
2695 from StringIO import StringIO
2696 
2697 from allmydata.test.common_util import ReallyEqualMixin
2698+from allmydata.util.assertutil import _assert
2699 
2700 import mock, os
2701 
2702hunk ./src/allmydata/test/test_backends.py 92
2703                 raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
2704             elif fname == os.path.join(tempdir, 'lease_checker.history'):
2705                 return StringIO()
2706+            else:
2707+                _assert(False, "The tester code doesn't recognize this case.") 
2708+
2709         mockopen.side_effect = call_open
2710         testbackend = DASCore(tempdir, expiration_policy)
2711         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2712hunk ./src/allmydata/test/test_backends.py 109
2713 
2714         def call_listdir(dirname):
2715             self.failUnlessReallyEqual(dirname, sharedirname)
2716-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
2717+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
2718 
2719         mocklistdir.side_effect = call_listdir
2720 
2721hunk ./src/allmydata/test/test_backends.py 113
2722+        def call_isdir(dirname):
2723+            self.failUnlessReallyEqual(dirname, sharedirname)
2724+            return True
2725+
2726+        mockisdir.side_effect = call_isdir
2727+
2728+        def call_mkdir(dirname, permissions):
2729+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
2730+                self.Fail
2731+            else:
2732+                return True
2733+
2734+        mockmkdir.side_effect = call_mkdir
2735+
2736         class MockFile:
2737             def __init__(self):
2738                 self.buffer = ''
2739hunk ./src/allmydata/test/test_backends.py 156
2740             return sharefile
2741 
2742         mockopen.side_effect = call_open
2743+
2744         # Now begin the test.
2745         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2746         bs[0].remote_write(0, 'a')
2747hunk ./src/allmydata/test/test_backends.py 161
2748         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2749+       
2750+        # Now test the allocated_size method.
2751+        spaceint = self.s.allocated_size()
2752 
2753     @mock.patch('os.path.exists')
2754     @mock.patch('os.path.getsize')
2755}
2756[checkpoint 7
2757wilcoxjg@gmail.com**20110706200820
2758 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
2759] hunk ./src/allmydata/test/test_backends.py 164
2760         
2761         # Now test the allocated_size method.
2762         spaceint = self.s.allocated_size()
2763+        self.failUnlessReallyEqual(spaceint, 1)
2764 
2765     @mock.patch('os.path.exists')
2766     @mock.patch('os.path.getsize')
2767[checkpoint8
2768wilcoxjg@gmail.com**20110706223126
2769 Ignore-this: 97336180883cb798b16f15411179f827
2770   The nullbackend is necessary to test unlimited space in a backend.  It is a mock-like object.
2771] hunk ./src/allmydata/test/test_backends.py 32
2772                      'cutoff_date' : None,
2773                      'sharetypes' : None}
2774 
2775+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2776+    def setUp(self):
2777+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2778+
2779+    @mock.patch('os.mkdir')
2780+    @mock.patch('__builtin__.open')
2781+    @mock.patch('os.listdir')
2782+    @mock.patch('os.path.isdir')
2783+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
2784+        """ Write a new share. """
2785+
2786+        # Now begin the test.
2787+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2788+        bs[0].remote_write(0, 'a')
2789+        self.failIf(mockisdir.called)
2790+        self.failIf(mocklistdir.called)
2791+        self.failIf(mockopen.called)
2792+        self.failIf(mockmkdir.called)
2793+
2794 class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
2795     @mock.patch('time.time')
2796     @mock.patch('os.mkdir')
2797[checkpoint 9
2798wilcoxjg@gmail.com**20110707042942
2799 Ignore-this: 75396571fd05944755a104a8fc38aaf6
2800] {
2801hunk ./src/allmydata/storage/backends/das/core.py 88
2802                     filename = os.path.join(finalstoragedir, f)
2803                     yield ImmutableShare(self.sharedir, storage_index, int(f))
2804         except OSError:
2805-            # Commonly caused by there being no buckets at all.
2806+            # Commonly caused by there being no shares at all.
2807             pass
2808         
2809     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
2810hunk ./src/allmydata/storage/backends/das/core.py 141
2811         self.storage_index = storageindex
2812         self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
2813         self._max_size = max_size
2814+        self.incomingdir = os.path.join(sharedir, 'incoming')
2815+        si_dir = storage_index_to_dir(storageindex)
2816+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
2817+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
2818         if create:
2819             # touch the file, so later callers will see that we're working on
2820             # it. Also construct the metadata.
2821hunk ./src/allmydata/storage/backends/das/core.py 177
2822             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
2823         self._data_offset = 0xc
2824 
2825+    def close(self):
2826+        fileutil.make_dirs(os.path.dirname(self.finalhome))
2827+        fileutil.rename(self.incominghome, self.finalhome)
2828+        try:
2829+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2830+            # We try to delete the parent (.../ab/abcde) to avoid leaving
2831+            # these directories lying around forever, but the delete might
2832+            # fail if we're working on another share for the same storage
2833+            # index (like ab/abcde/5). The alternative approach would be to
2834+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2835+            # ShareWriter), each of which is responsible for a single
2836+            # directory on disk, and have them use reference counting of
2837+            # their children to know when they should do the rmdir. This
2838+            # approach is simpler, but relies on os.rmdir refusing to delete
2839+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2840+            os.rmdir(os.path.dirname(self.incominghome))
2841+            # we also delete the grandparent (prefix) directory, .../ab ,
2842+            # again to avoid leaving directories lying around. This might
2843+            # fail if there is another bucket open that shares a prefix (like
2844+            # ab/abfff).
2845+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2846+            # we leave the great-grandparent (incoming/) directory in place.
2847+        except EnvironmentError:
2848+            # ignore the "can't rmdir because the directory is not empty"
2849+            # exceptions, those are normal consequences of the
2850+            # above-mentioned conditions.
2851+            pass
2852+        pass
2853+       
2854+    def stat(self):
2855+        return os.stat(self.finalhome)[stat.ST_SIZE]
2856+
2857     def get_shnum(self):
2858         return self.shnum
2859 
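
The close() comment above relies on os.rmdir refusing to remove a non-empty directory, which is what makes "try the rmdir and swallow EnvironmentError" safe while sibling shares are still incoming. A small stand-alone illustration of that behaviour, not specific to Tahoe:

import os, tempfile

root = tempfile.mkdtemp()
prefix = os.path.join(root, 'ab')        # plays the role of incoming/ab
bucket = os.path.join(prefix, 'abcde')   # plays the role of incoming/ab/abcde
os.makedirs(bucket)
try:
    os.rmdir(prefix)                     # refused: 'abcde' is still inside
    raise AssertionError("os.rmdir removed a non-empty directory?!")
except OSError:
    pass                                 # the normal case: directory not empty
os.rmdir(bucket)
os.rmdir(prefix)                         # now empty, so this succeeds
os.rmdir(root)
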
2860hunk ./src/allmydata/storage/immutable.py 7
2861 
2862 from zope.interface import implements
2863 from allmydata.interfaces import RIBucketWriter, RIBucketReader
2864-from allmydata.util import base32, fileutil, log
2865+from allmydata.util import base32, log
2866 from allmydata.util.assertutil import precondition
2867 from allmydata.util.hashutil import constant_time_compare
2868 from allmydata.storage.lease import LeaseInfo
2869hunk ./src/allmydata/storage/immutable.py 44
2870     def remote_close(self):
2871         precondition(not self.closed)
2872         start = time.time()
2873-
2874-        fileutil.make_dirs(os.path.dirname(self.finalhome))
2875-        fileutil.rename(self.incominghome, self.finalhome)
2876-        try:
2877-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
2878-            # We try to delete the parent (.../ab/abcde) to avoid leaving
2879-            # these directories lying around forever, but the delete might
2880-            # fail if we're working on another share for the same storage
2881-            # index (like ab/abcde/5). The alternative approach would be to
2882-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
2883-            # ShareWriter), each of which is responsible for a single
2884-            # directory on disk, and have them use reference counting of
2885-            # their children to know when they should do the rmdir. This
2886-            # approach is simpler, but relies on os.rmdir refusing to delete
2887-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
2888-            os.rmdir(os.path.dirname(self.incominghome))
2889-            # we also delete the grandparent (prefix) directory, .../ab ,
2890-            # again to avoid leaving directories lying around. This might
2891-            # fail if there is another bucket open that shares a prefix (like
2892-            # ab/abfff).
2893-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
2894-            # we leave the great-grandparent (incoming/) directory in place.
2895-        except EnvironmentError:
2896-            # ignore the "can't rmdir because the directory is not empty"
2897-            # exceptions, those are normal consequences of the
2898-            # above-mentioned conditions.
2899-            pass
2900+        self._sharefile.close()
2901         self._sharefile = None
2902         self.closed = True
2903         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
2904hunk ./src/allmydata/storage/immutable.py 49
2905 
2906-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
2907+        filelen = self._sharefile.stat()
2908         self.ss.bucket_writer_closed(self, filelen)
2909         self.ss.add_latency("close", time.time() - start)
2910         self.ss.count("close")
2911hunk ./src/allmydata/storage/server.py 45
2912         self._active_writers = weakref.WeakKeyDictionary()
2913         self.backend = backend
2914         self.backend.setServiceParent(self)
2915+        self.backend.set_storage_server(self)
2916         log.msg("StorageServer created", facility="tahoe.storage")
2917 
2918         self.latencies = {"allocate": [], # immutable
2919hunk ./src/allmydata/storage/server.py 220
2920 
2921         for shnum in (sharenums - alreadygot):
2922             if (not limited) or (remaining_space >= max_space_per_bucket):
2923-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
2924-                self.backend.set_storage_server(self)
2925                 bw = self.backend.make_bucket_writer(storage_index, shnum,
2926                                                      max_space_per_bucket, lease_info, canary)
2927                 bucketwriters[shnum] = bw
2928hunk ./src/allmydata/test/test_backends.py 117
2929         mockopen.side_effect = call_open
2930         testbackend = DASCore(tempdir, expiration_policy)
2931         self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
2932-
2933+   
2934+    @mock.patch('allmydata.util.fileutil.get_available_space')
2935     @mock.patch('time.time')
2936     @mock.patch('os.mkdir')
2937     @mock.patch('__builtin__.open')
2938hunk ./src/allmydata/test/test_backends.py 124
2939     @mock.patch('os.listdir')
2940     @mock.patch('os.path.isdir')
2941-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
2942+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
2943+                             mockget_available_space):
2944         """ Write a new share. """
2945 
2946         def call_listdir(dirname):
2947hunk ./src/allmydata/test/test_backends.py 148
2948 
2949         mockmkdir.side_effect = call_mkdir
2950 
2951+        def call_get_available_space(storedir, reserved_space):
2952+            self.failUnlessReallyEqual(storedir, tempdir)
2953+            return 1
2954+
2955+        mockget_available_space.side_effect = call_get_available_space
2956+
2957         class MockFile:
2958             def __init__(self):
2959                 self.buffer = ''
2960hunk ./src/allmydata/test/test_backends.py 188
2961         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
2962         bs[0].remote_write(0, 'a')
2963         self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
2964-       
2965+
2966+        # What happens when there's not enough space for the client's request?
2967+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
2968+
2969         # Now test the allocated_size method.
2970         spaceint = self.s.allocated_size()
2971         self.failUnlessReallyEqual(spaceint, 1)
2972}
2973[checkpoint10
2974wilcoxjg@gmail.com**20110707172049
2975 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
2976] {
2977hunk ./src/allmydata/test/test_backends.py 20
2978 # The following share file contents was generated with
2979 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
2980 # with share data == 'a'.
2981-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
2982+renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
2983+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
2984+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
2985 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
2986 
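
As a sanity check on the constants above: the twelve-byte prefix of share_file_data is exactly the v1 header (version 1, one byte of share data, one lease), and the four trailing bytes of share_data are the 31-day lease expiration that the tests below assert against. A quick verification, assuming nothing beyond the struct layout already used elsewhere in this patch:

import struct

header = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01'
assert struct.unpack(">LLL", header) == (1, 1, 1)              # version, data length, lease count
assert struct.unpack(">L", '\x00(\xde\x80')[0] == 31*24*60*60  # lease expiry: 2678400 seconds
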
2987hunk ./src/allmydata/test/test_backends.py 25
2988+testnodeid = 'testnodeidxxxxxxxxxx'
2989 tempdir = 'teststoredir'
2990 sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
2991 sharefname = os.path.join(sharedirname, '0')
2992hunk ./src/allmydata/test/test_backends.py 37
2993 
2994 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
2995     def setUp(self):
2996-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
2997+        self.s = StorageServer(testnodeid, backend=NullCore())
2998 
2999     @mock.patch('os.mkdir')
3000     @mock.patch('__builtin__.open')
3001hunk ./src/allmydata/test/test_backends.py 99
3002         mockmkdir.side_effect = call_mkdir
3003 
3004         # Now begin the test.
3005-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
3006+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
3007 
3008         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
3009 
3010hunk ./src/allmydata/test/test_backends.py 119
3011 
3012         mockopen.side_effect = call_open
3013         testbackend = DASCore(tempdir, expiration_policy)
3014-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
3015-   
3016+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3017+       
3018+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3019     @mock.patch('allmydata.util.fileutil.get_available_space')
3020     @mock.patch('time.time')
3021     @mock.patch('os.mkdir')
3022hunk ./src/allmydata/test/test_backends.py 129
3023     @mock.patch('os.listdir')
3024     @mock.patch('os.path.isdir')
3025     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3026-                             mockget_available_space):
3027+                             mockget_available_space, mockget_shares):
3028         """ Write a new share. """
3029 
3030         def call_listdir(dirname):
3031hunk ./src/allmydata/test/test_backends.py 139
3032         mocklistdir.side_effect = call_listdir
3033 
3034         def call_isdir(dirname):
3035+            #XXX Should there be any other tests here?
3036             self.failUnlessReallyEqual(dirname, sharedirname)
3037             return True
3038 
3039hunk ./src/allmydata/test/test_backends.py 159
3040 
3041         mockget_available_space.side_effect = call_get_available_space
3042 
3043+        mocktime.return_value = 0
3044+        class MockShare:
3045+            def __init__(self):
3046+                self.shnum = 1
3047+               
3048+            def add_or_renew_lease(elf, lease_info):
3049+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
3050+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
3051+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
3052+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3053+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3054+               
3055+
3056+        share = MockShare()
3057+        def call_get_shares(storageindex):
3058+            return [share]
3059+
3060+        mockget_shares.side_effect = call_get_shares
3061+
3062         class MockFile:
3063             def __init__(self):
3064                 self.buffer = ''
3065hunk ./src/allmydata/test/test_backends.py 199
3066             def tell(self):
3067                 return self.pos
3068 
3069-        mocktime.return_value = 0
3070 
3071         sharefile = MockFile()
3072         def call_open(fname, mode):
3073}
3074[jacp 11
3075wilcoxjg@gmail.com**20110708213919
3076 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
3077] {
3078hunk ./src/allmydata/storage/backends/das/core.py 144
3079         self.incomingdir = os.path.join(sharedir, 'incoming')
3080         si_dir = storage_index_to_dir(storageindex)
3081         self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3082+        #XXX  self.fname and self.finalhome need to be resolved/merged.
3083         self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3084         if create:
3085             # touch the file, so later callers will see that we're working on
3086hunk ./src/allmydata/storage/backends/das/core.py 208
3087         pass
3088         
3089     def stat(self):
3090-        return os.stat(self.finalhome)[stat.ST_SIZE]
3091+        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3092 
3093     def get_shnum(self):
3094         return self.shnum
3095hunk ./src/allmydata/storage/immutable.py 44
3096     def remote_close(self):
3097         precondition(not self.closed)
3098         start = time.time()
3099+
3100         self._sharefile.close()
3101hunk ./src/allmydata/storage/immutable.py 46
3102+        filelen = self._sharefile.stat()
3103         self._sharefile = None
3104hunk ./src/allmydata/storage/immutable.py 48
3105+
3106         self.closed = True
3107         self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
3108 
3109hunk ./src/allmydata/storage/immutable.py 52
3110-        filelen = self._sharefile.stat()
3111         self.ss.bucket_writer_closed(self, filelen)
3112         self.ss.add_latency("close", time.time() - start)
3113         self.ss.count("close")
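
The immutable.py hunks above are all about ordering inside remote_close(): the share file has to be closed and stat()ed for its final length before the _sharefile reference is dropped. Reduced to a hedged sketch, not the real method body:

def close_ordering_sketch(bucketwriter):
    # close, then measure, then drop the reference -- the order the hunks above settle on
    bucketwriter._sharefile.close()
    filelen = bucketwriter._sharefile.stat()
    bucketwriter._sharefile = None
    return filelen
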
3114hunk ./src/allmydata/storage/server.py 220
3115 
3116         for shnum in (sharenums - alreadygot):
3117             if (not limited) or (remaining_space >= max_space_per_bucket):
3118-                bw = self.backend.make_bucket_writer(storage_index, shnum,
3119-                                                     max_space_per_bucket, lease_info, canary)
3120+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3121                 bucketwriters[shnum] = bw
3122                 self._active_writers[bw] = 1
3123                 if limited:
3124hunk ./src/allmydata/test/test_backends.py 20
3125 # The following share file contents was generated with
3126 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
3127 # with share data == 'a'.
3128-renew_secret  = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
3129-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
3130+renew_secret  = 'x'*32
3131+cancel_secret = 'y'*32
3132 share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
3133 share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
3134 
3135hunk ./src/allmydata/test/test_backends.py 27
3136 testnodeid = 'testnodeidxxxxxxxxxx'
3137 tempdir = 'teststoredir'
3138-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3139-sharefname = os.path.join(sharedirname, '0')
3140+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3141+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3142+shareincomingname = os.path.join(sharedirincomingname, '0')
3143+sharefname = os.path.join(sharedirfinalname, '0')
3144+
3145 expiration_policy = {'enabled' : False,
3146                      'mode' : 'age',
3147                      'override_lease_duration' : None,
3148hunk ./src/allmydata/test/test_backends.py 123
3149         mockopen.side_effect = call_open
3150         testbackend = DASCore(tempdir, expiration_policy)
3151         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3152-       
3153+
3154+    @mock.patch('allmydata.util.fileutil.rename')
3155+    @mock.patch('allmydata.util.fileutil.make_dirs')
3156+    @mock.patch('os.path.exists')
3157+    @mock.patch('os.stat')
3158     @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3159     @mock.patch('allmydata.util.fileutil.get_available_space')
3160     @mock.patch('time.time')
3161hunk ./src/allmydata/test/test_backends.py 136
3162     @mock.patch('os.listdir')
3163     @mock.patch('os.path.isdir')
3164     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3165-                             mockget_available_space, mockget_shares):
3166+                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3167+                             mockmake_dirs, mockrename):
3168         """ Write a new share. """
3169 
3170         def call_listdir(dirname):
3171hunk ./src/allmydata/test/test_backends.py 141
3172-            self.failUnlessReallyEqual(dirname, sharedirname)
3173+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3174             raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3175 
3176         mocklistdir.side_effect = call_listdir
3177hunk ./src/allmydata/test/test_backends.py 148
3178 
3179         def call_isdir(dirname):
3180             #XXX Should there be any other tests here?
3181-            self.failUnlessReallyEqual(dirname, sharedirname)
3182+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3183             return True
3184 
3185         mockisdir.side_effect = call_isdir
3186hunk ./src/allmydata/test/test_backends.py 154
3187 
3188         def call_mkdir(dirname, permissions):
3189-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3190+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3191                 self.Fail
3192             else:
3193                 return True
3194hunk ./src/allmydata/test/test_backends.py 208
3195                 return self.pos
3196 
3197 
3198-        sharefile = MockFile()
3199+        fobj = MockFile()
3200         def call_open(fname, mode):
3201             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3202hunk ./src/allmydata/test/test_backends.py 211
3203-            return sharefile
3204+            return fobj
3205 
3206         mockopen.side_effect = call_open
3207 
3208hunk ./src/allmydata/test/test_backends.py 215
3209+        def call_make_dirs(dname):
3210+            self.failUnlessReallyEqual(dname, sharedirfinalname)
3211+           
3212+        mockmake_dirs.side_effect = call_make_dirs
3213+
3214+        def call_rename(src, dst):
3215+           self.failUnlessReallyEqual(src, shareincomingname)
3216+           self.failUnlessReallyEqual(dst, sharefname)
3217+           
3218+        mockrename.side_effect = call_rename
3219+
3220+        def call_exists(fname):
3221+            self.failUnlessReallyEqual(fname, sharefname)
3222+
3223+        mockexists.side_effect = call_exists
3224+
3225         # Now begin the test.
3226         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3227         bs[0].remote_write(0, 'a')
3228hunk ./src/allmydata/test/test_backends.py 234
3229-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
3230+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3231+        spaceint = self.s.allocated_size()
3232+        self.failUnlessReallyEqual(spaceint, 1)
3233+
3234+        bs[0].remote_close()
3235 
3236         # What happens when there's not enough space for the client's request?
3237hunk ./src/allmydata/test/test_backends.py 241
3238-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3239+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3240 
3241         # Now test the allocated_size method.
3242hunk ./src/allmydata/test/test_backends.py 244
3243-        spaceint = self.s.allocated_size()
3244-        self.failUnlessReallyEqual(spaceint, 1)
3245+        #self.failIf(mockexists.called, mockexists.call_args_list)
3246+        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3247+        #self.failIf(mockrename.called, mockrename.call_args_list)
3248+        #self.failIf(mockstat.called, mockstat.call_args_list)
3249 
3250     @mock.patch('os.path.exists')
3251     @mock.patch('os.path.getsize')
3252}
3253[checkpoint12 testing correct behavior with regard to incoming and final
3254wilcoxjg@gmail.com**20110710191915
3255 Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7
3256] {
3257hunk ./src/allmydata/storage/backends/das/core.py 74
3258         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
3259         self.lease_checker.setServiceParent(self)
3260 
3261+    def get_incoming(self, storageindex):
3262+        return set((1,))
3263+
3264     def get_available_space(self):
3265         if self.readonly:
3266             return 0
3267hunk ./src/allmydata/storage/server.py 77
3268         """Return a dict, indexed by category, that contains a dict of
3269         latency numbers for each category. If there are sufficient samples
3270         for unambiguous interpretation, each dict will contain the
3271-        following keys: mean, 01_0_percentile, 10_0_percentile,
3272+        following keys: samplesize, mean, 01_0_percentile, 10_0_percentile,
3273         50_0_percentile (median), 90_0_percentile, 95_0_percentile,
3274         99_0_percentile, 99_9_percentile.  If there are insufficient
3275         samples for a given percentile to be interpreted unambiguously
3276hunk ./src/allmydata/storage/server.py 120
3277 
3278     def get_stats(self):
3279         # remember: RIStatsProvider requires that our return dict
3280-        # contains numeric values.
3281+        # contains numeric or None values.
3282         stats = { 'storage_server.allocated': self.allocated_size(), }
3283         stats['storage_server.reserved_space'] = self.reserved_space
3284         for category,ld in self.get_latencies().items():
3285hunk ./src/allmydata/storage/server.py 185
3286         start = time.time()
3287         self.count("allocate")
3288         alreadygot = set()
3289+        incoming = set()
3290         bucketwriters = {} # k: shnum, v: BucketWriter
3291 
3292         si_s = si_b2a(storage_index)
3293hunk ./src/allmydata/storage/server.py 219
3294             alreadygot.add(share.shnum)
3295             share.add_or_renew_lease(lease_info)
3296 
3297-        for shnum in (sharenums - alreadygot):
3298+        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3299+        incoming = self.backend.get_incoming(storageindex)
3300+
3301+        for shnum in ((sharenums - alreadygot) - incoming):
3302             if (not limited) or (remaining_space >= max_space_per_bucket):
3303                 bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
3304                 bucketwriters[shnum] = bw
3305hunk ./src/allmydata/storage/server.py 229
3306                 self._active_writers[bw] = 1
3307                 if limited:
3308                     remaining_space -= max_space_per_bucket
3309-
3310-        #XXX We SHOULD DOCUMENT LATER.
3311+            else:
3312+                # Bummer: not enough space to accept this share.
3313+                pass
3314 
3315         self.add_latency("allocate", time.time() - start)
3316         return alreadygot, bucketwriters
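
The allocation loop above is driven by plain set arithmetic: only share numbers that are neither already finalized nor currently incoming get a BucketWriter. A tiny worked example with made-up share numbers:

sharenums  = set([0, 1, 2, 3])    # what the client proposes
alreadygot = set([1])             # shares already finalized on this server
incoming   = set([2])             # shares some other upload is still writing
to_write   = (sharenums - alreadygot) - incoming
assert to_write == set([0, 3])    # only these share numbers get BucketWriters
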
3317hunk ./src/allmydata/storage/server.py 323
3318         self.add_latency("get", time.time() - start)
3319         return bucketreaders
3320 
3321-    def get_leases(self, storage_index):
3322+    def remote_get_incoming(self, storageindex):
3323+        incoming_share_set = self.backend.get_incoming(storageindex)
3324+        return incoming_share_set
3325+
3326+    def get_leases(self, storageindex):
3327         """Provide an iterator that yields all of the leases attached to this
3328         bucket. Each lease is returned as a LeaseInfo instance.
3329 
3330hunk ./src/allmydata/storage/server.py 337
3331         # since all shares get the same lease data, we just grab the leases
3332         # from the first share
3333         try:
3334-            shnum, filename = self._get_shares(storage_index).next()
3335+            shnum, filename = self._get_shares(storageindex).next()
3336             sf = ShareFile(filename)
3337             return sf.get_leases()
3338         except StopIteration:
3339hunk ./src/allmydata/test/test_backends.py 182
3340 
3341         share = MockShare()
3342         def call_get_shares(storageindex):
3343-            return [share]
3344+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3345+            return []#share]
3346 
3347         mockget_shares.side_effect = call_get_shares
3348 
3349hunk ./src/allmydata/test/test_backends.py 222
3350         mockmake_dirs.side_effect = call_make_dirs
3351 
3352         def call_rename(src, dst):
3353-           self.failUnlessReallyEqual(src, shareincomingname)
3354-           self.failUnlessReallyEqual(dst, sharefname)
3355+            self.failUnlessReallyEqual(src, shareincomingname)
3356+            self.failUnlessReallyEqual(dst, sharefname)
3357             
3358         mockrename.side_effect = call_rename
3359 
3360hunk ./src/allmydata/test/test_backends.py 233
3361         mockexists.side_effect = call_exists
3362 
3363         # Now begin the test.
3364+
3365+        # XXX (0) ???  Fail unless something is not properly set-up?
3366         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3367hunk ./src/allmydata/test/test_backends.py 236
3368+
3369+        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3370+        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3371+
3372+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3373+        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3374+        # with the same si, until BucketWriter.remote_close() has been called.
3375+        # self.failIf(bsa)
3376+
3377+        # XXX (3) Inspect final and fail unless there's nothing there.
3378         bs[0].remote_write(0, 'a')
3379hunk ./src/allmydata/test/test_backends.py 247
3380+        # XXX (4a) Inspect final and fail unless share 0 is there.
3381+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3382         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3383         spaceint = self.s.allocated_size()
3384         self.failUnlessReallyEqual(spaceint, 1)
3385hunk ./src/allmydata/test/test_backends.py 253
3386 
3387+        #  If there's something in self.alreadygot prior to remote_close() then fail.
3388         bs[0].remote_close()
3389 
3390         # What happens when there's not enough space for the client's request?
3391hunk ./src/allmydata/test/test_backends.py 260
3392         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3393 
3394         # Now test the allocated_size method.
3395-        #self.failIf(mockexists.called, mockexists.call_args_list)
3396+        # self.failIf(mockexists.called, mockexists.call_args_list)
3397         #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
3398         #self.failIf(mockrename.called, mockrename.call_args_list)
3399         #self.failIf(mockstat.called, mockstat.call_args_list)
3400}
3401[fix inconsistent naming of storage_index vs storageindex in storage/server.py
3402wilcoxjg@gmail.com**20110710195139
3403 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
3404] {
3405hunk ./src/allmydata/storage/server.py 220
3406             share.add_or_renew_lease(lease_info)
3407 
3408         # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3409-        incoming = self.backend.get_incoming(storageindex)
3410+        incoming = self.backend.get_incoming(storage_index)
3411 
3412         for shnum in ((sharenums - alreadygot) - incoming):
3413             if (not limited) or (remaining_space >= max_space_per_bucket):
3414hunk ./src/allmydata/storage/server.py 323
3415         self.add_latency("get", time.time() - start)
3416         return bucketreaders
3417 
3418-    def remote_get_incoming(self, storageindex):
3419-        incoming_share_set = self.backend.get_incoming(storageindex)
3420+    def remote_get_incoming(self, storage_index):
3421+        incoming_share_set = self.backend.get_incoming(storage_index)
3422         return incoming_share_set
3423 
3424hunk ./src/allmydata/storage/server.py 327
3425-    def get_leases(self, storageindex):
3426+    def get_leases(self, storage_index):
3427         """Provide an iterator that yields all of the leases attached to this
3428         bucket. Each lease is returned as a LeaseInfo instance.
3429 
3430hunk ./src/allmydata/storage/server.py 337
3431         # since all shares get the same lease data, we just grab the leases
3432         # from the first share
3433         try:
3434-            shnum, filename = self._get_shares(storageindex).next()
3435+            shnum, filename = self._get_shares(storage_index).next()
3436             sf = ShareFile(filename)
3437             return sf.get_leases()
3438         except StopIteration:
3439replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
3440}
3441[adding comments to clarify what I'm about to do.
3442wilcoxjg@gmail.com**20110710220623
3443 Ignore-this: 44f97633c3eac1047660272e2308dd7c
3444] {
3445hunk ./src/allmydata/storage/backends/das/core.py 8
3446 
3447 import os, re, weakref, struct, time
3448 
3449-from foolscap.api import Referenceable
3450+#from foolscap.api import Referenceable
3451 from twisted.application import service
3452 
3453 from zope.interface import implements
3454hunk ./src/allmydata/storage/backends/das/core.py 12
3455-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
3456+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
3457 from allmydata.util import fileutil, idlib, log, time_format
3458 import allmydata # for __full_version__
3459 
3460hunk ./src/allmydata/storage/server.py 219
3461             alreadygot.add(share.shnum)
3462             share.add_or_renew_lease(lease_info)
3463 
3464-        # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
3465+        # fill incoming with all shares that are incoming use a set operation
3466+        # since there's no need to operate on individual pieces
3467         incoming = self.backend.get_incoming(storageindex)
3468 
3469         for shnum in ((sharenums - alreadygot) - incoming):
3470hunk ./src/allmydata/test/test_backends.py 245
3471         # with the same si, until BucketWriter.remote_close() has been called.
3472         # self.failIf(bsa)
3473 
3474-        # XXX (3) Inspect final and fail unless there's nothing there.
3475         bs[0].remote_write(0, 'a')
3476hunk ./src/allmydata/test/test_backends.py 246
3477-        # XXX (4a) Inspect final and fail unless share 0 is there.
3478-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3479         self.failUnlessReallyEqual(fobj.buffer, share_file_data)
3480         spaceint = self.s.allocated_size()
3481         self.failUnlessReallyEqual(spaceint, 1)
3482hunk ./src/allmydata/test/test_backends.py 250
3483 
3484-        #  If there's something in self.alreadygot prior to remote_close() then fail.
3485+        # XXX (3) Inspect final and fail unless there's nothing there.
3486         bs[0].remote_close()
3487hunk ./src/allmydata/test/test_backends.py 252
3488+        # XXX (4a) Inspect final and fail unless share 0 is there.
3489+        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
3490 
3491         # What happens when there's not enough space for the client's request?
3492         # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
3493}
3494[branching back, no longer attempting to mock inside TestServerFSBackend
3495wilcoxjg@gmail.com**20110711190849
3496 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
3497] {
3498hunk ./src/allmydata/storage/backends/das/core.py 75
3499         self.lease_checker.setServiceParent(self)
3500 
3501     def get_incoming(self, storageindex):
3502-        return set((1,))
3503-
3504-    def get_available_space(self):
3505-        if self.readonly:
3506-            return 0
3507-        return fileutil.get_available_space(self.storedir, self.reserved_space)
3508+        """Return the set of incoming shnums."""
3509+        return set(os.listdir(self.incomingdir))
3510 
3511     def get_shares(self, storage_index):
3512         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
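
Note that get_incoming() as now written lists the top of incomingdir, whose immediate children are prefix directories rather than share numbers. A per-storage-index variant, sketched under the incoming/$START/$STORAGEINDEX/$SHARENUM layout described in the comments near the top of server.py, might look like the hypothetical helper below; it is not part of this patch:

import os

def incoming_shnums(incomingdir, si_dir):
    """si_dir is the prefix/storage-index path used on disk, e.g. 'or/orsxg5dtorxxeylhmvpws3temv4a'."""
    bucketdir = os.path.join(incomingdir, si_dir)
    try:
        return set([int(name) for name in os.listdir(bucketdir)])
    except OSError:
        # commonly caused by there being no incoming shares at all
        return set()
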
3513hunk ./src/allmydata/storage/backends/das/core.py 90
3514             # Commonly caused by there being no shares at all.
3515             pass
3516         
3517+    def get_available_space(self):
3518+        if self.readonly:
3519+            return 0
3520+        return fileutil.get_available_space(self.storedir, self.reserved_space)
3521+
3522     def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3523         immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3524         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3525hunk ./src/allmydata/test/test_backends.py 27
3526 
3527 testnodeid = 'testnodeidxxxxxxxxxx'
3528 tempdir = 'teststoredir'
3529-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3530-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3531+basedir = os.path.join(tempdir, 'shares')
3532+baseincdir = os.path.join(basedir, 'incoming')
3533+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3534+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
3535 shareincomingname = os.path.join(sharedirincomingname, '0')
3536 sharefname = os.path.join(sharedirfinalname, '0')
3537 
3538hunk ./src/allmydata/test/test_backends.py 142
3539                              mockmake_dirs, mockrename):
3540         """ Write a new share. """
3541 
3542-        def call_listdir(dirname):
3543-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3544-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3545-
3546-        mocklistdir.side_effect = call_listdir
3547-
3548-        def call_isdir(dirname):
3549-            #XXX Should there be any other tests here?
3550-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3551-            return True
3552-
3553-        mockisdir.side_effect = call_isdir
3554-
3555-        def call_mkdir(dirname, permissions):
3556-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3557-                self.Fail
3558-            else:
3559-                return True
3560-
3561-        mockmkdir.side_effect = call_mkdir
3562-
3563-        def call_get_available_space(storedir, reserved_space):
3564-            self.failUnlessReallyEqual(storedir, tempdir)
3565-            return 1
3566-
3567-        mockget_available_space.side_effect = call_get_available_space
3568-
3569-        mocktime.return_value = 0
3570         class MockShare:
3571             def __init__(self):
3572                 self.shnum = 1
3573hunk ./src/allmydata/test/test_backends.py 152
3574                 self.failUnlessReallyEqual(lease_info.owner_num, 0)
3575                 self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
3576                 self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
3577-               
3578 
3579         share = MockShare()
3580hunk ./src/allmydata/test/test_backends.py 154
3581-        def call_get_shares(storageindex):
3582-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3583-            return []#share]
3584-
3585-        mockget_shares.side_effect = call_get_shares
3586 
3587         class MockFile:
3588             def __init__(self):
3589hunk ./src/allmydata/test/test_backends.py 176
3590             def tell(self):
3591                 return self.pos
3592 
3593-
3594         fobj = MockFile()
3595hunk ./src/allmydata/test/test_backends.py 177
3596+
3597+        directories = {}
3598+        def call_listdir(dirname):
3599+            if dirname not in directories:
3600+                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3601+            else:
3602+                return directories[dirname].get_contents()
3603+
3604+        mocklistdir.side_effect = call_listdir
3605+
3606+        class MockDir:
3607+            def __init__(self, dirname):
3608+                self.name = dirname
3609+                self.contents = []
3610+   
3611+            def get_contents(self):
3612+                return self.contents
3613+
3614+        def call_isdir(dirname):
3615+            #XXX Should there be any other tests here?
3616+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3617+            return True
3618+
3619+        mockisdir.side_effect = call_isdir
3620+
3621+        def call_mkdir(dirname, permissions):
3622+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3623+                self.Fail
3624+            if dirname in directories:
3625+                raise OSError(17, "File exists: '%s'" % dirname)
3626+                self.Fail
3627+            elif dirname not in directories:
3628+                directories[dirname] = MockDir(dirname)
3629+                return True
3630+
3631+        mockmkdir.side_effect = call_mkdir
3632+
3633+        def call_get_available_space(storedir, reserved_space):
3634+            self.failUnlessReallyEqual(storedir, tempdir)
3635+            return 1
3636+
3637+        mockget_available_space.side_effect = call_get_available_space
3638+
3639+        mocktime.return_value = 0
3640+        def call_get_shares(storageindex):
3641+            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3642+            return []#share]
3643+
3644+        mockget_shares.side_effect = call_get_shares
3645+
3646         def call_open(fname, mode):
3647             self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3648             return fobj
3649}
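
The tail of this record reworks the mocked filesystem in test_write_share: the listdir/mkdir stubs now consult a shared directories dict (holding MockDir entries) instead of hard-coded answers, alongside the in-memory MockFile whose sparse, zero-padded writes stand in for a real share file. Below is a condensed, standalone sketch of those pieces so their behaviour can be eyeballed outside the test harness; the class and function names follow the patch, the permissions checking is omitted, and the short demo at the end is purely illustrative.

class MockFile:
    def __init__(self):
        self.buffer = ''
        self.pos = 0
    def write(self, instring):
        begin = self.pos
        padlen = begin - len(self.buffer)
        if padlen > 0:                 # seek past EOF then write => zero-fill the gap
            self.buffer += '\x00' * padlen
        end = self.pos + len(instring)
        self.buffer = self.buffer[:begin] + instring + self.buffer[end:]
        self.pos = end
    def close(self):
        pass
    def seek(self, pos):
        self.pos = pos
    def read(self, numberbytes):
        return self.buffer[self.pos:self.pos + numberbytes]
    def tell(self):
        return self.pos

directories = {}                       # dirname -> list of entries

def call_mkdir(dirname, permissions):
    if dirname in directories:
        raise OSError(17, "File exists: '%s'" % dirname)
    directories[dirname] = []

def call_listdir(dirname):
    if dirname not in directories:
        raise OSError(2, "No such file or directory: '%s'" % dirname)
    return directories[dirname]

f = MockFile()
f.seek(4)
f.write('a')
assert f.buffer == '\x00\x00\x00\x00a'
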
3650[checkpoint12 TestServerFSBackend no longer mocks filesystem
3651wilcoxjg@gmail.com**20110711193357
3652 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
3653] {
3654hunk ./src/allmydata/storage/backends/das/core.py 23
3655      create_mutable_sharefile
3656 from allmydata.storage.immutable import BucketWriter, BucketReader
3657 from allmydata.storage.crawler import FSBucketCountingCrawler
3658+from allmydata.util.hashutil import constant_time_compare
3659 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
3660 
3661 from zope.interface import implements
3662hunk ./src/allmydata/storage/backends/das/core.py 28
3663 
3664+# storage/
3665+# storage/shares/incoming
3666+#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3667+#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3668+# storage/shares/$START/$STORAGEINDEX
3669+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3670+
3671+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3672+# base-32 chars).
3673 # $SHARENUM matches this regex:
3674 NUM_RE=re.compile("^[0-9]+$")
3675 
3676hunk ./src/allmydata/test/test_backends.py 126
3677         testbackend = DASCore(tempdir, expiration_policy)
3678         self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
3679 
3680-    @mock.patch('allmydata.util.fileutil.rename')
3681-    @mock.patch('allmydata.util.fileutil.make_dirs')
3682-    @mock.patch('os.path.exists')
3683-    @mock.patch('os.stat')
3684-    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
3685-    @mock.patch('allmydata.util.fileutil.get_available_space')
3686     @mock.patch('time.time')
3687hunk ./src/allmydata/test/test_backends.py 127
3688-    @mock.patch('os.mkdir')
3689-    @mock.patch('__builtin__.open')
3690-    @mock.patch('os.listdir')
3691-    @mock.patch('os.path.isdir')
3692-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
3693-                             mockget_available_space, mockget_shares, mockstat, mockexists, \
3694-                             mockmake_dirs, mockrename):
3695+    def test_write_share(self, mocktime):
3696         """ Write a new share. """
3697 
3698         class MockShare:
3699hunk ./src/allmydata/test/test_backends.py 143
3700 
3701         share = MockShare()
3702 
3703-        class MockFile:
3704-            def __init__(self):
3705-                self.buffer = ''
3706-                self.pos = 0
3707-            def write(self, instring):
3708-                begin = self.pos
3709-                padlen = begin - len(self.buffer)
3710-                if padlen > 0:
3711-                    self.buffer += '\x00' * padlen
3712-                end = self.pos + len(instring)
3713-                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
3714-                self.pos = end
3715-            def close(self):
3716-                pass
3717-            def seek(self, pos):
3718-                self.pos = pos
3719-            def read(self, numberbytes):
3720-                return self.buffer[self.pos:self.pos+numberbytes]
3721-            def tell(self):
3722-                return self.pos
3723-
3724-        fobj = MockFile()
3725-
3726-        directories = {}
3727-        def call_listdir(dirname):
3728-            if dirname not in directories:
3729-                raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
3730-            else:
3731-                return directories[dirname].get_contents()
3732-
3733-        mocklistdir.side_effect = call_listdir
3734-
3735-        class MockDir:
3736-            def __init__(self, dirname):
3737-                self.name = dirname
3738-                self.contents = []
3739-   
3740-            def get_contents(self):
3741-                return self.contents
3742-
3743-        def call_isdir(dirname):
3744-            #XXX Should there be any other tests here?
3745-            self.failUnlessReallyEqual(dirname, sharedirfinalname)
3746-            return True
3747-
3748-        mockisdir.side_effect = call_isdir
3749-
3750-        def call_mkdir(dirname, permissions):
3751-            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
3752-                self.Fail
3753-            if dirname in directories:
3754-                raise OSError(17, "File exists: '%s'" % dirname)
3755-                self.Fail
3756-            elif dirname not in directories:
3757-                directories[dirname] = MockDir(dirname)
3758-                return True
3759-
3760-        mockmkdir.side_effect = call_mkdir
3761-
3762-        def call_get_available_space(storedir, reserved_space):
3763-            self.failUnlessReallyEqual(storedir, tempdir)
3764-            return 1
3765-
3766-        mockget_available_space.side_effect = call_get_available_space
3767-
3768-        mocktime.return_value = 0
3769-        def call_get_shares(storageindex):
3770-            #XXX  Whether or not to return an empty list depends on which case of get_shares we are interested in.
3771-            return []#share]
3772-
3773-        mockget_shares.side_effect = call_get_shares
3774-
3775-        def call_open(fname, mode):
3776-            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
3777-            return fobj
3778-
3779-        mockopen.side_effect = call_open
3780-
3781-        def call_make_dirs(dname):
3782-            self.failUnlessReallyEqual(dname, sharedirfinalname)
3783-           
3784-        mockmake_dirs.side_effect = call_make_dirs
3785-
3786-        def call_rename(src, dst):
3787-            self.failUnlessReallyEqual(src, shareincomingname)
3788-            self.failUnlessReallyEqual(dst, sharefname)
3789-           
3790-        mockrename.side_effect = call_rename
3791-
3792-        def call_exists(fname):
3793-            self.failUnlessReallyEqual(fname, sharefname)
3794-
3795-        mockexists.side_effect = call_exists
3796-
3797         # Now begin the test.
3798 
3799         # XXX (0) ???  Fail unless something is not properly set-up?
3800}
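
This record also moves the on-disk layout comment (storage/shares/$START/$STORAGEINDEX/$SHARENUM) into das/core.py, next to the code that implements it. The real mapping is done by storage_index_to_dir() from allmydata.storage.common; the helper below is only a rough stand-in, assuming Tahoe's lowercase, unpadded RFC 4648 base-32 alphabet, but it reproduces the teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0 path that the tests build.

import os, base64

def si_to_dir(storageindex):
    # stand-in for allmydata.storage.common.storage_index_to_dir()
    b32 = base64.b32encode(storageindex).decode('ascii').lower().rstrip('=')
    return os.path.join(b32[:2], b32)      # $START = first 2 base-32 chars

sharedir     = os.path.join('teststoredir', 'shares')
finalhome    = os.path.join(sharedir, si_to_dir(b'teststorage_index'), '0')
incominghome = os.path.join(sharedir, 'incoming',
                            si_to_dir(b'teststorage_index'), '0')

print(finalhome)    # teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0
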
3801[JACP
3802wilcoxjg@gmail.com**20110711194407
3803 Ignore-this: b54745de777c4bb58d68d708f010bbb
3804] {
3805hunk ./src/allmydata/storage/backends/das/core.py 86
3806 
3807     def get_incoming(self, storageindex):
3808         """Return the set of incoming shnums."""
3809-        return set(os.listdir(self.incomingdir))
3810+        try:
3811+            incominglist = os.listdir(self.incomingdir)
3812+            print "incominglist: ", incominglist
3813+            return set(incominglist)
3814+        except OSError:
3815+            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3816+            pass
3817 
3818     def get_shares(self, storage_index):
3819         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3820hunk ./src/allmydata/storage/server.py 17
3821 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
3822      create_mutable_sharefile
3823 
3824-# storage/
3825-# storage/shares/incoming
3826-#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
3827-#   be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
3828-# storage/shares/$START/$STORAGEINDEX
3829-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
3830-
3831-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
3832-# base-32 chars).
3833-
3834-
3835 class StorageServer(service.MultiService, Referenceable):
3836     implements(RIStorageServer, IStatsProducer)
3837     name = 'storage'
3838}
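
One detail worth calling out in the get_incoming() hunk above: the except OSError branch ends in a bare pass, so when the incoming directory does not exist yet the method falls off the end and hands callers None instead of an empty set. The very next record changes that pass to return set(). A minimal illustration of why the distinction matters:

import os

def get_incoming_buggy(incomingdir):
    try:
        return set(os.listdir(incomingdir))
    except OSError:
        pass                    # falls through, so callers get None

def get_incoming_fixed(incomingdir):
    try:
        return set(os.listdir(incomingdir))
    except OSError:
        return set()            # "nothing has arrived yet"

print(get_incoming_buggy('no-such-dir'))    # None
print(get_incoming_fixed('no-such-dir'))    # an empty set
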
3839[testing get incoming
3840wilcoxjg@gmail.com**20110711210224
3841 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
3842] {
3843hunk ./src/allmydata/storage/backends/das/core.py 87
3844     def get_incoming(self, storageindex):
3845         """Return the set of incoming shnums."""
3846         try:
3847-            incominglist = os.listdir(self.incomingdir)
3848+            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3849+            incominglist = os.listdir(incomingsharesdir)
3850             print "incominglist: ", incominglist
3851             return set(incominglist)
3852         except OSError:
3853hunk ./src/allmydata/storage/backends/das/core.py 92
3854-            # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
3855-            pass
3856-
3857+            # XXX I'd like to make this more specific. If there are no shares at all.
3858+            return set()
3859+           
3860     def get_shares(self, storage_index):
3861         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
3862         finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
3863hunk ./src/allmydata/test/test_backends.py 149
3864         alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3865 
3866         # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
3867+        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3868         alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
3869 
3870hunk ./src/allmydata/test/test_backends.py 152
3871-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
3872         # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
3873         # with the same si, until BucketWriter.remote_close() has been called.
3874         # self.failIf(bsa)
3875}
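
The new assertion compares remote_get_incoming() against set((0,)), i.e. integer share numbers, while os.listdir() returns the string names of the share files, so set(incominglist) is set(['0']). That type mismatch is what the later record "get_incoming correctly reports the 0 share after it has arrived" fixes by mapping the entries through int(). A two-line check of the mismatch:

print(set(['0']) == set((0,)))                      # False: '0' != 0
print(set([int(x) for x in ['0']]) == set((0,)))    # True
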
3876[ImmutableShareFile does not know its StorageIndex
3877wilcoxjg@gmail.com**20110711211424
3878 Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
3879] {
3880hunk ./src/allmydata/storage/backends/das/core.py 112
3881             return 0
3882         return fileutil.get_available_space(self.storedir, self.reserved_space)
3883 
3884-    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
3885-        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
3886+    def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3887+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3888+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3889+        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3890         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3891         return bw
3892 
3893hunk ./src/allmydata/storage/backends/das/core.py 155
3894     LEASE_SIZE = struct.calcsize(">L32s32sL")
3895     sharetype = "immutable"
3896 
3897-    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
3898+    def __init__(self, finalhome, incominghome, max_size=None, create=False):
3899         """ If max_size is not None then I won't allow more than
3900         max_size to be written to me. If create=True then max_size
3901         must not be None. """
3902}
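
Note that in this intermediate version make_bucket_writer() hands the integer shnum straight to os.path.join(), which fails (AttributeError on Python 2, TypeError on Python 3); the next record wraps it in str(shnum). For example:

import os

shnum = 0
try:
    os.path.join('shares', 'or', shnum)
except (TypeError, AttributeError) as e:
    print("join() wants strings: %s" % e)
print(os.path.join('shares', 'or', str(shnum)))     # shares/or/0
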
3903[get_incoming correctly reports the 0 share after it has arrived
3904wilcoxjg@gmail.com**20110712025157
3905 Ignore-this: 893b2df6e41391567fffc85e4799bb0b
3906] {
3907hunk ./src/allmydata/storage/backends/das/core.py 1
3908+import os, re, weakref, struct, time, stat
3909+
3910 from allmydata.interfaces import IStorageBackend
3911 from allmydata.storage.backends.base import Backend
3912 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
3913hunk ./src/allmydata/storage/backends/das/core.py 8
3914 from allmydata.util.assertutil import precondition
3915 
3916-import os, re, weakref, struct, time
3917-
3918 #from foolscap.api import Referenceable
3919 from twisted.application import service
3920 
3921hunk ./src/allmydata/storage/backends/das/core.py 89
3922         try:
3923             incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
3924             incominglist = os.listdir(incomingsharesdir)
3925-            print "incominglist: ", incominglist
3926-            return set(incominglist)
3927+            incomingshnums = [int(x) for x in incominglist]
3928+            return set(incomingshnums)
3929         except OSError:
3930             # XXX I'd like to make this more specific. If there are no shares at all.
3931             return set()
3932hunk ./src/allmydata/storage/backends/das/core.py 113
3933         return fileutil.get_available_space(self.storedir, self.reserved_space)
3934 
3935     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
3936-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
3937-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
3938-        immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3939+        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
3940+        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
3941+        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
3942         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
3943         return bw
3944 
3945hunk ./src/allmydata/storage/backends/das/core.py 160
3946         max_size to be written to me. If create=True then max_size
3947         must not be None. """
3948         precondition((max_size is not None) or (not create), max_size, create)
3949-        self.shnum = shnum
3950-        self.storage_index = storageindex
3951-        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
3952         self._max_size = max_size
3953hunk ./src/allmydata/storage/backends/das/core.py 161
3954-        self.incomingdir = os.path.join(sharedir, 'incoming')
3955-        si_dir = storage_index_to_dir(storageindex)
3956-        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
3957-        #XXX  self.fname and self.finalhome need to be resolve/merged.
3958-        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
3959+        self.incominghome = incominghome
3960+        self.finalhome = finalhome
3961         if create:
3962             # touch the file, so later callers will see that we're working on
3963             # it. Also construct the metadata.
3964hunk ./src/allmydata/storage/backends/das/core.py 166
3965-            assert not os.path.exists(self.fname)
3966-            fileutil.make_dirs(os.path.dirname(self.fname))
3967-            f = open(self.fname, 'wb')
3968+            assert not os.path.exists(self.finalhome)
3969+            fileutil.make_dirs(os.path.dirname(self.incominghome))
3970+            f = open(self.incominghome, 'wb')
3971             # The second field -- the four-byte share data length -- is no
3972             # longer used as of Tahoe v1.3.0, but we continue to write it in
3973             # there in case someone downgrades a storage server from >=
3974hunk ./src/allmydata/storage/backends/das/core.py 183
3975             self._lease_offset = max_size + 0x0c
3976             self._num_leases = 0
3977         else:
3978-            f = open(self.fname, 'rb')
3979-            filesize = os.path.getsize(self.fname)
3980+            f = open(self.finalhome, 'rb')
3981+            filesize = os.path.getsize(self.finalhome)
3982             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
3983             f.close()
3984             if version != 1:
3985hunk ./src/allmydata/storage/backends/das/core.py 189
3986                 msg = "sharefile %s had version %d but we wanted 1" % \
3987-                      (self.fname, version)
3988+                      (self.finalhome, version)
3989                 raise UnknownImmutableContainerVersionError(msg)
3990             self._num_leases = num_leases
3991             self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
3992hunk ./src/allmydata/storage/backends/das/core.py 225
3993         pass
3994         
3995     def stat(self):
3996-        return os.stat(self.finalhome)[os.stat.ST_SIZE]
3997+        return os.stat(self.finalhome)[stat.ST_SIZE]
3998+        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
3999 
4000     def get_shnum(self):
4001         return self.shnum
4002hunk ./src/allmydata/storage/backends/das/core.py 232
4003 
4004     def unlink(self):
4005-        os.unlink(self.fname)
4006+        os.unlink(self.finalhome)
4007 
4008     def read_share_data(self, offset, length):
4009         precondition(offset >= 0)
4010hunk ./src/allmydata/storage/backends/das/core.py 239
4011         # Reads beyond the end of the data are truncated. Reads that start
4012         # beyond the end of the data return an empty string.
4013         seekpos = self._data_offset+offset
4014-        fsize = os.path.getsize(self.fname)
4015+        fsize = os.path.getsize(self.finalhome)
4016         actuallength = max(0, min(length, fsize-seekpos))
4017         if actuallength == 0:
4018             return ""
4019hunk ./src/allmydata/storage/backends/das/core.py 243
4020-        f = open(self.fname, 'rb')
4021+        f = open(self.finalhome, 'rb')
4022         f.seek(seekpos)
4023         return f.read(actuallength)
4024 
4025hunk ./src/allmydata/storage/backends/das/core.py 252
4026         precondition(offset >= 0, offset)
4027         if self._max_size is not None and offset+length > self._max_size:
4028             raise DataTooLargeError(self._max_size, offset, length)
4029-        f = open(self.fname, 'rb+')
4030+        f = open(self.incominghome, 'rb+')
4031         real_offset = self._data_offset+offset
4032         f.seek(real_offset)
4033         assert f.tell() == real_offset
4034hunk ./src/allmydata/storage/backends/das/core.py 279
4035 
4036     def get_leases(self):
4037         """Yields a LeaseInfo instance for all leases."""
4038-        f = open(self.fname, 'rb')
4039+        f = open(self.finalhome, 'rb')
4040         (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
4041         f.seek(self._lease_offset)
4042         for i in range(num_leases):
4043hunk ./src/allmydata/storage/backends/das/core.py 288
4044                 yield LeaseInfo().from_immutable_data(data)
4045 
4046     def add_lease(self, lease_info):
4047-        f = open(self.fname, 'rb+')
4048+        f = open(self.incominghome, 'rb+')
4049         num_leases = self._read_num_leases(f)
4050         self._write_lease_record(f, num_leases, lease_info)
4051         self._write_num_leases(f, num_leases+1)
4052hunk ./src/allmydata/storage/backends/das/core.py 301
4053                 if new_expire_time > lease.expiration_time:
4054                     # yes
4055                     lease.expiration_time = new_expire_time
4056-                    f = open(self.fname, 'rb+')
4057+                    f = open(self.finalhome, 'rb+')
4058                     self._write_lease_record(f, i, lease)
4059                     f.close()
4060                 return
4061hunk ./src/allmydata/storage/backends/das/core.py 336
4062             # the same order as they were added, so that if we crash while
4063             # doing this, we won't lose any non-cancelled leases.
4064             leases = [l for l in leases if l] # remove the cancelled leases
4065-            f = open(self.fname, 'rb+')
4066+            f = open(self.finalhome, 'rb+')
4067             for i,lease in enumerate(leases):
4068                 self._write_lease_record(f, i, lease)
4069             self._write_num_leases(f, len(leases))
4070hunk ./src/allmydata/storage/backends/das/core.py 344
4071             f.close()
4072         space_freed = self.LEASE_SIZE * num_leases_removed
4073         if not len(leases):
4074-            space_freed += os.stat(self.fname)[stat.ST_SIZE]
4075+            space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
4076             self.unlink()
4077         return space_freed
4078hunk ./src/allmydata/test/test_backends.py 129
4079     @mock.patch('time.time')
4080     def test_write_share(self, mocktime):
4081         """ Write a new share. """
4082-
4083-        class MockShare:
4084-            def __init__(self):
4085-                self.shnum = 1
4086-               
4087-            def add_or_renew_lease(elf, lease_info):
4088-                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
4089-                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
4090-                self.failUnlessReallyEqual(lease_info.owner_num, 0)
4091-                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
4092-                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
4093-
4094-        share = MockShare()
4095-
4096         # Now begin the test.
4097 
4098         # XXX (0) ???  Fail unless something is not properly set-up?
4099hunk ./src/allmydata/test/test_backends.py 143
4100         # self.failIf(bsa)
4101 
4102         bs[0].remote_write(0, 'a')
4103-        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4104+        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4105         spaceint = self.s.allocated_size()
4106         self.failUnlessReallyEqual(spaceint, 1)
4107 
4108hunk ./src/allmydata/test/test_backends.py 161
4109         #self.failIf(mockrename.called, mockrename.call_args_list)
4110         #self.failIf(mockstat.called, mockstat.call_args_list)
4111 
4112+    def test_handle_incoming(self):
4113+        incomingset = self.s.backend.get_incoming('teststorage_index')
4114+        self.failUnlessReallyEqual(incomingset, set())
4115+
4116+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4117+       
4118+        incomingset = self.s.backend.get_incoming('teststorage_index')
4119+        self.failUnlessReallyEqual(incomingset, set((0,)))
4120+
4121+        bs[0].remote_close()
4122+        self.failUnlessReallyEqual(incomingset, set())
4123+
4124     @mock.patch('os.path.exists')
4125     @mock.patch('os.path.getsize')
4126     @mock.patch('__builtin__.open')
4127hunk ./src/allmydata/test/test_backends.py 223
4128         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4129 
4130 
4131-
4132 class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
4133     @mock.patch('time.time')
4134     @mock.patch('os.mkdir')
4135hunk ./src/allmydata/test/test_backends.py 271
4136         DASCore('teststoredir', expiration_policy)
4137 
4138         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4139+
4140}
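
The ImmutableShare rework above writes new shares into incominghome and reads completed ones back from finalhome, but both use the same v1 immutable container layout that the code parses: a 12-byte header of three big-endian 32-bit words (version, the no-longer-used data length, number of leases), followed by the share data, with 72-byte lease records appended at the end. A small sanity check of the sizes the hunks above rely on; nothing new is introduced here:

import struct

header = struct.pack(">LLL", 1, 1, 1)       # version, legacy data len, num leases
assert len(header) == 0x0c                  # matches f.read(0xc) in the code
version, unused_datalen, num_leases = struct.unpack(">LLL", header)
print(version, num_leases)                  # 1 1

LEASE_SIZE = struct.calcsize(">L32s32sL")   # owner, renew secret, cancel secret, expiry
print(LEASE_SIZE)                           # 72
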
4141[jacp14
4142wilcoxjg@gmail.com**20110712061211
4143 Ignore-this: 57b86958eceeef1442b21cca14798a0f
4144] {
4145hunk ./src/allmydata/storage/backends/das/core.py 95
4146             # XXX I'd like to make this more specific. If there are no shares at all.
4147             return set()
4148             
4149-    def get_shares(self, storage_index):
4150+    def get_shares(self, storageindex):
4151         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
4152hunk ./src/allmydata/storage/backends/das/core.py 97
4153-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
4154+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
4155         try:
4156             for f in os.listdir(finalstoragedir):
4157                 if NUM_RE.match(f):
4158hunk ./src/allmydata/storage/backends/das/core.py 102
4159                     filename = os.path.join(finalstoragedir, f)
4160-                    yield ImmutableShare(self.sharedir, storage_index, int(f))
4161+                    yield ImmutableShare(filename, storageindex, f)
4162         except OSError:
4163             # Commonly caused by there being no shares at all.
4164             pass
4165hunk ./src/allmydata/storage/backends/das/core.py 115
4166     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
4167         finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
4168         incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
4169-        immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
4170+        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
4171         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
4172         return bw
4173 
4174hunk ./src/allmydata/storage/backends/das/core.py 155
4175     LEASE_SIZE = struct.calcsize(">L32s32sL")
4176     sharetype = "immutable"
4177 
4178-    def __init__(self, finalhome, incominghome, max_size=None, create=False):
4179+    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
4180         """ If max_size is not None then I won't allow more than
4181         max_size to be written to me. If create=True then max_size
4182         must not be None. """
4183hunk ./src/allmydata/storage/backends/das/core.py 160
4184         precondition((max_size is not None) or (not create), max_size, create)
4185+        self.storageindex = storageindex
4186         self._max_size = max_size
4187         self.incominghome = incominghome
4188         self.finalhome = finalhome
4189hunk ./src/allmydata/storage/backends/das/core.py 164
4190+        self.shnum = shnum
4191         if create:
4192             # touch the file, so later callers will see that we're working on
4193             # it. Also construct the metadata.
4194hunk ./src/allmydata/storage/backends/das/core.py 212
4195             # their children to know when they should do the rmdir. This
4196             # approach is simpler, but relies on os.rmdir refusing to delete
4197             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
4198+            #print "os.path.dirname(self.incominghome): "
4199+            #print os.path.dirname(self.incominghome)
4200             os.rmdir(os.path.dirname(self.incominghome))
4201             # we also delete the grandparent (prefix) directory, .../ab ,
4202             # again to avoid leaving directories lying around. This might
4203hunk ./src/allmydata/storage/immutable.py 93
4204     def __init__(self, ss, share):
4205         self.ss = ss
4206         self._share_file = share
4207-        self.storage_index = share.storage_index
4208+        self.storageindex = share.storageindex
4209         self.shnum = share.shnum
4210 
4211     def __repr__(self):
4212hunk ./src/allmydata/storage/immutable.py 98
4213         return "<%s %s %s>" % (self.__class__.__name__,
4214-                               base32.b2a_l(self.storage_index[:8], 60),
4215+                               base32.b2a_l(self.storageindex[:8], 60),
4216                                self.shnum)
4217 
4218     def remote_read(self, offset, length):
4219hunk ./src/allmydata/storage/immutable.py 110
4220 
4221     def remote_advise_corrupt_share(self, reason):
4222         return self.ss.remote_advise_corrupt_share("immutable",
4223-                                                   self.storage_index,
4224+                                                   self.storageindex,
4225                                                    self.shnum,
4226                                                    reason)
4227hunk ./src/allmydata/test/test_backends.py 20
4228 # The following share file contents was generated with
4229 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
4230 # with share data == 'a'.
4231-renew_secret  = 'x'*32
4232-cancel_secret = 'y'*32
4233-share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
4234-share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
4235+shareversionnumber = '\x00\x00\x00\x01'
4236+sharedatalength = '\x00\x00\x00\x01'
4237+numberofleases = '\x00\x00\x00\x01'
4238+shareinputdata = 'a'
4239+ownernumber = '\x00\x00\x00\x00'
4240+renewsecret  = 'x'*32
4241+cancelsecret = 'y'*32
4242+expirationtime = '\x00(\xde\x80'
4243+nextlease = ''
4244+containerdata = shareversionnumber + sharedatalength + numberofleases
4245+client_data = shareinputdata + ownernumber + renewsecret + \
4246+    cancelsecret + expirationtime + nextlease
4247+share_data = containerdata + client_data
4248+
4249 
4250 testnodeid = 'testnodeidxxxxxxxxxx'
4251 tempdir = 'teststoredir'
4252hunk ./src/allmydata/test/test_backends.py 52
4253 
4254 class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4255     def setUp(self):
4256-        self.s = StorageServer(testnodeid, backend=NullCore())
4257+        self.ss = StorageServer(testnodeid, backend=NullCore())
4258 
4259     @mock.patch('os.mkdir')
4260     @mock.patch('__builtin__.open')
4261hunk ./src/allmydata/test/test_backends.py 62
4262         """ Write a new share. """
4263 
4264         # Now begin the test.
4265-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4266+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4267         bs[0].remote_write(0, 'a')
4268         self.failIf(mockisdir.called)
4269         self.failIf(mocklistdir.called)
4270hunk ./src/allmydata/test/test_backends.py 133
4271                 _assert(False, "The tester code doesn't recognize this case.") 
4272 
4273         mockopen.side_effect = call_open
4274-        testbackend = DASCore(tempdir, expiration_policy)
4275-        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
4276+        self.backend = DASCore(tempdir, expiration_policy)
4277+        self.ss = StorageServer(testnodeid, self.backend)
4278+        self.ssinf = StorageServer(testnodeid, self.backend)
4279 
4280     @mock.patch('time.time')
4281     def test_write_share(self, mocktime):
4282hunk ./src/allmydata/test/test_backends.py 142
4283         """ Write a new share. """
4284         # Now begin the test.
4285 
4286-        # XXX (0) ???  Fail unless something is not properly set-up?
4287-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4288+        mocktime.return_value = 0
4289+        # Inspect incoming and fail unless it's empty.
4290+        incomingset = self.ss.backend.get_incoming('teststorage_index')
4291+        self.failUnlessReallyEqual(incomingset, set())
4292+       
4293+        # Among other things, populate incoming with the sharenum: 0.
4294+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4295 
4296hunk ./src/allmydata/test/test_backends.py 150
4297-        # XXX (1) Inspect incoming and fail unless the sharenum is listed there.
4298-        self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
4299-        alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4300+        # Inspect incoming and fail unless the sharenum: 0 is listed there.
4301+        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4302+       
4303+        # Attempt to create a second share writer with the same share.
4304+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4305 
4306hunk ./src/allmydata/test/test_backends.py 156
4307-        # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
4308+        # Show that no sharewriter results from a remote_allocate_buckets
4309         # with the same si, until BucketWriter.remote_close() has been called.
4310hunk ./src/allmydata/test/test_backends.py 158
4311-        # self.failIf(bsa)
4312+        self.failIf(bsa)
4313 
4314hunk ./src/allmydata/test/test_backends.py 160
4315+        # Write 'a' to shnum 0. Only tested together with close and read.
4316         bs[0].remote_write(0, 'a')
4317hunk ./src/allmydata/test/test_backends.py 162
4318-        #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
4319-        spaceint = self.s.allocated_size()
4320+
4321+        # Test allocated size.
4322+        spaceint = self.ss.allocated_size()
4323         self.failUnlessReallyEqual(spaceint, 1)
4324 
4325         # XXX (3) Inspect final and fail unless there's nothing there.
4326hunk ./src/allmydata/test/test_backends.py 168
4327+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4328         bs[0].remote_close()
4329         # XXX (4a) Inspect final and fail unless share 0 is there.
4330hunk ./src/allmydata/test/test_backends.py 171
4331+        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4332+        #contents = sharesinfinal[0].read_share_data(0,999)
4333+        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4334         # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4335 
4336         # What happens when there's not enough space for the client's request?
4337hunk ./src/allmydata/test/test_backends.py 177
4338-        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4339+        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4340 
4341         # Now test the allocated_size method.
4342         # self.failIf(mockexists.called, mockexists.call_args_list)
4343hunk ./src/allmydata/test/test_backends.py 185
4344         #self.failIf(mockrename.called, mockrename.call_args_list)
4345         #self.failIf(mockstat.called, mockstat.call_args_list)
4346 
4347-    def test_handle_incoming(self):
4348-        incomingset = self.s.backend.get_incoming('teststorage_index')
4349-        self.failUnlessReallyEqual(incomingset, set())
4350-
4351-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4352-       
4353-        incomingset = self.s.backend.get_incoming('teststorage_index')
4354-        self.failUnlessReallyEqual(incomingset, set((0,)))
4355-
4356-        bs[0].remote_close()
4357-        self.failUnlessReallyEqual(incomingset, set())
4358-
4359     @mock.patch('os.path.exists')
4360     @mock.patch('os.path.getsize')
4361     @mock.patch('__builtin__.open')
4362hunk ./src/allmydata/test/test_backends.py 208
4363             self.failUnless('r' in mode, mode)
4364             self.failUnless('b' in mode, mode)
4365 
4366-            return StringIO(share_file_data)
4367+            return StringIO(share_data)
4368         mockopen.side_effect = call_open
4369 
4370hunk ./src/allmydata/test/test_backends.py 211
4371-        datalen = len(share_file_data)
4372+        datalen = len(share_data)
4373         def call_getsize(fname):
4374             self.failUnlessReallyEqual(fname, sharefname)
4375             return datalen
4376hunk ./src/allmydata/test/test_backends.py 223
4377         mockexists.side_effect = call_exists
4378 
4379         # Now begin the test.
4380-        bs = self.s.remote_get_buckets('teststorage_index')
4381+        bs = self.ss.remote_get_buckets('teststorage_index')
4382 
4383         self.failUnlessEqual(len(bs), 1)
4384hunk ./src/allmydata/test/test_backends.py 226
4385-        b = bs[0]
4386+        b = bs['0']
4387         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4388hunk ./src/allmydata/test/test_backends.py 228
4389-        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
4390+        self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4391         # If you try to read past the end you get as much data as is there.
4392hunk ./src/allmydata/test/test_backends.py 230
4393-        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
4394+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), client_data)
4395         # If you start reading past the end of the file you get the empty string.
4396         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
4397 
4398}
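
The named constants introduced at the top of test_backends.py make the fixture self-describing: containerdata is the 12-byte header and client_data is the 73 bytes a reader is expected to hand back, which is why the assertions now compare remote_read() output against client_data (and later ones read read_share_data(0, 73)) rather than against the raw file. Reassembling the fields (b-prefixed here so the snippet is self-contained) and checking the sizes, plus the lease expiry the test expects with mocktime() == 0:

import struct

shareversionnumber = b'\x00\x00\x00\x01'
sharedatalength    = b'\x00\x00\x00\x01'
numberofleases     = b'\x00\x00\x00\x01'
shareinputdata     = b'a'
ownernumber        = b'\x00\x00\x00\x00'
renewsecret        = b'x' * 32
cancelsecret       = b'y' * 32
expirationtime     = b'\x00(\xde\x80'
nextlease          = b''

containerdata = shareversionnumber + sharedatalength + numberofleases
client_data   = (shareinputdata + ownernumber + renewsecret + cancelsecret +
                 expirationtime + nextlease)
share_data    = containerdata + client_data

print(len(containerdata), len(client_data), len(share_data))     # 12 73 85
print(struct.unpack(">L", expirationtime)[0] == 31*24*60*60)     # True
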
4399[jacp14 or so
4400wilcoxjg@gmail.com**20110713060346
4401 Ignore-this: 7026810f60879d65b525d450e43ff87a
4402] {
4403hunk ./src/allmydata/storage/backends/das/core.py 102
4404             for f in os.listdir(finalstoragedir):
4405                 if NUM_RE.match(f):
4406                     filename = os.path.join(finalstoragedir, f)
4407-                    yield ImmutableShare(filename, storageindex, f)
4408+                    yield ImmutableShare(filename, storageindex, int(f))
4409         except OSError:
4410             # Commonly caused by there being no shares at all.
4411             pass
4412hunk ./src/allmydata/storage/backends/null/core.py 25
4413     def set_storage_server(self, ss):
4414         self.ss = ss
4415 
4416+    def get_incoming(self, storageindex):
4417+        return set()
4418+
4419 class ImmutableShare:
4420     sharetype = "immutable"
4421 
4422hunk ./src/allmydata/storage/immutable.py 19
4423 
4424     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
4425         self.ss = ss
4426-        self._max_size = max_size # don't allow the client to write more than this
4427+        self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
4428+
4429         self._canary = canary
4430         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
4431         self.closed = False
4432hunk ./src/allmydata/test/test_backends.py 135
4433         mockopen.side_effect = call_open
4434         self.backend = DASCore(tempdir, expiration_policy)
4435         self.ss = StorageServer(testnodeid, self.backend)
4436-        self.ssinf = StorageServer(testnodeid, self.backend)
4437+        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4438+        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4439 
4440     @mock.patch('time.time')
4441     def test_write_share(self, mocktime):
4442hunk ./src/allmydata/test/test_backends.py 161
4443         # with the same si, until BucketWriter.remote_close() has been called.
4444         self.failIf(bsa)
4445 
4446-        # Write 'a' to shnum 0. Only tested together with close and read.
4447-        bs[0].remote_write(0, 'a')
4448-
4449         # Test allocated size.
4450         spaceint = self.ss.allocated_size()
4451         self.failUnlessReallyEqual(spaceint, 1)
4452hunk ./src/allmydata/test/test_backends.py 165
4453 
4454-        # XXX (3) Inspect final and fail unless there's nothing there.
4455+        # Write 'a' to shnum 0. Only tested together with close and read.
4456+        bs[0].remote_write(0, 'a')
4457+       
4458+        # Preclose: Inspect final, failUnless nothing there.
4459         self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
4460         bs[0].remote_close()
4461hunk ./src/allmydata/test/test_backends.py 171
4462-        # XXX (4a) Inspect final and fail unless share 0 is there.
4463-        #sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4464-        #contents = sharesinfinal[0].read_share_data(0,999)
4465-        #self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4466-        # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
4467 
4468hunk ./src/allmydata/test/test_backends.py 172
4469-        # What happens when there's not enough space for the client's request?
4470-        # XXX Need to uncomment! alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
4471+        # Postclose: (Omnibus) failUnless written data is in final.
4472+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4473+        contents = sharesinfinal[0].read_share_data(0,73)
4474+        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4475 
4476hunk ./src/allmydata/test/test_backends.py 177
4477-        # Now test the allocated_size method.
4478-        # self.failIf(mockexists.called, mockexists.call_args_list)
4479-        #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list)
4480-        #self.failIf(mockrename.called, mockrename.call_args_list)
4481-        #self.failIf(mockstat.called, mockstat.call_args_list)
4482+        # Cover interior of for share in get_shares loop.
4483+        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4484+       
4485+    @mock.patch('time.time')
4486+    @mock.patch('allmydata.util.fileutil.get_available_space')
4487+    def test_out_of_space(self, mockget_available_space, mocktime):
4488+        mocktime.return_value = 0
4489+       
4490+        def call_get_available_space(dir, reserve):
4491+            return 0
4492+
4493+        mockget_available_space.side_effect = call_get_available_space
4494+       
4495+       
4496+        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4497 
4498     @mock.patch('os.path.exists')
4499     @mock.patch('os.path.getsize')
4500hunk ./src/allmydata/test/test_backends.py 234
4501         bs = self.ss.remote_get_buckets('teststorage_index')
4502 
4503         self.failUnlessEqual(len(bs), 1)
4504-        b = bs['0']
4505+        b = bs[0]
4506         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
4507         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
4508         # If you try to read past the end you get as much data as is there.
4509}
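
Two of the small changes above are really the same fix: share files on disk are named by their decimal shnum, so whatever get_shares() passes as the share number determines how the resulting buckets are keyed. Converting the filename with int(f) keeps the share numbers as integers, which appears to be why the read test goes back to indexing bs[0] instead of bs['0']. A quick illustration of the string/integer key mismatch (NUM_RE follows the regex defined in das/core.py; the filename list is made up):

import re

NUM_RE = re.compile("^[0-9]+$")
filenames = ['0', '5', 'incoming']                 # simulated listdir() output
shnums = [int(f) for f in filenames if NUM_RE.match(f)]
print(shnums)                                      # [0, 5]

buckets = dict((shnum, object()) for shnum in shnums)
print(0 in buckets, '0' in buckets)                # True False
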
4510[temporary work-in-progress patch to be unrecorded
4511zooko@zooko.com**20110714003008
4512 Ignore-this: 39ecb812eca5abe04274c19897af5b45
4513 tidy up a few tests, work done in pair-programming with Zancas
4514] {
4515hunk ./src/allmydata/storage/backends/das/core.py 65
4516         self._clean_incomplete()
4517 
4518     def _clean_incomplete(self):
4519-        fileutil.rm_dir(self.incomingdir)
4520+        fileutil.rmtree(self.incomingdir)
4521         fileutil.make_dirs(self.incomingdir)
4522 
4523     def _setup_corruption_advisory(self):
4524hunk ./src/allmydata/storage/immutable.py 1
4525-import os, stat, struct, time
4526+import os, time
4527 
4528 from foolscap.api import Referenceable
4529 
4530hunk ./src/allmydata/storage/server.py 1
4531-import os, re, weakref, struct, time
4532+import os, weakref, struct, time
4533 
4534 from foolscap.api import Referenceable
4535 from twisted.application import service
4536hunk ./src/allmydata/storage/server.py 7
4537 
4538 from zope.interface import implements
4539-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
4540+from allmydata.interfaces import RIStorageServer, IStatsProducer
4541 from allmydata.util import fileutil, idlib, log, time_format
4542 import allmydata # for __full_version__
4543 
4544hunk ./src/allmydata/storage/server.py 313
4545         self.add_latency("get", time.time() - start)
4546         return bucketreaders
4547 
4548-    def remote_get_incoming(self, storageindex):
4549-        incoming_share_set = self.backend.get_incoming(storageindex)
4550-        return incoming_share_set
4551-
4552     def get_leases(self, storageindex):
4553         """Provide an iterator that yields all of the leases attached to this
4554         bucket. Each lease is returned as a LeaseInfo instance.
4555hunk ./src/allmydata/test/test_backends.py 3
4556 from twisted.trial import unittest
4557 
4558+from twisted.python.filepath import FilePath
4559+
4560 from StringIO import StringIO
4561 
4562 from allmydata.test.common_util import ReallyEqualMixin
4563hunk ./src/allmydata/test/test_backends.py 38
4564 
4565 
4566 testnodeid = 'testnodeidxxxxxxxxxx'
4567-tempdir = 'teststoredir'
4568-basedir = os.path.join(tempdir, 'shares')
4569+storedir = 'teststoredir'
4570+storedirfp = FilePath(storedir)
4571+basedir = os.path.join(storedir, 'shares')
4572 baseincdir = os.path.join(basedir, 'incoming')
4573 sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4574 sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4575hunk ./src/allmydata/test/test_backends.py 53
4576                      'cutoff_date' : None,
4577                      'sharetypes' : None}
4578 
4579-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
4580+class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
4581+    """ NullBackend is just for testing and executable documentation, so
4582+    this test is actually a test of StorageServer in which we're using
4583+    NullBackend as helper code for the test, rather than a test of
4584+    NullBackend. """
4585     def setUp(self):
4586         self.ss = StorageServer(testnodeid, backend=NullCore())
4587 
4588hunk ./src/allmydata/test/test_backends.py 62
4589     @mock.patch('os.mkdir')
4590+
4591     @mock.patch('__builtin__.open')
4592     @mock.patch('os.listdir')
4593     @mock.patch('os.path.isdir')
4594hunk ./src/allmydata/test/test_backends.py 69
4595     def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
4596         """ Write a new share. """
4597 
4598-        # Now begin the test.
4599         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4600         bs[0].remote_write(0, 'a')
4601         self.failIf(mockisdir.called)
4602hunk ./src/allmydata/test/test_backends.py 83
4603     @mock.patch('os.listdir')
4604     @mock.patch('os.path.isdir')
4605     def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4606-        """ This tests whether a server instance can be constructed
4607-        with a filesystem backend. To pass the test, it has to use the
4608-        filesystem in only the prescribed ways. """
4609+        """ This tests whether a server instance can be constructed with a
4610+        filesystem backend. To pass the test, it mustn't use the filesystem
4611+        outside of its configured storedir. """
4612 
4613         def call_open(fname, mode):
4614hunk ./src/allmydata/test/test_backends.py 88
4615-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4616-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4617-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4618-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4619-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4620+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4621+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4622+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4623+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4624+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4625                 return StringIO()
4626             else:
4627hunk ./src/allmydata/test/test_backends.py 95
4628-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4629+                fnamefp = FilePath(fname)
4630+                self.failUnless(storedirfp in fnamefp.parents(),
4631+                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4632         mockopen.side_effect = call_open
4633 
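
The failUnless(storedirfp in fnamefp.parents(), ...) check above is the core of the "stay in your subtree" tester: FilePath.parents() yields every ancestor of a path, so the assertion holds exactly when the opened file lives somewhere under the configured storedir. A minimal standalone illustration (the example paths are made up; it assumes Twisted is importable):

from twisted.python.filepath import FilePath

storedirfp = FilePath('teststoredir')
inside     = FilePath('teststoredir/shares/incoming/or')
outside    = FilePath('/etc/passwd')

print(storedirfp in inside.parents())      # True: under the storedir
print(storedirfp in outside.parents())     # False: would fail the assertion
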
4634         def call_isdir(fname):
4635hunk ./src/allmydata/test/test_backends.py 101
4636-            if fname == os.path.join(tempdir,'shares'):
4637+            if fname == os.path.join(storedir, 'shares'):
4638                 return True
4639hunk ./src/allmydata/test/test_backends.py 103
4640-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4641+            elif fname == os.path.join(storedir, 'shares', 'incoming'):
4642                 return True
4643             else:
4644                 self.fail("Server with FS backend tried to isdir '%s'" % (fname,))
4645hunk ./src/allmydata/test/test_backends.py 109
4646         mockisdir.side_effect = call_isdir
4647 
4648+        mocklistdir.return_value = []
4649+
4650         def call_mkdir(fname, mode):
4651hunk ./src/allmydata/test/test_backends.py 112
4652-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4653             self.failUnlessEqual(0777, mode)
4654hunk ./src/allmydata/test/test_backends.py 113
4655-            if fname == tempdir:
4656-                return None
4657-            elif fname == os.path.join(tempdir,'shares'):
4658-                return None
4659-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4660-                return None
4661-            else:
4662-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4663+            self.failUnlessIn(fname,
4664+                              [storedir,
4665+                               os.path.join(storedir, 'shares'),
4666+                               os.path.join(storedir, 'shares', 'incoming')],
4667+                              "Server with FS backend tried to mkdir '%s'" % (fname,))
4668         mockmkdir.side_effect = call_mkdir
4669 
4670         # Now begin the test.
4671hunk ./src/allmydata/test/test_backends.py 121
4672-        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4673+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
4674 
4675         self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4676 
4677hunk ./src/allmydata/test/test_backends.py 126
4678 
4679-class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
4680+class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
4681+    """ This tests both the StorageServer xyz """
4682     @mock.patch('__builtin__.open')
4683     def setUp(self, mockopen):
4684         def call_open(fname, mode):
4685hunk ./src/allmydata/test/test_backends.py 131
4686-            if fname == os.path.join(tempdir, 'bucket_counter.state'):
4687-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4688-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4689-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4690-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4691+            if fname == os.path.join(storedir, 'bucket_counter.state'):
4692+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4693+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4694+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4695+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4696                 return StringIO()
4697             else:
4698                 _assert(False, "The tester code doesn't recognize this case.") 
4699hunk ./src/allmydata/test/test_backends.py 141
4700 
4701         mockopen.side_effect = call_open
4702-        self.backend = DASCore(tempdir, expiration_policy)
4703+        self.backend = DASCore(storedir, expiration_policy)
4704         self.ss = StorageServer(testnodeid, self.backend)
4705hunk ./src/allmydata/test/test_backends.py 143
4706-        self.backendsmall = DASCore(tempdir, expiration_policy, reserved_space = 1)
4707+        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
4708         self.ssmallback = StorageServer(testnodeid, self.backendsmall)
4709 
4710     @mock.patch('time.time')
4711hunk ./src/allmydata/test/test_backends.py 147
4712-    def test_write_share(self, mocktime):
4713-        """ Write a new share. """
4714-        # Now begin the test.
4715+    def test_write_and_read_share(self, mocktime):
4716+        """
4717+        Write a new share, read it, and test the server's (and FS backend's)
4718+        handling of simultaneous and successive attempts to write the same
4719+        share.
4720+        """
4721 
4722         mocktime.return_value = 0
4723         # Inspect incoming and fail unless it's empty.
4724hunk ./src/allmydata/test/test_backends.py 159
4725         incomingset = self.ss.backend.get_incoming('teststorage_index')
4726         self.failUnlessReallyEqual(incomingset, set())
4727         
4728-        # Among other things, populate incoming with the sharenum: 0.
4729+        # Populate incoming with the sharenum: 0.
4730         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4731 
4732         # Inspect incoming and fail unless the sharenum: 0 is listed there.
4733hunk ./src/allmydata/test/test_backends.py 163
4734-        self.failUnlessEqual(self.ss.remote_get_incoming('teststorage_index'), set((0,)))
4735+        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
4736         
4737hunk ./src/allmydata/test/test_backends.py 165
4738-        # Attempt to create a second share writer with the same share.
4739+        # Attempt to create a second share writer with the same sharenum.
4740         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4741 
4742         # Show that no sharewriter results from a remote_allocate_buckets
4743hunk ./src/allmydata/test/test_backends.py 169
4744-        # with the same si, until BucketWriter.remote_close() has been called.
4745+        # with the same si and sharenum, until BucketWriter.remote_close()
4746+        # has been called.
4747         self.failIf(bsa)
4748 
4749         # Test allocated size.
4750hunk ./src/allmydata/test/test_backends.py 187
4751         # Postclose: (Omnibus) failUnless written data is in final.
4752         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
4753         contents = sharesinfinal[0].read_share_data(0,73)
4754-        self.failUnlessReallyEqual(sharesinfinal[0].read_share_data(0,73), client_data)
4755+        self.failUnlessReallyEqual(contents, client_data)
4756 
4757hunk ./src/allmydata/test/test_backends.py 189
4758-        # Cover interior of for share in get_shares loop.
4759-        alreadygotb, bsb = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4760+        # Exercise the case that the share we're asking to allocate is
4761+        # already (completely) uploaded.
4762+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
4763         
4764     @mock.patch('time.time')
4765     @mock.patch('allmydata.util.fileutil.get_available_space')
4766hunk ./src/allmydata/test/test_backends.py 210
4767     @mock.patch('os.path.getsize')
4768     @mock.patch('__builtin__.open')
4769     @mock.patch('os.listdir')
4770-    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4771+    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
4772         """ This tests whether the code correctly finds and reads
4773         shares written out by old (Tahoe-LAFS <= v1.8.2)
4774         servers. There is a similar test in test_download, but that one
4775hunk ./src/allmydata/test/test_backends.py 219
4776         StorageServer object. """
4777 
4778         def call_listdir(dirname):
4779-            self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4780+            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
4781             return ['0']
4782 
4783         mocklistdir.side_effect = call_listdir
4784hunk ./src/allmydata/test/test_backends.py 226
4785 
4786         def call_open(fname, mode):
4787             self.failUnlessReallyEqual(fname, sharefname)
4788-            self.failUnless('r' in mode, mode)
4789+            self.failUnlessEqual(mode[0], 'r', mode)
4790             self.failUnless('b' in mode, mode)
4791 
4792             return StringIO(share_data)
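
test_read_old_share fakes just enough of the filesystem for the server to discover one pre-existing share: os.listdir reports a single entry named '0' under the storage-index directory, os.path.exists and getsize describe it, and open() returns the canned share bytes. A stripped-down sketch of that arrangement (illustrative only; share_data here is a placeholder, not the real v1 share layout the test constructs):

    import mock
    from StringIO import StringIO

    share_data = '<canned immutable share bytes>'   # placeholder payload
    sharefname = 'teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'

    with mock.patch('os.path.exists', return_value=True), \
         mock.patch('os.path.getsize', return_value=len(share_data)), \
         mock.patch('__builtin__.open', return_value=StringIO(share_data)), \
         mock.patch('os.listdir', return_value=['0']):
        # any code that walks the share directory now sees exactly one
        # share, '0', whose contents read back as share_data
        pass
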
4793hunk ./src/allmydata/test/test_backends.py 268
4794         filesystem in only the prescribed ways. """
4795 
4796         def call_open(fname, mode):
4797-            if fname == os.path.join(tempdir,'bucket_counter.state'):
4798-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
4799-            elif fname == os.path.join(tempdir, 'lease_checker.state'):
4800-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
4801-            elif fname == os.path.join(tempdir, 'lease_checker.history'):
4802+            if fname == os.path.join(storedir,'bucket_counter.state'):
4803+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
4804+            elif fname == os.path.join(storedir, 'lease_checker.state'):
4805+                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
4806+            elif fname == os.path.join(storedir, 'lease_checker.history'):
4807                 return StringIO()
4808             else:
4809                 self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
4810hunk ./src/allmydata/test/test_backends.py 279
4811         mockopen.side_effect = call_open
4812 
4813         def call_isdir(fname):
4814-            if fname == os.path.join(tempdir,'shares'):
4815+            if fname == os.path.join(storedir,'shares'):
4816                 return True
4817hunk ./src/allmydata/test/test_backends.py 281
4818-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4819+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4820                 return True
4821             else:
4822                 self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
4823hunk ./src/allmydata/test/test_backends.py 290
4824         def call_mkdir(fname, mode):
4825             """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
4826             self.failUnlessEqual(0777, mode)
4827-            if fname == tempdir:
4828+            if fname == storedir:
4829                 return None
4830hunk ./src/allmydata/test/test_backends.py 292
4831-            elif fname == os.path.join(tempdir,'shares'):
4832+            elif fname == os.path.join(storedir,'shares'):
4833                 return None
4834hunk ./src/allmydata/test/test_backends.py 294
4835-            elif fname == os.path.join(tempdir,'shares', 'incoming'):
4836+            elif fname == os.path.join(storedir,'shares', 'incoming'):
4837                 return None
4838             else:
4839                 self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
4840hunk ./src/allmydata/util/fileutil.py 5
4841 Futz with files like a pro.
4842 """
4843 
4844-import sys, exceptions, os, stat, tempfile, time, binascii
4845+import errno, sys, exceptions, os, stat, tempfile, time, binascii
4846 
4847 from twisted.python import log
4848 
4849hunk ./src/allmydata/util/fileutil.py 186
4850             raise tx
4851         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
4852 
4853-def rm_dir(dirname):
4854+def rmtree(dirname):
4855     """
4856     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
4857     already gone, do nothing and return without raising an exception.  If this
4858hunk ./src/allmydata/util/fileutil.py 205
4859             else:
4860                 remove(fullname)
4861         os.rmdir(dirname)
4862-    except Exception, le:
4863-        # Ignore "No such file or directory"
4864-        if (not isinstance(le, OSError)) or le.args[0] != 2:
4865+    except EnvironmentError, le:
4866+        # Ignore "No such file or directory", collect any other exception.
4867+        if le.args[0] != errno.ENOENT:
4868             excs.append(le)
4869hunk ./src/allmydata/util/fileutil.py 209
4870+    except Exception, le:
4871+        excs.append(le)
4872 
4873     # Okay, now we've recursively removed everything, ignoring any "No
4874     # such file or directory" errors, and collecting any other errors.
4875hunk ./src/allmydata/util/fileutil.py 222
4876             raise OSError, "Failed to remove dir for unknown reason."
4877         raise OSError, excs
4878 
4879+def rm_dir(dirname):
4880+    # Renamed to be like shutil.rmtree and unlike rmdir.
4881+    return rmtree(dirname)
4882 
4883 def remove_if_possible(f):
4884     try:
4885}
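
The fileutil hunk above renames rm_dir() to rmtree() (keeping rm_dir as an alias) and tightens the exception handling so that only "No such file or directory" is swallowed while everything else is collected and re-raised. The essential idempotent-removal idea, reduced to a sketch that leans on shutil instead of the hand-rolled loop (so it omits the threadsafety and error-collection the real function provides):

    import errno, shutil

    def rmtree_idempotent(dirname):
        """Remove dirname and everything under it; succeed silently if it
        is already gone, re-raise anything else."""
        try:
            shutil.rmtree(dirname)
        except EnvironmentError, e:
            if e.errno != errno.ENOENT:
                raise

    # rmtree_idempotent('some/leftover/dir') can be called unconditionally
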
4886[work in progress intended to be unrecorded and never committed to trunk
4887zooko@zooko.com**20110714212139
4888 Ignore-this: c291aaf2b22c4887ad0ba2caea911537
4889 switch from os.path.join to filepath
4890 incomplete refactoring of common "stay in your subtree" tester code into a superclass
4891 
4892] {
4893hunk ./src/allmydata/test/test_backends.py 3
4894 from twisted.trial import unittest
4895 
4896-from twisted.path.filepath import FilePath
4897+from twisted.python.filepath import FilePath
4898 
4899 from StringIO import StringIO
4900 
4901hunk ./src/allmydata/test/test_backends.py 10
4902 from allmydata.test.common_util import ReallyEqualMixin
4903 from allmydata.util.assertutil import _assert
4904 
4905-import mock, os
4906+import mock
4907 
4908 # This is the code that we're going to be testing.
4909 from allmydata.storage.server import StorageServer
4910hunk ./src/allmydata/test/test_backends.py 25
4911 shareversionnumber = '\x00\x00\x00\x01'
4912 sharedatalength = '\x00\x00\x00\x01'
4913 numberofleases = '\x00\x00\x00\x01'
4914+
4915 shareinputdata = 'a'
4916 ownernumber = '\x00\x00\x00\x00'
4917 renewsecret  = 'x'*32
4918hunk ./src/allmydata/test/test_backends.py 39
4919 
4920 
4921 testnodeid = 'testnodeidxxxxxxxxxx'
4922-storedir = 'teststoredir'
4923-storedirfp = FilePath(storedir)
4924-basedir = os.path.join(storedir, 'shares')
4925-baseincdir = os.path.join(basedir, 'incoming')
4926-sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4927-sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
4928-shareincomingname = os.path.join(sharedirincomingname, '0')
4929-sharefname = os.path.join(sharedirfinalname, '0')
4930+
4931+class TestFilesMixin(unittest.TestCase):
4932+    def setUp(self):
4933+        self.storedir = FilePath('teststoredir')
4934+        self.basedir = self.storedir.child('shares')
4935+        self.baseincdir = self.basedir.child('incoming')
4936+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4937+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
4938+        self.shareincomingname = self.sharedirincomingname.child('0')
4939+        self.sharefname = self.sharedirfinalname.child('0')
4940+
4941+    def call_open(self, fname, mode):
4942+        fnamefp = FilePath(fname)
4943+        if fnamefp == self.storedir.child('bucket_counter.state'):
4944+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
4945+        elif fnamefp == self.storedir.child('lease_checker.state'):
4946+            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
4947+        elif fnamefp == self.storedir.child('lease_checker.history'):
4948+            return StringIO()
4949+        else:
4950+            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4951+                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
4952+
4953+    def call_isdir(self, fname):
4954+        fnamefp = FilePath(fname)
4955+        if fnamefp == self.storedir.child('shares'):
4956+            return True
4957+        elif fnamefp == self.storedir.child('shares').child('incoming'):
4958+            return True
4959+        else:
4960+            self.failUnless(self.storedir in fnamefp.parents(),
4961+                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4962+
4963+    def call_mkdir(self, fname, mode):
4964+        self.failUnlessEqual(0777, mode)
4965+        fnamefp = FilePath(fname)
4966+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
4967+                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
4968+
4969+
4970+    @mock.patch('os.mkdir')
4971+    @mock.patch('__builtin__.open')
4972+    @mock.patch('os.listdir')
4973+    @mock.patch('os.path.isdir')
4974+    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4975+        mocklistdir.return_value = []
4976+        mockmkdir.side_effect = self.call_mkdir
4977+        mockisdir.side_effect = self.call_isdir
4978+        mockopen.side_effect = self.call_open
4979+        mocklistdir.return_value = []
4980+       
4981+        test_func()
4982+       
4983+        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
4984 
4985 expiration_policy = {'enabled' : False,
4986                      'mode' : 'age',
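
The TestFilesMixin above switches the path bookkeeping to twisted.python.filepath.FilePath, and the "stay in your subtree" check becomes: the requested path is the store directory itself, or the store directory is among its parents. That containment test in isolation (uses only the FilePath methods already exercised by the patch):

    from twisted.python.filepath import FilePath

    storedir = FilePath('teststoredir')

    def is_inside_store(fname):
        """True if fname is the store directory or lives somewhere beneath it."""
        fp = FilePath(fname)
        return storedir == fp or storedir in fp.parents()

    # is_inside_store('teststoredir/shares/incoming')  -> True
    # is_inside_store('/etc/passwd')                   -> False
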
4987hunk ./src/allmydata/test/test_backends.py 123
4988         self.failIf(mockopen.called)
4989         self.failIf(mockmkdir.called)
4990 
4991-class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
4992-    @mock.patch('time.time')
4993-    @mock.patch('os.mkdir')
4994-    @mock.patch('__builtin__.open')
4995-    @mock.patch('os.listdir')
4996-    @mock.patch('os.path.isdir')
4997-    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
4998+class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
4999+    def test_create_server_fs_backend(self):
5000         """ This tests whether a server instance can be constructed with a
5001         filesystem backend. To pass the test, it mustn't use the filesystem
5002         outside of its configured storedir. """
5003hunk ./src/allmydata/test/test_backends.py 129
5004 
5005-        def call_open(fname, mode):
5006-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5007-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5008-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5009-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5010-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5011-                return StringIO()
5012-            else:
5013-                fnamefp = FilePath(fname)
5014-                self.failUnless(storedirfp in fnamefp.parents(),
5015-                                "Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5016-        mockopen.side_effect = call_open
5017+        def _f():
5018+            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5019 
5020hunk ./src/allmydata/test/test_backends.py 132
5021-        def call_isdir(fname):
5022-            if fname == os.path.join(storedir, 'shares'):
5023-                return True
5024-            elif fname == os.path.join(storedir, 'shares', 'incoming'):
5025-                return True
5026-            else:
5027-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5028-        mockisdir.side_effect = call_isdir
5029-
5030-        mocklistdir.return_value = []
5031-
5032-        def call_mkdir(fname, mode):
5033-            self.failUnlessEqual(0777, mode)
5034-            self.failUnlessIn(fname,
5035-                              [storedir,
5036-                               os.path.join(storedir, 'shares'),
5037-                               os.path.join(storedir, 'shares', 'incoming')],
5038-                              "Server with FS backend tried to mkdir '%s'" % (fname,))
5039-        mockmkdir.side_effect = call_mkdir
5040-
5041-        # Now begin the test.
5042-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5043-
5044-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5045+        self._help_test_stay_in_your_subtree(_f)
5046 
5047 
5048 class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5049}
5050[another incomplete patch for people who are very curious about incomplete work or for Zancas to apply and build on top of 2011-07-15_19_15Z
5051zooko@zooko.com**20110715191500
5052 Ignore-this: af33336789041800761e80510ea2f583
5053 In this patch (very incomplete) we started two major changes: first was to refactor the mockery of the filesystem into a common base class which provides a mock filesystem for all the DAS tests. Second was to convert from Python standard library filename manipulation like os.path.join to twisted.python.filepath. The former *might* be close to complete -- it seems to run at least most of the first test before that test hits a problem due to the incomplete converstion to filepath. The latter has still a lot of work to go.
5054] {
5055hunk ./src/allmydata/storage/backends/das/core.py 59
5056                 log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5057                         umid="0wZ27w", level=log.UNUSUAL)
5058 
5059-        self.sharedir = os.path.join(self.storedir, "shares")
5060-        fileutil.make_dirs(self.sharedir)
5061-        self.incomingdir = os.path.join(self.sharedir, 'incoming')
5062+        self.sharedir = self.storedir.child("shares")
5063+        fileutil.fp_make_dirs(self.sharedir)
5064+        self.incomingdir = self.sharedir.child('incoming')
5065         self._clean_incomplete()
5066 
5067     def _clean_incomplete(self):
5068hunk ./src/allmydata/storage/backends/das/core.py 65
5069-        fileutil.rmtree(self.incomingdir)
5070-        fileutil.make_dirs(self.incomingdir)
5071+        fileutil.fp_remove(self.incomingdir)
5072+        fileutil.fp_make_dirs(self.incomingdir)
5073 
5074     def _setup_corruption_advisory(self):
5075         # we don't actually create the corruption-advisory dir until necessary
5076hunk ./src/allmydata/storage/backends/das/core.py 70
5077-        self.corruption_advisory_dir = os.path.join(self.storedir,
5078-                                                    "corruption-advisories")
5079+        self.corruption_advisory_dir = self.storedir.child("corruption-advisories")
5080 
5081     def _setup_bucket_counter(self):
5082hunk ./src/allmydata/storage/backends/das/core.py 73
5083-        statefname = os.path.join(self.storedir, "bucket_counter.state")
5084+        statefname = self.storedir.child("bucket_counter.state")
5085         self.bucket_counter = FSBucketCountingCrawler(statefname)
5086         self.bucket_counter.setServiceParent(self)
5087 
5088hunk ./src/allmydata/storage/backends/das/core.py 78
5089     def _setup_lease_checkerf(self, expiration_policy):
5090-        statefile = os.path.join(self.storedir, "lease_checker.state")
5091-        historyfile = os.path.join(self.storedir, "lease_checker.history")
5092+        statefile = self.storedir.child("lease_checker.state")
5093+        historyfile = self.storedir.child("lease_checker.history")
5094         self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
5095         self.lease_checker.setServiceParent(self)
5096 
5097hunk ./src/allmydata/storage/backends/das/core.py 83
5098-    def get_incoming(self, storageindex):
5099+    def get_incoming_shnums(self, storageindex):
5100         """Return the set of incoming shnums."""
5101         try:
5102hunk ./src/allmydata/storage/backends/das/core.py 86
5103-            incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
5104-            incominglist = os.listdir(incomingsharesdir)
5105-            incomingshnums = [int(x) for x in incominglist]
5106-            return set(incomingshnums)
5107-        except OSError:
5108-            # XXX I'd like to make this more specific. If there are no shares at all.
5109-            return set()
5110+           
5111+            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5112+            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5113+            return frozenset(incomingshnums)
5114+        except UnlistableError:
5115+            # There is no shares directory at all.
5116+            return frozenset()
5117             
5118     def get_shares(self, storageindex):
5119         """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5120hunk ./src/allmydata/storage/backends/das/core.py 96
5121-        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storageindex))
5122+        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5123         try:
5124hunk ./src/allmydata/storage/backends/das/core.py 98
5125-            for f in os.listdir(finalstoragedir):
5126-                if NUM_RE.match(f):
5127-                    filename = os.path.join(finalstoragedir, f)
5128-                    yield ImmutableShare(filename, storageindex, int(f))
5129-        except OSError:
5130-            # Commonly caused by there being no shares at all.
5131+            for f in finalstoragedir.listdir():
5132+                if NUM_RE.match(f.basename):
5133+                    yield ImmutableShare(f, storageindex, int(f))
5134+        except UnlistableError:
5135+            # There is no shares directory at all.
5136             pass
5137         
5138     def get_available_space(self):
5139hunk ./src/allmydata/storage/backends/das/core.py 149
5140 # then the value stored in this field will be the actual share data length
5141 # modulo 2**32.
5142 
5143-class ImmutableShare:
5144+class ImmutableShare(object):
5145     LEASE_SIZE = struct.calcsize(">L32s32sL")
5146     sharetype = "immutable"
5147 
5148hunk ./src/allmydata/storage/backends/das/core.py 166
5149         if create:
5150             # touch the file, so later callers will see that we're working on
5151             # it. Also construct the metadata.
5152-            assert not os.path.exists(self.finalhome)
5153-            fileutil.make_dirs(os.path.dirname(self.incominghome))
5154+            assert not finalhome.exists()
5155+            fp_make_dirs(self.incominghome)
5156             f = open(self.incominghome, 'wb')
5157             # The second field -- the four-byte share data length -- is no
5158             # longer used as of Tahoe v1.3.0, but we continue to write it in
5159hunk ./src/allmydata/storage/backends/das/core.py 316
5160         except IndexError:
5161             self.add_lease(lease_info)
5162 
5163-
5164     def cancel_lease(self, cancel_secret):
5165         """Remove a lease with the given cancel_secret. If the last lease is
5166         cancelled, the file will be removed. Return the number of bytes that
5167hunk ./src/allmydata/storage/common.py 19
5168 def si_a2b(ascii_storageindex):
5169     return base32.a2b(ascii_storageindex)
5170 
5171-def storage_index_to_dir(storageindex):
5172+def storage_index_to_dir(startfp, storageindex):
5173     sia = si_b2a(storageindex)
5174     return os.path.join(sia[:2], sia)
5175hunk ./src/allmydata/storage/server.py 210
5176 
5177         # fill incoming with all shares that are incoming use a set operation
5178         # since there's no need to operate on individual pieces
5179-        incoming = self.backend.get_incoming(storageindex)
5180+        incoming = self.backend.get_incoming_shnums(storageindex)
5181 
5182         for shnum in ((sharenums - alreadygot) - incoming):
5183             if (not limited) or (remaining_space >= max_space_per_bucket):
5184hunk ./src/allmydata/test/test_backends.py 5
5185 
5186 from twisted.python.filepath import FilePath
5187 
5188+from allmydata.util.log import msg
5189+
5190 from StringIO import StringIO
5191 
5192 from allmydata.test.common_util import ReallyEqualMixin
5193hunk ./src/allmydata/test/test_backends.py 42
5194 
5195 testnodeid = 'testnodeidxxxxxxxxxx'
5196 
5197-class TestFilesMixin(unittest.TestCase):
5198-    def setUp(self):
5199-        self.storedir = FilePath('teststoredir')
5200-        self.basedir = self.storedir.child('shares')
5201-        self.baseincdir = self.basedir.child('incoming')
5202-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5203-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5204-        self.shareincomingname = self.sharedirincomingname.child('0')
5205-        self.sharefname = self.sharedirfinalname.child('0')
5206+class MockStat:
5207+    def __init__(self):
5208+        self.st_mode = None
5209 
5210hunk ./src/allmydata/test/test_backends.py 46
5211+class MockFiles(unittest.TestCase):
5212+    """ I simulate a filesystem that the code under test can use. I flag the
5213+    code under test if it reads or writes outside of its prescribed
5214+    subtree. I simulate just the parts of the filesystem that the current
5215+    implementation of DAS backend needs. """
5216     def call_open(self, fname, mode):
5217         fnamefp = FilePath(fname)
5218hunk ./src/allmydata/test/test_backends.py 53
5219+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5220+                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5221+
5222         if fnamefp == self.storedir.child('bucket_counter.state'):
5223             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
5224         elif fnamefp == self.storedir.child('lease_checker.state'):
5225hunk ./src/allmydata/test/test_backends.py 61
5226             raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
5227         elif fnamefp == self.storedir.child('lease_checker.history'):
5228+            # This is separated out from the else clause below just because
5229+            # we know this particular file is going to be used by the
5230+            # current implementation of DAS backend, and we might want to
5231+            # use this information in this test in the future...
5232             return StringIO()
5233         else:
5234hunk ./src/allmydata/test/test_backends.py 67
5235-            self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5236-                            "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5237+            # Anything else you open inside your subtree appears to be an
5238+            # empty file.
5239+            return StringIO()
5240 
5241     def call_isdir(self, fname):
5242         fnamefp = FilePath(fname)
5243hunk ./src/allmydata/test/test_backends.py 73
5244-        if fnamefp == self.storedir.child('shares'):
5245+        return fnamefp.isdir()
5246+
5247+        self.failUnless(self.storedir == self or self.storedir in self.parents(),
5248+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
5249+
5250+        # The first two cases are separate from the else clause below just
5251+        # because we know that the current implementation of the DAS backend
5252+        # inspects these two directories and we might want to make use of
5253+        # that information in the tests in the future...
5254+        if self == self.storedir.child('shares'):
5255             return True
5256hunk ./src/allmydata/test/test_backends.py 84
5257-        elif fnamefp == self.storedir.child('shares').child('incoming'):
5258+        elif self == self.storedir.child('shares').child('incoming'):
5259             return True
5260         else:
5261hunk ./src/allmydata/test/test_backends.py 87
5262-            self.failUnless(self.storedir in fnamefp.parents(),
5263-                            "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5264+            # Anything else you open inside your subtree appears to be a
5265+            # directory.
5266+            return True
5267 
5268     def call_mkdir(self, fname, mode):
5269hunk ./src/allmydata/test/test_backends.py 92
5270-        self.failUnlessEqual(0777, mode)
5271         fnamefp = FilePath(fname)
5272         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5273                         "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5274hunk ./src/allmydata/test/test_backends.py 95
5275+        self.failUnlessEqual(0777, mode)
5276 
5277hunk ./src/allmydata/test/test_backends.py 97
5278+    def call_listdir(self, fname):
5279+        fnamefp = FilePath(fname)
5280+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5281+                        "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5282 
5283hunk ./src/allmydata/test/test_backends.py 102
5284-    @mock.patch('os.mkdir')
5285-    @mock.patch('__builtin__.open')
5286-    @mock.patch('os.listdir')
5287-    @mock.patch('os.path.isdir')
5288-    def _help_test_stay_in_your_subtree(self, test_func, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5289-        mocklistdir.return_value = []
5290+    def call_stat(self, fname):
5291+        fnamefp = FilePath(fname)
5292+        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5293+                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5294+
5295+        msg("%s.call_stat(%s)" % (self, fname,))
5296+        mstat = MockStat()
5297+        mstat.st_mode = 16893 # a directory
5298+        return mstat
5299+
5300+    def setUp(self):
5301+        msg( "%s.setUp()" % (self,))
5302+        self.storedir = FilePath('teststoredir')
5303+        self.basedir = self.storedir.child('shares')
5304+        self.baseincdir = self.basedir.child('incoming')
5305+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5306+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
5307+        self.shareincomingname = self.sharedirincomingname.child('0')
5308+        self.sharefname = self.sharedirfinalname.child('0')
5309+
5310+        self.mocklistdirp = mock.patch('os.listdir')
5311+        mocklistdir = self.mocklistdirp.__enter__()
5312+        mocklistdir.side_effect = self.call_listdir
5313+
5314+        self.mockmkdirp = mock.patch('os.mkdir')
5315+        mockmkdir = self.mockmkdirp.__enter__()
5316         mockmkdir.side_effect = self.call_mkdir
5317hunk ./src/allmydata/test/test_backends.py 129
5318+
5319+        self.mockisdirp = mock.patch('os.path.isdir')
5320+        mockisdir = self.mockisdirp.__enter__()
5321         mockisdir.side_effect = self.call_isdir
5322hunk ./src/allmydata/test/test_backends.py 133
5323+
5324+        self.mockopenp = mock.patch('__builtin__.open')
5325+        mockopen = self.mockopenp.__enter__()
5326         mockopen.side_effect = self.call_open
5327hunk ./src/allmydata/test/test_backends.py 137
5328-        mocklistdir.return_value = []
5329-       
5330-        test_func()
5331-       
5332-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5333+
5334+        self.mockstatp = mock.patch('os.stat')
5335+        mockstat = self.mockstatp.__enter__()
5336+        mockstat.side_effect = self.call_stat
5337+
5338+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
5339+        mockfpstat = self.mockfpstatp.__enter__()
5340+        mockfpstat.side_effect = self.call_stat
5341+
5342+    def tearDown(self):
5343+        msg( "%s.tearDown()" % (self,))
5344+        self.mockfpstatp.__exit__()
5345+        self.mockstatp.__exit__()
5346+        self.mockopenp.__exit__()
5347+        self.mockisdirp.__exit__()
5348+        self.mockmkdirp.__exit__()
5349+        self.mocklistdirp.__exit__()
5350 
5351 expiration_policy = {'enabled' : False,
5352                      'mode' : 'age',
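
MockFiles.setUp above drives each mock.patch() patcher by hand, calling __enter__() in setUp and __exit__() in tearDown so the patches stay active for the whole test method. The mock library exposes the same lifecycle as start()/stop(), which pairs with addCleanup(); a sketch of that equivalent arrangement (an alternative spelling, not what the patch does):

    import mock
    from twisted.trial import unittest

    class MockFilesAlternative(unittest.TestCase):
        def setUp(self):
            patcher = mock.patch('os.listdir')
            mocklistdir = patcher.start()          # same effect as __enter__()
            self.addCleanup(patcher.stop)          # same effect as __exit__()
            mocklistdir.side_effect = lambda dirname: []
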
5353hunk ./src/allmydata/test/test_backends.py 184
5354         self.failIf(mockopen.called)
5355         self.failIf(mockmkdir.called)
5356 
5357-class TestServerConstruction(ReallyEqualMixin, TestFilesMixin):
5358+class TestServerConstruction(MockFiles, ReallyEqualMixin):
5359     def test_create_server_fs_backend(self):
5360         """ This tests whether a server instance can be constructed with a
5361         filesystem backend. To pass the test, it mustn't use the filesystem
5362hunk ./src/allmydata/test/test_backends.py 190
5363         outside of its configured storedir. """
5364 
5365-        def _f():
5366-            StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5367+        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
5368 
5369hunk ./src/allmydata/test/test_backends.py 192
5370-        self._help_test_stay_in_your_subtree(_f)
5371-
5372-
5373-class TestServerAndFSBackend(unittest.TestCase, ReallyEqualMixin):
5374-    """ This tests both the StorageServer xyz """
5375-    @mock.patch('__builtin__.open')
5376-    def setUp(self, mockopen):
5377-        def call_open(fname, mode):
5378-            if fname == os.path.join(storedir, 'bucket_counter.state'):
5379-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5380-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5381-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5382-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5383-                return StringIO()
5384-            else:
5385-                _assert(False, "The tester code doesn't recognize this case.") 
5386-
5387-        mockopen.side_effect = call_open
5388-        self.backend = DASCore(storedir, expiration_policy)
5389-        self.ss = StorageServer(testnodeid, self.backend)
5390-        self.backendsmall = DASCore(storedir, expiration_policy, reserved_space = 1)
5391-        self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5392+class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
5393+    """ This tests both the StorageServer and the DAS backend together. """
5394+    def setUp(self):
5395+        MockFiles.setUp(self)
5396+        try:
5397+            self.backend = DASCore(self.storedir, expiration_policy)
5398+            self.ss = StorageServer(testnodeid, self.backend)
5399+            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
5400+            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
5401+        except:
5402+            MockFiles.tearDown(self)
5403+            raise
5404 
5405     @mock.patch('time.time')
5406     def test_write_and_read_share(self, mocktime):
5407hunk ./src/allmydata/util/fileutil.py 8
5408 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5409 
5410 from twisted.python import log
5411+from twisted.python.filepath import UnlistableError
5412 
5413 from pycryptopp.cipher.aes import AES
5414 
5415hunk ./src/allmydata/util/fileutil.py 187
5416             raise tx
5417         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirname # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5418 
5419+def fp_make_dirs(dirfp):
5420+    """
5421+    An idempotent version of FilePath.makedirs().  If the dir already
5422+    exists, do nothing and return without raising an exception.  If this
5423+    call creates the dir, return without raising an exception.  If there is
5424+    an error that prevents creation or if the directory gets deleted after
5425+    fp_make_dirs() creates it and before fp_make_dirs() checks that it
5426+    exists, raise an exception.
5427+    """
5428+    log.msg( "xxx 0 %s" % (dirfp,))
5429+    tx = None
5430+    try:
5431+        dirfp.makedirs()
5432+    except OSError, x:
5433+        tx = x
5434+
5435+    if not dirfp.isdir():
5436+        if tx:
5437+            raise tx
5438+        raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5439+
5440 def rmtree(dirname):
5441     """
5442     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
5443hunk ./src/allmydata/util/fileutil.py 244
5444             raise OSError, "Failed to remove dir for unknown reason."
5445         raise OSError, excs
5446 
5447+def fp_remove(dirfp):
5448+    try:
5449+        dirfp.remove()
5450+    except UnlistableError, e:
5451+        if e.originalException.errno != errno.ENOENT:
5452+            raise
5453+
5454 def rm_dir(dirname):
5455     # Renamed to be like shutil.rmtree and unlike rmdir.
5456     return rmtree(dirname)
5457}
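
The fp_make_dirs() and fp_remove() helpers added to fileutil above are the FilePath counterparts of make_dirs() and rmtree(): both are idempotent, so the backend can call them unconditionally at startup. A short usage sketch mirroring DASCore._clean_incomplete() from this patch:

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    incomingdir = FilePath('teststoredir').child('shares').child('incoming')

    # Throw away any partially-uploaded shares left over from a previous run,
    # then recreate the (now empty) incoming directory.  Neither call fails
    # if the directory is absent or already present.
    fileutil.fp_remove(incomingdir)
    fileutil.fp_make_dirs(incomingdir)
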
5458[another temporary patch for sharing work-in-progress
5459zooko@zooko.com**20110720055918
5460 Ignore-this: dfa0270476cbc6511cdb54c5c9a55a8e
5461 A lot more filepathification. The changes made in this patch feel really good to me -- we get to remove and simplify code by relying on filepath.
5462 There are a few other changes in this file, notably removing the misfeature of catching OSError and returning 0 from get_available_space()...
5463 (There is a lot of work to do to document these changes in good commit log messages and break them up into logical units inasmuch as possible...)
5464 
5465] {
5466hunk ./src/allmydata/storage/backends/das/core.py 5
5467 
5468 from allmydata.interfaces import IStorageBackend
5469 from allmydata.storage.backends.base import Backend
5470-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5471+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5472 from allmydata.util.assertutil import precondition
5473 
5474 #from foolscap.api import Referenceable
5475hunk ./src/allmydata/storage/backends/das/core.py 10
5476 from twisted.application import service
5477+from twisted.python.filepath import UnlistableError
5478 
5479 from zope.interface import implements
5480 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
5481hunk ./src/allmydata/storage/backends/das/core.py 17
5482 from allmydata.util import fileutil, idlib, log, time_format
5483 import allmydata # for __full_version__
5484 
5485-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5486-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5487+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5488+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5489 from allmydata.storage.lease import LeaseInfo
5490 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5491      create_mutable_sharefile
5492hunk ./src/allmydata/storage/backends/das/core.py 41
5493 # $SHARENUM matches this regex:
5494 NUM_RE=re.compile("^[0-9]+$")
5495 
5496+def is_num(fp):
5497+    return NUM_RE.match(fp.basename())
5498+
5499 class DASCore(Backend):
5500     implements(IStorageBackend)
5501     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
5502hunk ./src/allmydata/storage/backends/das/core.py 58
5503         self.storedir = storedir
5504         self.readonly = readonly
5505         self.reserved_space = int(reserved_space)
5506-        if self.reserved_space:
5507-            if self.get_available_space() is None:
5508-                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5509-                        umid="0wZ27w", level=log.UNUSUAL)
5510-
5511         self.sharedir = self.storedir.child("shares")
5512         fileutil.fp_make_dirs(self.sharedir)
5513         self.incomingdir = self.sharedir.child('incoming')
5514hunk ./src/allmydata/storage/backends/das/core.py 62
5515         self._clean_incomplete()
5516+        if self.reserved_space and (self.get_available_space() is None):
5517+            log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
5518+                    umid="0wZ27w", level=log.UNUSUAL)
5519+
5520 
5521     def _clean_incomplete(self):
5522         fileutil.fp_remove(self.incomingdir)
5523hunk ./src/allmydata/storage/backends/das/core.py 87
5524         self.lease_checker.setServiceParent(self)
5525 
5526     def get_incoming_shnums(self, storageindex):
5527-        """Return the set of incoming shnums."""
5528+        """ Return a frozenset of the shnum (as ints) of incoming shares. """
5529+        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
5530         try:
5531hunk ./src/allmydata/storage/backends/das/core.py 90
5532-           
5533-            incomingsharesdir = storage_index_to_dir(self.incomingdir, storageindex)
5534-            incomingshnums = [int(x) for x in incomingsharesdir.listdir()]
5535-            return frozenset(incomingshnums)
5536+            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
5537+            shnums = [ int(fp.basename()) for fp in childfps ]
5538+            return frozenset(shnums)
5539         except UnlistableError:
5540             # There is no shares directory at all.
5541             return frozenset()
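
get_incoming_shnums() above lists the incoming directory for a storage index and keeps only the children whose names are plain integers, returning them as a frozenset of share numbers. The same filtering rule in isolation (incoming_shnums is a hypothetical free function, not the backend method; NUM_RE is the regex defined earlier in this patch):

    import re
    from twisted.python.filepath import FilePath

    NUM_RE = re.compile("^[0-9]+$")

    def incoming_shnums(incomingsidir):
        """Share numbers present under one storage-index incoming directory.
        incomingsidir is a FilePath; non-numeric entries are ignored."""
        childfps = [fp for fp in incomingsidir.children() if NUM_RE.match(fp.basename())]
        return frozenset(int(fp.basename()) for fp in childfps)

    # children named '0', '5', 'partial.tmp'  ->  frozenset([0, 5])
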
5542hunk ./src/allmydata/storage/backends/das/core.py 98
5543             
5544     def get_shares(self, storageindex):
5545-        """Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
5546+        """ Generate ImmutableShare objects for shares we have for this
5547+        storageindex. ("Shares we have" means completed ones, excluding
5548+        incoming ones.)"""
5549         finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
5550         try:
5551hunk ./src/allmydata/storage/backends/das/core.py 103
5552-            for f in finalstoragedir.listdir():
5553-                if NUM_RE.match(f.basename):
5554-                    yield ImmutableShare(f, storageindex, int(f))
5555+            for fp in finalstoragedir.children():
5556+                if is_num(fp):
5557+                    yield ImmutableShare(fp, storageindex)
5558         except UnlistableError:
5559             # There is no shares directory at all.
5560             pass
5561hunk ./src/allmydata/storage/backends/das/core.py 116
5562         return fileutil.get_available_space(self.storedir, self.reserved_space)
5563 
5564     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
5565-        finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
5566-        incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
5567+        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
5568+        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
5569         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
5570         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
5571         return bw
5572hunk ./src/allmydata/storage/backends/das/expirer.py 50
5573     slow_start = 360 # wait 6 minutes after startup
5574     minimum_cycle_time = 12*60*60 # not more than twice per day
5575 
5576-    def __init__(self, statefile, historyfile, expiration_policy):
5577-        self.historyfile = historyfile
5578+    def __init__(self, statefile, historyfp, expiration_policy):
5579+        self.historyfp = historyfp
5580         self.expiration_enabled = expiration_policy['enabled']
5581         self.mode = expiration_policy['mode']
5582         self.override_lease_duration = None
5583hunk ./src/allmydata/storage/backends/das/expirer.py 80
5584             self.state["cycle-to-date"].setdefault(k, so_far[k])
5585 
5586         # initialize history
5587-        if not os.path.exists(self.historyfile):
5588+        if not self.historyfp.exists():
5589             history = {} # cyclenum -> dict
5590hunk ./src/allmydata/storage/backends/das/expirer.py 82
5591-            f = open(self.historyfile, "wb")
5592-            pickle.dump(history, f)
5593-            f.close()
5594+            self.historyfp.setContent(pickle.dumps(history))
5595 
5596     def create_empty_cycle_dict(self):
5597         recovered = self.create_empty_recovered_dict()
5598hunk ./src/allmydata/storage/backends/das/expirer.py 305
5599         # copy() needs to become a deepcopy
5600         h["space-recovered"] = s["space-recovered"].copy()
5601 
5602-        history = pickle.load(open(self.historyfile, "rb"))
5603+        history = pickle.loads(self.historyfp.getContent())
5604         history[cycle] = h
5605         while len(history) > 10:
5606             oldcycles = sorted(history.keys())
5607hunk ./src/allmydata/storage/backends/das/expirer.py 310
5608             del history[oldcycles[0]]
5609-        f = open(self.historyfile, "wb")
5610-        pickle.dump(history, f)
5611-        f.close()
5612+        self.historyfp.setContent(pickle.dumps(history))
5613 
5614     def get_state(self):
5615         """In addition to the crawler state described in
5616hunk ./src/allmydata/storage/backends/das/expirer.py 379
5617         progress = self.get_progress()
5618 
5619         state = ShareCrawler.get_state(self) # does a shallow copy
5620-        history = pickle.load(open(self.historyfile, "rb"))
5621+        history = pickle.loads(self.historyfp.getContent())
5622         state["history"] = history
5623 
5624         if not progress["cycle-in-progress"]:
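
The expirer changes above replace explicit open/pickle.dump/close sequences with FilePath.setContent() and getContent(), which write and read the file as one string; since getContent() returns a string rather than a file object, the matching loader is pickle.loads(). A minimal round-trip sketch (the payload is made up):

    import pickle
    from twisted.python.filepath import FilePath

    historyfp = FilePath('lease_checker.history')

    history = {0: {'space-recovered': 0}}          # made-up example payload
    historyfp.setContent(pickle.dumps(history))    # write the whole pickle in one call

    restored = pickle.loads(historyfp.getContent())
    assert restored == history
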
5625hunk ./src/allmydata/storage/common.py 19
5626 def si_a2b(ascii_storageindex):
5627     return base32.a2b(ascii_storageindex)
5628 
5629-def storage_index_to_dir(startfp, storageindex):
5630+def si_dir(startfp, storageindex):
5631     sia = si_b2a(storageindex)
5632hunk ./src/allmydata/storage/common.py 21
5633-    return os.path.join(sia[:2], sia)
5634+    return startfp.child(sia[:2]).child(sia)
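
si_dir() above maps a storage index onto the backend's two-level share layout: the base32 form of the index, bucketed under a directory named after its first two characters. A sketch using the storage index that the tests in this patch series use (assumes the allmydata package is importable):

    from twisted.python.filepath import FilePath
    from allmydata.storage.common import si_b2a, si_dir

    storageindex = 'teststorage_index'            # the SI the tests above use
    sharedir = FilePath('teststoredir').child('shares')

    sidir = si_dir(sharedir, storageindex)
    # si_b2a('teststorage_index') starts with 'orsxg5dt...', so sidir ends in
    #   shares/or/orsxg5dtorxxeylhmvpws3temv4a
    # which is the directory the tests above expect to be listed for shares.
    print sidir.path
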
5635hunk ./src/allmydata/storage/crawler.py 68
5636     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
5637     minimum_cycle_time = 300 # don't run a cycle faster than this
5638 
5639-    def __init__(self, statefname, allowed_cpu_percentage=None):
5640+    def __init__(self, statefp, allowed_cpu_percentage=None):
5641         service.MultiService.__init__(self)
5642         if allowed_cpu_percentage is not None:
5643             self.allowed_cpu_percentage = allowed_cpu_percentage
5644hunk ./src/allmydata/storage/crawler.py 72
5645-        self.statefname = statefname
5646+        self.statefp = statefp
5647         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
5648                          for i in range(2**10)]
5649         self.prefixes.sort()
5650hunk ./src/allmydata/storage/crawler.py 192
5651         #                            of the last bucket to be processed, or
5652         #                            None if we are sleeping between cycles
5653         try:
5654-            f = open(self.statefname, "rb")
5655-            state = pickle.load(f)
5656-            f.close()
5657+            state = pickle.loads(self.statefp.getContent())
5658         except EnvironmentError:
5659             state = {"version": 1,
5660                      "last-cycle-finished": None,
5661hunk ./src/allmydata/storage/crawler.py 228
5662         else:
5663             last_complete_prefix = self.prefixes[lcpi]
5664         self.state["last-complete-prefix"] = last_complete_prefix
5665-        tmpfile = self.statefname + ".tmp"
5666-        f = open(tmpfile, "wb")
5667-        pickle.dump(self.state, f)
5668-        f.close()
5669-        fileutil.move_into_place(tmpfile, self.statefname)
5670+        self.statefp.setContent(pickle.dumps(self.state))
5671 
5672     def startService(self):
5673         # arrange things to look like we were just sleeping, so
5674hunk ./src/allmydata/storage/crawler.py 440
5675 
5676     minimum_cycle_time = 60*60 # we don't need this more than once an hour
5677 
5678-    def __init__(self, statefname, num_sample_prefixes=1):
5679-        FSShareCrawler.__init__(self, statefname)
5680+    def __init__(self, statefp, num_sample_prefixes=1):
5681+        FSShareCrawler.__init__(self, statefp)
5682         self.num_sample_prefixes = num_sample_prefixes
5683 
5684     def add_initial_state(self):
5685hunk ./src/allmydata/storage/server.py 11
5686 from allmydata.util import fileutil, idlib, log, time_format
5687 import allmydata # for __full_version__
5688 
5689-from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
5690-_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
5691+from allmydata.storage.common import si_b2a, si_a2b, si_dir
5692+_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
5693 from allmydata.storage.lease import LeaseInfo
5694 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
5695      create_mutable_sharefile
5696hunk ./src/allmydata/storage/server.py 173
5697         # to a particular owner.
5698         start = time.time()
5699         self.count("allocate")
5700-        alreadygot = set()
5701         incoming = set()
5702         bucketwriters = {} # k: shnum, v: BucketWriter
5703 
5704hunk ./src/allmydata/storage/server.py 199
5705             remaining_space -= self.allocated_size()
5706         # self.readonly_storage causes remaining_space <= 0
5707 
5708-        # fill alreadygot with all shares that we have, not just the ones
5709+        # Fill alreadygot with all shares that we have, not just the ones
5710         # they asked about: this will save them a lot of work. Add or update
5711         # leases for all of them: if they want us to hold shares for this
5712hunk ./src/allmydata/storage/server.py 202
5713-        # file, they'll want us to hold leases for this file.
5714+        # file, they'll want us to hold leases for all the shares of it.
5715+        alreadygot = set()
5716         for share in self.backend.get_shares(storageindex):
5717hunk ./src/allmydata/storage/server.py 205
5718-            alreadygot.add(share.shnum)
5719             share.add_or_renew_lease(lease_info)
5720hunk ./src/allmydata/storage/server.py 206
5721+            alreadygot.add(share.shnum)
5722 
5723hunk ./src/allmydata/storage/server.py 208
5724-        # fill incoming with all shares that are incoming use a set operation
5725-        # since there's no need to operate on individual pieces
5726+        # all share numbers that are incoming
5727         incoming = self.backend.get_incoming_shnums(storageindex)
5728 
5729         for shnum in ((sharenums - alreadygot) - incoming):
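
The reworked allocation path above picks the share numbers that get a fresh BucketWriter by set arithmetic: start from the requested sharenums, drop what the backend already stores (alreadygot), and drop what another upload is still writing (incoming). A worked example with made-up numbers:

    sharenums  = set([0, 1, 2, 3])        # what the client asked for
    alreadygot = set([1])                 # shares already in the final directory
    incoming   = frozenset([2])           # shares some other upload is still writing

    to_allocate = (sharenums - alreadygot) - incoming
    assert to_allocate == set([0, 3])     # only these get fresh BucketWriters
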
5730hunk ./src/allmydata/storage/server.py 282
5731             total_space_freed += sf.cancel_lease(cancel_secret)
5732 
5733         if found_buckets:
5734-            storagedir = os.path.join(self.sharedir,
5735-                                      storage_index_to_dir(storageindex))
5736-            if not os.listdir(storagedir):
5737-                os.rmdir(storagedir)
5738+            storagedir = si_dir(self.sharedir, storageindex)
5739+            fp_rmdir_if_empty(storagedir)
5740 
5741         if self.stats_provider:
5742             self.stats_provider.count('storage_server.bytes_freed',
5743hunk ./src/allmydata/test/test_backends.py 52
5744     subtree. I simulate just the parts of the filesystem that the current
5745     implementation of DAS backend needs. """
5746     def call_open(self, fname, mode):
5747+        assert isinstance(fname, basestring), fname
5748         fnamefp = FilePath(fname)
5749         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5750                         "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
5751hunk ./src/allmydata/test/test_backends.py 104
5752                         "Server with FS backend tried to listdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
5753 
5754     def call_stat(self, fname):
5755+        assert isinstance(fname, basestring), fname
5756         fnamefp = FilePath(fname)
5757         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
5758                         "Server with FS backend tried to stat '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
5759hunk ./src/allmydata/test/test_backends.py 217
5760 
5761         mocktime.return_value = 0
5762         # Inspect incoming and fail unless it's empty.
5763-        incomingset = self.ss.backend.get_incoming('teststorage_index')
5764-        self.failUnlessReallyEqual(incomingset, set())
5765+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
5766+        self.failUnlessReallyEqual(incomingset, frozenset())
5767         
5768         # Populate incoming with the sharenum: 0.
5769hunk ./src/allmydata/test/test_backends.py 221
5770-        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5771+        alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5772 
5773         # Inspect incoming and fail unless the sharenum: 0 is listed there.
5774hunk ./src/allmydata/test/test_backends.py 224
5775-        self.failUnlessEqual(self.ss.backend.get_incoming('teststorage_index'), set((0,)))
5776+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
5777         
5778         # Attempt to create a second share writer with the same sharenum.
5779hunk ./src/allmydata/test/test_backends.py 227
5780-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
5781+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
5782 
5783         # Show that no sharewriter results from a remote_allocate_buckets
5784         # with the same si and sharenum, until BucketWriter.remote_close()
5785hunk ./src/allmydata/test/test_backends.py 280
5786         StorageServer object. """
5787 
5788         def call_listdir(dirname):
5789+            precondition(isinstance(dirname, basestring), dirname)
5790             self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
5791             return ['0']
5792 
5793hunk ./src/allmydata/test/test_backends.py 287
5794         mocklistdir.side_effect = call_listdir
5795 
5796         def call_open(fname, mode):
5797+            precondition(isinstance(fname, basestring), fname)
5798             self.failUnlessReallyEqual(fname, sharefname)
5799             self.failUnlessEqual(mode[0], 'r', mode)
5800             self.failUnless('b' in mode, mode)
5801hunk ./src/allmydata/test/test_backends.py 297
5802 
5803         datalen = len(share_data)
5804         def call_getsize(fname):
5805+            precondition(isinstance(fname, basestring), fname)
5806             self.failUnlessReallyEqual(fname, sharefname)
5807             return datalen
5808         mockgetsize.side_effect = call_getsize
5809hunk ./src/allmydata/test/test_backends.py 303
5810 
5811         def call_exists(fname):
5812+            precondition(isinstance(fname, basestring), fname)
5813             self.failUnlessReallyEqual(fname, sharefname)
5814             return True
5815         mockexists.side_effect = call_exists
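
The precondition() calls added above (from allmydata.util.assertutil) make the mock filesystem handlers fail fast, with the offending value in the message, if they are handed something other than a string path (for example a FilePath) while the conversion to filepath is underway. A tiny sketch of that usage (call_getsize here is a stand-in, not the test's method; it only assumes that precondition() raises when its first argument is false):

    from allmydata.util.assertutil import precondition

    def call_getsize(fname):
        precondition(isinstance(fname, basestring), fname)   # reject FilePath arguments
        return 42

    call_getsize('teststoredir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0')   # ok
    # call_getsize(FilePath('teststoredir'))  would raise AssertionError,
    # with the FilePath repr included in the failure message.
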
5816hunk ./src/allmydata/test/test_backends.py 321
5817         self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
5818 
5819 
5820-class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
5821-    @mock.patch('time.time')
5822-    @mock.patch('os.mkdir')
5823-    @mock.patch('__builtin__.open')
5824-    @mock.patch('os.listdir')
5825-    @mock.patch('os.path.isdir')
5826-    def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
5827+class TestBackendConstruction(MockFiles, ReallyEqualMixin):
5828+    def test_create_fs_backend(self):
5829         """ This tests whether a file system backend instance can be
5830         constructed. To pass the test, it has to use the
5831         filesystem in only the prescribed ways. """
5832hunk ./src/allmydata/test/test_backends.py 327
5833 
5834-        def call_open(fname, mode):
5835-            if fname == os.path.join(storedir,'bucket_counter.state'):
5836-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'bucket_counter.state'))
5837-            elif fname == os.path.join(storedir, 'lease_checker.state'):
5838-                raise IOError(2, "No such file or directory: '%s'" % os.path.join(storedir, 'lease_checker.state'))
5839-            elif fname == os.path.join(storedir, 'lease_checker.history'):
5840-                return StringIO()
5841-            else:
5842-                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
5843-        mockopen.side_effect = call_open
5844-
5845-        def call_isdir(fname):
5846-            if fname == os.path.join(storedir,'shares'):
5847-                return True
5848-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5849-                return True
5850-            else:
5851-                self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
5852-        mockisdir.side_effect = call_isdir
5853-
5854-        def call_mkdir(fname, mode):
5855-            """XXX something is calling mkdir teststoredir and teststoredir/shares twice...  this is odd!"""
5856-            self.failUnlessEqual(0777, mode)
5857-            if fname == storedir:
5858-                return None
5859-            elif fname == os.path.join(storedir,'shares'):
5860-                return None
5861-            elif fname == os.path.join(storedir,'shares', 'incoming'):
5862-                return None
5863-            else:
5864-                self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
5865-        mockmkdir.side_effect = call_mkdir
5866-
5867         # Now begin the test.
5868hunk ./src/allmydata/test/test_backends.py 328
5869-        DASCore('teststoredir', expiration_policy)
5870-
5871-        self.failIf(mocklistdir.called, mocklistdir.call_args_list)
5872-
5873+        DASCore(self.storedir, expiration_policy)
5874hunk ./src/allmydata/util/fileutil.py 7
5875 
5876 import errno, sys, exceptions, os, stat, tempfile, time, binascii
5877 
5878+from allmydata.util.assertutil import precondition
5879+
5880 from twisted.python import log
5881hunk ./src/allmydata/util/fileutil.py 10
5882-from twisted.python.filepath import UnlistableError
5883+from twisted.python.filepath import FilePath, UnlistableError
5884 
5885 from pycryptopp.cipher.aes import AES
5886 
5887hunk ./src/allmydata/util/fileutil.py 210
5888             raise tx
5889         raise exceptions.IOError, "unknown error prevented creation of directory, or deleted the directory immediately after creation: %s" % dirfp # careful not to construct an IOError with a 2-tuple, as that has a special meaning...
5890 
5891+def fp_rmdir_if_empty(dirfp):
5892+    """ Remove the directory if it is empty. """
5893+    try:
5894+        os.rmdir(dirfp.path)
5895+    except OSError, e:
5896+        if e.errno != errno.ENOTEMPTY:
5897+            raise
5898+    else:
5899+        dirfp.changed()
5900+
5901 def rmtree(dirname):
5902     """
5903     A threadsafe and idempotent version of shutil.rmtree().  If the dir is
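
The new fp_rmdir_if_empty helper removes a directory only when it has nothing in it; a non-empty directory is left in place and the ENOTEMPTY error is swallowed rather than raised. A minimal usage sketch, assuming the patched fileutil and a scratch directory name invented for the example:

    from twisted.python.filepath import FilePath
    from allmydata.util.fileutil import fp_make_dirs, fp_rmdir_if_empty

    scratch = FilePath('example_scratch_dir')   # hypothetical path, not part of the patch
    fp_make_dirs(scratch)
    fp_rmdir_if_empty(scratch)                  # empty: the directory is removed
    fp_make_dirs(scratch.child('sub'))
    fp_rmdir_if_empty(scratch)                  # non-empty now: left alone, no exception
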
5904hunk ./src/allmydata/util/fileutil.py 257
5905         raise OSError, excs
5906 
5907 def fp_remove(dirfp):
5908+    """
5909+    An idempotent version of shutil.rmtree().  If the dir is already gone,
5910+    do nothing and return without raising an exception.  If this call
5911+    removes the dir, return without raising an exception.  If there is an
5912+    error that prevents removal or if the directory gets created again by
5913+    someone else after this deletes it and before this checks that it is
5914+    gone, raise an exception.
5915+    """
5916     try:
5917         dirfp.remove()
5918     except UnlistableError, e:
5919hunk ./src/allmydata/util/fileutil.py 270
5920         if e.originalException.errno != errno.ENOENT:
5921             raise
5922+    except OSError, e:
5923+        if e.errno != errno.ENOENT:
5924+            raise
5925 
5926 def rm_dir(dirname):
5927     # Renamed to be like shutil.rmtree and unlike rmdir.
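
The expanded docstring spells out the contract: fp_remove behaves like an idempotent rmtree, so a directory that is already gone is simply ignored, and the OSError/ENOENT clause added above covers the case where the path never existed at all. A short sketch of that behaviour, again assuming the patched fileutil:

    from twisted.python.filepath import FilePath
    from allmydata.util.fileutil import fp_remove

    doomed = FilePath('example_doomed_dir')   # hypothetical path
    fp_remove(doomed)                 # nothing there yet: ENOENT is swallowed
    doomed.makedirs()
    doomed.child('junk').setContent('x')
    fp_remove(doomed)                 # removes the directory and its contents
    fp_remove(doomed)                 # second call is a no-op
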
5928hunk ./src/allmydata/util/fileutil.py 387
5929         import traceback
5930         traceback.print_exc()
5931 
5932-def get_disk_stats(whichdir, reserved_space=0):
5933+def get_disk_stats(whichdirfp, reserved_space=0):
5934     """Return disk statistics for the storage disk, in the form of a dict
5935     with the following fields.
5936       total:            total bytes on disk
5937hunk ./src/allmydata/util/fileutil.py 408
5938     you can pass how many bytes you would like to leave unused on this
5939     filesystem as reserved_space.
5940     """
5941+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5942 
5943     if have_GetDiskFreeSpaceExW:
5944         # If this is a Windows system and GetDiskFreeSpaceExW is available, use it.
5945hunk ./src/allmydata/util/fileutil.py 419
5946         n_free_for_nonroot = c_ulonglong(0)
5947         n_total            = c_ulonglong(0)
5948         n_free_for_root    = c_ulonglong(0)
5949-        retval = GetDiskFreeSpaceExW(whichdir, byref(n_free_for_nonroot),
5950+        retval = GetDiskFreeSpaceExW(whichdirfp.path, byref(n_free_for_nonroot),
5951                                                byref(n_total),
5952                                                byref(n_free_for_root))
5953         if retval == 0:
5954hunk ./src/allmydata/util/fileutil.py 424
5955             raise OSError("Windows error %d attempting to get disk statistics for %r"
5956-                          % (GetLastError(), whichdir))
5957+                          % (GetLastError(), whichdirfp.path))
5958         free_for_nonroot = n_free_for_nonroot.value
5959         total            = n_total.value
5960         free_for_root    = n_free_for_root.value
5961hunk ./src/allmydata/util/fileutil.py 433
5962         # <http://docs.python.org/library/os.html#os.statvfs>
5963         # <http://opengroup.org/onlinepubs/7990989799/xsh/fstatvfs.html>
5964         # <http://opengroup.org/onlinepubs/7990989799/xsh/sysstatvfs.h.html>
5965-        s = os.statvfs(whichdir)
5966+        s = os.statvfs(whichdirfp.path)
5967 
5968         # on my mac laptop:
5969         #  statvfs(2) is a wrapper around statfs(2).
5970hunk ./src/allmydata/util/fileutil.py 460
5971              'avail': avail,
5972            }
5973 
5974-def get_available_space(whichdir, reserved_space):
5975+def get_available_space(whichdirfp, reserved_space):
5976     """Returns available space for share storage in bytes, or None if no
5977     API to get this information is available.
5978 
5979hunk ./src/allmydata/util/fileutil.py 472
5980     you can pass how many bytes you would like to leave unused on this
5981     filesystem as reserved_space.
5982     """
5983+    precondition(isinstance(whichdirfp, FilePath), whichdirfp)
5984     try:
5985hunk ./src/allmydata/util/fileutil.py 474
5986-        return get_disk_stats(whichdir, reserved_space)['avail']
5987+        return get_disk_stats(whichdirfp, reserved_space)['avail']
5988     except AttributeError:
5989         return None
5990hunk ./src/allmydata/util/fileutil.py 477
5991-    except EnvironmentError:
5992-        log.msg("OS call to get disk statistics failed")
5993-        return 0
5994}
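
Note that after these hunks get_disk_stats and get_available_space take a FilePath rather than a path string, with a precondition guarding the type, and the old blanket EnvironmentError-to-0 fallback is gone. Roughly, for a caller (a standalone sketch; the reserved-space figure is arbitrary):

    from twisted.python.filepath import FilePath
    from allmydata.util import fileutil

    storedirfp = FilePath('.')   # any existing directory
    avail = fileutil.get_available_space(storedirfp, reserved_space=10*1000*1000)
    print "bytes available for shares:", avail   # None if neither statvfs nor GetDiskFreeSpaceExW exists

    # fileutil.get_available_space('.', 0)       # passing a plain string now trips the precondition
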
5995[jacp16 or so
5996wilcoxjg@gmail.com**20110722070036
5997 Ignore-this: 7548785cad146056eede9a16b93b569f
5998] {
5999merger 0.0 (
6000hunk ./src/allmydata/_auto_deps.py 21
6001-    "Twisted >= 2.4.0",
6002+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6003+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6004+    # support asynchronous close.
6005+    "Twisted >= 10.1.0",
6006hunk ./src/allmydata/_auto_deps.py 21
6007-    "Twisted >= 2.4.0",
6008+    "Twisted >= 11.0",
6009)
6010hunk ./src/allmydata/storage/backends/das/core.py 2
6011 import os, re, weakref, struct, time, stat
6012+from twisted.application import service
6013+from twisted.python.filepath import UnlistableError
6014+from twisted.python.filepath import FilePath
6015+from zope.interface import implements
6016 
6017hunk ./src/allmydata/storage/backends/das/core.py 7
6018+import allmydata # for __full_version__
6019 from allmydata.interfaces import IStorageBackend
6020 from allmydata.storage.backends.base import Backend
6021hunk ./src/allmydata/storage/backends/das/core.py 10
6022-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6023+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6024 from allmydata.util.assertutil import precondition
6025hunk ./src/allmydata/storage/backends/das/core.py 12
6026-
6027-#from foolscap.api import Referenceable
6028-from twisted.application import service
6029-from twisted.python.filepath import UnlistableError
6030-
6031-from zope.interface import implements
6032 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6033 from allmydata.util import fileutil, idlib, log, time_format
6034hunk ./src/allmydata/storage/backends/das/core.py 14
6035-import allmydata # for __full_version__
6036-
6037-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6038-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6039 from allmydata.storage.lease import LeaseInfo
6040 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6041      create_mutable_sharefile
6042hunk ./src/allmydata/storage/backends/das/core.py 21
6043 from allmydata.storage.crawler import FSBucketCountingCrawler
6044 from allmydata.util.hashutil import constant_time_compare
6045 from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6046-
6047-from zope.interface import implements
6048+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6049 
6050 # storage/
6051 # storage/shares/incoming
6052hunk ./src/allmydata/storage/backends/das/core.py 49
6053         self._setup_lease_checkerf(expiration_policy)
6054 
6055     def _setup_storage(self, storedir, readonly, reserved_space):
6056+        precondition(isinstance(storedir, FilePath)) 
6057         self.storedir = storedir
6058         self.readonly = readonly
6059         self.reserved_space = int(reserved_space)
6060hunk ./src/allmydata/storage/backends/das/core.py 83
6061 
6062     def get_incoming_shnums(self, storageindex):
6063         """ Return a frozenset of the shnum (as ints) of incoming shares. """
6064-        incomingdir = storage_index_to_dir(self.incomingdir, storageindex)
6065+        incomingdir = si_si2dir(self.incomingdir, storageindex)
6066         try:
6067             childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
6068             shnums = [ int(fp.basename) for fp in childfps ]
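
get_incoming_shnums turns the children of the per-storage-index incoming directory into integer share numbers and hands them back as a frozenset; a storage index with no incoming directory yields the empty frozenset. A rough standalone sketch of that step, using a plain digit check in place of the is_num helper (whose definition lives elsewhere in this patch):

    from twisted.python.filepath import FilePath, UnlistableError

    def incoming_shnums(incomingdir):
        try:
            children = incomingdir.children()
        except UnlistableError:
            return frozenset()
        return frozenset([ int(fp.basename()) for fp in children if fp.basename().isdigit() ])

    print incoming_shnums(FilePath('no-such-directory'))   # -> frozenset([])
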
6069hunk ./src/allmydata/storage/backends/das/core.py 96
6070         """ Generate ImmutableShare objects for shares we have for this
6071         storageindex. ("Shares we have" means completed ones, excluding
6072         incoming ones.)"""
6073-        finalstoragedir = storage_index_to_dir(self.sharedir, storageindex)
6074+        finalstoragedir = si_si2dir(self.sharedir, storageindex)
6075         try:
6076             for fp in finalstoragedir.children():
6077                 if is_num(fp):
6078hunk ./src/allmydata/storage/backends/das/core.py 111
6079         return fileutil.get_available_space(self.storedir, self.reserved_space)
6080 
6081     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6082-        finalhome = storage_index_to_dir(self.sharedir, storageindex).child(str(shnum))
6083-        incominghome = storage_index_to_dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6084+        finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
6085+        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
6086         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
6087         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
6088         return bw
6089hunk ./src/allmydata/storage/backends/null/core.py 18
6090         return None
6091 
6092     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
6093-       
6094-        immutableshare = ImmutableShare()
6095+        immutableshare = ImmutableShare()
6096         return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
6097 
6098     def set_storage_server(self, ss):
6099hunk ./src/allmydata/storage/backends/null/core.py 24
6100         self.ss = ss
6101 
6102-    def get_incoming(self, storageindex):
6103-        return set()
6104+    def get_incoming_shnums(self, storageindex):
6105+        return frozenset()
6106 
6107 class ImmutableShare:
6108     sharetype = "immutable"
6109hunk ./src/allmydata/storage/common.py 19
6110 def si_a2b(ascii_storageindex):
6111     return base32.a2b(ascii_storageindex)
6112 
6113-def si_dir(startfp, storageindex):
6114+def si_si2dir(startfp, storageindex):
6115     sia = si_b2a(storageindex)
6116     return startfp.child(sia[:2]).child(sia)
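
The renamed si_si2dir maps a storage index onto the two-level share layout: a prefix directory named after the first two base32 characters, containing a directory named after the full base32 string. A worked sketch with a made-up 16-byte storage index:

    from twisted.python.filepath import FilePath
    from allmydata.storage.common import si_b2a, si_si2dir

    storageindex = '\x01' * 16                      # hypothetical binary storage index
    sia = si_b2a(storageindex)                      # lowercase base32 form of the index
    sharedir = FilePath('storage').child('shares')
    print si_si2dir(sharedir, storageindex).path    # storage/shares/<sia[:2]>/<sia>
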
6117hunk ./src/allmydata/storage/immutable.py 20
6118     def __init__(self, ss, immutableshare, max_size, lease_info, canary):
6119         self.ss = ss
6120         self._max_size = max_size # don't allow the client to write more than this        print self.ss._active_writers.keys()
6121-
6122         self._canary = canary
6123         self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
6124         self.closed = False
6125hunk ./src/allmydata/storage/lease.py 17
6126 
6127     def get_expiration_time(self):
6128         return self.expiration_time
6129+
6130     def get_grant_renew_time_time(self):
6131         # hack, based upon fixed 31day expiration period
6132         return self.expiration_time - 31*24*60*60
6133hunk ./src/allmydata/storage/lease.py 21
6134+
6135     def get_age(self):
6136         return time.time() - self.get_grant_renew_time_time()
6137 
6138hunk ./src/allmydata/storage/lease.py 32
6139          self.expiration_time) = struct.unpack(">L32s32sL", data)
6140         self.nodeid = None
6141         return self
6142+
6143     def to_immutable_data(self):
6144         return struct.pack(">L32s32sL",
6145                            self.owner_num,
6146hunk ./src/allmydata/storage/lease.py 45
6147                            int(self.expiration_time),
6148                            self.renew_secret, self.cancel_secret,
6149                            self.nodeid)
6150+
6151     def from_mutable_data(self, data):
6152         (self.owner_num,
6153          self.expiration_time,
6154hunk ./src/allmydata/storage/server.py 11
6155 from allmydata.util import fileutil, idlib, log, time_format
6156 import allmydata # for __full_version__
6157 
6158-from allmydata.storage.common import si_b2a, si_a2b, si_dir
6159-_pyflakes_hush = [si_b2a, si_a2b, si_dir] # re-exported
6160+from allmydata.storage.common import si_b2a, si_a2b, si_si2dir
6161+_pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6162 from allmydata.storage.lease import LeaseInfo
6163 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6164      create_mutable_sharefile
6165hunk ./src/allmydata/storage/server.py 88
6166             else:
6167                 stats["mean"] = None
6168 
6169-            orderstatlist = [(0.01, "01_0_percentile", 100), (0.1, "10_0_percentile", 10),\
6170-                             (0.50, "50_0_percentile", 10), (0.90, "90_0_percentile", 10),\
6171-                             (0.95, "95_0_percentile", 20), (0.99, "99_0_percentile", 100),\
6172+            orderstatlist = [(0.1, "10_0_percentile", 10), (0.5, "50_0_percentile", 10), \
6173+                             (0.9, "90_0_percentile", 10), (0.95, "95_0_percentile", 20), \
6174+                             (0.01, "01_0_percentile", 100),  (0.99, "99_0_percentile", 100),\
6175                              (0.999, "99_9_percentile", 1000)]
6176 
6177             for percentile, percentilestring, minnumtoobserve in orderstatlist:
6178hunk ./src/allmydata/storage/server.py 231
6179             header = f.read(32)
6180             f.close()
6181             if header[:32] == MutableShareFile.MAGIC:
6182+                # XXX  Can I exploit this code?
6183                 sf = MutableShareFile(filename, self)
6184                 # note: if the share has been migrated, the renew_lease()
6185                 # call will throw an exception, with information to help the
6186hunk ./src/allmydata/storage/server.py 237
6187                 # client update the lease.
6188             elif header[:4] == struct.pack(">L", 1):
6189+                # Check if version number is "1".
6190+                # XXX WHAT ABOUT OTHER VERSIONS!!!!!!!?
6191                 sf = ShareFile(filename)
6192             else:
6193                 continue # non-sharefile
6194hunk ./src/allmydata/storage/server.py 285
6195             total_space_freed += sf.cancel_lease(cancel_secret)
6196 
6197         if found_buckets:
6198-            storagedir = si_dir(self.sharedir, storageindex)
6199+            # XXX  Yikes looks like code that shouldn't be in the server!
6200+            storagedir = si_si2dir(self.sharedir, storageindex)
6201             fp_rmdir_if_empty(storagedir)
6202 
6203         if self.stats_provider:
6204hunk ./src/allmydata/storage/server.py 301
6205             self.stats_provider.count('storage_server.bytes_added', consumed_size)
6206         del self._active_writers[bw]
6207 
6208-
6209     def remote_get_buckets(self, storageindex):
6210         start = time.time()
6211         self.count("get")
6212hunk ./src/allmydata/storage/server.py 329
6213         except StopIteration:
6214             return iter([])
6215 
6216+    #  XXX  As far as Zancas' grockery has gotten.
6217     def remote_slot_testv_and_readv_and_writev(self, storageindex,
6218                                                secrets,
6219                                                test_and_write_vectors,
6220hunk ./src/allmydata/storage/server.py 338
6221         self.count("writev")
6222         si_s = si_b2a(storageindex)
6223         log.msg("storage: slot_writev %s" % si_s)
6224-        si_dir = storage_index_to_dir(storageindex)
6225+       
6226         (write_enabler, renew_secret, cancel_secret) = secrets
6227         # shares exist if there is a file for them
6228hunk ./src/allmydata/storage/server.py 341
6229-        bucketdir = os.path.join(self.sharedir, si_dir)
6230+        bucketdir = si_si2dir(self.sharedir, storageindex)
6231         shares = {}
6232         if os.path.isdir(bucketdir):
6233             for sharenum_s in os.listdir(bucketdir):
6234hunk ./src/allmydata/storage/server.py 430
6235         si_s = si_b2a(storageindex)
6236         lp = log.msg("storage: slot_readv %s %s" % (si_s, shares),
6237                      facility="tahoe.storage", level=log.OPERATIONAL)
6238-        si_dir = storage_index_to_dir(storageindex)
6239         # shares exist if there is a file for them
6240hunk ./src/allmydata/storage/server.py 431
6241-        bucketdir = os.path.join(self.sharedir, si_dir)
6242+        bucketdir = si_si2dir(self.sharedir, storageindex)
6243         if not os.path.isdir(bucketdir):
6244             self.add_latency("readv", time.time() - start)
6245             return {}
6246hunk ./src/allmydata/test/test_backends.py 2
6247 from twisted.trial import unittest
6248-
6249 from twisted.python.filepath import FilePath
6250hunk ./src/allmydata/test/test_backends.py 3
6251-
6252 from allmydata.util.log import msg
6253hunk ./src/allmydata/test/test_backends.py 4
6254-
6255 from StringIO import StringIO
6256hunk ./src/allmydata/test/test_backends.py 5
6257-
6258 from allmydata.test.common_util import ReallyEqualMixin
6259 from allmydata.util.assertutil import _assert
6260hunk ./src/allmydata/test/test_backends.py 7
6261-
6262 import mock
6263 
6264 # This is the code that we're going to be testing.
6265hunk ./src/allmydata/test/test_backends.py 11
6266 from allmydata.storage.server import StorageServer
6267-
6268 from allmydata.storage.backends.das.core import DASCore
6269 from allmydata.storage.backends.null.core import NullCore
6270 
6271hunk ./src/allmydata/test/test_backends.py 14
6272-
6273-# The following share file contents was generated with
6274+# The following share file content was generated with
6275 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
6276hunk ./src/allmydata/test/test_backends.py 16
6277-# with share data == 'a'.
6278+# with share data == 'a'. The total size of this input
6279+# is 85 bytes.
6280 shareversionnumber = '\x00\x00\x00\x01'
6281 sharedatalength = '\x00\x00\x00\x01'
6282 numberofleases = '\x00\x00\x00\x01'
6283hunk ./src/allmydata/test/test_backends.py 21
6284-
6285 shareinputdata = 'a'
6286 ownernumber = '\x00\x00\x00\x00'
6287 renewsecret  = 'x'*32
6288hunk ./src/allmydata/test/test_backends.py 31
6289 client_data = shareinputdata + ownernumber + renewsecret + \
6290     cancelsecret + expirationtime + nextlease
6291 share_data = containerdata + client_data
6292-
6293-
6294 testnodeid = 'testnodeidxxxxxxxxxx'
6295 
6296 class MockStat:
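
The "85 bytes" figure in the comment above is just the sum of the pieces assembled here: a 12-byte container header (version, data length and lease count, four bytes each), the single byte of share data 'a', and one 72-byte lease record (owner number, two 32-byte secrets and an expiration time), assuming nextlease is the empty string as in the earlier definitions. A quick check of the arithmetic:

    container = 4 + 4 + 4          # shareversionnumber + sharedatalength + numberofleases
    sharedata = len('a')           # shareinputdata
    lease     = 4 + 32 + 32 + 4    # ownernumber + renewsecret + cancelsecret + expirationtime
    print container + sharedata + lease    # -> 85
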
6297hunk ./src/allmydata/test/test_backends.py 105
6298         mstat.st_mode = 16893 # a directory
6299         return mstat
6300 
6301+    def call_get_available_space(self, storedir, reservedspace):
6302+        # The input vector has an input size of 85.
6303+        return 85 - reservedspace
6304+
6305+    def call_exists(self):
6306+        # I'm only called in the ImmutableShareFile constructor.
6307+        return False
6308+
6309     def setUp(self):
6310         msg( "%s.setUp()" % (self,))
6311         self.storedir = FilePath('teststoredir')
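
The magic st_mode value 16893 used by the mocked stat is simply octal 040775, i.e. the S_IFDIR type bits plus 0775 permissions, which is why everything the code under test stats looks like a directory. A one-line check:

    import stat
    print stat.S_ISDIR(16893), oct(16893)   # -> True 040775
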
6312hunk ./src/allmydata/test/test_backends.py 147
6313         mockfpstat = self.mockfpstatp.__enter__()
6314         mockfpstat.side_effect = self.call_stat
6315 
6316+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6317+        mockget_available_space = self.mockget_available_space.__enter__()
6318+        mockget_available_space.side_effect = self.call_get_available_space
6319+
6320+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6321+        mockfpexists = self.mockfpexists.__enter__()
6322+        mockfpexists.side_effect = self.call_exists
6323+
6324     def tearDown(self):
6325         msg( "%s.tearDown()" % (self,))
6326hunk ./src/allmydata/test/test_backends.py 157
6327+        self.mockfpexists.__exit__()
6328+        self.mockget_available_space.__exit__()
6329         self.mockfpstatp.__exit__()
6330         self.mockstatp.__exit__()
6331         self.mockopenp.__exit__()
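
The setUp/tearDown pair drives mock.patch by hand: __enter__() installs the replacement and hands back the mock so a side_effect can be attached, and the matching __exit__() in tearDown restores the real attribute. A minimal standalone sketch of that pattern, outside the test class:

    import os
    import mock

    patcher = mock.patch('os.listdir')                 # build the patcher
    mocklistdir = patcher.__enter__()                  # install it; os.listdir is now a mock
    mocklistdir.side_effect = lambda dirname: ['0']    # every call is answered by our fake

    print os.listdir('anywhere')                       # -> ['0'], no real filesystem access

    patcher.__exit__()                                 # put the real os.listdir back
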
6332hunk ./src/allmydata/test/test_backends.py 166
6333         self.mockmkdirp.__exit__()
6334         self.mocklistdirp.__exit__()
6335 
6336+
6337 expiration_policy = {'enabled' : False,
6338                      'mode' : 'age',
6339                      'override_lease_duration' : None,
6340hunk ./src/allmydata/test/test_backends.py 182
6341         self.ss = StorageServer(testnodeid, backend=NullCore())
6342 
6343     @mock.patch('os.mkdir')
6344-
6345     @mock.patch('__builtin__.open')
6346     @mock.patch('os.listdir')
6347     @mock.patch('os.path.isdir')
6348hunk ./src/allmydata/test/test_backends.py 201
6349         filesystem backend. To pass the test, it mustn't use the filesystem
6350         outside of its configured storedir. """
6351 
6352-        StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))
6353+        StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
6354 
6355 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
6356     """ This tests both the StorageServer and the DAS backend together. """
6357hunk ./src/allmydata/test/test_backends.py 205
6358+   
6359     def setUp(self):
6360         MockFiles.setUp(self)
6361         try:
6362hunk ./src/allmydata/test/test_backends.py 211
6363             self.backend = DASCore(self.storedir, expiration_policy)
6364             self.ss = StorageServer(testnodeid, self.backend)
6365-            self.backendsmall = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6366-            self.ssmallback = StorageServer(testnodeid, self.backendsmall)
6367+            self.backendwithreserve = DASCore(self.storedir, expiration_policy, reserved_space = 1)
6368+            self.sswithreserve = StorageServer(testnodeid, self.backendwithreserve)
6369         except:
6370             MockFiles.tearDown(self)
6371             raise
6372hunk ./src/allmydata/test/test_backends.py 233
6373         # Populate incoming with the sharenum: 0.
6374         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6375 
6376-        # Inspect incoming and fail unless the sharenum: 0 is listed there.
6377-        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6378+        # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
6379+        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
6380         
6381         # Attempt to create a second share writer with the same sharenum.
6382         alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
6383hunk ./src/allmydata/test/test_backends.py 257
6384 
6385         # Postclose: (Omnibus) failUnless written data is in final.
6386         sharesinfinal = list(self.backend.get_shares('teststorage_index'))
6387-        contents = sharesinfinal[0].read_share_data(0,73)
6388+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
6389+        contents = sharesinfinal[0].read_share_data(0, 73)
6390         self.failUnlessReallyEqual(contents, client_data)
6391 
6392         # Exercise the case that the share we're asking to allocate is
6393hunk ./src/allmydata/test/test_backends.py 276
6394         mockget_available_space.side_effect = call_get_available_space
6395         
6396         
6397-        alreadygotc, bsc = self.ssmallback.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6398+        alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
6399 
6400     @mock.patch('os.path.exists')
6401     @mock.patch('os.path.getsize')
6402}
6403[jacp17
6404wilcoxjg@gmail.com**20110722203244
6405 Ignore-this: e79a5924fb2eb786ee4e9737a8228f87
6406] {
6407hunk ./src/allmydata/storage/backends/das/core.py 14
6408 from allmydata.util.assertutil import precondition
6409 from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
6410 from allmydata.util import fileutil, idlib, log, time_format
6411+from allmydata.util.fileutil import fp_make_dirs
6412 from allmydata.storage.lease import LeaseInfo
6413 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
6414      create_mutable_sharefile
6415hunk ./src/allmydata/storage/backends/das/core.py 19
6416 from allmydata.storage.immutable import BucketWriter, BucketReader
6417-from allmydata.storage.crawler import FSBucketCountingCrawler
6418+from allmydata.storage.crawler import BucketCountingCrawler
6419 from allmydata.util.hashutil import constant_time_compare
6420hunk ./src/allmydata/storage/backends/das/core.py 21
6421-from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
6422+from allmydata.storage.backends.das.expirer import LeaseCheckingCrawler
6423 _pyflakes_hush = [si_b2a, si_a2b, si_si2dir] # re-exported
6424 
6425 # storage/
6426hunk ./src/allmydata/storage/backends/das/core.py 43
6427     implements(IStorageBackend)
6428     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
6429         Backend.__init__(self)
6430-
6431         self._setup_storage(storedir, readonly, reserved_space)
6432         self._setup_corruption_advisory()
6433         self._setup_bucket_counter()
6434hunk ./src/allmydata/storage/backends/das/core.py 72
6435 
6436     def _setup_bucket_counter(self):
6437         statefname = self.storedir.child("bucket_counter.state")
6438-        self.bucket_counter = FSBucketCountingCrawler(statefname)
6439+        self.bucket_counter = BucketCountingCrawler(statefname)
6440         self.bucket_counter.setServiceParent(self)
6441 
6442     def _setup_lease_checkerf(self, expiration_policy):
6443hunk ./src/allmydata/storage/backends/das/core.py 78
6444         statefile = self.storedir.child("lease_checker.state")
6445         historyfile = self.storedir.child("lease_checker.history")
6446-        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6447+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile, expiration_policy)
6448         self.lease_checker.setServiceParent(self)
6449 
6450     def get_incoming_shnums(self, storageindex):
6451hunk ./src/allmydata/storage/backends/das/core.py 168
6452             # it. Also construct the metadata.
6453             assert not finalhome.exists()
6454             fp_make_dirs(self.incominghome)
6455-            f = open(self.incominghome, 'wb')
6456+            f = self.incominghome.child(str(self.shnum))
6457             # The second field -- the four-byte share data length -- is no
6458             # longer used as of Tahoe v1.3.0, but we continue to write it in
6459             # there in case someone downgrades a storage server from >=
6460hunk ./src/allmydata/storage/backends/das/core.py 178
6461             # the largest length that can fit into the field. That way, even
6462             # if this does happen, the old < v1.3.0 server will still allow
6463             # clients to read the first part of the share.
6464-            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6465-            f.close()
6466+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6467+            #f.close()
6468             self._lease_offset = max_size + 0x0c
6469             self._num_leases = 0
6470         else:
6471hunk ./src/allmydata/storage/backends/das/core.py 261
6472         f.write(data)
6473         f.close()
6474 
6475-    def _write_lease_record(self, f, lease_number, lease_info):
6476+    def _write_lease_record(self, lease_number, lease_info):
6477         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6478         f.seek(offset)
6479         assert f.tell() == offset
6480hunk ./src/allmydata/storage/backends/das/core.py 290
6481                 yield LeaseInfo().from_immutable_data(data)
6482 
6483     def add_lease(self, lease_info):
6484-        f = open(self.incominghome, 'rb+')
6485+        self.incominghome, 'rb+')
6486         num_leases = self._read_num_leases(f)
6487         self._write_lease_record(f, num_leases, lease_info)
6488         self._write_num_leases(f, num_leases+1)
6489hunk ./src/allmydata/storage/backends/das/expirer.py 1
6490-import time, os, pickle, struct
6491-from allmydata.storage.crawler import FSShareCrawler
6492+import time, os, pickle, struct # os, pickle, and struct will almost certainly be migrated to the backend...
6493+from allmydata.storage.crawler import ShareCrawler
6494 from allmydata.storage.common import UnknownMutableContainerVersionError, \
6495      UnknownImmutableContainerVersionError
6496 from twisted.python import log as twlog
6497hunk ./src/allmydata/storage/backends/das/expirer.py 7
6498 
6499-class FSLeaseCheckingCrawler(FSShareCrawler):
6500+class LeaseCheckingCrawler(ShareCrawler):
6501     """I examine the leases on all shares, determining which are still valid
6502     and which have expired. I can remove the expired leases (if so
6503     configured), and the share will be deleted when the last lease is
6504hunk ./src/allmydata/storage/backends/das/expirer.py 66
6505         else:
6506             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
6507         self.sharetypes_to_expire = expiration_policy['sharetypes']
6508-        FSShareCrawler.__init__(self, statefile)
6509+        ShareCrawler.__init__(self, statefile)
6510 
6511     def add_initial_state(self):
6512         # we fill ["cycle-to-date"] here (even though they will be reset in
6513hunk ./src/allmydata/storage/crawler.py 1
6514-
6515 import os, time, struct
6516 import cPickle as pickle
6517 from twisted.internet import reactor
6518hunk ./src/allmydata/storage/crawler.py 11
6519 class TimeSliceExceeded(Exception):
6520     pass
6521 
6522-class FSShareCrawler(service.MultiService):
6523-    """A subcless of ShareCrawler is attached to a StorageServer, and
6524+class ShareCrawler(service.MultiService):
6525+    """A subclass of ShareCrawler is attached to a StorageServer, and
6526     periodically walks all of its shares, processing each one in some
6527     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
6528     since large servers can easily have a terabyte of shares, in several
6529hunk ./src/allmydata/storage/crawler.py 426
6530         pass
6531 
6532 
6533-class FSBucketCountingCrawler(FSShareCrawler):
6534+class BucketCountingCrawler(ShareCrawler):
6535     """I keep track of how many buckets are being managed by this server.
6536     This is equivalent to the number of distributed files and directories for
6537     which I am providing storage. The actual number of files+directories in
6538hunk ./src/allmydata/storage/crawler.py 440
6539     minimum_cycle_time = 60*60 # we don't need this more than once an hour
6540 
6541     def __init__(self, statefp, num_sample_prefixes=1):
6542-        FSShareCrawler.__init__(self, statefp)
6543+        ShareCrawler.__init__(self, statefp)
6544         self.num_sample_prefixes = num_sample_prefixes
6545 
6546     def add_initial_state(self):
6547hunk ./src/allmydata/test/test_backends.py 113
6548         # I'm only called in the ImmutableShareFile constructor.
6549         return False
6550 
6551+    def call_setContent(self, inputstring):
6552+        # XXX Good enough for expirer, not sure about elsewhere...
6553+        return True
6554+
6555     def setUp(self):
6556         msg( "%s.setUp()" % (self,))
6557         self.storedir = FilePath('teststoredir')
6558hunk ./src/allmydata/test/test_backends.py 159
6559         mockfpexists = self.mockfpexists.__enter__()
6560         mockfpexists.side_effect = self.call_exists
6561 
6562+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6563+        mocksetContent = self.mocksetContent.__enter__()
6564+        mocksetContent.side_effect = self.call_setContent
6565+
6566     def tearDown(self):
6567         msg( "%s.tearDown()" % (self,))
6568hunk ./src/allmydata/test/test_backends.py 165
6569+        self.mocksetContent.__exit__()
6570         self.mockfpexists.__exit__()
6571         self.mockget_available_space.__exit__()
6572         self.mockfpstatp.__exit__()
6573}
6574[jacp18
6575wilcoxjg@gmail.com**20110723031915
6576 Ignore-this: 21e7f22ac20e3f8af22ea2e9b755d6a5
6577] {
6578hunk ./src/allmydata/_auto_deps.py 21
6579     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6580     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6581 
6582-    "Twisted >= 2.4.0",
6583+v v v v v v v
6584+    "Twisted >= 11.0",
6585+*************
6586+    # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6587+    # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6588+    # support asynchronous close.
6589+    "Twisted >= 10.1.0",
6590+^ ^ ^ ^ ^ ^ ^
6591 
6592     # foolscap < 0.5.1 had a performance bug which spent
6593     # O(N**2) CPU for transferring large mutable files
6594hunk ./src/allmydata/storage/backends/das/core.py 168
6595             # it. Also construct the metadata.
6596             assert not finalhome.exists()
6597             fp_make_dirs(self.incominghome)
6598-            f = self.incominghome.child(str(self.shnum))
6599+            f = self.incominghome
6600             # The second field -- the four-byte share data length -- is no
6601             # longer used as of Tahoe v1.3.0, but we continue to write it in
6602             # there in case someone downgrades a storage server from >=
6603hunk ./src/allmydata/storage/backends/das/core.py 178
6604             # the largest length that can fit into the field. That way, even
6605             # if this does happen, the old < v1.3.0 server will still allow
6606             # clients to read the first part of the share.
6607-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
6608-            #f.close()
6609+            print 'f: ',f
6610+            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6611             self._lease_offset = max_size + 0x0c
6612             self._num_leases = 0
6613         else:
6614hunk ./src/allmydata/storage/backends/das/core.py 263
6615 
6616     def _write_lease_record(self, lease_number, lease_info):
6617         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6618-        f.seek(offset)
6619-        assert f.tell() == offset
6620-        f.write(lease_info.to_immutable_data())
6621+        fh = f.open()
6622+        try:
6623+            fh.seek(offset)
6624+            assert fh.tell() == offset
6625+            fh.write(lease_info.to_immutable_data())
6626+        finally:
6627+            fh.close()
6628 
6629     def _read_num_leases(self, f):
6630hunk ./src/allmydata/storage/backends/das/core.py 272
6631-        f.seek(0x08)
6632-        (num_leases,) = struct.unpack(">L", f.read(4))
6633+        fh = f.open()
6634+        try:
6635+            fh.seek(0x08)
6636+            ro = fh.read(4)
6637+            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6638+            (num_leases,) = struct.unpack(">L", ro)
6639+        finally:
6640+            fh.close()
6641         return num_leases
6642 
6643     def _write_num_leases(self, f, num_leases):
6644hunk ./src/allmydata/storage/backends/das/core.py 283
6645-        f.seek(0x08)
6646-        f.write(struct.pack(">L", num_leases))
6647+        fh = f.open()
6648+        try:
6649+            fh.seek(0x08)
6650+            fh.write(struct.pack(">L", num_leases))
6651+        finally:
6652+            fh.close()
6653 
6654     def _truncate_leases(self, f, num_leases):
6655         f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
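
The lease bookkeeping read and written here is a single big-endian unsigned 32-bit count stored at offset 0x08 of the share container header. A standalone sketch of that encoding, using an in-memory buffer in place of the share file:

    import struct
    from StringIO import StringIO

    header = struct.pack(">LLL", 1, 2**32-1, 0)      # version, capped data length, zero leases
    fh = StringIO(header)
    fh.seek(0x08)
    (num_leases,) = struct.unpack(">L", fh.read(4))
    print num_leases                                 # -> 0

    fh.seek(0x08)
    fh.write(struct.pack(">L", num_leases + 1))      # bump the count, as _write_num_leases does
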
6656hunk ./src/allmydata/storage/backends/das/core.py 304
6657                 yield LeaseInfo().from_immutable_data(data)
6658 
6659     def add_lease(self, lease_info):
6660-        self.incominghome, 'rb+')
6661-        num_leases = self._read_num_leases(f)
6662+        f = self.incominghome
6663+        num_leases = self._read_num_leases(self.incominghome)
6664         self._write_lease_record(f, num_leases, lease_info)
6665         self._write_num_leases(f, num_leases+1)
6666hunk ./src/allmydata/storage/backends/das/core.py 308
6667-        f.close()
6668-
6669+       
6670     def renew_lease(self, renew_secret, new_expire_time):
6671         for i,lease in enumerate(self.get_leases()):
6672             if constant_time_compare(lease.renew_secret, renew_secret):
6673hunk ./src/allmydata/test/test_backends.py 33
6674 share_data = containerdata + client_data
6675 testnodeid = 'testnodeidxxxxxxxxxx'
6676 
6677+
6678 class MockStat:
6679     def __init__(self):
6680         self.st_mode = None
6681hunk ./src/allmydata/test/test_backends.py 43
6682     code under test if it reads or writes outside of its prescribed
6683     subtree. I simulate just the parts of the filesystem that the current
6684     implementation of DAS backend needs. """
6685+
6686+    def setUp(self):
6687+        msg( "%s.setUp()" % (self,))
6688+        self.storedir = FilePath('teststoredir')
6689+        self.basedir = self.storedir.child('shares')
6690+        self.baseincdir = self.basedir.child('incoming')
6691+        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6692+        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6693+        self.shareincomingname = self.sharedirincomingname.child('0')
6694+        self.sharefilename = self.sharedirfinalname.child('0')
6695+        self.sharefilecontents = StringIO(share_data)
6696+
6697+        self.mocklistdirp = mock.patch('os.listdir')
6698+        mocklistdir = self.mocklistdirp.__enter__()
6699+        mocklistdir.side_effect = self.call_listdir
6700+
6701+        self.mockmkdirp = mock.patch('os.mkdir')
6702+        mockmkdir = self.mockmkdirp.__enter__()
6703+        mockmkdir.side_effect = self.call_mkdir
6704+
6705+        self.mockisdirp = mock.patch('os.path.isdir')
6706+        mockisdir = self.mockisdirp.__enter__()
6707+        mockisdir.side_effect = self.call_isdir
6708+
6709+        self.mockopenp = mock.patch('__builtin__.open')
6710+        mockopen = self.mockopenp.__enter__()
6711+        mockopen.side_effect = self.call_open
6712+
6713+        self.mockstatp = mock.patch('os.stat')
6714+        mockstat = self.mockstatp.__enter__()
6715+        mockstat.side_effect = self.call_stat
6716+
6717+        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6718+        mockfpstat = self.mockfpstatp.__enter__()
6719+        mockfpstat.side_effect = self.call_stat
6720+
6721+        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6722+        mockget_available_space = self.mockget_available_space.__enter__()
6723+        mockget_available_space.side_effect = self.call_get_available_space
6724+
6725+        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6726+        mockfpexists = self.mockfpexists.__enter__()
6727+        mockfpexists.side_effect = self.call_exists
6728+
6729+        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6730+        mocksetContent = self.mocksetContent.__enter__()
6731+        mocksetContent.side_effect = self.call_setContent
6732+
6733     def call_open(self, fname, mode):
6734         assert isinstance(fname, basestring), fname
6735         fnamefp = FilePath(fname)
6736hunk ./src/allmydata/test/test_backends.py 107
6737             # current implementation of DAS backend, and we might want to
6738             # use this information in this test in the future...
6739             return StringIO()
6740+        elif fnamefp == self.shareincomingname:
6741+            print "repr(fnamefp): ", repr(fnamefp)
6742         else:
6743             # Anything else you open inside your subtree appears to be an
6744             # empty file.
6745hunk ./src/allmydata/test/test_backends.py 168
6746         # XXX Good enough for expirer, not sure about elsewhere...
6747         return True
6748 
6749-    def setUp(self):
6750-        msg( "%s.setUp()" % (self,))
6751-        self.storedir = FilePath('teststoredir')
6752-        self.basedir = self.storedir.child('shares')
6753-        self.baseincdir = self.basedir.child('incoming')
6754-        self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6755-        self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6756-        self.shareincomingname = self.sharedirincomingname.child('0')
6757-        self.sharefname = self.sharedirfinalname.child('0')
6758-
6759-        self.mocklistdirp = mock.patch('os.listdir')
6760-        mocklistdir = self.mocklistdirp.__enter__()
6761-        mocklistdir.side_effect = self.call_listdir
6762-
6763-        self.mockmkdirp = mock.patch('os.mkdir')
6764-        mockmkdir = self.mockmkdirp.__enter__()
6765-        mockmkdir.side_effect = self.call_mkdir
6766-
6767-        self.mockisdirp = mock.patch('os.path.isdir')
6768-        mockisdir = self.mockisdirp.__enter__()
6769-        mockisdir.side_effect = self.call_isdir
6770-
6771-        self.mockopenp = mock.patch('__builtin__.open')
6772-        mockopen = self.mockopenp.__enter__()
6773-        mockopen.side_effect = self.call_open
6774-
6775-        self.mockstatp = mock.patch('os.stat')
6776-        mockstat = self.mockstatp.__enter__()
6777-        mockstat.side_effect = self.call_stat
6778-
6779-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
6780-        mockfpstat = self.mockfpstatp.__enter__()
6781-        mockfpstat.side_effect = self.call_stat
6782-
6783-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
6784-        mockget_available_space = self.mockget_available_space.__enter__()
6785-        mockget_available_space.side_effect = self.call_get_available_space
6786-
6787-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
6788-        mockfpexists = self.mockfpexists.__enter__()
6789-        mockfpexists.side_effect = self.call_exists
6790-
6791-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
6792-        mocksetContent = self.mocksetContent.__enter__()
6793-        mocksetContent.side_effect = self.call_setContent
6794 
6795     def tearDown(self):
6796         msg( "%s.tearDown()" % (self,))
6797hunk ./src/allmydata/test/test_backends.py 239
6798         handling of simultaneous and successive attempts to write the same
6799         share.
6800         """
6801-
6802         mocktime.return_value = 0
6803         # Inspect incoming and fail unless it's empty.
6804         incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
6805}
6806[jacp19orso
6807wilcoxjg@gmail.com**20110724034230
6808 Ignore-this: f001093c467225c289489636a61935fe
6809] {
6810hunk ./src/allmydata/_auto_deps.py 21
6811     # These are the versions packaged in major versions of Debian or Ubuntu, or in pkgsrc.
6812     "zope.interface == 3.3.1, == 3.5.3, == 3.6.1",
6813 
6814-v v v v v v v
6815-    "Twisted >= 11.0",
6816-*************
6817+
6818     # On Windows we need at least Twisted 9.0 to avoid an indirect dependency on pywin32.
6819     # We also need Twisted 10.1 for the FTP frontend in order for Twisted's FTP server to
6820     # support asynchronous close.
6821hunk ./src/allmydata/_auto_deps.py 26
6822     "Twisted >= 10.1.0",
6823-^ ^ ^ ^ ^ ^ ^
6824+
6825 
6826     # foolscap < 0.5.1 had a performance bug which spent
6827     # O(N**2) CPU for transferring large mutable files
6828hunk ./src/allmydata/storage/backends/das/core.py 153
6829     LEASE_SIZE = struct.calcsize(">L32s32sL")
6830     sharetype = "immutable"
6831 
6832-    def __init__(self, finalhome, storageindex, shnum, incominghome=None, max_size=None, create=False):
6833+    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
6834         """ If max_size is not None then I won't allow more than
6835         max_size to be written to me. If create=True then max_size
6836         must not be None. """
6837hunk ./src/allmydata/storage/backends/das/core.py 167
6838             # touch the file, so later callers will see that we're working on
6839             # it. Also construct the metadata.
6840             assert not finalhome.exists()
6841-            fp_make_dirs(self.incominghome)
6842-            f = self.incominghome
6843+            fp_make_dirs(self.incominghome.parent())
6844             # The second field -- the four-byte share data length -- is no
6845             # longer used as of Tahoe v1.3.0, but we continue to write it in
6846             # there in case someone downgrades a storage server from >=
6847hunk ./src/allmydata/storage/backends/das/core.py 177
6848             # the largest length that can fit into the field. That way, even
6849             # if this does happen, the old < v1.3.0 server will still allow
6850             # clients to read the first part of the share.
6851-            print 'f: ',f
6852-            f.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6853+            self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
6854             self._lease_offset = max_size + 0x0c
6855             self._num_leases = 0
6856         else:
6857hunk ./src/allmydata/storage/backends/das/core.py 182
6858             f = open(self.finalhome, 'rb')
6859-            filesize = os.path.getsize(self.finalhome)
6860             (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
6861             f.close()
6862hunk ./src/allmydata/storage/backends/das/core.py 184
6863+            filesize = self.finalhome.getsize()
6864             if version != 1:
6865                 msg = "sharefile %s had version %d but we wanted 1" % \
6866                       (self.finalhome, version)
6867hunk ./src/allmydata/storage/backends/das/core.py 259
6868         f.write(data)
6869         f.close()
6870 
6871-    def _write_lease_record(self, lease_number, lease_info):
6872+    def _write_lease_record(self, f, lease_number, lease_info):
6873         offset = self._lease_offset + lease_number * self.LEASE_SIZE
6874         fh = f.open()
6875hunk ./src/allmydata/storage/backends/das/core.py 262
6876+        print fh
6877         try:
6878             fh.seek(offset)
6879             assert fh.tell() == offset
6880hunk ./src/allmydata/storage/backends/das/core.py 271
6881             fh.close()
6882 
6883     def _read_num_leases(self, f):
6884-        fh = f.open()
6885+        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
6886         try:
6887             fh.seek(0x08)
6888             ro = fh.read(4)
6889hunk ./src/allmydata/storage/backends/das/core.py 275
6890-            print "repr(rp): %s len(ro): %s"  % (repr(ro), len(ro))
6891             (num_leases,) = struct.unpack(">L", ro)
6892         finally:
6893             fh.close()
6894hunk ./src/allmydata/storage/backends/das/core.py 302
6895                 yield LeaseInfo().from_immutable_data(data)
6896 
6897     def add_lease(self, lease_info):
6898-        f = self.incominghome
6899         num_leases = self._read_num_leases(self.incominghome)
6900hunk ./src/allmydata/storage/backends/das/core.py 303
6901-        self._write_lease_record(f, num_leases, lease_info)
6902-        self._write_num_leases(f, num_leases+1)
6903+        self._write_lease_record(self.incominghome, num_leases, lease_info)
6904+        self._write_num_leases(self.incominghome, num_leases+1)
6905         
6906     def renew_lease(self, renew_secret, new_expire_time):
6907         for i,lease in enumerate(self.get_leases()):
6908hunk ./src/allmydata/test/test_backends.py 52
6909         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6910         self.sharedirincomingname = self.baseincdir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
6911         self.shareincomingname = self.sharedirincomingname.child('0')
6912-        self.sharefilename = self.sharedirfinalname.child('0')
6913-        self.sharefilecontents = StringIO(share_data)
6914+        self.sharefinalname = self.sharedirfinalname.child('0')
6915 
6916hunk ./src/allmydata/test/test_backends.py 54
6917-        self.mocklistdirp = mock.patch('os.listdir')
6918-        mocklistdir = self.mocklistdirp.__enter__()
6919-        mocklistdir.side_effect = self.call_listdir
6920+        # Make patcher, patch, and make effects for fs using functions.
6921+        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
6922+        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
6923+        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir' handles the call.
6924 
6925hunk ./src/allmydata/test/test_backends.py 59
6926-        self.mockmkdirp = mock.patch('os.mkdir')
6927-        mockmkdir = self.mockmkdirp.__enter__()
6928-        mockmkdir.side_effect = self.call_mkdir
6929+        #self.mockmkdirp = mock.patch('os.mkdir')
6930+        #mockmkdir = self.mockmkdirp.__enter__()
6931+        #mockmkdir.side_effect = self.call_mkdir
6932 
6933hunk ./src/allmydata/test/test_backends.py 63
6934-        self.mockisdirp = mock.patch('os.path.isdir')
6935+        self.mockisdirp = mock.patch('FilePath.isdir')
6936         mockisdir = self.mockisdirp.__enter__()
6937         mockisdir.side_effect = self.call_isdir
6938 
6939hunk ./src/allmydata/test/test_backends.py 67
6940-        self.mockopenp = mock.patch('__builtin__.open')
6941+        self.mockopenp = mock.patch('FilePath.open')
6942         mockopen = self.mockopenp.__enter__()
6943         mockopen.side_effect = self.call_open
6944 
6945hunk ./src/allmydata/test/test_backends.py 71
6946-        self.mockstatp = mock.patch('os.stat')
6947+        self.mockstatp = mock.patch('filepath.stat')
6948         mockstat = self.mockstatp.__enter__()
6949         mockstat.side_effect = self.call_stat
6950 
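
mock.patch resolves its target by importing everything up to the last dot, so a FilePath method is patched through its full module path, 'twisted.python.filepath.FilePath.<method>' (the spelling already used for the exists and setContent patchers); a bare 'FilePath.isdir' or 'filepath.stat' is not an importable module path. A minimal standalone example of the working pattern:

    import mock
    from twisted.python.filepath import FilePath

    with mock.patch('twisted.python.filepath.FilePath.listdir') as mocklistdir:
        mocklistdir.return_value = ['0']
        print FilePath('anywhere').listdir()   # -> ['0'], intercepted by the mock
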
6951hunk ./src/allmydata/test/test_backends.py 91
6952         mocksetContent = self.mocksetContent.__enter__()
6953         mocksetContent.side_effect = self.call_setContent
6954 
6955+    #  The behavior of mocked filesystem using functions
6956     def call_open(self, fname, mode):
6957         assert isinstance(fname, basestring), fname
6958         fnamefp = FilePath(fname)
6959hunk ./src/allmydata/test/test_backends.py 109
6960             # use this information in this test in the future...
6961             return StringIO()
6962         elif fnamefp == self.shareincomingname:
6963-            print "repr(fnamefp): ", repr(fnamefp)
6964+            self.incomingsharefilecontents.closed = False
6965+            return self.incomingsharefilecontents
6966         else:
6967             # Anything else you open inside your subtree appears to be an
6968             # empty file.
6969hunk ./src/allmydata/test/test_backends.py 152
6970         fnamefp = FilePath(fname)
6971         self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
6972                         "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s'" % (fnamefp, self.storedir))
6973-
6974         msg("%s.call_stat(%s)" % (self, fname,))
6975         mstat = MockStat()
6976         mstat.st_mode = 16893 # a directory
6977hunk ./src/allmydata/test/test_backends.py 166
6978         return False
6979 
6980     def call_setContent(self, inputstring):
6981-        # XXX Good enough for expirer, not sure about elsewhere...
6982-        return True
6983-
6984+        self.incomingsharefilecontents = StringIO(inputstring)
6985 
6986     def tearDown(self):
6987         msg( "%s.tearDown()" % (self,))
6988}
6989[jacp19
6990wilcoxjg@gmail.com**20110727080553
6991 Ignore-this: 851b1ebdeeee712abfbda557af142726
6992] {
6993hunk ./src/allmydata/storage/backends/das/core.py 1
6994-import os, re, weakref, struct, time, stat
6995+import re, weakref, struct, time, stat
6996 from twisted.application import service
6997 from twisted.python.filepath import UnlistableError
6998hunk ./src/allmydata/storage/backends/das/core.py 4
6999+from twisted.python import filepath
7000 from twisted.python.filepath import FilePath
7001 from zope.interface import implements
7002 
7003hunk ./src/allmydata/storage/backends/das/core.py 50
7004         self._setup_lease_checkerf(expiration_policy)
7005 
7006     def _setup_storage(self, storedir, readonly, reserved_space):
7007-        precondition(isinstance(storedir, FilePath)) 
7008+        precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7009         self.storedir = storedir
7010         self.readonly = readonly
7011         self.reserved_space = int(reserved_space)
7012hunk ./src/allmydata/storage/backends/das/core.py 195
7013         self._data_offset = 0xc
7014 
7015     def close(self):
7016-        fileutil.make_dirs(os.path.dirname(self.finalhome))
7017-        fileutil.rename(self.incominghome, self.finalhome)
7018+        fileutil.fp_make_dirs(self.finalhome.parent())
7019+        self.incominghome.moveTo(self.finalhome)
7020         try:
7021             # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
7022             # We try to delete the parent (.../ab/abcde) to avoid leaving
7023hunk ./src/allmydata/storage/backends/das/core.py 209
7024             # their children to know when they should do the rmdir. This
7025             # approach is simpler, but relies on os.rmdir refusing to delete
7026             # a non-empty directory. Do *not* use fileutil.rm_dir() here!
7027-            #print "os.path.dirname(self.incominghome): "
7028-            #print os.path.dirname(self.incominghome)
7029-            os.rmdir(os.path.dirname(self.incominghome))
7030+            fileutil.fp_rmdir_if_empty(self.incominghome.parent())
7031             # we also delete the grandparent (prefix) directory, .../ab ,
7032             # again to avoid leaving directories lying around. This might
7033             # fail if there is another bucket open that shares a prefix (like
7034hunk ./src/allmydata/storage/backends/das/core.py 214
7035             # ab/abfff).
7036-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
7037+            fileutil.fp_rmdir_if_empty(self.incominghome.parent().parent())
7038             # we leave the great-grandparent (incoming/) directory in place.
7039         except EnvironmentError:
7040             # ignore the "can't rmdir because the directory is not empty"
7041hunk ./src/allmydata/storage/backends/das/core.py 224
7042         pass
7043         
7044     def stat(self):
7045-        return os.stat(self.finalhome)[stat.ST_SIZE]
7046-        #filelen = os.stat(self.finalhome)[stat.ST_SIZE]
7047+        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7048 
7049     def get_shnum(self):
7050         return self.shnum
7051hunk ./src/allmydata/storage/backends/das/core.py 230
7052 
7053     def unlink(self):
7054-        os.unlink(self.finalhome)
7055+        self.finalhome.remove()
7056 
7057     def read_share_data(self, offset, length):
7058         precondition(offset >= 0)
7059hunk ./src/allmydata/storage/backends/das/core.py 237
7060         # Reads beyond the end of the data are truncated. Reads that start
7061         # beyond the end of the data return an empty string.
7062         seekpos = self._data_offset+offset
7063-        fsize = os.path.getsize(self.finalhome)
7064+        fsize = self.finalhome.getsize()
7065         actuallength = max(0, min(length, fsize-seekpos))
7066         if actuallength == 0:
7067             return ""
7068hunk ./src/allmydata/storage/backends/das/core.py 241
7069-        f = open(self.finalhome, 'rb')
7070-        f.seek(seekpos)
7071-        return f.read(actuallength)
7072+        try:
7073+            fh = open(self.finalhome, 'rb')
7074+            fh.seek(seekpos)
7075+            sharedata = fh.read(actuallength)
7076+        finally:
7077+            fh.close()
7078+        return sharedata
7079 
7080     def write_share_data(self, offset, data):
7081         length = len(data)
7082hunk ./src/allmydata/storage/backends/das/core.py 264
7083     def _write_lease_record(self, f, lease_number, lease_info):
7084         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7085         fh = f.open()
7086-        print fh
7087         try:
7088             fh.seek(offset)
7089             assert fh.tell() == offset
7090hunk ./src/allmydata/storage/backends/das/core.py 269
7091             fh.write(lease_info.to_immutable_data())
7092         finally:
7093+            print dir(fh)
7094             fh.close()
7095 
7096     def _read_num_leases(self, f):
7097hunk ./src/allmydata/storage/backends/das/core.py 273
7098-        fh = f.open() #XXX  Ackkk I've mocked open...  is this wrong?
7099+        fh = f.open() #XXX  Should be mocking FilePath.open()
7100         try:
7101             fh.seek(0x08)
7102             ro = fh.read(4)
7103hunk ./src/allmydata/storage/backends/das/core.py 280
7104             (num_leases,) = struct.unpack(">L", ro)
7105         finally:
7106             fh.close()
7107+            print "end of _read_num_leases"
7108         return num_leases
7109 
7110     def _write_num_leases(self, f, num_leases):
7111hunk ./src/allmydata/storage/crawler.py 6
7112 from twisted.internet import reactor
7113 from twisted.application import service
7114 from allmydata.storage.common import si_b2a
7115-from allmydata.util import fileutil
7116 
7117 class TimeSliceExceeded(Exception):
7118     pass
7119hunk ./src/allmydata/storage/crawler.py 478
7120             old_cycle,buckets = self.state["storage-index-samples"][prefix]
7121             if old_cycle != cycle:
7122                 del self.state["storage-index-samples"][prefix]
7123-
7124hunk ./src/allmydata/test/test_backends.py 1
7125+import os
7126 from twisted.trial import unittest
7127 from twisted.python.filepath import FilePath
7128 from allmydata.util.log import msg
7129hunk ./src/allmydata/test/test_backends.py 9
7130 from allmydata.test.common_util import ReallyEqualMixin
7131 from allmydata.util.assertutil import _assert
7132 import mock
7133+from mock import Mock
7134 
7135 # This is the code that we're going to be testing.
7136 from allmydata.storage.server import StorageServer
7137hunk ./src/allmydata/test/test_backends.py 40
7138     def __init__(self):
7139         self.st_mode = None
7140 
7141+class MockFilePath:
7142+    def __init__(self, PathString):
7143+        self.PathName = PathString
7144+    def child(self, ChildString):
7145+        return MockFilePath(os.path.join(self.PathName, ChildString))
7146+    def parent(self):
7147+        return MockFilePath(os.path.dirname(self.PathName))
7148+    def makedirs(self):
7149+        # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7150+        pass
7151+    def isdir(self):
7152+        return True
7153+    def remove(self):
7154+        pass
7155+    def children(self):
7156+        return []
7157+    def exists(self):
7158+        return False
7159+    def setContent(self, ContentString):
7160+        self.File = MockFile(ContentString)
7161+    def open(self):
7162+        return self.File.open()
7163+
7164+class MockFile:
7165+    def __init__(self, ContentString):
7166+        self.Contents = ContentString
7167+    def open(self):
7168+        return self
7169+    def close(self):
7170+        pass
7171+    def seek(self, position):
7172+        pass
7173+    def read(self, amount):
7174+        pass
7175+
7176+
7177+class MockBCC:
7178+    def setServiceParent(self, Parent):
7179+        pass
7180+
7181+class MockLCC:
7182+    def setServiceParent(self, Parent):
7183+        pass
7184+
7185 class MockFiles(unittest.TestCase):
7186     """ I simulate a filesystem that the code under test can use. I flag the
7187     code under test if it reads or writes outside of its prescribed
7188hunk ./src/allmydata/test/test_backends.py 91
7189     implementation of DAS backend needs. """
7190 
7191     def setUp(self):
7192+        # Make patcher, patch, and make effects for fs using functions.
7193         msg( "%s.setUp()" % (self,))
7194hunk ./src/allmydata/test/test_backends.py 93
7195-        self.storedir = FilePath('teststoredir')
7196+        self.storedir = MockFilePath('teststoredir')
7197         self.basedir = self.storedir.child('shares')
7198         self.baseincdir = self.basedir.child('incoming')
7199         self.sharedirfinalname = self.basedir.child('or').child('orsxg5dtorxxeylhmvpws3temv4a')
7200hunk ./src/allmydata/test/test_backends.py 101
7201         self.shareincomingname = self.sharedirincomingname.child('0')
7202         self.sharefinalname = self.sharedirfinalname.child('0')
7203 
7204-        # Make patcher, patch, and make effects for fs using functions.
7205-        self.mocklistdirp = mock.patch('twisted.python.filepath.FilePath.listdir') # Create a patcher that can replace 'listdir'
7206-        mocklistdir = self.mocklistdirp.__enter__()  # Patches namespace with mockobject replacing 'listdir'
7207-        mocklistdir.side_effect = self.call_listdir  # When replacement 'mocklistdir' is invoked in place of 'listdir', 'call_listdir'
7208-
7209-        #self.mockmkdirp = mock.patch('os.mkdir')
7210-        #mockmkdir = self.mockmkdirp.__enter__()
7211-        #mockmkdir.side_effect = self.call_mkdir
7212-
7213-        self.mockisdirp = mock.patch('FilePath.isdir')
7214-        mockisdir = self.mockisdirp.__enter__()
7215-        mockisdir.side_effect = self.call_isdir
7216+        self.FilePathFake = mock.patch('allmydata.storage.backends.das.core.FilePath', new = MockFilePath )
7217+        FakePath = self.FilePathFake.__enter__()
7218 
7219hunk ./src/allmydata/test/test_backends.py 104
7220-        self.mockopenp = mock.patch('FilePath.open')
7221-        mockopen = self.mockopenp.__enter__()
7222-        mockopen.side_effect = self.call_open
7223+        self.BCountingCrawler = mock.patch('allmydata.storage.backends.das.core.BucketCountingCrawler')
7224+        FakeBCC = self.BCountingCrawler.__enter__()
7225+        FakeBCC.side_effect = self.call_FakeBCC
7226 
7227hunk ./src/allmydata/test/test_backends.py 108
7228-        self.mockstatp = mock.patch('filepath.stat')
7229-        mockstat = self.mockstatp.__enter__()
7230-        mockstat.side_effect = self.call_stat
7231+        self.LeaseCheckingCrawler = mock.patch('allmydata.storage.backends.das.core.LeaseCheckingCrawler')
7232+        FakeLCC = self.LeaseCheckingCrawler.__enter__()
7233+        FakeLCC.side_effect = self.call_FakeLCC
7234 
7235hunk ./src/allmydata/test/test_backends.py 112
7236-        self.mockfpstatp = mock.patch('twisted.python.filepath.stat')
7237-        mockfpstat = self.mockfpstatp.__enter__()
7238-        mockfpstat.side_effect = self.call_stat
7239+        self.get_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7240+        GetSpace = self.get_available_space.__enter__()
7241+        GetSpace.side_effect = self.call_get_available_space
7242 
7243hunk ./src/allmydata/test/test_backends.py 116
7244-        self.mockget_available_space = mock.patch('allmydata.util.fileutil.get_available_space')
7245-        mockget_available_space = self.mockget_available_space.__enter__()
7246-        mockget_available_space.side_effect = self.call_get_available_space
7247+    def call_FakeBCC(self, StateFile):
7248+        return MockBCC()
7249 
7250hunk ./src/allmydata/test/test_backends.py 119
7251-        self.mockfpexists = mock.patch('twisted.python.filepath.FilePath.exists')
7252-        mockfpexists = self.mockfpexists.__enter__()
7253-        mockfpexists.side_effect = self.call_exists
7254-
7255-        self.mocksetContent = mock.patch('twisted.python.filepath.FilePath.setContent')
7256-        mocksetContent = self.mocksetContent.__enter__()
7257-        mocksetContent.side_effect = self.call_setContent
7258-
7259-    #  The behavior of mocked filesystem using functions
7260-    def call_open(self, fname, mode):
7261-        assert isinstance(fname, basestring), fname
7262-        fnamefp = FilePath(fname)
7263-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7264-                        "Server with FS backend tried to open '%s' which is outside of the storage tree '%s' in mode '%s'" % (fnamefp, self.storedir, mode))
7265-
7266-        if fnamefp == self.storedir.child('bucket_counter.state'):
7267-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('bucket_counter.state'))
7268-        elif fnamefp == self.storedir.child('lease_checker.state'):
7269-            raise IOError(2, "No such file or directory: '%s'" % self.storedir.child('lease_checker.state'))
7270-        elif fnamefp == self.storedir.child('lease_checker.history'):
7271-            # This is separated out from the else clause below just because
7272-            # we know this particular file is going to be used by the
7273-            # current implementation of DAS backend, and we might want to
7274-            # use this information in this test in the future...
7275-            return StringIO()
7276-        elif fnamefp == self.shareincomingname:
7277-            self.incomingsharefilecontents.closed = False
7278-            return self.incomingsharefilecontents
7279-        else:
7280-            # Anything else you open inside your subtree appears to be an
7281-            # empty file.
7282-            return StringIO()
7283-
7284-    def call_isdir(self, fname):
7285-        fnamefp = FilePath(fname)
7286-        return fnamefp.isdir()
7287-
7288-        self.failUnless(self.storedir == self or self.storedir in self.parents(),
7289-                        "Server with FS backend tried to isdir '%s' which is outside of the storage tree '%s''" % (self, self.storedir))
7290-
7291-        # The first two cases are separate from the else clause below just
7292-        # because we know that the current implementation of the DAS backend
7293-        # inspects these two directories and we might want to make use of
7294-        # that information in the tests in the future...
7295-        if self == self.storedir.child('shares'):
7296-            return True
7297-        elif self == self.storedir.child('shares').child('incoming'):
7298-            return True
7299-        else:
7300-            # Anything else you open inside your subtree appears to be a
7301-            # directory.
7302-            return True
7303-
7304-    def call_mkdir(self, fname, mode):
7305-        fnamefp = FilePath(fname)
7306-        self.failUnless(self.storedir == fnamefp or self.storedir in fnamefp.parents(),
7307-                        "Server with FS backend tried to mkdir '%s' which is outside of the storage tree '%s''" % (fnamefp, self.storedir))
7308-        self.failUnlessEqual(0777, mode)
7309+    def call_FakeLCC(self, StateFile, HistoryFile, ExpirationPolicy):
7310+        return MockLCC()
7311 
7312     def call_listdir(self, fname):
7313         fnamefp = FilePath(fname)
7314hunk ./src/allmydata/test/test_backends.py 150
7315 
7316     def tearDown(self):
7317         msg( "%s.tearDown()" % (self,))
7318-        self.mocksetContent.__exit__()
7319-        self.mockfpexists.__exit__()
7320-        self.mockget_available_space.__exit__()
7321-        self.mockfpstatp.__exit__()
7322-        self.mockstatp.__exit__()
7323-        self.mockopenp.__exit__()
7324-        self.mockisdirp.__exit__()
7325-        self.mockmkdirp.__exit__()
7326-        self.mocklistdirp.__exit__()
7327-
7328+        FakePath = self.FilePathFake.__exit__()       
7329+        FakeBCC = self.BCountingCrawler.__exit__()
7330 
7331 expiration_policy = {'enabled' : False,
7332                      'mode' : 'age',
7333hunk ./src/allmydata/test/test_backends.py 222
7334         # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7335         
7336         # Attempt to create a second share writer with the same sharenum.
7337-        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7338+        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7339 
7340         # Show that no sharewriter results from a remote_allocate_buckets
7341         # with the same si and sharenum, until BucketWriter.remote_close()
7342hunk ./src/allmydata/test/test_backends.py 227
7343         # has been called.
7344-        self.failIf(bsa)
7345+        # self.failIf(bsa)
7346 
7347         # Test allocated size.
7348hunk ./src/allmydata/test/test_backends.py 230
7349-        spaceint = self.ss.allocated_size()
7350-        self.failUnlessReallyEqual(spaceint, 1)
7351+        # spaceint = self.ss.allocated_size()
7352+        # self.failUnlessReallyEqual(spaceint, 1)
7353 
7354         # Write 'a' to shnum 0. Only tested together with close and read.
7355hunk ./src/allmydata/test/test_backends.py 234
7356-        bs[0].remote_write(0, 'a')
7357+        # bs[0].remote_write(0, 'a')
7358         
7359         # Preclose: Inspect final, failUnless nothing there.
7360hunk ./src/allmydata/test/test_backends.py 237
7361-        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7362-        bs[0].remote_close()
7363+        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
7364+        # bs[0].remote_close()
7365 
7366         # Postclose: (Omnibus) failUnless written data is in final.
7367hunk ./src/allmydata/test/test_backends.py 241
7368-        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7369-        self.failUnlessReallyEqual(len(sharesinfinal), 1)
7370-        contents = sharesinfinal[0].read_share_data(0, 73)
7371-        self.failUnlessReallyEqual(contents, client_data)
7372+        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
7373+        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
7374+        # contents = sharesinfinal[0].read_share_data(0, 73)
7375+        # self.failUnlessReallyEqual(contents, client_data)
7376 
7377         # Exercise the case that the share we're asking to allocate is
7378         # already (completely) uploaded.
7379hunk ./src/allmydata/test/test_backends.py 248
7380-        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7381+        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
7382         
7383     @mock.patch('time.time')
7384     @mock.patch('allmydata.util.fileutil.get_available_space')
7385}
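
jacp19's key move is to stop mocking os-level calls one by one and instead substitute a whole fake class for FilePath, patched at the name the backend module actually imports. A sketch of that substitution, assuming the patched source tree from this ticket is importable (the dotted target is the one used in the hunks; FakeFilePath here is a toy stand-in, not the MockFilePath above):

    import mock

    class FakeFilePath(object):
        # Toy stand-in: just enough surface for this sketch.
        def __init__(self, pathstring):
            self.path = pathstring
        def child(self, name):
            return FakeFilePath(self.path + '/' + name)
        def isdir(self):
            return True

    # Patch the name *as imported by the module under test*, not
    # twisted.python.filepath itself; otherwise the backend keeps using
    # the FilePath reference it already holds.
    filepath_patcher = mock.patch(
        'allmydata.storage.backends.das.core.FilePath', new=FakeFilePath)

    # in setUp:    filepath_patcher.__enter__()
    # in tearDown: filepath_patcher.__exit__()
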
7386[jacp20
7387wilcoxjg@gmail.com**20110728072514
7388 Ignore-this: 6a03289023c3c79b8d09e2711183ea82
7389] {
7390hunk ./src/allmydata/storage/backends/das/core.py 52
7391     def _setup_storage(self, storedir, readonly, reserved_space):
7392         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7393         self.storedir = storedir
7394+        print "self.storedir: ", self.storedir
7395         self.readonly = readonly
7396         self.reserved_space = int(reserved_space)
7397         self.sharedir = self.storedir.child("shares")
7398hunk ./src/allmydata/storage/backends/das/core.py 85
7399 
7400     def get_incoming_shnums(self, storageindex):
7401         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7402-        incomingdir = si_si2dir(self.incomingdir, storageindex)
7403+        print "self.incomingdir.children(): ", self.incomingdir.children()
7404+        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7405+        incomingthissi = si_si2dir(self.incomingdir, storageindex)
7406+        print "incomingthissi.children(): ", incomingthissi.children()
7407         try:
7408hunk ./src/allmydata/storage/backends/das/core.py 90
7409-            childfps = [ fp for fp in incomingdir.children() if is_num(fp) ]
7410+            childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7411             shnums = [ int(fp.basename) for fp in childfps ]
7412             return frozenset(shnums)
7413         except UnlistableError:
7414hunk ./src/allmydata/storage/backends/das/core.py 117
7415 
7416     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7417         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7418-        incominghome = si_si2dir(self.sharedir.child('incoming'), storageindex).child(str(shnum))
7419+        incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7420         immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7421         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7422         return bw
7423hunk ./src/allmydata/storage/backends/das/core.py 183
7424             # if this does happen, the old < v1.3.0 server will still allow
7425             # clients to read the first part of the share.
7426             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7427+            print "We got here right?"
7428             self._lease_offset = max_size + 0x0c
7429             self._num_leases = 0
7430         else:
7431hunk ./src/allmydata/storage/backends/das/core.py 274
7432             assert fh.tell() == offset
7433             fh.write(lease_info.to_immutable_data())
7434         finally:
7435-            print dir(fh)
7436             fh.close()
7437 
7438     def _read_num_leases(self, f):
7439hunk ./src/allmydata/storage/backends/das/core.py 284
7440             (num_leases,) = struct.unpack(">L", ro)
7441         finally:
7442             fh.close()
7443-            print "end of _read_num_leases"
7444         return num_leases
7445 
7446     def _write_num_leases(self, f, num_leases):
7447hunk ./src/allmydata/storage/common.py 21
7448 
7449 def si_si2dir(startfp, storageindex):
7450     sia = si_b2a(storageindex)
7451-    return startfp.child(sia[:2]).child(sia)
7452+    print "I got here right?  sia =", sia
7453+    print "What the fuck is startfp? ", startfp
7454+    print "What the fuck is startfp.pathname? ", startfp.pathname
7455+    newfp = startfp.child(sia[:2])
7456+    print "Did I get here?"
7457+    return newfp.child(sia)
7458hunk ./src/allmydata/test/test_backends.py 5
7459 from twisted.trial import unittest
7460 from twisted.python.filepath import FilePath
7461 from allmydata.util.log import msg
7462-from StringIO import StringIO
7463+from tempfile import TemporaryFile
7464 from allmydata.test.common_util import ReallyEqualMixin
7465 from allmydata.util.assertutil import _assert
7466 import mock
7467hunk ./src/allmydata/test/test_backends.py 34
7468     cancelsecret + expirationtime + nextlease
7469 share_data = containerdata + client_data
7470 testnodeid = 'testnodeidxxxxxxxxxx'
7471+fakefilepaths = {}
7472 
7473 
7474 class MockStat:
7475hunk ./src/allmydata/test/test_backends.py 41
7476     def __init__(self):
7477         self.st_mode = None
7478 
7479+
7480 class MockFilePath:
7481hunk ./src/allmydata/test/test_backends.py 43
7482-    def __init__(self, PathString):
7483-        self.PathName = PathString
7484-    def child(self, ChildString):
7485-        return MockFilePath(os.path.join(self.PathName, ChildString))
7486+    def __init__(self, pathstring):
7487+        self.pathname = pathstring
7488+        self.spawn = {}
7489+        self.antecedent = os.path.dirname(self.pathname)
7490+    def child(self, childstring):
7491+        arg2child = os.path.join(self.pathname, childstring)
7492+        print "arg2child: ", arg2child
7493+        if fakefilepaths.has_key(arg2child):
7494+            child = fakefilepaths[arg2child]
7495+            print "Should have gotten here."
7496+        else:
7497+            child = MockFilePath(arg2child)
7498+        return child
7499     def parent(self):
7500hunk ./src/allmydata/test/test_backends.py 57
7501-        return MockFilePath(os.path.dirname(self.PathName))
7502+        if fakefilepaths.has_key(self.antecedent):
7503+            parent = fakefilepaths[self.antecedent]
7504+        else:
7505+            parent = MockFilePath(self.antecedent)
7506+        return parent
7507+    def children(self):
7508+        childrenfromffs = frozenset(fakefilepaths.values())
7509+        return list(childrenfromffs | frozenset(self.spawn.values())) 
7510     def makedirs(self):
7511         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7512         pass
7513hunk ./src/allmydata/test/test_backends.py 72
7514         return True
7515     def remove(self):
7516         pass
7517-    def children(self):
7518-        return []
7519     def exists(self):
7520         return False
7521hunk ./src/allmydata/test/test_backends.py 74
7522-    def setContent(self, ContentString):
7523-        self.File = MockFile(ContentString)
7524     def open(self):
7525         return self.File.open()
7526hunk ./src/allmydata/test/test_backends.py 76
7527+    def setparents(self):
7528+        antecedents = []
7529+        def f(fps, antecedents):
7530+            newfps = os.path.split(fps)[0]
7531+            if newfps:
7532+                antecedents.append(newfps)
7533+                f(newfps, antecedents)
7534+        f(self.pathname, antecedents)
7535+        for fps in antecedents:
7536+            if not fakefilepaths.has_key(fps):
7537+                fakefilepaths[fps] = MockFilePath(fps)
7538+    def setContent(self, contentstring):
7539+        print "I am self.pathname: ", self.pathname
7540+        fakefilepaths[self.pathname] = self
7541+        self.File = MockFile(contentstring)
7542+        self.setparents()
7543+    def create(self):
7544+        fakefilepaths[self.pathname] = self
7545+        self.setparents()
7546+           
7547 
7548 class MockFile:
7549hunk ./src/allmydata/test/test_backends.py 98
7550-    def __init__(self, ContentString):
7551-        self.Contents = ContentString
7552+    def __init__(self, contentstring):
7553+        self.buffer = contentstring
7554+        self.pos = 0
7555     def open(self):
7556         return self
7557hunk ./src/allmydata/test/test_backends.py 103
7558+    def write(self, instring):
7559+        begin = self.pos
7560+        padlen = begin - len(self.buffer)
7561+        if padlen > 0:
7562+            self.buffer += '\x00' * padlen
7563+            end = self.pos + len(instring)
7564+            self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7565+            self.pos = end
7566     def close(self):
7567         pass
7568hunk ./src/allmydata/test/test_backends.py 113
7569-    def seek(self, position):
7570-        pass
7571-    def read(self, amount):
7572-        pass
7573+    def seek(self, pos):
7574+        self.pos = pos
7575+    def read(self, numberbytes):
7576+        return self.buffer[self.pos:self.pos+numberbytes]
7577+    def tell(self):
7578+        return self.pos
7579 
7580 
7581 class MockBCC:
7582hunk ./src/allmydata/test/test_backends.py 125
7583     def setServiceParent(self, Parent):
7584         pass
7585 
7586+
7587 class MockLCC:
7588     def setServiceParent(self, Parent):
7589         pass
7590hunk ./src/allmydata/test/test_backends.py 130
7591 
7592+
7593 class MockFiles(unittest.TestCase):
7594     """ I simulate a filesystem that the code under test can use. I flag the
7595     code under test if it reads or writes outside of its prescribed
7596hunk ./src/allmydata/test/test_backends.py 193
7597         return False
7598 
7599     def call_setContent(self, inputstring):
7600-        self.incomingsharefilecontents = StringIO(inputstring)
7601+        self.incomingsharefilecontents = TemporaryFile(inputstring)
7602 
7603     def tearDown(self):
7604         msg( "%s.tearDown()" % (self,))
7605hunk ./src/allmydata/test/test_backends.py 206
7606                      'cutoff_date' : None,
7607                      'sharetypes' : None}
7608 
7609+
7610 class TestServerWithNullBackend(unittest.TestCase, ReallyEqualMixin):
7611     """ NullBackend is just for testing and executable documentation, so
7612     this test is actually a test of StorageServer in which we're using
7613hunk ./src/allmydata/test/test_backends.py 229
7614         self.failIf(mockopen.called)
7615         self.failIf(mockmkdir.called)
7616 
7617+
7618 class TestServerConstruction(MockFiles, ReallyEqualMixin):
7619     def test_create_server_fs_backend(self):
7620         """ This tests whether a server instance can be constructed with a
7621hunk ./src/allmydata/test/test_backends.py 238
7622 
7623         StorageServer(testnodeid, backend=DASCore(self.storedir, expiration_policy))
7624 
7625+
7626 class TestServerAndFSBackend(MockFiles, ReallyEqualMixin):
7627     """ This tests both the StorageServer and the DAS backend together. """
7628     
7629hunk ./src/allmydata/test/test_backends.py 262
7630         """
7631         mocktime.return_value = 0
7632         # Inspect incoming and fail unless it's empty.
7633-        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7634-        self.failUnlessReallyEqual(incomingset, frozenset())
7635+        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7636+        # self.failUnlessReallyEqual(incomingset, frozenset())
7637         
7638         # Populate incoming with the sharenum: 0.
7639         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7640hunk ./src/allmydata/test/test_backends.py 269
7641 
7642         # This is a transparent-box test: Inspect incoming and fail unless the sharenum: 0 is listed there.
7643-        # self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7644+        self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7645         
7646         # Attempt to create a second share writer with the same sharenum.
7647         # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7648hunk ./src/allmydata/test/test_backends.py 274
7649 
7650+        # print bsa
7651         # Show that no sharewriter results from a remote_allocate_buckets
7652         # with the same si and sharenum, until BucketWriter.remote_close()
7653         # has been called.
7654hunk ./src/allmydata/test/test_backends.py 339
7655             self.failUnlessEqual(mode[0], 'r', mode)
7656             self.failUnless('b' in mode, mode)
7657 
7658-            return StringIO(share_data)
7659+            return TemporaryFile(share_data)
7660         mockopen.side_effect = call_open
7661 
7662         datalen = len(share_data)
7663}
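
jacp20 turns MockFilePath into a small in-memory filesystem: a module-level dict maps path strings to fake FilePath objects, setContent registers a path (and, via setparents, its ancestors), and children scans the dict by prefix. A stripped-down, self-contained sketch of that registry idea (all names here are illustrative, not the patch's own):

    import os

    _registry = {}   # path string -> FakeFP; this dict *is* the filesystem

    class FakeFP(object):
        def __init__(self, pathstring):
            self.path = pathstring
        def child(self, name):
            target = os.path.join(self.path, name)
            return _registry.get(target, FakeFP(target))
        def parent(self):
            updir = os.path.dirname(self.path)
            return _registry.get(updir, FakeFP(updir))
        def setContent(self, data):
            # Writing a file is just: remember the bytes, register the path.
            self.content = data
            _registry[self.path] = self
        def children(self):
            prefix = self.path + os.sep
            return [fp for fp in _registry.values()
                    if fp.path.startswith(prefix)]

    root = FakeFP('teststoredir')
    root.child('shares').child('0').setContent('\x00' * 12)
    assert len(root.child('shares').children()) == 1
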
7664[Completed FilePath based test_write_and_read_share
7665wilcoxjg@gmail.com**20110729043830
7666 Ignore-this: 2c32adb041f0344394927cd3ce8f3b36
7667] {
7668hunk ./src/allmydata/storage/backends/das/core.py 38
7669 NUM_RE=re.compile("^[0-9]+$")
7670 
7671 def is_num(fp):
7672-    return NUM_RE.match(fp.basename)
7673+    return NUM_RE.match(fp.basename())
7674 
7675 class DASCore(Backend):
7676     implements(IStorageBackend)
7677hunk ./src/allmydata/storage/backends/das/core.py 52
7678     def _setup_storage(self, storedir, readonly, reserved_space):
7679         precondition(isinstance(storedir, FilePath), storedir, FilePath) 
7680         self.storedir = storedir
7681-        print "self.storedir: ", self.storedir
7682         self.readonly = readonly
7683         self.reserved_space = int(reserved_space)
7684         self.sharedir = self.storedir.child("shares")
7685hunk ./src/allmydata/storage/backends/das/core.py 84
7686 
7687     def get_incoming_shnums(self, storageindex):
7688         """ Return a frozenset of the shnum (as ints) of incoming shares. """
7689-        print "self.incomingdir.children(): ", self.incomingdir.children()
7690-        print "self.incomingdir.pathname: ", self.incomingdir.pathname
7691         incomingthissi = si_si2dir(self.incomingdir, storageindex)
7692hunk ./src/allmydata/storage/backends/das/core.py 85
7693-        print "incomingthissi.children(): ", incomingthissi.children()
7694         try:
7695             childfps = [ fp for fp in incomingthissi.children() if is_num(fp) ]
7696hunk ./src/allmydata/storage/backends/das/core.py 87
7697-            shnums = [ int(fp.basename) for fp in childfps ]
7698+            shnums = [ int(fp.basename()) for fp in childfps ]
7699             return frozenset(shnums)
7700         except UnlistableError:
7701             # There is no shares directory at all.
7702hunk ./src/allmydata/storage/backends/das/core.py 101
7703         try:
7704             for fp in finalstoragedir.children():
7705                 if is_num(fp):
7706-                    yield ImmutableShare(fp, storageindex)
7707+                    finalhome = finalstoragedir.child(str(fp.basename()))
7708+                    yield ImmutableShare(storageindex, fp, finalhome)
7709         except UnlistableError:
7710             # There is no shares directory at all.
7711             pass
7712hunk ./src/allmydata/storage/backends/das/core.py 115
7713     def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
7714         finalhome = si_si2dir(self.sharedir, storageindex).child(str(shnum))
7715         incominghome = si_si2dir(self.incomingdir, storageindex).child(str(shnum))
7716-        immsh = ImmutableShare(finalhome, storageindex, shnum, incominghome, max_size=max_space_per_bucket, create=True)
7717+        immsh = ImmutableShare(storageindex, shnum, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
7718         bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
7719         return bw
7720 
7721hunk ./src/allmydata/storage/backends/das/core.py 155
7722     LEASE_SIZE = struct.calcsize(">L32s32sL")
7723     sharetype = "immutable"
7724 
7725-    def __init__(self, finalhome, storageindex, shnum, incominghome, max_size=None, create=False):
7726+    def __init__(self, storageindex, shnum, finalhome=None, incominghome=None, max_size=None, create=False):
7727         """ If max_size is not None then I won't allow more than
7728         max_size to be written to me. If create=True then max_size
7729         must not be None. """
7730hunk ./src/allmydata/storage/backends/das/core.py 180
7731             # if this does happen, the old < v1.3.0 server will still allow
7732             # clients to read the first part of the share.
7733             self.incominghome.setContent(struct.pack(">LLL", 1, min(2**32-1, max_size), 0) )
7734-            print "We got here right?"
7735             self._lease_offset = max_size + 0x0c
7736             self._num_leases = 0
7737         else:
7738hunk ./src/allmydata/storage/backends/das/core.py 183
7739-            f = open(self.finalhome, 'rb')
7740-            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
7741-            f.close()
7742+            fh = self.finalhome.open(mode='rb')
7743+            try:
7744+                (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
7745+            finally:
7746+                fh.close()
7747             filesize = self.finalhome.getsize()
7748             if version != 1:
7749                 msg = "sharefile %s had version %d but we wanted 1" % \
7750hunk ./src/allmydata/storage/backends/das/core.py 227
7751         pass
7752         
7753     def stat(self):
7754-        return filepath.stat(self.finalhome)[stat.ST_SIZE]
7755+        return filepath.stat(self.finalhome.path)[stat.ST_SIZE]
7756 
7757     def get_shnum(self):
7758         return self.shnum
7759hunk ./src/allmydata/storage/backends/das/core.py 244
7760         actuallength = max(0, min(length, fsize-seekpos))
7761         if actuallength == 0:
7762             return ""
7763+        fh = self.finalhome.open(mode='rb')
7764         try:
7765hunk ./src/allmydata/storage/backends/das/core.py 246
7766-            fh = open(self.finalhome, 'rb')
7767             fh.seek(seekpos)
7768             sharedata = fh.read(actuallength)
7769         finally:
7770hunk ./src/allmydata/storage/backends/das/core.py 257
7771         precondition(offset >= 0, offset)
7772         if self._max_size is not None and offset+length > self._max_size:
7773             raise DataTooLargeError(self._max_size, offset, length)
7774-        f = open(self.incominghome, 'rb+')
7775-        real_offset = self._data_offset+offset
7776-        f.seek(real_offset)
7777-        assert f.tell() == real_offset
7778-        f.write(data)
7779-        f.close()
7780+        fh = self.incominghome.open(mode='rb+')
7781+        try:
7782+            real_offset = self._data_offset+offset
7783+            fh.seek(real_offset)
7784+            assert fh.tell() == real_offset
7785+            fh.write(data)
7786+        finally:
7787+            fh.close()
7788 
7789     def _write_lease_record(self, f, lease_number, lease_info):
7790         offset = self._lease_offset + lease_number * self.LEASE_SIZE
7791hunk ./src/allmydata/storage/backends/das/core.py 299
7792 
7793     def get_leases(self):
7794         """Yields a LeaseInfo instance for all leases."""
7795-        f = open(self.finalhome, 'rb')
7796-        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
7797-        f.seek(self._lease_offset)
7798+        fh = self.finalhome.open(mode='rb')
7799+        (version, unused, num_leases) = struct.unpack(">LLL", fh.read(0xc))
7800+        fh.seek(self._lease_offset)
7801         for i in range(num_leases):
7802hunk ./src/allmydata/storage/backends/das/core.py 303
7803-            data = f.read(self.LEASE_SIZE)
7804+            data = fh.read(self.LEASE_SIZE)
7805             if data:
7806                 yield LeaseInfo().from_immutable_data(data)
7807 
7808hunk ./src/allmydata/storage/common.py 21
7809 
7810 def si_si2dir(startfp, storageindex):
7811     sia = si_b2a(storageindex)
7812-    print "I got here right?  sia =", sia
7813-    print "What the fuck is startfp? ", startfp
7814-    print "What the fuck is startfp.pathname? ", startfp.pathname
7815     newfp = startfp.child(sia[:2])
7816hunk ./src/allmydata/storage/common.py 22
7817-    print "Did I get here?"
7818     return newfp.child(sia)
7819hunk ./src/allmydata/test/test_backends.py 1
7820-import os
7821+import os, stat
7822 from twisted.trial import unittest
7823 from twisted.python.filepath import FilePath
7824 from allmydata.util.log import msg
7825hunk ./src/allmydata/test/test_backends.py 44
7826 
7827 class MockFilePath:
7828     def __init__(self, pathstring):
7829-        self.pathname = pathstring
7830+        self.path = pathstring
7831         self.spawn = {}
7832hunk ./src/allmydata/test/test_backends.py 46
7833-        self.antecedent = os.path.dirname(self.pathname)
7834+        self.antecedent = os.path.dirname(self.path)
7835     def child(self, childstring):
7836hunk ./src/allmydata/test/test_backends.py 48
7837-        arg2child = os.path.join(self.pathname, childstring)
7838-        print "arg2child: ", arg2child
7839+        arg2child = os.path.join(self.path, childstring)
7840         if fakefilepaths.has_key(arg2child):
7841             child = fakefilepaths[arg2child]
7842hunk ./src/allmydata/test/test_backends.py 51
7843-            print "Should have gotten here."
7844         else:
7845             child = MockFilePath(arg2child)
7846         return child
7847hunk ./src/allmydata/test/test_backends.py 61
7848             parent = MockFilePath(self.antecedent)
7849         return parent
7850     def children(self):
7851-        childrenfromffs = frozenset(fakefilepaths.values())
7852+        childrenfromffs = [ffp for ffp in fakefilepaths.values() if ffp.path.startswith(self.path)]
7853+        childrenfromffs = [ffp for ffp in childrenfromffs if not ffp.path.endswith(self.path)]
7854+        childrenfromffs = frozenset(childrenfromffs)
7855         return list(childrenfromffs | frozenset(self.spawn.values())) 
7856     def makedirs(self):
7857         # XXX These methods assume that fp_<FOO> functions in fileutil will be tested elsewhere!
7858hunk ./src/allmydata/test/test_backends.py 74
7859         pass
7860     def exists(self):
7861         return False
7862-    def open(self):
7863-        return self.File.open()
7864+    def open(self, mode='r'):
7865+        return self.fileobject.open(mode)
7866     def setparents(self):
7867         antecedents = []
7868         def f(fps, antecedents):
7869hunk ./src/allmydata/test/test_backends.py 83
7870             if newfps:
7871                 antecedents.append(newfps)
7872                 f(newfps, antecedents)
7873-        f(self.pathname, antecedents)
7874+        f(self.path, antecedents)
7875         for fps in antecedents:
7876             if not fakefilepaths.has_key(fps):
7877                 fakefilepaths[fps] = MockFilePath(fps)
7878hunk ./src/allmydata/test/test_backends.py 88
7879     def setContent(self, contentstring):
7880-        print "I am self.pathname: ", self.pathname
7881-        fakefilepaths[self.pathname] = self
7882-        self.File = MockFile(contentstring)
7883+        fakefilepaths[self.path] = self
7884+        self.fileobject = MockFileObject(contentstring)
7885         self.setparents()
7886     def create(self):
7887hunk ./src/allmydata/test/test_backends.py 92
7888-        fakefilepaths[self.pathname] = self
7889+        fakefilepaths[self.path] = self
7890         self.setparents()
7891hunk ./src/allmydata/test/test_backends.py 94
7892-           
7893+    def basename(self):
7894+        return os.path.split(self.path)[1]
7895+    def moveTo(self, newffp):
7896+        #  XXX Makes no distinction between file and directory arguments; this is a deviation from filepath.moveTo
7897+        if fakefilepaths.has_key(newffp.path):
7898+            raise OSError
7899+        else:
7900+            fakefilepaths[newffp.path] = self
7901+            self.path = newffp.path
7902+    def getsize(self):
7903+        return self.fileobject.getsize()
7904 
7905hunk ./src/allmydata/test/test_backends.py 106
7906-class MockFile:
7907+class MockFileObject:
7908     def __init__(self, contentstring):
7909         self.buffer = contentstring
7910         self.pos = 0
7911hunk ./src/allmydata/test/test_backends.py 110
7912-    def open(self):
7913+    def open(self, mode='r'):
7914         return self
7915     def write(self, instring):
7916         begin = self.pos
7917hunk ./src/allmydata/test/test_backends.py 117
7918         padlen = begin - len(self.buffer)
7919         if padlen > 0:
7920             self.buffer += '\x00' * padlen
7921-            end = self.pos + len(instring)
7922-            self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7923-            self.pos = end
7924+        end = self.pos + len(instring)
7925+        self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
7926+        self.pos = end
7927     def close(self):
7928hunk ./src/allmydata/test/test_backends.py 121
7929-        pass
7930+        self.pos = 0
7931     def seek(self, pos):
7932         self.pos = pos
7933     def read(self, numberbytes):
7934hunk ./src/allmydata/test/test_backends.py 128
7935         return self.buffer[self.pos:self.pos+numberbytes]
7936     def tell(self):
7937         return self.pos
7938-
7939+    def size(self):
7940+        # XXX This method (a) does not exist on a real file object and (b) is part of a rough stand-in for filepath.stat.
7941+        # XXX We hope to switch to a getsize method soon, but must consult first.
7942+        return {stat.ST_SIZE:len(self.buffer)}
7943+    def getsize(self):
7944+        return len(self.buffer)
7945 
7946 class MockBCC:
7947     def setServiceParent(self, Parent):
7948hunk ./src/allmydata/test/test_backends.py 177
7949         GetSpace = self.get_available_space.__enter__()
7950         GetSpace.side_effect = self.call_get_available_space
7951 
7952+        self.statforsize = mock.patch('allmydata.storage.backends.das.core.filepath.stat')
7953+        getsize = self.statforsize.__enter__()
7954+        getsize.side_effect = self.call_statforsize
7955+
7956+    def call_statforsize(self, fakefpname):
7957+        return fakefilepaths[fakefpname].fileobject.size()
7958+
7959     def call_FakeBCC(self, StateFile):
7960         return MockBCC()
7961 
7962hunk ./src/allmydata/test/test_backends.py 220
7963         msg( "%s.tearDown()" % (self,))
7964         FakePath = self.FilePathFake.__exit__()       
7965         FakeBCC = self.BCountingCrawler.__exit__()
7966+        getsize = self.statforsize.__exit__()
7967 
7968 expiration_policy = {'enabled' : False,
7969                      'mode' : 'age',
7970hunk ./src/allmydata/test/test_backends.py 284
7971         """
7972         mocktime.return_value = 0
7973         # Inspect incoming and fail unless it's empty.
7974-        # incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7975-        # self.failUnlessReallyEqual(incomingset, frozenset())
7976+        incomingset = self.ss.backend.get_incoming_shnums('teststorage_index')
7977+        self.failUnlessReallyEqual(incomingset, frozenset())
7978         
7979         # Populate incoming with the sharenum: 0.
7980         alreadygot, bs = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7981hunk ./src/allmydata/test/test_backends.py 294
7982         self.failUnlessReallyEqual(self.ss.backend.get_incoming_shnums('teststorage_index'), frozenset((0,)))
7983         
7984         # Attempt to create a second share writer with the same sharenum.
7985-        # alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7986+        alreadygota, bsa = self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, frozenset((0,)), 1, mock.Mock())
7987 
7988hunk ./src/allmydata/test/test_backends.py 296
7989-        # print bsa
7990         # Show that no sharewriter results from a remote_allocate_buckets
7991         # with the same si and sharenum, until BucketWriter.remote_close()
7992         # has been called.
7993hunk ./src/allmydata/test/test_backends.py 299
7994-        # self.failIf(bsa)
7995+        self.failIf(bsa)
7996 
7997         # Test allocated size.
7998hunk ./src/allmydata/test/test_backends.py 302
7999-        # spaceint = self.ss.allocated_size()
8000-        # self.failUnlessReallyEqual(spaceint, 1)
8001+        spaceint = self.ss.allocated_size()
8002+        self.failUnlessReallyEqual(spaceint, 1)
8003 
8004         # Write 'a' to shnum 0. Only tested together with close and read.
8005hunk ./src/allmydata/test/test_backends.py 306
8006-        # bs[0].remote_write(0, 'a')
8007+        bs[0].remote_write(0, 'a')
8008         
8009         # Preclose: Inspect final, failUnless nothing there.
8010hunk ./src/allmydata/test/test_backends.py 309
8011-        # self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
8012-        # bs[0].remote_close()
8013+        self.failUnlessReallyEqual(len(list(self.backend.get_shares('teststorage_index'))), 0)
8014+        bs[0].remote_close()
8015 
8016         # Postclose: (Omnibus) failUnless written data is in final.
8017hunk ./src/allmydata/test/test_backends.py 313
8018-        # sharesinfinal = list(self.backend.get_shares('teststorage_index'))
8019-        # self.failUnlessReallyEqual(len(sharesinfinal), 1)
8020-        # contents = sharesinfinal[0].read_share_data(0, 73)
8021-        # self.failUnlessReallyEqual(contents, client_data)
8022+        sharesinfinal = list(self.backend.get_shares('teststorage_index'))
8023+        self.failUnlessReallyEqual(len(sharesinfinal), 1)
8024+        contents = sharesinfinal[0].read_share_data(0, 73)
8025+        self.failUnlessReallyEqual(contents, client_data)
8026 
8027         # Exercise the case that the share we're asking to allocate is
8028         # already (completely) uploaded.
8029hunk ./src/allmydata/test/test_backends.py 320
8030-        # self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8031+        self.ss.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8032         
8033     @mock.patch('time.time')
8034     @mock.patch('allmydata.util.fileutil.get_available_space')
8035}
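
The core.py hunks above settle into one I/O shape: take a FilePath, open it just before a try block, do the seek/read or seek/write inside, and close in finally. A sketch of the read side under that shape (the function and its arguments are illustrative; only the pattern is taken from the hunks):

    def read_share_slice(finalhome, data_offset, offset, length):
        # finalhome is a twisted.python.filepath.FilePath
        seekpos = data_offset + offset
        fsize = finalhome.getsize()
        actuallength = max(0, min(length, fsize - seekpos))
        if actuallength == 0:
            return ""
        fh = finalhome.open(mode='rb')   # open *before* try/finally so a
        try:                             # failed open() never reaches
            fh.seek(seekpos)             # fh.close() with fh unbound
            return fh.read(actuallength)
        finally:
            fh.close()
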
8036[TestServerAndFSBackend.test_read_old_share passes
8037wilcoxjg@gmail.com**20110729235356
8038 Ignore-this: 574636c959ea58d4609bea2428ff51d3
8039] {
8040hunk ./src/allmydata/storage/backends/das/core.py 37
8041 # $SHARENUM matches this regex:
8042 NUM_RE=re.compile("^[0-9]+$")
8043 
8044-def is_num(fp):
8045-    return NUM_RE.match(fp.basename())
8046-
8047 class DASCore(Backend):
8048     implements(IStorageBackend)
8049     def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
8050hunk ./src/allmydata/storage/backends/das/core.py 97
8051         finalstoragedir = si_si2dir(self.sharedir, storageindex)
8052         try:
8053             for fp in finalstoragedir.children():
8054-                if is_num(fp):
8055-                    finalhome = finalstoragedir.child(str(fp.basename()))
8056-                    yield ImmutableShare(storageindex, fp, finalhome)
8057+                fpshnumstr = fp.basename()
8058+                if NUM_RE.match(fpshnumstr):
8059+                    finalhome = finalstoragedir.child(fpshnumstr)
8060+                    yield ImmutableShare(storageindex, fpshnumstr, finalhome)
8061         except UnlistableError:
8062             # There is no shares directory at all.
8063             pass
8064hunk ./src/allmydata/test/test_backends.py 15
8065 from allmydata.storage.server import StorageServer
8066 from allmydata.storage.backends.das.core import DASCore
8067 from allmydata.storage.backends.null.core import NullCore
8068+from allmydata.storage.common import si_si2dir
8069 
8070 # The following share file content was generated with
8071 # storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
8072hunk ./src/allmydata/test/test_backends.py 155
8073     def setUp(self):
8074         # Make patcher, patch, and make effects for fs using functions.
8075         msg( "%s.setUp()" % (self,))
8076+        fakefilepaths = {}
8077         self.storedir = MockFilePath('teststoredir')
8078         self.basedir = self.storedir.child('shares')
8079         self.baseincdir = self.basedir.child('incoming')
8080hunk ./src/allmydata/test/test_backends.py 223
8081         FakePath = self.FilePathFake.__exit__()       
8082         FakeBCC = self.BCountingCrawler.__exit__()
8083         getsize = self.statforsize.__exit__()
8084+        fakefilepaths = {}
8085 
8086 expiration_policy = {'enabled' : False,
8087                      'mode' : 'age',
8088hunk ./src/allmydata/test/test_backends.py 334
8089             return 0
8090 
8091         mockget_available_space.side_effect = call_get_available_space
8092-       
8093-       
8094         alreadygotc, bsc = self.sswithreserve.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
8095 
8096hunk ./src/allmydata/test/test_backends.py 336
8097-    @mock.patch('os.path.exists')
8098-    @mock.patch('os.path.getsize')
8099-    @mock.patch('__builtin__.open')
8100-    @mock.patch('os.listdir')
8101-    def test_read_old_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
8102+    def test_read_old_share(self):
8103         """ This tests whether the code correctly finds and reads
8104         shares written out by old (Tahoe-LAFS <= v1.8.2)
8105         servers. There is a similar test in test_download, but that one
8106hunk ./src/allmydata/test/test_backends.py 344
8107         stack of code. This one is for exercising just the
8108         StorageServer object. """
8109 
8110-        def call_listdir(dirname):
8111-            precondition(isinstance(dirname, basestring), dirname)
8112-            self.failUnlessReallyEqual(dirname, os.path.join(storedir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
8113-            return ['0']
8114-
8115-        mocklistdir.side_effect = call_listdir
8116-
8117-        def call_open(fname, mode):
8118-            precondition(isinstance(fname, basestring), fname)
8119-            self.failUnlessReallyEqual(fname, sharefname)
8120-            self.failUnlessEqual(mode[0], 'r', mode)
8121-            self.failUnless('b' in mode, mode)
8122-
8123-            return TemporaryFile(share_data)
8124-        mockopen.side_effect = call_open
8125-
8126         datalen = len(share_data)
8127hunk ./src/allmydata/test/test_backends.py 345
8128-        def call_getsize(fname):
8129-            precondition(isinstance(fname, basestring), fname)
8130-            self.failUnlessReallyEqual(fname, sharefname)
8131-            return datalen
8132-        mockgetsize.side_effect = call_getsize
8133-
8134-        def call_exists(fname):
8135-            precondition(isinstance(fname, basestring), fname)
8136-            self.failUnlessReallyEqual(fname, sharefname)
8137-            return True
8138-        mockexists.side_effect = call_exists
8139+        finalhome = si_si2dir(self.basedir, 'teststorage_index').child(str(3))
8140+        finalhome.setContent(share_data)
8141 
8142         # Now begin the test.
8143         bs = self.ss.remote_get_buckets('teststorage_index')
8144hunk ./src/allmydata/test/test_backends.py 352
8145 
8146         self.failUnlessEqual(len(bs), 1)
8147-        b = bs[0]
8148+        b = bs['3']
8149         # These should match by definition, the next two cases cover cases without (completely) unambiguous behaviors.
8150         self.failUnlessReallyEqual(b.remote_read(0, datalen), client_data)
8151         # If you try to read past the end you get the as much data as is there.
8152}
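
The final hunks tie the fake filesystem to the real share layout: si_si2dir places a share at shares/<first two characters of the base32 SI>/<full base32 SI>/<shnum>, and get_shares treats every all-digit basename in that directory as a share number. A self-contained sketch of that scan (the helper class exists only to make the example runnable):

    import re

    NUM_RE = re.compile("^[0-9]+$")

    class _FakeChild(object):
        # Minimal stand-in exposing only basename(), like a FilePath child.
        def __init__(self, name):
            self._name = name
        def basename(self):
            return self._name

    def shnums_in(children):
        # All-digit basenames are share files; the basename is the shnum.
        return frozenset(int(fp.basename()) for fp in children
                         if NUM_RE.match(fp.basename()))

    assert shnums_in([_FakeChild('3'), _FakeChild('not-a-share')]) == frozenset([3])
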
8153
8154Context:
8155
8156[src/allmydata/scripts/cli.py: fix pyflakes warning.
8157david-sarah@jacaranda.org**20110728021402
8158 Ignore-this: 94050140ddb99865295973f49927c509
8159]
8160[Fix the help synopses of CLI commands to include [options] in the right place. fixes #1359, fixes #636
8161david-sarah@jacaranda.org**20110724225440
8162 Ignore-this: 2a8e488a5f63dabfa9db9efd83768a5
8163]
8164[encodingutil: argv and output encodings are always the same on all platforms. Lose the unnecessary generality of them being different. fixes #1120
8165david-sarah@jacaranda.org**20110629185356
8166 Ignore-this: 5ebacbe6903dfa83ffd3ff8436a97787
8167]
8168[docs/man/tahoe.1: add man page. fixes #1420
8169david-sarah@jacaranda.org**20110724171728
8170 Ignore-this: fc7601ec7f25494288d6141d0ae0004c
8171]
8172[Update the dependency on zope.interface to fix an incompatiblity between Nevow and zope.interface 3.6.4. fixes #1435
8173david-sarah@jacaranda.org**20110721234941
8174 Ignore-this: 2ff3fcfc030fca1a4d4c7f1fed0f2aa9
8175]
8176[frontends/ftpd.py: remove the check for IWriteFile.close since we're now guaranteed to be using Twisted >= 10.1 which has it.
8177david-sarah@jacaranda.org**20110722000320
8178 Ignore-this: 55cd558b791526113db3f83c00ec328a
8179]
8180[Update the dependency on Twisted to >= 10.1. This allows us to simplify some documentation: it's no longer necessary to install pywin32 on Windows, or apply a patch to Twisted in order to use the FTP frontend. fixes #1274, #1438. refs #1429
8181david-sarah@jacaranda.org**20110721233658
8182 Ignore-this: 81b41745477163c9b39c0b59db91cc62
8183]
8184[misc/build_helpers/run_trial.py: undo change to block pywin32 (it didn't work because run_trial.py is no longer used). refs #1334
8185david-sarah@jacaranda.org**20110722035402
8186 Ignore-this: 5d03f544c4154f088e26c7107494bf39
8187]
8188[misc/build_helpers/run_trial.py: ensure that pywin32 is not on the sys.path when running the test suite. Includes some temporary debugging printouts that will be removed. refs #1334
8189david-sarah@jacaranda.org**20110722024907
8190 Ignore-this: 5141a9f83a4085ed4ca21f0bbb20bb9c
8191]
8192[docs/running.rst: use 'tahoe run ~/.tahoe' instead of 'tahoe run' (the default is the current directory, unlike 'tahoe start').
8193david-sarah@jacaranda.org**20110718005949
8194 Ignore-this: 81837fbce073e93d88a3e7ae3122458c
8195]
8196[docs/running.rst: say to put the introducer.furl in tahoe.cfg.
8197david-sarah@jacaranda.org**20110717194315
8198 Ignore-this: 954cc4c08e413e8c62685d58ff3e11f3
8199]
8200[README.txt: say that quickstart.rst is in the docs directory.
8201david-sarah@jacaranda.org**20110717192400
8202 Ignore-this: bc6d35a85c496b77dbef7570677ea42a
8203]
8204[setup: remove the dependency on foolscap's "secure_connections" extra, add a dependency on pyOpenSSL
8205zooko@zooko.com**20110717114226
8206 Ignore-this: df222120d41447ce4102616921626c82
8207 fixes #1383
8208]
8209[test_sftp.py cleanup: remove a redundant definition of failUnlessReallyEqual.
8210david-sarah@jacaranda.org**20110716181813
8211 Ignore-this: 50113380b368c573f07ac6fe2eb1e97f
8212]
8213[docs: add missing link in NEWS.rst
8214zooko@zooko.com**20110712153307
8215 Ignore-this: be7b7eb81c03700b739daa1027d72b35
8216]
8217[contrib: remove the contributed fuse modules and the entire contrib/ directory, which is now empty
8218zooko@zooko.com**20110712153229
8219 Ignore-this: 723c4f9e2211027c79d711715d972c5
8220 Also remove a couple of vestigial references to figleaf, which is long gone.
8221 fixes #1409 (remove contrib/fuse)
8222]
8223[add Protovis.js-based download-status timeline visualization
8224Brian Warner <warner@lothar.com>**20110629222606
8225 Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127
8226 
8227 provide status overlap info on the webapi t=json output, add decode/decrypt
8228 rate tooltips, add zoomin/zoomout buttons
8229]
8230[add more download-status data, fix tests
8231Brian Warner <warner@lothar.com>**20110629222555
8232 Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
8233]
8234[prepare for viz: improve DownloadStatus events
8235Brian Warner <warner@lothar.com>**20110629222542
8236 Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be
8237 
8238 consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
8239]
8240[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
8241zooko@zooko.com**20110629185711
8242 Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
8243]
8244[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
8245david-sarah@jacaranda.org**20110130235809
8246 Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
8247]
8248[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
8249david-sarah@jacaranda.org**20110626054124
8250 Ignore-this: abb864427a1b91bd10d5132b4589fd90
8251]
8252[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
8253david-sarah@jacaranda.org**20110623205528
8254 Ignore-this: c63e23146c39195de52fb17c7c49b2da
8255]
8256[Rename test_package_initialization.py to (much shorter) test_import.py .
8257Brian Warner <warner@lothar.com>**20110611190234
8258 Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822
8259 
8260 The former name was making my 'ls' listings hard to read, by forcing them
8261 down to just two columns.
8262]
8263[tests: fix tests to accommodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
8264zooko@zooko.com**20110611163741
8265 Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
8266 Apparently not one of the two authors (stercor, terrell), the three reviewers (warner, davidsarah, terrell), or the one committer (me) actually ran the tests. This is presumably due to #20.
8267 fixes #1412
8268]
8269[wui: right-align the size column in the WUI
8270zooko@zooko.com**20110611153758
8271 Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
8272 Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
8273 fixes #1412
8274]
8275[docs: three minor fixes
8276zooko@zooko.com**20110610121656
8277 Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
8278 CREDITS for arc for stats tweak
8279 fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
8280 English usage tweak
8281]
8282[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
8283david-sarah@jacaranda.org**20110609223719
8284 Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
8285]
8286[server.py:  get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
8287wilcoxjg@gmail.com**20110527120135
8288 Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
8289 interfaces.py:  modified the return type of RIStatsProvider.get_stats to allow for None as a return value
8290 NEWS.rst, stats.py: documentation of change to get_latencies
8291 stats.rst: now documents percentile modification in get_latencies
8292 test_storage.py:  test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
8293 fixes #1392
8294]
8295[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
8296david-sarah@jacaranda.org**20110517011214
8297 Ignore-this: 6a5be6e70241e3ec0575641f64343df7
8298]
8299[docs: convert NEWS to NEWS.rst and change all references to it.
8300david-sarah@jacaranda.org**20110517010255
8301 Ignore-this: a820b93ea10577c77e9c8206dbfe770d
8302]
8303[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
8304david-sarah@jacaranda.org**20110512140559
8305 Ignore-this: 784548fc5367fac5450df1c46890876d
8306]
8307[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
8308david-sarah@jacaranda.org**20110130164923
8309 Ignore-this: a271e77ce81d84bb4c43645b891d92eb
8310]
8311[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
8312zooko@zooko.com**20110128142006
8313 Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
8314 I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
8315]
8316[M-x whitespace-cleanup
8317zooko@zooko.com**20110510193653
8318 Ignore-this: dea02f831298c0f65ad096960e7df5c7
8319]
8320[docs: fix typo in running.rst, thanks to arch_o_median
8321zooko@zooko.com**20110510193633
8322 Ignore-this: ca06de166a46abbc61140513918e79e8
8323]
8324[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
8325david-sarah@jacaranda.org**20110204204902
8326 Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
8327]
8328[relnotes.txt: forseeable -> foreseeable. refs #1342
8329david-sarah@jacaranda.org**20110204204116
8330 Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
8331]
8332[replace remaining .html docs with .rst docs
8333zooko@zooko.com**20110510191650
8334 Ignore-this: d557d960a986d4ac8216d1677d236399
8335 Remove install.html (long since deprecated).
8336 Also replace some obsolete references to install.html with references to quickstart.rst.
8337 Fix some broken internal references within docs/historical/historical_known_issues.txt.
8338 Thanks to Ravi Pinjala and Patrick McDonald.
8339 refs #1227
8340]
8341[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
8342zooko@zooko.com**20110428055232
8343 Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
8344]
8345[munin tahoe_files plugin: fix incorrect file count
8346francois@ctrlaltdel.ch**20110428055312
8347 Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
8348 fixes #1391
8349]
8350[corrected "k must never be smaller than N" to "k must never be greater than N"
8351secorp@allmydata.org**20110425010308
8352 Ignore-this: 233129505d6c70860087f22541805eac
8353]
8354[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
8355david-sarah@jacaranda.org**20110411190738
8356 Ignore-this: 7847d26bc117c328c679f08a7baee519
8357]
8358[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
8359david-sarah@jacaranda.org**20110410155844
8360 Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
8361]
8362[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
8363david-sarah@jacaranda.org**20110410155705
8364 Ignore-this: 2f87b8b327906cf8bfca9440a0904900
8365]
8366[remove unused variable detected by pyflakes
8367zooko@zooko.com**20110407172231
8368 Ignore-this: 7344652d5e0720af822070d91f03daf9
8369]
8370[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
8371david-sarah@jacaranda.org**20110401202750
8372 Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
8373]
8374[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
8375Brian Warner <warner@lothar.com>**20110325232511
8376 Ignore-this: d5307faa6900f143193bfbe14e0f01a
8377]
8378[control.py: remove all uses of s.get_serverid()
8379warner@lothar.com**20110227011203
8380 Ignore-this: f80a787953bd7fa3d40e828bde00e855
8381]
8382[web: remove some uses of s.get_serverid(), not all
8383warner@lothar.com**20110227011159
8384 Ignore-this: a9347d9cf6436537a47edc6efde9f8be
8385]
8386[immutable/downloader/fetcher.py: remove all get_serverid() calls
8387warner@lothar.com**20110227011156
8388 Ignore-this: fb5ef018ade1749348b546ec24f7f09a
8389]
8390[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
8391warner@lothar.com**20110227011153
8392 Ignore-this: bcd62232c9159371ae8a16ff63d22c1b
8393 
8394 When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
8395 _shares_from_server dict was being popped incorrectly (using shnum as the
8396 index instead of serverid). I'm still thinking through the consequences of
8397 this bug. It was probably benign and really hard to detect. I think it would
8398 cause us to incorrectly believe that we're pulling too many shares from a
8399 server, and thus prefer a different server rather than asking for a second
8400 share from the first server. The diversity code is intended to spread out the
8401 number of shares simultaneously being requested from each server, but with
8402 this bug, it might be spreading out the total number of shares requested at
8403 all, not just simultaneously. (note that SegmentFetcher is scoped to a single
8404 segment, so the effect doesn't last very long).
8405]
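The bookkeeping mistake described in the patch above can be illustrated in isolation. The sketch below is hypothetical (the real SegmentFetcher state is more involved); it only shows why popping a serverid-keyed dict with a share number removes nothing.

# Hypothetical illustration, not the actual fetcher.py code.
shares_from_server = {"server-A": {0, 1}, "server-B": {2}}

def block_finished_buggy(shnum):
    # bug: indexing by shnum, so no entry is removed and the server still
    # looks overloaded to the diversity logic
    shares_from_server.pop(shnum, None)

def block_finished_fixed(serverid, shnum):
    # fix: index by the serverid and discard just the finished share number
    shares_from_server.get(serverid, set()).discard(shnum)

block_finished_buggy(0)
print(shares_from_server)            # unchanged: both servers still look busy
block_finished_fixed("server-A", 0)
print(shares_from_server)            # {'server-A': {1}, 'server-B': {2}}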
8406[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
8407warner@lothar.com**20110227011150
8408 Ignore-this: d8d56dd8e7b280792b40105e13664554
8409 
8410 test_download.py: create+check MyShare instances better, make sure they share
8411 Server objects, now that finder.py cares
8412]
8413[immutable/downloader/finder.py: reduce use of get_serverid(), one left
8414warner@lothar.com**20110227011146
8415 Ignore-this: 5785be173b491ae8a78faf5142892020
8416]
8417[immutable/offloaded.py: reduce use of get_serverid() a bit more
8418warner@lothar.com**20110227011142
8419 Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
8420]
8421[immutable/upload.py: reduce use of get_serverid()
8422warner@lothar.com**20110227011138
8423 Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
8424]
8425[immutable/checker.py: remove some uses of s.get_serverid(), not all
8426warner@lothar.com**20110227011134
8427 Ignore-this: e480a37efa9e94e8016d826c492f626e
8428]
8429[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
8430warner@lothar.com**20110227011132
8431 Ignore-this: 6078279ddf42b179996a4b53bee8c421
8432 MockIServer stubs
8433]
8434[upload.py: rearrange _make_trackers a bit, no behavior changes
8435warner@lothar.com**20110227011128
8436 Ignore-this: 296d4819e2af452b107177aef6ebb40f
8437]
8438[happinessutil.py: finally rename merge_peers to merge_servers
8439warner@lothar.com**20110227011124
8440 Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
8441]
8442[test_upload.py: factor out FakeServerTracker
8443warner@lothar.com**20110227011120
8444 Ignore-this: 6c182cba90e908221099472cc159325b
8445]
8446[test_upload.py: server-vs-tracker cleanup
8447warner@lothar.com**20110227011115
8448 Ignore-this: 2915133be1a3ba456e8603885437e03
8449]
8450[happinessutil.py: server-vs-tracker cleanup
8451warner@lothar.com**20110227011111
8452 Ignore-this: b856c84033562d7d718cae7cb01085a9
8453]
8454[upload.py: more tracker-vs-server cleanup
8455warner@lothar.com**20110227011107
8456 Ignore-this: bb75ed2afef55e47c085b35def2de315
8457]
8458[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
8459warner@lothar.com**20110227011103
8460 Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
8461]
8462[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
8463warner@lothar.com**20110227011100
8464 Ignore-this: 7ea858755cbe5896ac212a925840fe68
8465 
8466 No behavioral changes, just updating variable/method names and log messages.
8467 The effects outside these three files should be minimal: some exception
8468 messages changed (to say "server" instead of "peer"), and some internal class
8469 names were changed. A few things still use "peer" to minimize external
8470 changes, like UploadResults.timings["peer_selection"] and
8471 happinessutil.merge_peers, which can be changed later.
8472]
8473[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
8474warner@lothar.com**20110227011056
8475 Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
8476]
8477[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
8478warner@lothar.com**20110227011051
8479 Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
8480]
8481[test: increase timeout on a network test because Francois's ARM machine hit that timeout
8482zooko@zooko.com**20110317165909
8483 Ignore-this: 380c345cdcbd196268ca5b65664ac85b
8484 I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number and it still fails, then we can be even more certain that the test was never going to finish.
8485]
8486[docs/configuration.rst: add a "Frontend Configuration" section
8487Brian Warner <warner@lothar.com>**20110222014323
8488 Ignore-this: 657018aa501fe4f0efef9851628444ca
8489 
8490 this points to docs/frontends/*.rst, which were previously underlinked
8491]
8492[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
8493"Brian Warner <warner@lothar.com>"**20110221061544
8494 Ignore-this: 799d4de19933f2309b3c0c19a63bb888
8495]
8496[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
8497david-sarah@jacaranda.org**20110221015817
8498 Ignore-this: 51d181698f8c20d3aca58b057e9c475a
8499]
8500[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
8501david-sarah@jacaranda.org**20110221020125
8502 Ignore-this: b0744ed58f161bf188e037bad077fc48
8503]
8504[Refactor StorageFarmBroker handling of servers
8505Brian Warner <warner@lothar.com>**20110221015804
8506 Ignore-this: 842144ed92f5717699b8f580eab32a51
8507 
8508 Pass around IServer instance instead of (peerid, rref) tuple. Replace
8509 "descriptor" with "server". Other replacements:
8510 
8511  get_all_servers -> get_connected_servers/get_known_servers
8512  get_servers_for_index -> get_servers_for_psi (now returns IServers)
8513 
8514 This change still needs to be pushed further down: lots of code is now
8515 getting the IServer and then distributing (peerid, rref) internally.
8516 Instead, it ought to distribute the IServer internally and delay
8517 extracting a serverid or rref until the last moment.
8518 
8519 no_network.py was updated to retain parallelism.
8520]
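The calling convention this refactoring works toward can be sketched with stand-in classes. Everything below is illustrative rather than Tahoe-LAFS code: FakeServer, FakeBroker, and get_rref are hypothetical names, while get_serverid and get_connected_servers mirror the methods named in the patch description.

# Illustrative, self-contained sketch; not Tahoe-LAFS code.
class FakeServer:
    def __init__(self, serverid, rref):
        self._serverid = serverid
        self._rref = rref
    def get_serverid(self):
        return self._serverid
    def get_rref(self):              # hypothetical accessor, for illustration
        return self._rref

class FakeBroker:
    def __init__(self, servers):
        self._servers = servers
    def get_connected_servers(self):
        return list(self._servers)

def pick_server(broker):
    # new style: hand the IServer-like object around and delay extracting a
    # serverid or rref until the last moment, instead of unpacking
    # (peerid, rref) tuples everywhere
    for server in broker.get_connected_servers():
        return server

server = pick_server(FakeBroker([FakeServer("v0-abc123", object())]))
print(server.get_serverid())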
8521[TAG allmydata-tahoe-1.8.2
8522warner@lothar.com**20110131020101]
8523Patch bundle hash:
852420d54ef0ff4ef19e76dff02e58e35747fde82611