Fri Mar 25 14:35:14 MDT 2011  wilcoxjg@gmail.com
  * storage: new mocking tests of storage server read and write
  There are already tests of read and write functionality in test_storage.py,
  but those tests let the code under test use a real filesystem, whereas these
  tests mock all file system calls.

Fri Jun 24 14:28:50 MDT 2011  wilcoxjg@gmail.com
  * server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
  Sloppy; not for production.

Sat Jun 25 23:27:32 MDT 2011  wilcoxjg@gmail.com
  * a temp patch used as a snapshot

Sat Jun 25 23:32:44 MDT 2011  wilcoxjg@gmail.com
  * snapshot of progress on backend implementation (not suitable for trunk)

Sun Jun 26 10:57:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint patch

Tue Jun 28 14:22:02 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 4

Mon Jul  4 21:46:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 5

Wed Jul  6 13:08:24 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 6

Wed Jul  6 14:08:20 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 7

Wed Jul  6 16:31:26 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 8
  The NullBackend is necessary to test unlimited space in a backend. It is a
  mock-like object.

Wed Jul  6 22:29:42 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 9

Thu Jul  7 11:20:49 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 10

Fri Jul  8 15:39:19 MDT 2011  wilcoxjg@gmail.com
  * jacp 11

Sun Jul 10 13:19:15 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 12: testing correct behavior with regard to incoming and final

Sun Jul 10 13:51:39 MDT 2011  wilcoxjg@gmail.com
  * fix inconsistent naming of storage_index vs storageindex in storage/server.py

Sun Jul 10 16:06:23 MDT 2011  wilcoxjg@gmail.com
  * add comments to clarify what I'm about to do

Mon Jul 11 13:08:49 MDT 2011  wilcoxjg@gmail.com
  * branching back; no longer attempting to mock inside TestServerFSBackend

Mon Jul 11 13:33:57 MDT 2011  wilcoxjg@gmail.com
  * checkpoint 12: TestServerFSBackend no longer mocks the filesystem

Mon Jul 11 13:44:07 MDT 2011  wilcoxjg@gmail.com
  * JACP

Mon Jul 11 15:02:24 MDT 2011  wilcoxjg@gmail.com
  * testing get_incoming

Mon Jul 11 15:14:24 MDT 2011  wilcoxjg@gmail.com
  * ImmutableShareFile does not know its StorageIndex

Mon Jul 11 20:51:57 MDT 2011  wilcoxjg@gmail.com
  * get_incoming correctly reports the 0 share after it has arrived

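The mocking strategy used by the first patch above (patch `__builtin__.open`, `os.listdir`, and friends so that StorageServer never touches a real filesystem) can be sketched in miniature as follows. This is an illustrative sketch, not code from the patches: `read_state` and the in-memory file contents are hypothetical stand-ins, and it is written for modern Python, where the patch target is spelled `builtins.open` rather than `__builtin__.open`.

```python
import io
from unittest import mock

def read_state(path):
    # Stand-in for code under test that reads a small state file from disk.
    with open(path, "rb") as f:
        return f.read()

def fake_open(fname, mode="r", *args, **kwargs):
    # Serve known paths from memory; fail any unexpected path the same way
    # a missing file would, so the code under test exercises its error path.
    if fname == "testdir/lease_checker.history":
        return io.BytesIO(b"history-bytes")
    raise IOError(2, "No such file or directory: %r" % fname)

# While the patch is active, every call to the builtin open() is routed
# through fake_open, so no real filesystem access occurs.
with mock.patch("builtins.open", side_effect=fake_open):
    data = read_state("testdir/lease_checker.history")

assert data == b"history-bytes"
```

The tests in test_server.py scale this same idea up: each mocked call asserts it was invoked with the expected path, then serves canned bytes from memory.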
---|
New patches:

---|
[storage: new mocking tests of storage server read and write
wilcoxjg@gmail.com**20110325203514
 Ignore-this: df65c3c4f061dd1516f88662023fdb41
 There are already tests of read and write functionality in test_storage.py, but those tests let the code under test use a real filesystem, whereas these tests mock all file system calls.
] {
addfile ./src/allmydata/test/test_server.py
hunk ./src/allmydata/test/test_server.py 1
+from twisted.trial import unittest
+
+from StringIO import StringIO
+
+from allmydata.test.common_util import ReallyEqualMixin
+
+import mock
+
+# This is the code that we're going to be testing.
+from allmydata.storage.server import StorageServer
+
+# The following share file contents were generated with
+# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
+# with share data == 'a'.
+share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data
+
+sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+
+class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def test_create_server(self, mockopen):
+        """ This tests whether a server instance can be constructed. """
+
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        # Now begin the test.
+        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+        # You passed!
+
+class TestServer(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('__builtin__.open')
+    def setUp(self, mockopen):
+        def call_open(fname, mode):
+            if fname == 'testdir/bucket_counter.state':
+                raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
+            elif fname == 'testdir/lease_checker.state':
+                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
+            elif fname == 'testdir/lease_checker.history':
+                return StringIO()
+        mockopen.side_effect = call_open
+
+        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether writing a share produces the expected
+        share file contents. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'")
+
+        mocklistdir.side_effect = call_listdir
+
+        class MockFile:
+            def __init__(self):
+                self.buffer = ''
+                self.pos = 0
+            def write(self, instring):
+                begin = self.pos
+                padlen = begin - len(self.buffer)
+                if padlen > 0:
+                    self.buffer += '\x00' * padlen
+                end = self.pos + len(instring)
+                self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
+                self.pos = end
+            def close(self):
+                pass
+            def seek(self, pos):
+                self.pos = pos
+            def read(self, numberbytes):
+                return self.buffer[self.pos:self.pos+numberbytes]
+            def tell(self):
+                return self.pos
+
+        mocktime.return_value = 0
+
+        sharefile = MockFile()
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0')
+            return sharefile
+
+        mockopen.side_effect = call_open
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        def call_listdir(dirname):
+            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
+            return ['0']
+
+        mocklistdir.side_effect = call_listdir
+
+        def call_open(fname, mode):
+            self.failUnlessReallyEqual(fname, sharefname)
+            self.failUnless('r' in mode, mode)
+            self.failUnless('b' in mode, mode)
+
+            return StringIO(share_file_data)
+        mockopen.side_effect = call_open
+
+        datalen = len(share_file_data)
+        def call_getsize(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return datalen
+        mockgetsize.side_effect = call_getsize
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+            return True
+        mockexists.side_effect = call_exists
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 1)
+        b = bs[0]
+        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
+        # If you try to read past the end, you get as much data as is there.
+        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
+        # If you start reading past the end of the file, you get the empty string.
+        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
}
---|
[server.py, test_backends.py, interfaces.py, immutable.py (others?): working patch for implementation of backends plugin
wilcoxjg@gmail.com**20110624202850
 Ignore-this: ca6f34987ee3b0d25cac17c1fc22d50c
 Sloppy; not for production.
] {
move ./src/allmydata/test/test_server.py ./src/allmydata/test/test_backends.py
hunk ./src/allmydata/storage/crawler.py 13
     pass

 class ShareCrawler(service.MultiService):
-    """A ShareCrawler subclass is attached to a StorageServer, and
+    """A subclass of ShareCrawler is attached to a StorageServer, and
     periodically walks all of its shares, processing each one in some
     fashion. This crawl is rate-limited, to reduce the IO burden on the host,
     since large servers can easily have a terabyte of shares, in several
hunk ./src/allmydata/storage/crawler.py 31
     We assume that the normal upload/download/get_buckets traffic of a tahoe
     grid will cause the prefixdir contents to be mostly cached in the kernel,
     or that the number of buckets in each prefixdir will be small enough to
-    load quickly. A 1TB allmydata.com server was measured to have 2.56M
+    load quickly. A 1TB allmydata.com server was measured to have 2.56 * 10^6
     buckets, spread into the 1024 prefixdirs, with about 2500 buckets per
     prefix. On this server, each prefixdir took 130ms-200ms to list the first
     time, and 17ms to list the second time.
hunk ./src/allmydata/storage/crawler.py 68
     cpu_slice = 1.0 # use up to 1.0 seconds before yielding
     minimum_cycle_time = 300 # don't run a cycle faster than this

-    def __init__(self, server, statefile, allowed_cpu_percentage=None):
+    def __init__(self, backend, statefile, allowed_cpu_percentage=None):
         service.MultiService.__init__(self)
         if allowed_cpu_percentage is not None:
             self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
-        self.server = server
-        self.sharedir = server.sharedir
-        self.statefile = statefile
+        self.backend = backend
         self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
                          for i in range(2**10)]
         self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 446

     minimum_cycle_time = 60*60 # we don't need this more than once an hour

-    def __init__(self, server, statefile, num_sample_prefixes=1):
-        ShareCrawler.__init__(self, server, statefile)
+    def __init__(self, statefile, num_sample_prefixes=1):
+        ShareCrawler.__init__(self, statefile)
         self.num_sample_prefixes = num_sample_prefixes

     def add_initial_state(self):
hunk ./src/allmydata/storage/expirer.py 15
     removed.

     I collect statistics on the leases and make these available to a web
-    status page, including::
+    status page, including:

     Space recovered during this cycle-so-far:
       actual (only if expiration_enabled=True):
hunk ./src/allmydata/storage/expirer.py 51
     slow_start = 360 # wait 6 minutes after startup
     minimum_cycle_time = 12*60*60 # not more than twice per day

-    def __init__(self, server, statefile, historyfile,
+    def __init__(self, statefile, historyfile,
                  expiration_enabled, mode,
                  override_lease_duration, # used if expiration_mode=="age"
                  cutoff_date, # used if expiration_mode=="cutoff-date"
hunk ./src/allmydata/storage/expirer.py 71
         else:
             raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
         self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, server, statefile)
+        ShareCrawler.__init__(self, statefile)

     def add_initial_state(self):
         # we fill ["cycle-to-date"] here (even though they will be reset in
---|
hunk ./src/allmydata/storage/immutable.py 44
     sharetype = "immutable"

     def __init__(self, filename, max_size=None, create=False):
-        """ If max_size is not None then I won't allow more than max_size to be written to me. If create=True and max_size must not be None. """
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
         precondition((max_size is not None) or (not create), max_size, create)
         self.home = filename
         self._max_size = max_size
hunk ./src/allmydata/storage/immutable.py 87

     def read_share_data(self, offset, length):
         precondition(offset >= 0)
-        # reads beyond the end of the data are truncated. Reads that start
-        # beyond the end of the data return an empty string. I wonder why
-        # Python doesn't do the following computation for me?
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
         seekpos = self._data_offset+offset
         fsize = os.path.getsize(self.home)
         actuallength = max(0, min(length, fsize-seekpos))
hunk ./src/allmydata/storage/immutable.py 198
             space_freed += os.stat(self.home)[stat.ST_SIZE]
             self.unlink()
         return space_freed
+class NullBucketWriter(Referenceable):
+    implements(RIBucketWriter)

hunk ./src/allmydata/storage/immutable.py 201
+    def remote_write(self, offset, data):
+        return

 class BucketWriter(Referenceable):
     implements(RIBucketWriter)
---|
hunk ./src/allmydata/storage/server.py 7
 from twisted.application import service

 from zope.interface import implements
-from allmydata.interfaces import RIStorageServer, IStatsProducer
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
 from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
hunk ./src/allmydata/storage/server.py 16
 from allmydata.storage.lease import LeaseInfo
 from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
      create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, BucketWriter, BucketReader
+from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
 from allmydata.storage.crawler import BucketCountingCrawler
 from allmydata.storage.expirer import LeaseCheckingCrawler
hunk ./src/allmydata/storage/server.py 20
+from zope.interface import implements
+
+# A Backend is a MultiService so that its server's crawlers (if the server has any) can
+# be started and stopped.
+class Backend(service.MultiService):
+    implements(IStatsProducer)
+    def __init__(self):
+        service.MultiService.__init__(self)
+
+    def get_bucket_shares(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def get_share(self):
+        """XXX"""
+        raise NotImplementedError
+
+    def make_bucket_writer(self):
+        """XXX"""
+        raise NotImplementedError
+
+class NullBackend(Backend):
+    def __init__(self):
+        Backend.__init__(self)
+
+    def get_available_space(self):
+        return None
+
+    def get_bucket_shares(self, storage_index):
+        return set()
+
+    def get_share(self, storage_index, sharenum):
+        return None
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        return NullBucketWriter()
+
+class FSBackend(Backend):
+    def __init__(self, storedir, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checker()
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefile = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = BucketCountingCrawler(statefile)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checker(self):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
+                                                  expiration_enabled, expiration_mode,
+                                                  expiration_override_lease_duration,
+                                                  expiration_cutoff_date,
+                                                  expiration_sharetypes)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_bucket_shares(self, storage_index):
+        """Return a list of (shnum, pathname) tuples for files that hold
+        shares for this storage_index. In each tuple, 'shnum' will always be
+        the integer form of the last component of 'pathname'."""
+        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(storagedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(storagedir, f)
+                    yield (int(f), filename)
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
 # storage/
 # storage/shares/incoming
 #   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
---|
470 | hunk ./src/allmydata/storage/server.py 143 |
---|
471 | name = 'storage' |
---|
472 | LeaseCheckerClass = LeaseCheckingCrawler |
---|
473 | |
---|
474 | - def __init__(self, storedir, nodeid, reserved_space=0, |
---|
475 | - discard_storage=False, readonly_storage=False, |
---|
476 | + def __init__(self, nodeid, backend, reserved_space=0, |
---|
477 | + readonly_storage=False, |
---|
478 | stats_provider=None, |
---|
479 | expiration_enabled=False, |
---|
480 | expiration_mode="age", |
---|
481 | hunk ./src/allmydata/storage/server.py 155 |
---|
482 | assert isinstance(nodeid, str) |
---|
483 | assert len(nodeid) == 20 |
---|
484 | self.my_nodeid = nodeid |
---|
485 | - self.storedir = storedir |
---|
486 | - sharedir = os.path.join(storedir, "shares") |
---|
487 | - fileutil.make_dirs(sharedir) |
---|
488 | - self.sharedir = sharedir |
---|
489 | - # we don't actually create the corruption-advisory dir until necessary |
---|
490 | - self.corruption_advisory_dir = os.path.join(storedir, |
---|
491 | - "corruption-advisories") |
---|
492 | - self.reserved_space = int(reserved_space) |
---|
493 | - self.no_storage = discard_storage |
---|
494 | - self.readonly_storage = readonly_storage |
---|
495 | self.stats_provider = stats_provider |
---|
496 | if self.stats_provider: |
---|
497 | self.stats_provider.register_producer(self) |
---|
498 | hunk ./src/allmydata/storage/server.py 158 |
---|
499 | - self.incomingdir = os.path.join(sharedir, 'incoming') |
---|
500 | - self._clean_incomplete() |
---|
501 | - fileutil.make_dirs(self.incomingdir) |
---|
502 | self._active_writers = weakref.WeakKeyDictionary() |
---|
503 | hunk ./src/allmydata/storage/server.py 159 |
---|
504 | + self.backend = backend |
---|
505 | + self.backend.setServiceParent(self) |
---|
506 | log.msg("StorageServer created", facility="tahoe.storage") |
---|
507 | |
---|
508 | hunk ./src/allmydata/storage/server.py 163 |
---|
509 | - if reserved_space: |
---|
510 | - if self.get_available_space() is None: |
---|
511 | - log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored", |
---|
512 | - umin="0wZ27w", level=log.UNUSUAL) |
---|
513 | - |
---|
514 | self.latencies = {"allocate": [], # immutable |
---|
515 | "write": [], |
---|
516 | "close": [], |
---|
517 | hunk ./src/allmydata/storage/server.py 174 |
---|
518 | "renew": [], |
---|
519 | "cancel": [], |
---|
520 | } |
---|
521 | - self.add_bucket_counter() |
---|
522 | - |
---|
523 | - statefile = os.path.join(self.storedir, "lease_checker.state") |
---|
524 | - historyfile = os.path.join(self.storedir, "lease_checker.history") |
---|
525 | - klass = self.LeaseCheckerClass |
---|
526 | - self.lease_checker = klass(self, statefile, historyfile, |
---|
527 | - expiration_enabled, expiration_mode, |
---|
528 | - expiration_override_lease_duration, |
---|
529 | - expiration_cutoff_date, |
---|
530 | - expiration_sharetypes) |
---|
531 | - self.lease_checker.setServiceParent(self) |
---|
532 | |
---|
533 | def __repr__(self): |
---|
534 | return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),) |
---|
535 | hunk ./src/allmydata/storage/server.py 178 |
---|
536 | |
---|
537 | - def add_bucket_counter(self): |
---|
538 | - statefile = os.path.join(self.storedir, "bucket_counter.state") |
---|
539 | - self.bucket_counter = BucketCountingCrawler(self, statefile) |
---|
540 | - self.bucket_counter.setServiceParent(self) |
---|
541 | - |
---|
542 | def count(self, name, delta=1): |
---|
543 | if self.stats_provider: |
---|
544 | self.stats_provider.count("storage_server." + name, delta) |
---|
545 | hunk ./src/allmydata/storage/server.py 233 |
---|
546 | kwargs["facility"] = "tahoe.storage" |
---|
547 | return log.msg(*args, **kwargs) |
---|
548 | |
---|
549 | - def _clean_incomplete(self): |
---|
550 | - fileutil.rm_dir(self.incomingdir) |
---|
551 | - |
---|
552 | def get_stats(self): |
---|
553 | # remember: RIStatsProvider requires that our return dict |
---|
554 | # contains numeric values. |
---|
555 | hunk ./src/allmydata/storage/server.py 269 |
---|
556 | stats['storage_server.total_bucket_count'] = bucket_count |
---|
557 | return stats |
---|
558 | |
---|
559 | - def get_available_space(self): |
---|
560 | - """Returns available space for share storage in bytes, or None if no |
---|
561 | - API to get this information is available.""" |
---|
562 | - |
---|
563 | - if self.readonly_storage: |
---|
564 | - return 0 |
---|
565 | - return fileutil.get_available_space(self.storedir, self.reserved_space) |
---|
566 | - |
---|
567 | def allocated_size(self): |
---|
568 | space = 0 |
---|
569 | for bw in self._active_writers: |
---|
570 | hunk ./src/allmydata/storage/server.py 276 |
---|
571 | return space |
---|
572 | |
---|
573 | def remote_get_version(self): |
---|
574 | - remaining_space = self.get_available_space() |
---|
575 | + remaining_space = self.backend.get_available_space() |
---|
576 | if remaining_space is None: |
---|
577 | # We're on a platform that has no API to get disk stats. |
---|
578 | remaining_space = 2**64 |
---|
579 | hunk ./src/allmydata/storage/server.py 301 |
---|
580 | self.count("allocate") |
---|
581 | alreadygot = set() |
---|
582 | bucketwriters = {} # k: shnum, v: BucketWriter |
---|
583 | - si_dir = storage_index_to_dir(storage_index) |
---|
584 | - si_s = si_b2a(storage_index) |
---|
585 | |
---|
586 | hunk ./src/allmydata/storage/server.py 302 |
---|
587 | + si_s = si_b2a(storage_index) |
---|
588 | log.msg("storage: allocate_buckets %s" % si_s) |
---|
589 | |
---|
590 | # in this implementation, the lease information (including secrets) |
---|
591 | hunk ./src/allmydata/storage/server.py 316 |
---|
592 | |
---|
593 | max_space_per_bucket = allocated_size |
---|
594 | |
---|
595 | - remaining_space = self.get_available_space() |
---|
596 | + remaining_space = self.backend.get_available_space() |
---|
597 | limited = remaining_space is not None |
---|
598 | if limited: |
---|
599 | # this is a bit conservative, since some of this allocated_size() |
---|
600 | hunk ./src/allmydata/storage/server.py 329 |
---|
601 | # they asked about: this will save them a lot of work. Add or update |
---|
602 | # leases for all of them: if they want us to hold shares for this |
---|
603 | # file, they'll want us to hold leases for this file. |
---|
604 | - for (shnum, fn) in self._get_bucket_shares(storage_index): |
---|
605 | + for (shnum, fn) in self.backend.get_bucket_shares(storage_index): |
---|
606 | alreadygot.add(shnum) |
---|
607 | sf = ShareFile(fn) |
---|
608 | sf.add_or_renew_lease(lease_info) |
---|
609 | hunk ./src/allmydata/storage/server.py 335 |
---|
610 | |
---|
611 | for shnum in sharenums: |
---|
612 | - incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum) |
---|
-            finalhome = os.path.join(self.sharedir, si_dir, "%d" % shnum)
-            if os.path.exists(finalhome):
+            share = self.backend.get_share(storage_index, shnum)
+
+            if not share:
+                if (not limited) or (remaining_space >= max_space_per_bucket):
+                    # ok! we need to create the new share file.
+                    bw = self.backend.make_bucket_writer(storage_index, shnum,
+                                        max_space_per_bucket, lease_info, canary)
+                    bucketwriters[shnum] = bw
+                    self._active_writers[bw] = 1
+                    if limited:
+                        remaining_space -= max_space_per_bucket
+                else:
+                    # bummer! not enough space to accept this bucket
+                    pass
+
+            elif share.is_complete():
                # great! we already have it. easy.
                pass
hunk ./src/allmydata/storage/server.py 353
-            elif os.path.exists(incominghome):
+            elif not share.is_complete():
                # Note that we don't create BucketWriters for shnums that
                # have a partial share (in incoming/), so if a second upload
                # occurs while the first is still in progress, the second
hunk ./src/allmydata/storage/server.py 359
                # uploader will use different storage servers.
                pass
-            elif (not limited) or (remaining_space >= max_space_per_bucket):
-                # ok! we need to create the new share file.
-                bw = BucketWriter(self, incominghome, finalhome,
-                                  max_space_per_bucket, lease_info, canary)
-                if self.no_storage:
-                    bw.throw_out_all_data = True
-                bucketwriters[shnum] = bw
-                self._active_writers[bw] = 1
-                if limited:
-                    remaining_space -= max_space_per_bucket
-            else:
-                # bummer! not enough space to accept this bucket
-                pass
-
-        if bucketwriters:
-            fileutil.make_dirs(os.path.join(self.sharedir, si_dir))

        self.add_latency("allocate", time.time() - start)
        return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 437
            self.stats_provider.count('storage_server.bytes_added', consumed_size)
        del self._active_writers[bw]

-    def _get_bucket_shares(self, storage_index):
-        """Return a list of (shnum, pathname) tuples for files that hold
-        shares for this storage_index. In each tuple, 'shnum' will always be
-        the integer form of the last component of 'pathname'."""
-        storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
-        try:
-            for f in os.listdir(storagedir):
-                if NUM_RE.match(f):
-                    filename = os.path.join(storagedir, f)
-                    yield (int(f), filename)
-        except OSError:
-            # Commonly caused by there being no buckets at all.
-            pass

    def remote_get_buckets(self, storage_index):
        start = time.time()
hunk ./src/allmydata/storage/server.py 444
        si_s = si_b2a(storage_index)
        log.msg("storage: get_buckets %s" % si_s)
        bucketreaders = {} # k: sharenum, v: BucketReader
-        for shnum, filename in self._get_bucket_shares(storage_index):
+        for shnum, filename in self.backend.get_bucket_shares(storage_index):
            bucketreaders[shnum] = BucketReader(self, filename,
                                                storage_index, shnum)
        self.add_latency("get", time.time() - start)
hunk ./src/allmydata/test/test_backends.py 10
import mock

# This is the code that we're going to be testing.
-from allmydata.storage.server import StorageServer
+from allmydata.storage.server import StorageServer, FSBackend, NullBackend

# The following share file contents was generated with
# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 21
sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'

class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a null backend. The server instance fails the test if it
+        tries to read or write to the file system. """
+
+        # Now begin the test.
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+        # You passed!
+
+    @mock.patch('time.time')
+    @mock.patch('os.mkdir')
    @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 44
-    def test_create_server(self, mockopen):
-        """ This tests whether a server instance can be constructed. """
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_create_server_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+        """ This tests whether a server instance can be constructed
+        with a filesystem backend. To pass the test, it has to use the
+        filesystem in only the prescribed ways. """

        def call_open(fname, mode):
            if fname == 'testdir/bucket_counter.state':
hunk ./src/allmydata/test/test_backends.py 58
                raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
            elif fname == 'testdir/lease_checker.history':
                return StringIO()
+            else:
+                self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
        mockopen.side_effect = call_open

        # Now begin the test.
hunk ./src/allmydata/test/test_backends.py 63
-        s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
+        s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+        self.failIf(mocktime.called)

        # You passed!

hunk ./src/allmydata/test/test_backends.py 73
-class TestServer(unittest.TestCase, ReallyEqualMixin):
+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """ Write a new share. """
+
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
+    @mock.patch('os.path.exists')
+    @mock.patch('os.path.getsize')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
+        """ This tests whether the code correctly finds and reads
+        shares written out by old (Tahoe-LAFS <= v1.8.2)
+        servers. There is a similar test in test_download, but that one
+        is from the perspective of the client and exercises a deeper
+        stack of code. This one is for exercising just the
+        StorageServer object. """
+
+        # Now begin the test.
+        bs = self.s.remote_get_buckets('teststorage_index')
+
+        self.failUnlessEqual(len(bs), 0)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockgetsize.called)
+        self.failIf(mockexists.called)
+
+
+class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
    @mock.patch('__builtin__.open')
    def setUp(self, mockopen):
        def call_open(fname, mode):
hunk ./src/allmydata/test/test_backends.py 126
                return StringIO()
        mockopen.side_effect = call_open

-        self.s = StorageServer('testdir', 'testnodeidxxxxxxxxxx')
-
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))

    @mock.patch('time.time')
    @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 134
    @mock.patch('os.listdir')
    @mock.patch('os.path.isdir')
    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
-        """Handle a report of corruption."""
+        """ Write a new share. """

        def call_listdir(dirname):
            self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a')
hunk ./src/allmydata/test/test_backends.py 173
        mockopen.side_effect = call_open
        # Now begin the test.
        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
-        print bs
        bs[0].remote_write(0, 'a')
        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)

hunk ./src/allmydata/test/test_backends.py 176
-
    @mock.patch('os.path.exists')
    @mock.patch('os.path.getsize')
    @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 218

        self.failUnlessEqual(len(bs), 1)
        b = bs[0]
+        # These should match by definition; the next two cases cover behaviors that are not completely unambiguous.
        self.failUnlessReallyEqual(b.remote_read(0, datalen), share_data)
        # If you try to read past the end you get as much data as is there.
        self.failUnlessReallyEqual(b.remote_read(0, datalen+20), share_data)
hunk ./src/allmydata/test/test_backends.py 224
        # If you start reading past the end of the file you get the empty string.
        self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')
+
+
}
[a temp patch used as a snapshot
wilcoxjg@gmail.com**20110626052732
 Ignore-this: 95f05e314eaec870afa04c76d979aa44
] {
hunk ./docs/configuration.rst 637
  [storage]
  enabled = True
  readonly = True
- sizelimit = 10000000000


  [helper]
hunk ./docs/garbage-collection.rst 16

When a file or directory in the virtual filesystem is no longer referenced,
the space that its shares occupied on each storage server can be freed,
-making room for other shares. Tahoe currently uses a garbage collection
+making room for other shares. Tahoe uses a garbage collection
("GC") mechanism to implement this space-reclamation process. Each share has
one or more "leases", which are managed by clients who want the
file/directory to be retained. The storage server accepts each share for a
hunk ./docs/garbage-collection.rst 34
the `<lease-tradeoffs.svg>`_ diagram to get an idea for the tradeoffs involved.
If lease renewal occurs quickly and with 100% reliability, then any renewal
time that is shorter than the lease duration will suffice, but a larger ratio
-of duration-over-renewal-time will be more robust in the face of occasional
+of lease duration to renewal time will be more robust in the face of occasional
delays or failures.

The current recommended values for a small Tahoe grid are to renew the leases
replace ./docs/garbage-collection.rst [A-Za-z_0-9\-\.] Tahoe Tahoe-LAFS
hunk ./src/allmydata/client.py 260
            sharetypes.append("mutable")
        expiration_sharetypes = tuple(sharetypes)

+        if self.get_config("storage", "backend", "filesystem") == "filesystem":
+            xyz
+            xyz
        ss = StorageServer(storedir, self.nodeid,
                           reserved_space=reserved,
                           discard_storage=discard,
hunk ./src/allmydata/storage/crawler.py 234
        f = open(tmpfile, "wb")
        pickle.dump(self.state, f)
        f.close()
-        fileutil.move_into_place(tmpfile, self.statefile)
+        fileutil.move_into_place(tmpfile, self.statefname)

    def startService(self):
        # arrange things to look like we were just sleeping, so
}
[snapshot of progress on backend implementation (not suitable for trunk)
wilcoxjg@gmail.com**20110626053244
 Ignore-this: 50c764af791c2b99ada8289546806a0a
] {
adddir ./src/allmydata/storage/backends
adddir ./src/allmydata/storage/backends/das
move ./src/allmydata/storage/expirer.py ./src/allmydata/storage/backends/das/expirer.py
adddir ./src/allmydata/storage/backends/null
hunk ./src/allmydata/interfaces.py 270
    store that on disk.
    """

+class IStorageBackend(Interface):
+    """
+    Objects of this kind live on the server side and are used by the
+    storage server object.
+    """
+    def get_available_space(self, reserved_space):
+        """ Returns available space for share storage in bytes, or
+        None if this information is not available or if the available
+        space is unlimited.
+
+        If the backend is configured for read-only mode then this will
+        return 0.
+
+        reserved_space is how many bytes to subtract from the answer, so
+        you can pass how many bytes you would like to leave unused on this
+        filesystem as reserved_space. """
+
+    def get_bucket_shares(self):
+        """XXX"""
+
+    def get_share(self):
+        """XXX"""
+
+    def make_bucket_writer(self):
+        """XXX"""
+
+class IStorageBackendShare(Interface):
+    """
+    This object may contain all of the share data. It is intended
+    for lazy evaluation, such that in many use cases substantially less than
+    all of the share data will be accessed.
+    """
+    def is_complete(self):
+        """
+        Returns the share state, or None if the share does not exist.
+        """
+
class IStorageBucketWriter(Interface):
    """
    Objects of this kind live on the client side.
hunk ./src/allmydata/interfaces.py 2492

class EmptyPathnameComponentError(Exception):
    """The webapi disallows empty pathname components."""
+
+class IShareStore(Interface):
+    pass
+
addfile ./src/allmydata/storage/backends/__init__.py
addfile ./src/allmydata/storage/backends/das/__init__.py
addfile ./src/allmydata/storage/backends/das/core.py
hunk ./src/allmydata/storage/backends/das/core.py 1
+from allmydata.interfaces import IStorageBackend
+from allmydata.storage.backends.base import Backend
+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
+from allmydata.util.assertutil import precondition
+
+import os, re, weakref, struct, time
+
+from foolscap.api import Referenceable
+from twisted.application import service
+
+from zope.interface import implements
+from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
+from allmydata.util import fileutil, idlib, log, time_format
+import allmydata # for __full_version__
+
+from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
+_pyflakes_hush = [si_b2a, si_a2b, storage_index_to_dir] # re-exported
+from allmydata.storage.lease import LeaseInfo
+from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
+     create_mutable_sharefile
+from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader
+from allmydata.storage.crawler import FSBucketCountingCrawler
+from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler
+
+from zope.interface import implements
+
+class DASCore(Backend):
+    implements(IStorageBackend)
+    def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0):
+        Backend.__init__(self)
+
+        self._setup_storage(storedir, readonly, reserved_space)
+        self._setup_corruption_advisory()
+        self._setup_bucket_counter()
+        self._setup_lease_checkerf(expiration_policy)
+
+    def _setup_storage(self, storedir, readonly, reserved_space):
+        self.storedir = storedir
+        self.readonly = readonly
+        self.reserved_space = int(reserved_space)
+        if self.reserved_space:
+            if self.get_available_space() is None:
+                log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
+                        umid="0wZ27w", level=log.UNUSUAL)
+
+        self.sharedir = os.path.join(self.storedir, "shares")
+        fileutil.make_dirs(self.sharedir)
+        self.incomingdir = os.path.join(self.sharedir, 'incoming')
+        self._clean_incomplete()
+
+    def _clean_incomplete(self):
+        fileutil.rm_dir(self.incomingdir)
+        fileutil.make_dirs(self.incomingdir)
+
+    def _setup_corruption_advisory(self):
+        # we don't actually create the corruption-advisory dir until necessary
+        self.corruption_advisory_dir = os.path.join(self.storedir,
+                                                    "corruption-advisories")
+
+    def _setup_bucket_counter(self):
+        statefname = os.path.join(self.storedir, "bucket_counter.state")
+        self.bucket_counter = FSBucketCountingCrawler(statefname)
+        self.bucket_counter.setServiceParent(self)
+
+    def _setup_lease_checkerf(self, expiration_policy):
+        statefile = os.path.join(self.storedir, "lease_checker.state")
+        historyfile = os.path.join(self.storedir, "lease_checker.history")
+        self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy)
+        self.lease_checker.setServiceParent(self)
+
+    def get_available_space(self):
+        if self.readonly:
+            return 0
+        return fileutil.get_available_space(self.storedir, self.reserved_space)
+
+    def get_shares(self, storage_index):
+        """Return a list of the FSBShare objects that correspond to the passed storage_index."""
+        finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
+        try:
+            for f in os.listdir(finalstoragedir):
+                if NUM_RE.match(f):
+                    filename = os.path.join(finalstoragedir, f)
+                    yield FSBShare(filename, int(f))
+        except OSError:
+            # Commonly caused by there being no buckets at all.
+            pass
+
+    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+        immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
+        bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
+        return bw
+
+
+# each share file (in storage/shares/$SI/$SHNUM) contains lease information
+# and share data. The share data is accessed by RIBucketWriter.write and
+# RIBucketReader.read . The lease information is not accessible through these
+# interfaces.
+
+# The share file has the following layout:
+#  0x00: share file version number, four bytes, current version is 1
+#  0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
+#  0x08: number of leases, four bytes big-endian
+#  0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
+#  A+0x0c = B: first lease. Lease format is:
+#   B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
+#   B+0x04: renew secret, 32 bytes (SHA256)
+#   B+0x24: cancel secret, 32 bytes (SHA256)
+#   B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
+#   B+0x48: next lease, or end of record
+
+# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
+# but it is still filled in by storage servers in case the storage server
+# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
+# share file is moved from one storage server to another. The value stored in
+# this field is truncated, so if the actual share data length is >= 2**32,
+# then the value stored in this field will be the actual share data length
+# modulo 2**32.
+
+class ImmutableShare:
+    LEASE_SIZE = struct.calcsize(">L32s32sL")
+    sharetype = "immutable"
+
+    def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+        """ If max_size is not None then I won't allow more than
+        max_size to be written to me. If create=True then max_size
+        must not be None. """
+        precondition((max_size is not None) or (not create), max_size, create)
+        self.shnum = shnum
+        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
+        self._max_size = max_size
+        if create:
+            # touch the file, so later callers will see that we're working on
+            # it. Also construct the metadata.
+            assert not os.path.exists(self.fname)
+            fileutil.make_dirs(os.path.dirname(self.fname))
+            f = open(self.fname, 'wb')
+            # The second field -- the four-byte share data length -- is no
+            # longer used as of Tahoe v1.3.0, but we continue to write it in
+            # there in case someone downgrades a storage server from >=
+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
+            # server to another, etc. We do saturation -- a share data length
+            # larger than 2**32-1 (what can fit into the field) is marked as
+            # the largest length that can fit into the field. That way, even
+            # if this does happen, the old < v1.3.0 server will still allow
+            # clients to read the first part of the share.
+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
+            f.close()
+            self._lease_offset = max_size + 0x0c
+            self._num_leases = 0
+        else:
+            f = open(self.fname, 'rb')
+            filesize = os.path.getsize(self.fname)
+            (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+            f.close()
+            if version != 1:
+                msg = "sharefile %s had version %d but we wanted 1" % \
+                      (self.fname, version)
+                raise UnknownImmutableContainerVersionError(msg)
+            self._num_leases = num_leases
+            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
+        self._data_offset = 0xc
+
+    def unlink(self):
+        os.unlink(self.fname)
+
+    def read_share_data(self, offset, length):
+        precondition(offset >= 0)
+        # Reads beyond the end of the data are truncated. Reads that start
+        # beyond the end of the data return an empty string.
+        seekpos = self._data_offset+offset
+        fsize = os.path.getsize(self.fname)
+        actuallength = max(0, min(length, fsize-seekpos))
+        if actuallength == 0:
+            return ""
+        f = open(self.fname, 'rb')
+        f.seek(seekpos)
+        return f.read(actuallength)
+
+    def write_share_data(self, offset, data):
+        length = len(data)
+        precondition(offset >= 0, offset)
+        if self._max_size is not None and offset+length > self._max_size:
+            raise DataTooLargeError(self._max_size, offset, length)
+        f = open(self.fname, 'rb+')
+        real_offset = self._data_offset+offset
+        f.seek(real_offset)
+        assert f.tell() == real_offset
+        f.write(data)
+        f.close()
+
+    def _write_lease_record(self, f, lease_number, lease_info):
+        offset = self._lease_offset + lease_number * self.LEASE_SIZE
+        f.seek(offset)
+        assert f.tell() == offset
+        f.write(lease_info.to_immutable_data())
+
+    def _read_num_leases(self, f):
+        f.seek(0x08)
+        (num_leases,) = struct.unpack(">L", f.read(4))
+        return num_leases
+
+    def _write_num_leases(self, f, num_leases):
+        f.seek(0x08)
+        f.write(struct.pack(">L", num_leases))
+
+    def _truncate_leases(self, f, num_leases):
+        f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
+
+    def get_leases(self):
+        """Yields a LeaseInfo instance for all leases."""
+        f = open(self.fname, 'rb')
+        (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+        f.seek(self._lease_offset)
+        for i in range(num_leases):
+            data = f.read(self.LEASE_SIZE)
+            if data:
+                yield LeaseInfo().from_immutable_data(data)
+
+    def add_lease(self, lease_info):
+        f = open(self.fname, 'rb+')
+        num_leases = self._read_num_leases(f)
+        self._write_lease_record(f, num_leases, lease_info)
+        self._write_num_leases(f, num_leases+1)
+        f.close()
+
+    def renew_lease(self, renew_secret, new_expire_time):
+        for i,lease in enumerate(self.get_leases()):
+            if constant_time_compare(lease.renew_secret, renew_secret):
+                # yup. See if we need to update the owner time.
+                if new_expire_time > lease.expiration_time:
+                    # yes
+                    lease.expiration_time = new_expire_time
+                    f = open(self.fname, 'rb+')
+                    self._write_lease_record(f, i, lease)
+                    f.close()
+                return
+        raise IndexError("unable to renew non-existent lease")
+
+    def add_or_renew_lease(self, lease_info):
+        try:
+            self.renew_lease(lease_info.renew_secret,
+                             lease_info.expiration_time)
+        except IndexError:
+            self.add_lease(lease_info)
+
+
+    def cancel_lease(self, cancel_secret):
+        """Remove a lease with the given cancel_secret. If the last lease is
+        cancelled, the file will be removed. Return the number of bytes that
+        were freed (by truncating the list of leases, and possibly by
+        deleting the file). Raise IndexError if there was no lease with the
+        given cancel_secret.
+        """
+
+        leases = list(self.get_leases())
+        num_leases_removed = 0
+        for i,lease in enumerate(leases):
+            if constant_time_compare(lease.cancel_secret, cancel_secret):
+                leases[i] = None
+                num_leases_removed += 1
+        if not num_leases_removed:
+            raise IndexError("unable to find matching lease to cancel")
+        if num_leases_removed:
+            # pack and write out the remaining leases. We write these out in
+            # the same order as they were added, so that if we crash while
+            # doing this, we won't lose any non-cancelled leases.
+            leases = [l for l in leases if l] # remove the cancelled leases
+            f = open(self.fname, 'rb+')
+            for i,lease in enumerate(leases):
+                self._write_lease_record(f, i, lease)
+            self._write_num_leases(f, len(leases))
+            self._truncate_leases(f, len(leases))
+            f.close()
+        space_freed = self.LEASE_SIZE * num_leases_removed
+        if not len(leases):
+            space_freed += os.stat(self.fname)[stat.ST_SIZE]
+            self.unlink()
+        return space_freed
hunk ./src/allmydata/storage/backends/das/expirer.py 2
import time, os, pickle, struct
-from allmydata.storage.crawler import ShareCrawler
-from allmydata.storage.shares import get_share_file
+from allmydata.storage.crawler import FSShareCrawler
from allmydata.storage.common import UnknownMutableContainerVersionError, \
     UnknownImmutableContainerVersionError
from twisted.python import log as twlog
hunk ./src/allmydata/storage/backends/das/expirer.py 7

-class LeaseCheckingCrawler(ShareCrawler):
+class FSLeaseCheckingCrawler(FSShareCrawler):
    """I examine the leases on all shares, determining which are still valid
    and which have expired. I can remove the expired leases (if so
    configured), and the share will be deleted when the last lease is
hunk ./src/allmydata/storage/backends/das/expirer.py 50
    slow_start = 360 # wait 6 minutes after startup
    minimum_cycle_time = 12*60*60 # not more than twice per day

-    def __init__(self, statefile, historyfile,
-                 expiration_enabled, mode,
-                 override_lease_duration, # used if expiration_mode=="age"
-                 cutoff_date, # used if expiration_mode=="cutoff-date"
-                 sharetypes):
+    def __init__(self, statefile, historyfile, expiration_policy):
        self.historyfile = historyfile
hunk ./src/allmydata/storage/backends/das/expirer.py 52
-        self.expiration_enabled = expiration_enabled
-        self.mode = mode
+        self.expiration_enabled = expiration_policy['enabled']
+        self.mode = expiration_policy['mode']
        self.override_lease_duration = None
        self.cutoff_date = None
        if self.mode == "age":
hunk ./src/allmydata/storage/backends/das/expirer.py 57
-            assert isinstance(override_lease_duration, (int, type(None)))
-            self.override_lease_duration = override_lease_duration # seconds
+            assert isinstance(expiration_policy['override_lease_duration'], (int, type(None)))
+            self.override_lease_duration = expiration_policy['override_lease_duration'] # seconds
        elif self.mode == "cutoff-date":
hunk ./src/allmydata/storage/backends/das/expirer.py 60
-            assert isinstance(cutoff_date, int) # seconds-since-epoch
+            assert isinstance(expiration_policy['cutoff_date'], int) # seconds-since-epoch
            assert cutoff_date is not None
hunk ./src/allmydata/storage/backends/das/expirer.py 62
-            self.cutoff_date = cutoff_date
+            self.cutoff_date = expiration_policy['cutoff_date']
        else:
hunk ./src/allmydata/storage/backends/das/expirer.py 64
-            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % mode)
-        self.sharetypes_to_expire = sharetypes
-        ShareCrawler.__init__(self, statefile)
+            raise ValueError("GC mode '%s' must be 'age' or 'cutoff-date'" % expiration_policy['mode'])
+        self.sharetypes_to_expire = expiration_policy['sharetypes']
+        FSShareCrawler.__init__(self, statefile)

    def add_initial_state(self):
        # we fill ["cycle-to-date"] here (even though they will be reset in
hunk ./src/allmydata/storage/backends/das/expirer.py 156

    def process_share(self, sharefilename):
        # first, find out what kind of a share it is
-        sf = get_share_file(sharefilename)
+        f = open(sharefilename, "rb")
+        prefix = f.read(32)
---|
1310 | + f.close() |
---|
1311 | + if prefix == MutableShareFile.MAGIC: |
---|
1312 | + sf = MutableShareFile(sharefilename) |
---|
1313 | + else: |
---|
1314 | + # otherwise assume it's immutable |
---|
1315 | + sf = FSBShare(sharefilename) |
---|
1316 | sharetype = sf.sharetype |
---|
1317 | now = time.time() |
---|
1318 | s = self.stat(sharefilename) |
---|
addfile ./src/allmydata/storage/backends/null/__init__.py
addfile ./src/allmydata/storage/backends/null/core.py
hunk ./src/allmydata/storage/backends/null/core.py 1
+from allmydata.storage.backends.base import Backend
+
+class NullCore(Backend):
+ def __init__(self):
+ Backend.__init__(self)
+
+ def get_available_space(self):
+ return None
+
+ def get_shares(self, storage_index):
+ return set()
+
+ def get_share(self, storage_index, sharenum):
+ return None
+
+ def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
+ return NullBucketWriter()
hunk ./src/allmydata/storage/crawler.py 12
class TimeSliceExceeded(Exception):
pass

-class ShareCrawler(service.MultiService):
+class FSShareCrawler(service.MultiService):
"""A subcless of ShareCrawler is attached to a StorageServer, and
periodically walks all of its shares, processing each one in some
fashion. This crawl is rate-limited, to reduce the IO burden on the host,
hunk ./src/allmydata/storage/crawler.py 68
cpu_slice = 1.0 # use up to 1.0 seconds before yielding
minimum_cycle_time = 300 # don't run a cycle faster than this

- def __init__(self, backend, statefile, allowed_cpu_percentage=None):
+ def __init__(self, statefname, allowed_cpu_percentage=None):
service.MultiService.__init__(self)
if allowed_cpu_percentage is not None:
self.allowed_cpu_percentage = allowed_cpu_percentage
hunk ./src/allmydata/storage/crawler.py 72
- self.backend = backend
+ self.statefname = statefname
self.prefixes = [si_b2a(struct.pack(">H", i << (16-10)))[:2]
for i in range(2**10)]
self.prefixes.sort()
hunk ./src/allmydata/storage/crawler.py 192
# of the last bucket to be processed, or
# None if we are sleeping between cycles
try:
- f = open(self.statefile, "rb")
+ f = open(self.statefname, "rb")
state = pickle.load(f)
f.close()
except EnvironmentError:
hunk ./src/allmydata/storage/crawler.py 230
else:
last_complete_prefix = self.prefixes[lcpi]
self.state["last-complete-prefix"] = last_complete_prefix
- tmpfile = self.statefile + ".tmp"
+ tmpfile = self.statefname + ".tmp"
f = open(tmpfile, "wb")
pickle.dump(self.state, f)
f.close()
hunk ./src/allmydata/storage/crawler.py 433
pass


-class BucketCountingCrawler(ShareCrawler):
+class FSBucketCountingCrawler(FSShareCrawler):
"""I keep track of how many buckets are being managed by this server.
This is equivalent to the number of distributed files and directories for
which I am providing storage. The actual number of files+directories in
hunk ./src/allmydata/storage/crawler.py 446

minimum_cycle_time = 60*60 # we don't need this more than once an hour

- def __init__(self, statefile, num_sample_prefixes=1):
- ShareCrawler.__init__(self, statefile)
+ def __init__(self, statefname, num_sample_prefixes=1):
+ FSShareCrawler.__init__(self, statefname)
self.num_sample_prefixes = num_sample_prefixes

def add_initial_state(self):
hunk ./src/allmydata/storage/immutable.py 14
from allmydata.storage.common import UnknownImmutableContainerVersionError, \
DataTooLargeError

-# each share file (in storage/shares/$SI/$SHNUM) contains lease information
-# and share data. The share data is accessed by RIBucketWriter.write and
-# RIBucketReader.read . The lease information is not accessible through these
-# interfaces.
-
-# The share file has the following layout:
-# 0x00: share file version number, four bytes, current version is 1
-# 0x04: share data length, four bytes big-endian = A # See Footnote 1 below.
-# 0x08: number of leases, four bytes big-endian
-# 0x0c: beginning of share data (see immutable.layout.WriteBucketProxy)
-# A+0x0c = B: first lease. Lease format is:
-# B+0x00: owner number, 4 bytes big-endian, 0 is reserved for no-owner
-# B+0x04: renew secret, 32 bytes (SHA256)
-# B+0x24: cancel secret, 32 bytes (SHA256)
-# B+0x44: expiration time, 4 bytes big-endian seconds-since-epoch
-# B+0x48: next lease, or end of record
-
-# Footnote 1: as of Tahoe v1.3.0 this field is not used by storage servers,
-# but it is still filled in by storage servers in case the storage server
-# software gets downgraded from >= Tahoe v1.3.0 to < Tahoe v1.3.0, or the
-# share file is moved from one storage server to another. The value stored in
-# this field is truncated, so if the actual share data length is >= 2**32,
-# then the value stored in this field will be the actual share data length
-# modulo 2**32.
-
-class ShareFile:
- LEASE_SIZE = struct.calcsize(">L32s32sL")
- sharetype = "immutable"
-
- def __init__(self, filename, max_size=None, create=False):
- """ If max_size is not None then I won't allow more than
- max_size to be written to me. If create=True then max_size
- must not be None. """
- precondition((max_size is not None) or (not create), max_size, create)
- self.home = filename
- self._max_size = max_size
- if create:
- # touch the file, so later callers will see that we're working on
- # it. Also construct the metadata.
- assert not os.path.exists(self.home)
- fileutil.make_dirs(os.path.dirname(self.home))
- f = open(self.home, 'wb')
- # The second field -- the four-byte share data length -- is no
- # longer used as of Tahoe v1.3.0, but we continue to write it in
- # there in case someone downgrades a storage server from >=
- # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
- # server to another, etc. We do saturation -- a share data length
- # larger than 2**32-1 (what can fit into the field) is marked as
- # the largest length that can fit into the field. That way, even
- # if this does happen, the old < v1.3.0 server will still allow
- # clients to read the first part of the share.
- f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
- f.close()
- self._lease_offset = max_size + 0x0c
- self._num_leases = 0
- else:
- f = open(self.home, 'rb')
- filesize = os.path.getsize(self.home)
- (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
- f.close()
- if version != 1:
- msg = "sharefile %s had version %d but we wanted 1" % \
- (filename, version)
- raise UnknownImmutableContainerVersionError(msg)
- self._num_leases = num_leases
- self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
- self._data_offset = 0xc
-
- def unlink(self):
- os.unlink(self.home)
-
- def read_share_data(self, offset, length):
- precondition(offset >= 0)
- # Reads beyond the end of the data are truncated. Reads that start
- # beyond the end of the data return an empty string.
- seekpos = self._data_offset+offset
- fsize = os.path.getsize(self.home)
- actuallength = max(0, min(length, fsize-seekpos))
- if actuallength == 0:
- return ""
- f = open(self.home, 'rb')
- f.seek(seekpos)
- return f.read(actuallength)
-
- def write_share_data(self, offset, data):
- length = len(data)
- precondition(offset >= 0, offset)
- if self._max_size is not None and offset+length > self._max_size:
- raise DataTooLargeError(self._max_size, offset, length)
- f = open(self.home, 'rb+')
- real_offset = self._data_offset+offset
- f.seek(real_offset)
- assert f.tell() == real_offset
- f.write(data)
- f.close()
-
- def _write_lease_record(self, f, lease_number, lease_info):
- offset = self._lease_offset + lease_number * self.LEASE_SIZE
- f.seek(offset)
- assert f.tell() == offset
- f.write(lease_info.to_immutable_data())
-
- def _read_num_leases(self, f):
- f.seek(0x08)
- (num_leases,) = struct.unpack(">L", f.read(4))
- return num_leases
-
- def _write_num_leases(self, f, num_leases):
- f.seek(0x08)
- f.write(struct.pack(">L", num_leases))
-
- def _truncate_leases(self, f, num_leases):
- f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
-
- def get_leases(self):
- """Yields a LeaseInfo instance for all leases."""
- f = open(self.home, 'rb')
- (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
- f.seek(self._lease_offset)
- for i in range(num_leases):
- data = f.read(self.LEASE_SIZE)
- if data:
- yield LeaseInfo().from_immutable_data(data)
-
- def add_lease(self, lease_info):
- f = open(self.home, 'rb+')
- num_leases = self._read_num_leases(f)
- self._write_lease_record(f, num_leases, lease_info)
- self._write_num_leases(f, num_leases+1)
- f.close()
-
- def renew_lease(self, renew_secret, new_expire_time):
- for i,lease in enumerate(self.get_leases()):
- if constant_time_compare(lease.renew_secret, renew_secret):
- # yup. See if we need to update the owner time.
- if new_expire_time > lease.expiration_time:
- # yes
- lease.expiration_time = new_expire_time
- f = open(self.home, 'rb+')
- self._write_lease_record(f, i, lease)
- f.close()
- return
- raise IndexError("unable to renew non-existent lease")
-
- def add_or_renew_lease(self, lease_info):
- try:
- self.renew_lease(lease_info.renew_secret,
- lease_info.expiration_time)
- except IndexError:
- self.add_lease(lease_info)
-
-
- def cancel_lease(self, cancel_secret):
- """Remove a lease with the given cancel_secret. If the last lease is
- cancelled, the file will be removed. Return the number of bytes that
- were freed (by truncating the list of leases, and possibly by
- deleting the file. Raise IndexError if there was no lease with the
- given cancel_secret.
- """
-
- leases = list(self.get_leases())
- num_leases_removed = 0
- for i,lease in enumerate(leases):
- if constant_time_compare(lease.cancel_secret, cancel_secret):
- leases[i] = None
- num_leases_removed += 1
- if not num_leases_removed:
- raise IndexError("unable to find matching lease to cancel")
- if num_leases_removed:
- # pack and write out the remaining leases. We write these out in
- # the same order as they were added, so that if we crash while
- # doing this, we won't lose any non-cancelled leases.
- leases = [l for l in leases if l] # remove the cancelled leases
- f = open(self.home, 'rb+')
- for i,lease in enumerate(leases):
- self._write_lease_record(f, i, lease)
- self._write_num_leases(f, len(leases))
- self._truncate_leases(f, len(leases))
- f.close()
- space_freed = self.LEASE_SIZE * num_leases_removed
- if not len(leases):
- space_freed += os.stat(self.home)[stat.ST_SIZE]
- self.unlink()
- return space_freed
-class NullBucketWriter(Referenceable):
- implements(RIBucketWriter)
-
- def remote_write(self, offset, data):
- return
-
class BucketWriter(Referenceable):
implements(RIBucketWriter)

hunk ./src/allmydata/storage/immutable.py 17
- def __init__(self, ss, incominghome, finalhome, max_size, lease_info, canary):
+ def __init__(self, ss, immutableshare, max_size, lease_info, canary):
self.ss = ss
hunk ./src/allmydata/storage/immutable.py 19
- self.incominghome = incominghome
- self.finalhome = finalhome
self._max_size = max_size # don't allow the client to write more than this
self._canary = canary
self._disconnect_marker = canary.notifyOnDisconnect(self._disconnected)
hunk ./src/allmydata/storage/immutable.py 24
self.closed = False
self.throw_out_all_data = False
- self._sharefile = ShareFile(incominghome, create=True, max_size=max_size)
+ self._sharefile = immutableshare
# also, add our lease to the file now, so that other ones can be
# added by simultaneous uploaders
self._sharefile.add_lease(lease_info)
hunk ./src/allmydata/storage/server.py 16
from allmydata.storage.lease import LeaseInfo
from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
create_mutable_sharefile
-from allmydata.storage.immutable import ShareFile, NullBucketWriter, BucketWriter, BucketReader
-from allmydata.storage.crawler import BucketCountingCrawler
-from allmydata.storage.expirer import LeaseCheckingCrawler

from zope.interface import implements

hunk ./src/allmydata/storage/server.py 19
-# A Backend is a MultiService so that its server's crawlers (if the server has any) can
-# be started and stopped.
-class Backend(service.MultiService):
- implements(IStatsProducer)
- def __init__(self):
- service.MultiService.__init__(self)
-
- def get_bucket_shares(self):
- """XXX"""
- raise NotImplementedError
-
- def get_share(self):
- """XXX"""
- raise NotImplementedError
-
- def make_bucket_writer(self):
- """XXX"""
- raise NotImplementedError
-
-class NullBackend(Backend):
- def __init__(self):
- Backend.__init__(self)
-
- def get_available_space(self):
- return None
-
- def get_bucket_shares(self, storage_index):
- return set()
-
- def get_share(self, storage_index, sharenum):
- return None
-
- def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
- return NullBucketWriter()
-
-class FSBackend(Backend):
- def __init__(self, storedir, readonly=False, reserved_space=0):
- Backend.__init__(self)
-
- self._setup_storage(storedir, readonly, reserved_space)
- self._setup_corruption_advisory()
- self._setup_bucket_counter()
- self._setup_lease_checkerf()
-
- def _setup_storage(self, storedir, readonly, reserved_space):
- self.storedir = storedir
- self.readonly = readonly
- self.reserved_space = int(reserved_space)
- if self.reserved_space:
- if self.get_available_space() is None:
- log.msg("warning: [storage]reserved_space= is set, but this platform does not support an API to get disk statistics (statvfs(2) or GetDiskFreeSpaceEx), so this reservation cannot be honored",
- umid="0wZ27w", level=log.UNUSUAL)
-
- self.sharedir = os.path.join(self.storedir, "shares")
- fileutil.make_dirs(self.sharedir)
- self.incomingdir = os.path.join(self.sharedir, 'incoming')
- self._clean_incomplete()
-
- def _clean_incomplete(self):
- fileutil.rm_dir(self.incomingdir)
- fileutil.make_dirs(self.incomingdir)
-
- def _setup_corruption_advisory(self):
- # we don't actually create the corruption-advisory dir until necessary
- self.corruption_advisory_dir = os.path.join(self.storedir,
- "corruption-advisories")
-
- def _setup_bucket_counter(self):
- statefile = os.path.join(self.storedir, "bucket_counter.state")
- self.bucket_counter = BucketCountingCrawler(statefile)
- self.bucket_counter.setServiceParent(self)
-
- def _setup_lease_checkerf(self):
- statefile = os.path.join(self.storedir, "lease_checker.state")
- historyfile = os.path.join(self.storedir, "lease_checker.history")
- self.lease_checker = LeaseCheckingCrawler(statefile, historyfile,
- expiration_enabled, expiration_mode,
- expiration_override_lease_duration,
- expiration_cutoff_date,
- expiration_sharetypes)
- self.lease_checker.setServiceParent(self)
-
- def get_available_space(self):
- if self.readonly:
- return 0
- return fileutil.get_available_space(self.storedir, self.reserved_space)
-
- def get_bucket_shares(self, storage_index):
- """Return a list of (shnum, pathname) tuples for files that hold
- shares for this storage_index. In each tuple, 'shnum' will always be
- the integer form of the last component of 'pathname'."""
- storagedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
- try:
- for f in os.listdir(storagedir):
- if NUM_RE.match(f):
- filename = os.path.join(storagedir, f)
- yield (int(f), filename)
- except OSError:
- # Commonly caused by there being no buckets at all.
- pass
-
# storage/
# storage/shares/incoming
# incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/storage/server.py 32
# $SHARENUM matches this regex:
NUM_RE=re.compile("^[0-9]+$")

-
-
class StorageServer(service.MultiService, Referenceable):
implements(RIStorageServer, IStatsProducer)
name = 'storage'
hunk ./src/allmydata/storage/server.py 35
- LeaseCheckerClass = LeaseCheckingCrawler

def __init__(self, nodeid, backend, reserved_space=0,
readonly_storage=False,
hunk ./src/allmydata/storage/server.py 38
- stats_provider=None,
- expiration_enabled=False,
- expiration_mode="age",
- expiration_override_lease_duration=None,
- expiration_cutoff_date=None,
- expiration_sharetypes=("mutable", "immutable")):
+ stats_provider=None ):
service.MultiService.__init__(self)
assert isinstance(nodeid, str)
assert len(nodeid) == 20
hunk ./src/allmydata/storage/server.py 217
# they asked about: this will save them a lot of work. Add or update
# leases for all of them: if they want us to hold shares for this
# file, they'll want us to hold leases for this file.
- for (shnum, fn) in self.backend.get_bucket_shares(storage_index):
- alreadygot.add(shnum)
- sf = ShareFile(fn)
- sf.add_or_renew_lease(lease_info)
-
- for shnum in sharenums:
- share = self.backend.get_share(storage_index, shnum)
+ for share in self.backend.get_shares(storage_index):
+ alreadygot.add(share.shnum)
+ share.add_or_renew_lease(lease_info)

hunk ./src/allmydata/storage/server.py 221
- if not share:
- if (not limited) or (remaining_space >= max_space_per_bucket):
- # ok! we need to create the new share file.
- bw = self.backend.make_bucket_writer(storage_index, shnum,
- max_space_per_bucket, lease_info, canary)
- bucketwriters[shnum] = bw
- self._active_writers[bw] = 1
- if limited:
- remaining_space -= max_space_per_bucket
- else:
- # bummer! not enough space to accept this bucket
- pass
+ for shnum in (sharenums - alreadygot):
+ if (not limited) or (remaining_space >= max_space_per_bucket):
+ #XXX or should the following line occur in storage server constructor? ok! we need to create the new share file.
+ self.backend.set_storage_server(self)
+ bw = self.backend.make_bucket_writer(storage_index, shnum,
+ max_space_per_bucket, lease_info, canary)
+ bucketwriters[shnum] = bw
+ self._active_writers[bw] = 1
+ if limited:
+ remaining_space -= max_space_per_bucket

hunk ./src/allmydata/storage/server.py 232
- elif share.is_complete():
- # great! we already have it. easy.
- pass
- elif not share.is_complete():
- # Note that we don't create BucketWriters for shnums that
- # have a partial share (in incoming/), so if a second upload
- # occurs while the first is still in progress, the second
- # uploader will use different storage servers.
- pass
+ #XXX We SHOULD DOCUMENT LATER.

self.add_latency("allocate", time.time() - start)
return alreadygot, bucketwriters
hunk ./src/allmydata/storage/server.py 238

def _iter_share_files(self, storage_index):
- for shnum, filename in self._get_bucket_shares(storage_index):
+ for shnum, filename in self._get_shares(storage_index):
f = open(filename, 'rb')
header = f.read(32)
f.close()
hunk ./src/allmydata/storage/server.py 318
si_s = si_b2a(storage_index)
log.msg("storage: get_buckets %s" % si_s)
bucketreaders = {} # k: sharenum, v: BucketReader
- for shnum, filename in self.backend.get_bucket_shares(storage_index):
+ for shnum, filename in self.backend.get_shares(storage_index):
bucketreaders[shnum] = BucketReader(self, filename,
storage_index, shnum)
self.add_latency("get", time.time() - start)
hunk ./src/allmydata/storage/server.py 334
# since all shares get the same lease data, we just grab the leases
# from the first share
try:
- shnum, filename = self._get_bucket_shares(storage_index).next()
+ shnum, filename = self._get_shares(storage_index).next()
sf = ShareFile(filename)
return sf.get_leases()
except StopIteration:
hunk ./src/allmydata/storage/shares.py 1
-#! /usr/bin/python
-
-from allmydata.storage.mutable import MutableShareFile
-from allmydata.storage.immutable import ShareFile
-
-def get_share_file(filename):
- f = open(filename, "rb")
- prefix = f.read(32)
- f.close()
- if prefix == MutableShareFile.MAGIC:
- return MutableShareFile(filename)
- # otherwise assume it's immutable
- return ShareFile(filename)
-
rmfile ./src/allmydata/storage/shares.py
hunk ./src/allmydata/test/common_util.py 20

def flip_one_bit(s, offset=0, size=None):
""" flip one random bit of the string s, in a byte greater than or equal to offset and less
- than offset+size. """
+ than offset+size. Return the new string. """
if size is None:
size=len(s)-offset
i = randrange(offset, offset+size)
hunk ./src/allmydata/test/test_backends.py 7

from allmydata.test.common_util import ReallyEqualMixin

-import mock
+import mock, os

# This is the code that we're going to be testing.
hunk ./src/allmydata/test/test_backends.py 10
-from allmydata.storage.server import StorageServer, FSBackend, NullBackend
+from allmydata.storage.server import StorageServer
+
+from allmydata.storage.backends.das.core import DASCore
+from allmydata.storage.backends.null.core import NullCore
+

# The following share file contents was generated with
# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
hunk ./src/allmydata/test/test_backends.py 22
share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data

-sharefname = 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a/0'
+tempdir = 'teststoredir'
+sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharefname = os.path.join(sharedirname, '0')

class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
@mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 58
filesystem in only the prescribed ways. """

def call_open(fname, mode):
- if fname == 'testdir/bucket_counter.state':
- raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
- elif fname == 'testdir/lease_checker.state':
- raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
- elif fname == 'testdir/lease_checker.history':
+ if fname == os.path.join(tempdir,'bucket_counter.state'):
+ raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+ elif fname == os.path.join(tempdir, 'lease_checker.state'):
+ raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+ elif fname == os.path.join(tempdir, 'lease_checker.history'):
return StringIO()
else:
self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
hunk ./src/allmydata/test/test_backends.py 124
@mock.patch('__builtin__.open')
def setUp(self, mockopen):
def call_open(fname, mode):
- if fname == 'testdir/bucket_counter.state':
- raise IOError(2, "No such file or directory: 'testdir/bucket_counter.state'")
- elif fname == 'testdir/lease_checker.state':
- raise IOError(2, "No such file or directory: 'testdir/lease_checker.state'")
- elif fname == 'testdir/lease_checker.history':
+ if fname == os.path.join(tempdir, 'bucket_counter.state'):
+ raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+ elif fname == os.path.join(tempdir, 'lease_checker.state'):
+ raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+ elif fname == os.path.join(tempdir, 'lease_checker.history'):
return StringIO()
mockopen.side_effect = call_open
hunk ./src/allmydata/test/test_backends.py 131
-
- self.s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+ expiration_policy = {'enabled' : False,
+ 'mode' : 'age',
+ 'override_lease_duration' : None,
+ 'cutoff_date' : None,
+ 'sharetypes' : None}
+ testbackend = DASCore(tempdir, expiration_policy)
+ self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )

@mock.patch('time.time')
@mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 148
---|
1936 | """ Write a new share. """ |
---|
1937 | |
---|
1938 | def call_listdir(dirname): |
---|
1939 | - self.failUnlessReallyEqual(dirname, 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a') |
---|
1940 | - raise OSError(2, "No such file or directory: 'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a'") |
---|
1941 | + self.failUnlessReallyEqual(dirname, sharedirname) |
---|
1942 | + raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a')) |
---|
1943 | |
---|
1944 | mocklistdir.side_effect = call_listdir |
---|
1945 | |
---|
1946 | hunk ./src/allmydata/test/test_backends.py 178 |
---|
1947 | |
---|
1948 | sharefile = MockFile() |
---|
1949 | def call_open(fname, mode): |
---|
1950 | - self.failUnlessReallyEqual(fname, 'testdir/shares/incoming/or/orsxg5dtorxxeylhmvpws3temv4a/0' ) |
---|
1951 | + self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' )) |
---|
1952 | return sharefile |
---|
1953 | |
---|
1954 | mockopen.side_effect = call_open |
---|
1955 | hunk ./src/allmydata/test/test_backends.py 200 |
---|
1956 | StorageServer object. """ |
---|
1957 | |
---|
1958 | def call_listdir(dirname): |
---|
1959 | - self.failUnlessReallyEqual(dirname,'testdir/shares/or/orsxg5dtorxxeylhmvpws3temv4a') |
---|
1960 | + self.failUnlessReallyEqual(dirname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')) |
---|
1961 | return ['0'] |
---|
1962 | |
---|
1963 | mocklistdir.side_effect = call_listdir |
---|
1964 | } |
---|
1965 | [checkpoint patch |
---|
1966 | wilcoxjg@gmail.com**20110626165715 |
---|
1967 | Ignore-this: fbfce2e8a1c1bb92715793b8ad6854d5 |
---|
1968 | ] { |
---|
1969 | hunk ./src/allmydata/storage/backends/das/core.py 21 |
---|
1970 | from allmydata.storage.lease import LeaseInfo |
---|
1971 | from allmydata.storage.mutable import MutableShareFile, EmptyShare, \ |
---|
1972 | create_mutable_sharefile |
---|
1973 | -from allmydata.storage.backends.das.immutable import NullBucketWriter, BucketWriter, BucketReader |
---|
1974 | +from allmydata.storage.immutable import BucketWriter, BucketReader |
---|
1975 | from allmydata.storage.crawler import FSBucketCountingCrawler |
---|
1976 | from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler |
---|
1977 | |
---|
1978 | hunk ./src/allmydata/storage/backends/das/core.py 27 |
---|
1979 | from zope.interface import implements |
---|
1980 | |
---|
1981 | +# $SHARENUM matches this regex: |
---|
1982 | +NUM_RE=re.compile("^[0-9]+$") |
---|
1983 | + |
---|
1984 | class DASCore(Backend): |
---|
1985 | implements(IStorageBackend) |
---|
1986 | def __init__(self, storedir, expiration_policy, readonly=False, reserved_space=0): |
---|
1987 | hunk ./src/allmydata/storage/backends/das/core.py 80 |
---|
1988 | return fileutil.get_available_space(self.storedir, self.reserved_space) |
---|
1989 | |
---|
1990 | def get_shares(self, storage_index): |
---|
1991 | - """Return a list of the FSBShare objects that correspond to the passed storage_index.""" |
---|
1992 | + """Return a list of the ImmutableShare objects that correspond to the passed storage_index.""" |
---|
1993 | finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index)) |
---|
1994 | try: |
---|
1995 | for f in os.listdir(finalstoragedir): |
---|
1996 | hunk ./src/allmydata/storage/backends/das/core.py 86 |
---|
1997 | if NUM_RE.match(f): |
---|
1998 | filename = os.path.join(finalstoragedir, f) |
---|
1999 | - yield FSBShare(filename, int(f)) |
---|
2000 | + yield ImmutableShare(self.sharedir, storage_index, int(f)) |
---|
2001 | except OSError: |
---|
2002 | # Commonly caused by there being no buckets at all. |
---|
2003 | pass |
---|
2004 | hunk ./src/allmydata/storage/backends/das/core.py 95 |
---|
2005 | immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True) |
---|
2006 | bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary) |
---|
2007 | return bw |
---|
2008 | + |
---|
2009 | + def set_storage_server(self, ss): |
---|
2010 | + self.ss = ss |
---|
2011 | |
---|
2012 | |
---|
2013 | # each share file (in storage/shares/$SI/$SHNUM) contains lease information |
---|
2014 | hunk ./src/allmydata/storage/server.py 29 |
---|
2015 | # Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2 |
---|
2016 | # base-32 chars). |
---|
2017 | |
---|
2018 | -# $SHARENUM matches this regex: |
---|
2019 | -NUM_RE=re.compile("^[0-9]+$") |
---|
2020 | |
---|
2021 | class StorageServer(service.MultiService, Referenceable): |
---|
2022 | implements(RIStorageServer, IStatsProducer) |
---|
2023 | } |
---|
[checkpoint4
wilcoxjg@gmail.com**20110628202202
Ignore-this: 9778596c10bb066b58fc211f8c1707b7
] {
hunk ./src/allmydata/storage/backends/das/core.py 96
bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
return bw

+ def make_bucket_reader(self, share):
+ return BucketReader(self.ss, share)
+
def set_storage_server(self, ss):
self.ss = ss

hunk ./src/allmydata/storage/backends/das/core.py 138
must not be None. """
precondition((max_size is not None) or (not create), max_size, create)
self.shnum = shnum
+ self.storage_index = storageindex
self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
self._max_size = max_size
if create:
hunk ./src/allmydata/storage/backends/das/core.py 173
self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
self._data_offset = 0xc

+ def get_shnum(self):
+ return self.shnum
+
def unlink(self):
os.unlink(self.fname)

hunk ./src/allmydata/storage/backends/null/core.py 2
from allmydata.storage.backends.base import Backend
+from allmydata.storage.immutable import BucketWriter, BucketReader

class NullCore(Backend):
def __init__(self):
hunk ./src/allmydata/storage/backends/null/core.py 17
def get_share(self, storage_index, sharenum):
return None

- def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
- return NullBucketWriter()
+ def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
+
+ return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)
+
+ def set_storage_server(self, ss):
+ self.ss = ss
+
+class ImmutableShare:
+ sharetype = "immutable"
+
+ def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+ """ If max_size is not None then I won't allow more than
+ max_size to be written to me. If create=True then max_size
+ must not be None. """
+ precondition((max_size is not None) or (not create), max_size, create)
+ self.shnum = shnum
+ self.storage_index = storageindex
+ self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
+ self._max_size = max_size
+ if create:
+ # touch the file, so later callers will see that we're working on
+ # it. Also construct the metadata.
+ assert not os.path.exists(self.fname)
+ fileutil.make_dirs(os.path.dirname(self.fname))
+ f = open(self.fname, 'wb')
+ # The second field -- the four-byte share data length -- is no
+ # longer used as of Tahoe v1.3.0, but we continue to write it in
+ # there in case someone downgrades a storage server from >=
+ # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
+ # server to another, etc. We do saturation -- a share data length
+ # larger than 2**32-1 (what can fit into the field) is marked as
+ # the largest length that can fit into the field. That way, even
+ # if this does happen, the old < v1.3.0 server will still allow
+ # clients to read the first part of the share.
+ f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
+ f.close()
+ self._lease_offset = max_size + 0x0c
+ self._num_leases = 0
+ else:
+ f = open(self.fname, 'rb')
+ filesize = os.path.getsize(self.fname)
+ (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+ f.close()
+ if version != 1:
+ msg = "sharefile %s had version %d but we wanted 1" % \
+ (self.fname, version)
+ raise UnknownImmutableContainerVersionError(msg)
+ self._num_leases = num_leases
+ self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
+ self._data_offset = 0xc
+
+ def get_shnum(self):
+ return self.shnum
+
+ def unlink(self):
+ os.unlink(self.fname)
+
+ def read_share_data(self, offset, length):
+ precondition(offset >= 0)
+ # Reads beyond the end of the data are truncated. Reads that start
+ # beyond the end of the data return an empty string.
+ seekpos = self._data_offset+offset
+ fsize = os.path.getsize(self.fname)
+ actuallength = max(0, min(length, fsize-seekpos))
+ if actuallength == 0:
+ return ""
+ f = open(self.fname, 'rb')
+ f.seek(seekpos)
+ return f.read(actuallength)
+
+ def write_share_data(self, offset, data):
+ length = len(data)
+ precondition(offset >= 0, offset)
+ if self._max_size is not None and offset+length > self._max_size:
+ raise DataTooLargeError(self._max_size, offset, length)
+ f = open(self.fname, 'rb+')
+ real_offset = self._data_offset+offset
+ f.seek(real_offset)
+ assert f.tell() == real_offset
+ f.write(data)
+ f.close()
+
+ def _write_lease_record(self, f, lease_number, lease_info):
+ offset = self._lease_offset + lease_number * self.LEASE_SIZE
+ f.seek(offset)
+ assert f.tell() == offset
+ f.write(lease_info.to_immutable_data())
+
+ def _read_num_leases(self, f):
+ f.seek(0x08)
+ (num_leases,) = struct.unpack(">L", f.read(4))
+ return num_leases
+
+ def _write_num_leases(self, f, num_leases):
+ f.seek(0x08)
+ f.write(struct.pack(">L", num_leases))
+
+ def _truncate_leases(self, f, num_leases):
+ f.truncate(self._lease_offset + num_leases * self.LEASE_SIZE)
+
+ def get_leases(self):
+ """Yields a LeaseInfo instance for all leases."""
+ f = open(self.fname, 'rb')
+ (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
+ f.seek(self._lease_offset)
+ for i in range(num_leases):
+ data = f.read(self.LEASE_SIZE)
+ if data:
+ yield LeaseInfo().from_immutable_data(data)
+
+ def add_lease(self, lease_info):
+ f = open(self.fname, 'rb+')
+ num_leases = self._read_num_leases(f)
+ self._write_lease_record(f, num_leases, lease_info)
+ self._write_num_leases(f, num_leases+1)
+ f.close()
+
+ def renew_lease(self, renew_secret, new_expire_time):
+ for i,lease in enumerate(self.get_leases()):
+ if constant_time_compare(lease.renew_secret, renew_secret):
+ # yup. See if we need to update the owner time.
+ if new_expire_time > lease.expiration_time:
+ # yes
+ lease.expiration_time = new_expire_time
+ f = open(self.fname, 'rb+')
+ self._write_lease_record(f, i, lease)
+ f.close()
+ return
+ raise IndexError("unable to renew non-existent lease")
+
+ def add_or_renew_lease(self, lease_info):
+ try:
+ self.renew_lease(lease_info.renew_secret,
+ lease_info.expiration_time)
+ except IndexError:
+ self.add_lease(lease_info)
+
+
+ def cancel_lease(self, cancel_secret):
+ """Remove a lease with the given cancel_secret. If the last lease is
+ cancelled, the file will be removed. Return the number of bytes that
+ were freed (by truncating the list of leases, and possibly by
+ deleting the file. Raise IndexError if there was no lease with the
+ given cancel_secret.
+ """
+
+ leases = list(self.get_leases())
+ num_leases_removed = 0
+ for i,lease in enumerate(leases):
+ if constant_time_compare(lease.cancel_secret, cancel_secret):
+ leases[i] = None
+ num_leases_removed += 1
+ if not num_leases_removed:
+ raise IndexError("unable to find matching lease to cancel")
+ if num_leases_removed:
+ # pack and write out the remaining leases. We write these out in
+ # the same order as they were added, so that if we crash while
+ # doing this, we won't lose any non-cancelled leases.
+ leases = [l for l in leases if l] # remove the cancelled leases
+ f = open(self.fname, 'rb+')
+ for i,lease in enumerate(leases):
+ self._write_lease_record(f, i, lease)
+ self._write_num_leases(f, len(leases))
+ self._truncate_leases(f, len(leases))
+ f.close()
+ space_freed = self.LEASE_SIZE * num_leases_removed
+ if not len(leases):
+ space_freed += os.stat(self.fname)[stat.ST_SIZE]
+ self.unlink()
+ return space_freed
hunk ./src/allmydata/storage/immutable.py 114
class BucketReader(Referenceable):
implements(RIBucketReader)

- def __init__(self, ss, sharefname, storage_index=None, shnum=None):
+ def __init__(self, ss, share):
self.ss = ss
hunk ./src/allmydata/storage/immutable.py 116
- self._share_file = ShareFile(sharefname)
- self.storage_index = storage_index
- self.shnum = shnum
+ self._share_file = share
+ self.storage_index = share.storage_index
+ self.shnum = share.shnum

def __repr__(self):
return "<%s %s %s>" % (self.__class__.__name__,
hunk ./src/allmydata/storage/server.py 316
si_s = si_b2a(storage_index)
log.msg("storage: get_buckets %s" % si_s)
bucketreaders = {} # k: sharenum, v: BucketReader
- for shnum, filename in self.backend.get_shares(storage_index):
- bucketreaders[shnum] = BucketReader(self, filename,
- storage_index, shnum)
+ self.backend.set_storage_server(self)
+ for share in self.backend.get_shares(storage_index):
+ bucketreaders[share.get_shnum()] = self.backend.make_bucket_reader(share)
self.add_latency("get", time.time() - start)
return bucketreaders

hunk ./src/allmydata/test/test_backends.py 25
tempdir = 'teststoredir'
sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
sharefname = os.path.join(sharedirname, '0')
+expiration_policy = {'enabled' : False,
+ 'mode' : 'age',
+ 'override_lease_duration' : None,
+ 'cutoff_date' : None,
+ 'sharetypes' : None}

class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
@mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 43
tries to read or write to the file system. """

# Now begin the test.
- s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+ s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())

self.failIf(mockisdir.called)
self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 74
mockopen.side_effect = call_open

# Now begin the test.
- s = StorageServer('testnodeidxxxxxxxxxx', backend=FSBackend('teststoredir'))
+ s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))

self.failIf(mockisdir.called)
self.failIf(mocklistdir.called)
hunk ./src/allmydata/test/test_backends.py 86

class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
def setUp(self):
- self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullBackend())
+ self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())

@mock.patch('os.mkdir')
@mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 136
elif fname == os.path.join(tempdir, 'lease_checker.history'):
return StringIO()
mockopen.side_effect = call_open
- expiration_policy = {'enabled' : False,
- 'mode' : 'age',
- 'override_lease_duration' : None,
- 'cutoff_date' : None,
- 'sharetypes' : None}
testbackend = DASCore(tempdir, expiration_policy)
self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )

}
[checkpoint5
wilcoxjg@gmail.com**20110705034626
Ignore-this: 255780bd58299b0aa33c027e9d008262
] {
addfile ./src/allmydata/storage/backends/base.py
hunk ./src/allmydata/storage/backends/base.py 1
+from twisted.application import service
+
+class Backend(service.MultiService):
+ def __init__(self):
+ service.MultiService.__init__(self)
hunk ./src/allmydata/storage/backends/null/core.py 19

def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):

+ immutableshare = ImmutableShare()
return BucketWriter(self.ss, immutableshare, max_space_per_bucket, lease_info, canary)

def set_storage_server(self, ss):
hunk ./src/allmydata/storage/backends/null/core.py 28
class ImmutableShare:
sharetype = "immutable"

- def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+ def __init__(self):
""" If max_size is not None then I won't allow more than
max_size to be written to me. If create=True then max_size
must not be None. """
hunk ./src/allmydata/storage/backends/null/core.py 32
- precondition((max_size is not None) or (not create), max_size, create)
- self.shnum = shnum
- self.storage_index = storageindex
- self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
- self._max_size = max_size
- if create:
- # touch the file, so later callers will see that we're working on
- # it. Also construct the metadata.
- assert not os.path.exists(self.fname)
- fileutil.make_dirs(os.path.dirname(self.fname))
- f = open(self.fname, 'wb')
- # The second field -- the four-byte share data length -- is no
- # longer used as of Tahoe v1.3.0, but we continue to write it in
- # there in case someone downgrades a storage server from >=
- # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
- # server to another, etc. We do saturation -- a share data length
- # larger than 2**32-1 (what can fit into the field) is marked as
- # the largest length that can fit into the field. That way, even
- # if this does happen, the old < v1.3.0 server will still allow
- # clients to read the first part of the share.
- f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
- f.close()
- self._lease_offset = max_size + 0x0c
- self._num_leases = 0
- else:
- f = open(self.fname, 'rb')
- filesize = os.path.getsize(self.fname)
- (version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
- f.close()
- if version != 1:
- msg = "sharefile %s had version %d but we wanted 1" % \
- (self.fname, version)
- raise UnknownImmutableContainerVersionError(msg)
- self._num_leases = num_leases
- self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
- self._data_offset = 0xc
+ pass

def get_shnum(self):
return self.shnum
hunk ./src/allmydata/storage/backends/null/core.py 54
return f.read(actuallength)

def write_share_data(self, offset, data):
- length = len(data)
- precondition(offset >= 0, offset)
- if self._max_size is not None and offset+length > self._max_size:
- raise DataTooLargeError(self._max_size, offset, length)
- f = open(self.fname, 'rb+')
- real_offset = self._data_offset+offset
- f.seek(real_offset)
- assert f.tell() == real_offset
- f.write(data)
- f.close()
+ pass

def _write_lease_record(self, f, lease_number, lease_info):
offset = self._lease_offset + lease_number * self.LEASE_SIZE
hunk ./src/allmydata/storage/backends/null/core.py 84
if data:
yield LeaseInfo().from_immutable_data(data)

- def add_lease(self, lease_info):
- f = open(self.fname, 'rb+')
- num_leases = self._read_num_leases(f)
- self._write_lease_record(f, num_leases, lease_info)
- self._write_num_leases(f, num_leases+1)
- f.close()
+ def add_lease(self, lease):
+ pass

def renew_lease(self, renew_secret, new_expire_time):
for i,lease in enumerate(self.get_leases()):
hunk ./src/allmydata/test/test_backends.py 32
'sharetypes' : None}

class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
- @mock.patch('time.time')
- @mock.patch('os.mkdir')
- @mock.patch('__builtin__.open')
- @mock.patch('os.listdir')
- @mock.patch('os.path.isdir')
- def test_create_server_null_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
- """ This tests whether a server instance can be constructed
- with a null backend. The server instance fails the test if it
- tries to read or write to the file system. """
-
- # Now begin the test.
- s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
-
- self.failIf(mockisdir.called)
- self.failIf(mocklistdir.called)
- self.failIf(mockopen.called)
- self.failIf(mockmkdir.called)
-
- # You passed!
-
@mock.patch('time.time')
@mock.patch('os.mkdir')
@mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 53
self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
mockopen.side_effect = call_open

- # Now begin the test.
- s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
-
- self.failIf(mockisdir.called)
- self.failIf(mocklistdir.called)
- self.failIf(mockopen.called)
- self.failIf(mockmkdir.called)
- self.failIf(mocktime.called)
-
- # You passed!
-
-class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
- def setUp(self):
- self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
-
- @mock.patch('os.mkdir')
- @mock.patch('__builtin__.open')
- @mock.patch('os.listdir')
- @mock.patch('os.path.isdir')
- def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
- """ Write a new share. """
-
- # Now begin the test.
- alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
- bs[0].remote_write(0, 'a')
- self.failIf(mockisdir.called)
- self.failIf(mocklistdir.called)
- self.failIf(mockopen.called)
- self.failIf(mockmkdir.called)
+ def call_isdir(fname):
+ if fname == os.path.join(tempdir,'shares'):
+ return True
+ elif fname == os.path.join(tempdir,'shares', 'incoming'):
+ return True
+ else:
+ self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
+ mockisdir.side_effect = call_isdir

hunk ./src/allmydata/test/test_backends.py 62
- @mock.patch('os.path.exists')
- @mock.patch('os.path.getsize')
- @mock.patch('__builtin__.open')
- @mock.patch('os.listdir')
- def test_read_share(self, mocklistdir, mockopen, mockgetsize, mockexists):
- """ This tests whether the code correctly finds and reads
- shares written out by old (Tahoe-LAFS <= v1.8.2)
- servers. There is a similar test in test_download, but that one
- is from the perspective of the client and exercises a deeper
- stack of code. This one is for exercising just the
- StorageServer object. """
+ def call_mkdir(fname, mode):
+ """XXX something is calling mkdir teststoredir and teststoredir/shares twice... this is odd!"""
+ self.failUnlessEqual(0777, mode)
+ if fname == tempdir:
+ return None
+ elif fname == os.path.join(tempdir,'shares'):
+ return None
+ elif fname == os.path.join(tempdir,'shares', 'incoming'):
+ return None
+ else:
+ self.fail("Server with FS backend tried to mkdir '%s'" % (fname,))
+ mockmkdir.side_effect = call_mkdir

# Now begin the test.
hunk ./src/allmydata/test/test_backends.py 76
- bs = self.s.remote_get_buckets('teststorage_index')
+ s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))

hunk ./src/allmydata/test/test_backends.py 78
- self.failUnlessEqual(len(bs), 0)
- self.failIf(mocklistdir.called)
- self.failIf(mockopen.called)
- self.failIf(mockgetsize.called)
- self.failIf(mockexists.called)
+ self.failIf(mocklistdir.called, mocklistdir.call_args_list)


class TestServerFSBackend(unittest.TestCase, ReallyEqualMixin):
hunk ./src/allmydata/test/test_backends.py 193
self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')


+
+class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
+ @mock.patch('time.time')
+ @mock.patch('os.mkdir')
+ @mock.patch('__builtin__.open')
+ @mock.patch('os.listdir')
+ @mock.patch('os.path.isdir')
+ def test_create_fs_backend(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+ """ This tests whether a file system backend instance can be
+ constructed. To pass the test, it has to use the
+ filesystem in only the prescribed ways. """
+
+ def call_open(fname, mode):
+ if fname == os.path.join(tempdir,'bucket_counter.state'):
+ raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'bucket_counter.state'))
+ elif fname == os.path.join(tempdir, 'lease_checker.state'):
+ raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
+ elif fname == os.path.join(tempdir, 'lease_checker.history'):
+ return StringIO()
+ else:
+ self.fail("Server with FS backend tried to open '%s' in mode '%s'" % (fname, mode))
+ mockopen.side_effect = call_open
+
+ def call_isdir(fname):
+ if fname == os.path.join(tempdir,'shares'):
+ return True
+ elif fname == os.path.join(tempdir,'shares', 'incoming'):
+ return True
+ else:
+ self.fail("Server with FS backend tried to idsir '%s'" % (fname,))
+ mockisdir.side_effect = call_isdir
+
+ def call_mkdir(fname, mode):
+ """XXX something is calling mkdir teststoredir and teststoredir/shares twice... this is odd!"""
+ self.failUnlessEqual(0777, mode)
+ if fname == tempdir:
---|
2571 | + return None |
---|
2572 | + elif fname == os.path.join(tempdir,'shares'): |
---|
2573 | + return None |
---|
2574 | + elif fname == os.path.join(tempdir,'shares', 'incoming'): |
---|
2575 | + return None |
---|
2576 | + else: |
---|
2577 | + self.fail("Server with FS backend tried to mkdir '%s'" % (fname,)) |
---|
2578 | + mockmkdir.side_effect = call_mkdir |
---|
2579 | + |
---|
2580 | + # Now begin the test. |
---|
2581 | + DASCore('teststoredir', expiration_policy) |
---|
2582 | + |
---|
2583 | + self.failIf(mocklistdir.called, mocklistdir.call_args_list) |
---|
2584 | } |
---|
[checkpoint 6
wilcoxjg@gmail.com**20110706190824
 Ignore-this: 2fb2d722b53fe4a72c99118c01fceb69
] {
hunk ./src/allmydata/interfaces.py 100
                         renew_secret=LeaseRenewSecret,
                         cancel_secret=LeaseCancelSecret,
                         sharenums=SetOf(int, maxLength=MAX_BUCKETS),
-                         allocated_size=Offset, canary=Referenceable):
+                         allocated_size=Offset,
+                         canary=Referenceable):
        """
hunk ./src/allmydata/interfaces.py 103
-        @param storage_index: the index of the bucket to be created or
+        @param storage_index: the index of the shares to be created or
                              increfed.
hunk ./src/allmydata/interfaces.py 105
-        @param sharenums: these are the share numbers (probably between 0 and
-                          99) that the sender is proposing to store on this
-                          server.
-        @param renew_secret: This is the secret used to protect bucket refresh
+        @param renew_secret: This is the secret used to protect shares refresh
                             This secret is generated by the client and
                             stored for later comparison by the server. Each
                             server is given a different secret.
hunk ./src/allmydata/interfaces.py 109
-        @param cancel_secret: Like renew_secret, but protects bucket decref.
-        @param canary: If the canary is lost before close(), the bucket is
+        @param cancel_secret: Like renew_secret, but protects shares decref.
+        @param sharenums: these are the share numbers (probably between 0 and
+                          99) that the sender is proposing to store on this
+                          server.
+        @param allocated_size: XXX The size of the shares the client wishes to store.
+        @param canary: If the canary is lost before close(), the shares are
                       deleted.
hunk ./src/allmydata/interfaces.py 116
+
        @return: tuple of (alreadygot, allocated), where alreadygot is what we
                 already have and allocated is what we hereby agree to accept.
                 New leases are added for shares in both lists.
hunk ./src/allmydata/interfaces.py 128
                  renew_secret=LeaseRenewSecret,
                  cancel_secret=LeaseCancelSecret):
        """
-        Add a new lease on the given bucket. If the renew_secret matches an
+        Add a new lease on the given shares. If the renew_secret matches an
        existing lease, that lease will be renewed instead. If there is no
        bucket for the given storage_index, return silently. (note that in
        tahoe-1.3.0 and earlier, IndexError was raised if there was no
hunk ./src/allmydata/storage/server.py 17
from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
     create_mutable_sharefile

-from zope.interface import implements
-
# storage/
# storage/shares/incoming
#   incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
hunk ./src/allmydata/test/test_backends.py 6
from StringIO import StringIO

from allmydata.test.common_util import ReallyEqualMixin
+from allmydata.util.assertutil import _assert

import mock, os

hunk ./src/allmydata/test/test_backends.py 92
                raise IOError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'lease_checker.state'))
            elif fname == os.path.join(tempdir, 'lease_checker.history'):
                return StringIO()
+            else:
+                _assert(False, "The tester code doesn't recognize this case.")
+
        mockopen.side_effect = call_open
        testbackend = DASCore(tempdir, expiration_policy)
        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
hunk ./src/allmydata/test/test_backends.py 109

        def call_listdir(dirname):
            self.failUnlessReallyEqual(dirname, sharedirname)
-            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares/or/orsxg5dtorxxeylhmvpws3temv4a'))
+            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))

        mocklistdir.side_effect = call_listdir

hunk ./src/allmydata/test/test_backends.py 113
+        def call_isdir(dirname):
+            self.failUnlessReallyEqual(dirname, sharedirname)
+            return True
+
+        mockisdir.side_effect = call_isdir
+
+        def call_mkdir(dirname, permissions):
+            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+                self.Fail
+            else:
+                return True
+
+        mockmkdir.side_effect = call_mkdir
+
        class MockFile:
            def __init__(self):
                self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 156
            return sharefile

        mockopen.side_effect = call_open
+
        # Now begin the test.
        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
        bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 161
        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+
+        # Now test the allocated_size method.
+        spaceint = self.s.allocated_size()

    @mock.patch('os.path.exists')
    @mock.patch('os.path.getsize')
}
[checkpoint 7
wilcoxjg@gmail.com**20110706200820
 Ignore-this: 16b790efc41a53964cbb99c0e86dafba
] hunk ./src/allmydata/test/test_backends.py 164

        # Now test the allocated_size method.
        spaceint = self.s.allocated_size()
+        self.failUnlessReallyEqual(spaceint, 1)

    @mock.patch('os.path.exists')
    @mock.patch('os.path.getsize')
[checkpoint8
wilcoxjg@gmail.com**20110706223126
 Ignore-this: 97336180883cb798b16f15411179f827
 The nullbackend is necessary to test unlimited space in a backend. It is a mock-like object.
] hunk ./src/allmydata/test/test_backends.py 32
                     'cutoff_date' : None,
                     'sharetypes' : None}

+class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
+    def setUp(self):
+        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
+
+    @mock.patch('os.mkdir')
+    @mock.patch('__builtin__.open')
+    @mock.patch('os.listdir')
+    @mock.patch('os.path.isdir')
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir):
+        """ Write a new share. """
+
+        # Now begin the test.
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+        bs[0].remote_write(0, 'a')
+        self.failIf(mockisdir.called)
+        self.failIf(mocklistdir.called)
+        self.failIf(mockopen.called)
+        self.failIf(mockmkdir.called)
+
class TestServerConstruction(unittest.TestCase, ReallyEqualMixin):
    @mock.patch('time.time')
    @mock.patch('os.mkdir')
[checkpoint 9
wilcoxjg@gmail.com**20110707042942
 Ignore-this: 75396571fd05944755a104a8fc38aaf6
] {
hunk ./src/allmydata/storage/backends/das/core.py 88
                    filename = os.path.join(finalstoragedir, f)
                    yield ImmutableShare(self.sharedir, storage_index, int(f))
        except OSError:
-            # Commonly caused by there being no buckets at all.
+            # Commonly caused by there being no shares at all.
            pass

    def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
hunk ./src/allmydata/storage/backends/das/core.py 141
        self.storage_index = storageindex
        self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
        self._max_size = max_size
+        self.incomingdir = os.path.join(sharedir, 'incoming')
+        si_dir = storage_index_to_dir(storageindex)
+        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
+        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
        if create:
            # touch the file, so later callers will see that we're working on
            # it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/das/core.py 177
            self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
        self._data_offset = 0xc

+    def close(self):
+        fileutil.make_dirs(os.path.dirname(self.finalhome))
+        fileutil.rename(self.incominghome, self.finalhome)
+        try:
+            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
+            # We try to delete the parent (.../ab/abcde) to avoid leaving
+            # these directories lying around forever, but the delete might
+            # fail if we're working on another share for the same storage
+            # index (like ab/abcde/5). The alternative approach would be to
+            # use a hierarchy of objects (PrefixHolder, BucketHolder,
+            # ShareWriter), each of which is responsible for a single
+            # directory on disk, and have them use reference counting of
+            # their children to know when they should do the rmdir. This
+            # approach is simpler, but relies on os.rmdir refusing to delete
+            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
+            os.rmdir(os.path.dirname(self.incominghome))
+            # we also delete the grandparent (prefix) directory, .../ab ,
+            # again to avoid leaving directories lying around. This might
+            # fail if there is another bucket open that shares a prefix (like
+            # ab/abfff).
+            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
+            # we leave the great-grandparent (incoming/) directory in place.
+        except EnvironmentError:
+            # ignore the "can't rmdir because the directory is not empty"
+            # exceptions, those are normal consequences of the
+            # above-mentioned conditions.
+            pass
+        pass
+
+    def stat(self):
+        return os.stat(self.finalhome)[stat.ST_SIZE]
+
    def get_shnum(self):
        return self.shnum

hunk ./src/allmydata/storage/immutable.py 7

from zope.interface import implements
from allmydata.interfaces import RIBucketWriter, RIBucketReader
-from allmydata.util import base32, fileutil, log
+from allmydata.util import base32, log
from allmydata.util.assertutil import precondition
from allmydata.util.hashutil import constant_time_compare
from allmydata.storage.lease import LeaseInfo
hunk ./src/allmydata/storage/immutable.py 44
    def remote_close(self):
        precondition(not self.closed)
        start = time.time()
-
-        fileutil.make_dirs(os.path.dirname(self.finalhome))
-        fileutil.rename(self.incominghome, self.finalhome)
-        try:
-            # self.incominghome is like storage/shares/incoming/ab/abcde/4 .
-            # We try to delete the parent (.../ab/abcde) to avoid leaving
-            # these directories lying around forever, but the delete might
-            # fail if we're working on another share for the same storage
-            # index (like ab/abcde/5). The alternative approach would be to
-            # use a hierarchy of objects (PrefixHolder, BucketHolder,
-            # ShareWriter), each of which is responsible for a single
-            # directory on disk, and have them use reference counting of
-            # their children to know when they should do the rmdir. This
-            # approach is simpler, but relies on os.rmdir refusing to delete
-            # a non-empty directory. Do *not* use fileutil.rm_dir() here!
-            os.rmdir(os.path.dirname(self.incominghome))
-            # we also delete the grandparent (prefix) directory, .../ab ,
-            # again to avoid leaving directories lying around. This might
-            # fail if there is another bucket open that shares a prefix (like
-            # ab/abfff).
-            os.rmdir(os.path.dirname(os.path.dirname(self.incominghome)))
-            # we leave the great-grandparent (incoming/) directory in place.
-        except EnvironmentError:
-            # ignore the "can't rmdir because the directory is not empty"
-            # exceptions, those are normal consequences of the
-            # above-mentioned conditions.
-            pass
+        self._sharefile.close()
        self._sharefile = None
        self.closed = True
        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)
hunk ./src/allmydata/storage/immutable.py 49

-        filelen = os.stat(self.finalhome)[stat.ST_SIZE]
+        filelen = self._sharefile.stat()
        self.ss.bucket_writer_closed(self, filelen)
        self.ss.add_latency("close", time.time() - start)
        self.ss.count("close")
hunk ./src/allmydata/storage/server.py 45
        self._active_writers = weakref.WeakKeyDictionary()
        self.backend = backend
        self.backend.setServiceParent(self)
+        self.backend.set_storage_server(self)
        log.msg("StorageServer created", facility="tahoe.storage")

        self.latencies = {"allocate": [], # immutable
hunk ./src/allmydata/storage/server.py 220

        for shnum in (sharenums - alreadygot):
            if (not limited) or (remaining_space >= max_space_per_bucket):
-                #XXX or should the following line occur in storage server construtor? ok! we need to create the new share file.
-                self.backend.set_storage_server(self)
                bw = self.backend.make_bucket_writer(storage_index, shnum,
                                                     max_space_per_bucket, lease_info, canary)
                bucketwriters[shnum] = bw
hunk ./src/allmydata/test/test_backends.py 117
        mockopen.side_effect = call_open
        testbackend = DASCore(tempdir, expiration_policy)
        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
-
+
+    @mock.patch('allmydata.util.fileutil.get_available_space')
    @mock.patch('time.time')
    @mock.patch('os.mkdir')
    @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 124
    @mock.patch('os.listdir')
    @mock.patch('os.path.isdir')
-    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime):
+    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
+                         mockget_available_space):
        """ Write a new share. """

        def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 148

        mockmkdir.side_effect = call_mkdir

+        def call_get_available_space(storedir, reserved_space):
+            self.failUnlessReallyEqual(storedir, tempdir)
+            return 1
+
+        mockget_available_space.side_effect = call_get_available_space
+
        class MockFile:
            def __init__(self):
                self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 188
        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
        bs[0].remote_write(0, 'a')
        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
-
+
+        # What happens when there's not enough space for the client's request?
+        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+
        # Now test the allocated_size method.
        spaceint = self.s.allocated_size()
        self.failUnlessReallyEqual(spaceint, 1)
}
[checkpoint10
wilcoxjg@gmail.com**20110707172049
 Ignore-this: 9dd2fb8bee93a88cea2625058decff32
] {
hunk ./src/allmydata/test/test_backends.py 20
# The following share file contents was generated with
# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
# with share data == 'a'.
-share_data = 'a\x00\x00\x00\x00xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy\x00(\xde\x80'
+renew_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
+cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
+share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data

hunk ./src/allmydata/test/test_backends.py 25
+testnodeid = 'testnodeidxxxxxxxxxx'
tempdir = 'teststoredir'
sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
sharefname = os.path.join(sharedirname, '0')
hunk ./src/allmydata/test/test_backends.py 37

class TestServerNullBackend(unittest.TestCase, ReallyEqualMixin):
    def setUp(self):
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=NullCore())
+        self.s = StorageServer(testnodeid, backend=NullCore())

    @mock.patch('os.mkdir')
    @mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 99
        mockmkdir.side_effect = call_mkdir

        # Now begin the test.
-        s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore('teststoredir', expiration_policy))
+        s = StorageServer(testnodeid, backend=DASCore('teststoredir', expiration_policy))

        self.failIf(mocklistdir.called, mocklistdir.call_args_list)

hunk ./src/allmydata/test/test_backends.py 119

        mockopen.side_effect = call_open
        testbackend = DASCore(tempdir, expiration_policy)
-        self.s = StorageServer('testnodeidxxxxxxxxxx', backend=DASCore(tempdir, expiration_policy) )
-
+        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
+
+    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
    @mock.patch('allmydata.util.fileutil.get_available_space')
    @mock.patch('time.time')
    @mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 129
    @mock.patch('os.listdir')
    @mock.patch('os.path.isdir')
    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                         mockget_available_space):
+                         mockget_available_space, mockget_shares):
        """ Write a new share. """

        def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 139
        mocklistdir.side_effect = call_listdir

        def call_isdir(dirname):
+            #XXX Should there be any other tests here?
            self.failUnlessReallyEqual(dirname, sharedirname)
            return True

hunk ./src/allmydata/test/test_backends.py 159

        mockget_available_space.side_effect = call_get_available_space

+        mocktime.return_value = 0
+        class MockShare:
+            def __init__(self):
+                self.shnum = 1
+
+            def add_or_renew_lease(elf, lease_info):
+                self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
+                self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
+                self.failUnlessReallyEqual(lease_info.owner_num, 0)
+                self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
+                self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
+
+
+        share = MockShare()
+        def call_get_shares(storageindex):
+            return [share]
+
+        mockget_shares.side_effect = call_get_shares
+
        class MockFile:
            def __init__(self):
                self.buffer = ''
hunk ./src/allmydata/test/test_backends.py 199
            def tell(self):
                return self.pos

-        mocktime.return_value = 0

        sharefile = MockFile()
        def call_open(fname, mode):
}
[jacp 11
wilcoxjg@gmail.com**20110708213919
 Ignore-this: b8f81b264800590b3e2bfc6fffd21ff9
] {
hunk ./src/allmydata/storage/backends/das/core.py 144
        self.incomingdir = os.path.join(sharedir, 'incoming')
        si_dir = storage_index_to_dir(storageindex)
        self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
+        #XXX self.fname and self.finalhome need to be resolve/merged.
        self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
        if create:
            # touch the file, so later callers will see that we're working on
hunk ./src/allmydata/storage/backends/das/core.py 208
        pass

    def stat(self):
-        return os.stat(self.finalhome)[stat.ST_SIZE]
+        return os.stat(self.finalhome)[os.stat.ST_SIZE]

    def get_shnum(self):
        return self.shnum
hunk ./src/allmydata/storage/immutable.py 44
    def remote_close(self):
        precondition(not self.closed)
        start = time.time()
+
        self._sharefile.close()
hunk ./src/allmydata/storage/immutable.py 46
+        filelen = self._sharefile.stat()
        self._sharefile = None
hunk ./src/allmydata/storage/immutable.py 48
+
        self.closed = True
        self._canary.dontNotifyOnDisconnect(self._disconnect_marker)

hunk ./src/allmydata/storage/immutable.py 52
-        filelen = self._sharefile.stat()
        self.ss.bucket_writer_closed(self, filelen)
        self.ss.add_latency("close", time.time() - start)
        self.ss.count("close")
hunk ./src/allmydata/storage/server.py 220

        for shnum in (sharenums - alreadygot):
            if (not limited) or (remaining_space >= max_space_per_bucket):
-                bw = self.backend.make_bucket_writer(storage_index, shnum,
-                                                     max_space_per_bucket, lease_info, canary)
+                bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary)
                bucketwriters[shnum] = bw
                self._active_writers[bw] = 1
        if limited:
hunk ./src/allmydata/test/test_backends.py 20
# The following share file contents was generated with
# storage.immutable.ShareFile from Tahoe-LAFS v1.8.2
# with share data == 'a'.
-renew_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
-cancel_secret = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'
+renew_secret = 'x'*32
+cancel_secret = 'y'*32
share_data = 'a\x00\x00\x00\x00' + renew_secret + cancel_secret + '\x00(\xde\x80'
share_file_data = '\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01' + share_data

hunk ./src/allmydata/test/test_backends.py 27
testnodeid = 'testnodeidxxxxxxxxxx'
tempdir = 'teststoredir'
-sharedirname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
-sharefname = os.path.join(sharedirname, '0')
+sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+shareincomingname = os.path.join(sharedirincomingname, '0')
+sharefname = os.path.join(sharedirfinalname, '0')
+
expiration_policy = {'enabled' : False,
                     'mode' : 'age',
                     'override_lease_duration' : None,
hunk ./src/allmydata/test/test_backends.py 123
        mockopen.side_effect = call_open
        testbackend = DASCore(tempdir, expiration_policy)
        self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )
-
+
+    @mock.patch('allmydata.util.fileutil.rename')
+    @mock.patch('allmydata.util.fileutil.make_dirs')
+    @mock.patch('os.path.exists')
+    @mock.patch('os.stat')
    @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
    @mock.patch('allmydata.util.fileutil.get_available_space')
    @mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 136
    @mock.patch('os.listdir')
    @mock.patch('os.path.isdir')
    def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
-                         mockget_available_space, mockget_shares):
+                         mockget_available_space, mockget_shares, mockstat, mockexists, \
+                         mockmake_dirs, mockrename):
        """ Write a new share. """

        def call_listdir(dirname):
hunk ./src/allmydata/test/test_backends.py 141
-            self.failUnlessReallyEqual(dirname, sharedirname)
+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
            raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))

        mocklistdir.side_effect = call_listdir
hunk ./src/allmydata/test/test_backends.py 148

        def call_isdir(dirname):
            #XXX Should there be any other tests here?
-            self.failUnlessReallyEqual(dirname, sharedirname)
+            self.failUnlessReallyEqual(dirname, sharedirfinalname)
            return True

        mockisdir.side_effect = call_isdir
hunk ./src/allmydata/test/test_backends.py 154

        def call_mkdir(dirname, permissions):
-            if dirname not in [sharedirname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+            if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
                self.Fail
            else:
                return True
hunk ./src/allmydata/test/test_backends.py 208
                return self.pos


-        sharefile = MockFile()
+        fobj = MockFile()
        def call_open(fname, mode):
            self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
hunk ./src/allmydata/test/test_backends.py 211
-            return sharefile
+            return fobj

        mockopen.side_effect = call_open

hunk ./src/allmydata/test/test_backends.py 215
+        def call_make_dirs(dname):
+            self.failUnlessReallyEqual(dname, sharedirfinalname)
+
+        mockmake_dirs.side_effect = call_make_dirs
+
+        def call_rename(src, dst):
+            self.failUnlessReallyEqual(src, shareincomingname)
+            self.failUnlessReallyEqual(dst, sharefname)
+
+        mockrename.side_effect = call_rename
+
+        def call_exists(fname):
+            self.failUnlessReallyEqual(fname, sharefname)
+
+        mockexists.side_effect = call_exists
+
        # Now begin the test.
        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
        bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 234
-        self.failUnlessReallyEqual(sharefile.buffer, share_file_data)
+        self.failUnlessReallyEqual(fobj.buffer, share_file_data)
+        spaceint = self.s.allocated_size()
+        self.failUnlessReallyEqual(spaceint, 1)
+
+        bs[0].remote_close()

        # What happens when there's not enough space for the client's request?
hunk ./src/allmydata/test/test_backends.py 241
-        alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
+        # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
---|
3189 | |
---|
3190 | # Now test the allocated_size method. |
---|
3191 | hunk ./src/allmydata/test/test_backends.py 244 |
---|
3192 | - spaceint = self.s.allocated_size() |
---|
3193 | - self.failUnlessReallyEqual(spaceint, 1) |
---|
3194 | + #self.failIf(mockexists.called, mockexists.call_args_list) |
---|
3195 | + #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list) |
---|
3196 | + #self.failIf(mockrename.called, mockrename.call_args_list) |
---|
3197 | + #self.failIf(mockstat.called, mockstat.call_args_list) |
---|
3198 | |
---|
3199 | @mock.patch('os.path.exists') |
---|
3200 | @mock.patch('os.path.getsize') |
---|
3201 | } |
---|
3202 | [checkpoint12 testing correct behavior with regard to incoming and final |
---|
3203 | wilcoxjg@gmail.com**20110710191915 |
---|
3204 | Ignore-this: 34413c6dc100f8aec3c1bb217eaa6bc7 |
---|
3205 | ] { |
---|
3206 | hunk ./src/allmydata/storage/backends/das/core.py 74 |
---|
3207 | self.lease_checker = FSLeaseCheckingCrawler(statefile, historyfile, expiration_policy) |
---|
3208 | self.lease_checker.setServiceParent(self) |
---|
3209 | |
---|
3210 | + def get_incoming(self, storageindex): |
---|
3211 | + return set((1,)) |
---|
3212 | + |
---|
3213 | def get_available_space(self): |
---|
3214 | if self.readonly: |
---|
3215 | return 0 |
---|
3216 | hunk ./src/allmydata/storage/server.py 77 |
---|
3217 | """Return a dict, indexed by category, that contains a dict of |
---|
3218 | latency numbers for each category. If there are sufficient samples |
---|
3219 | for unambiguous interpretation, each dict will contain the |
---|
3220 | - following keys: mean, 01_0_percentile, 10_0_percentile, |
---|
3221 | + following keys: samplesize, mean, 01_0_percentile, 10_0_percentile, |
---|
3222 | 50_0_percentile (median), 90_0_percentile, 95_0_percentile, |
---|
3223 | 99_0_percentile, 99_9_percentile. If there are insufficient |
---|
3224 | samples for a given percentile to be interpreted unambiguously |
---|
3225 | hunk ./src/allmydata/storage/server.py 120 |
---|
3226 | |
---|
3227 | def get_stats(self): |
---|
3228 | # remember: RIStatsProvider requires that our return dict |
---|
3229 | - # contains numeric values. |
---|
3230 | + # contains numeric, or None values. |
---|
3231 | stats = { 'storage_server.allocated': self.allocated_size(), } |
---|
3232 | stats['storage_server.reserved_space'] = self.reserved_space |
---|
3233 | for category,ld in self.get_latencies().items(): |
---|
3234 | hunk ./src/allmydata/storage/server.py 185 |
---|
3235 | start = time.time() |
---|
3236 | self.count("allocate") |
---|
3237 | alreadygot = set() |
---|
3238 | + incoming = set() |
---|
3239 | bucketwriters = {} # k: shnum, v: BucketWriter |
---|
3240 | |
---|
3241 | si_s = si_b2a(storage_index) |
---|
3242 | hunk ./src/allmydata/storage/server.py 219 |
---|
3243 | alreadygot.add(share.shnum) |
---|
3244 | share.add_or_renew_lease(lease_info) |
---|
3245 | |
---|
3246 | - for shnum in (sharenums - alreadygot): |
---|
3247 | + # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces |
---|
3248 | + incoming = self.backend.get_incoming(storageindex) |
---|
3249 | + |
---|
3250 | + for shnum in ((sharenums - alreadygot) - incoming): |
---|
3251 | if (not limited) or (remaining_space >= max_space_per_bucket): |
---|
3252 | bw = self.backend.make_bucket_writer(storage_index, shnum, max_space_per_bucket, lease_info, canary) |
---|
3253 | bucketwriters[shnum] = bw |
---|
3254 | hunk ./src/allmydata/storage/server.py 229 |
---|
3255 | self._active_writers[bw] = 1 |
---|
3256 | if limited: |
---|
3257 | remaining_space -= max_space_per_bucket |
---|
3258 | - |
---|
3259 | - #XXX We SHOULD DOCUMENT LATER. |
---|
3260 | + else: |
---|
3261 | + # Bummer not enough space to accept this share. |
---|
3262 | + pass |
---|
3263 | |
---|
3264 | self.add_latency("allocate", time.time() - start) |
---|
3265 | return alreadygot, bucketwriters |
---|
3266 | hunk ./src/allmydata/storage/server.py 323 |
---|
3267 | self.add_latency("get", time.time() - start) |
---|
3268 | return bucketreaders |
---|
3269 | |
---|
3270 | - def get_leases(self, storage_index): |
---|
3271 | + def remote_get_incoming(self, storageindex): |
---|
3272 | + incoming_share_set = self.backend.get_incoming(storageindex) |
---|
3273 | + return incoming_share_set |
---|
3274 | + |
---|
3275 | + def get_leases(self, storageindex): |
---|
3276 | """Provide an iterator that yields all of the leases attached to this |
---|
3277 | bucket. Each lease is returned as a LeaseInfo instance. |
---|
3278 | |
---|
3279 | hunk ./src/allmydata/storage/server.py 337 |
---|
3280 | # since all shares get the same lease data, we just grab the leases |
---|
3281 | # from the first share |
---|
3282 | try: |
---|
3283 | - shnum, filename = self._get_shares(storage_index).next() |
---|
3284 | + shnum, filename = self._get_shares(storageindex).next() |
---|
3285 | sf = ShareFile(filename) |
---|
3286 | return sf.get_leases() |
---|
3287 | except StopIteration: |
---|
3288 | hunk ./src/allmydata/test/test_backends.py 182 |
---|
3289 | |
---|
3290 | share = MockShare() |
---|
3291 | def call_get_shares(storageindex): |
---|
3292 | - return [share] |
---|
3293 | + #XXX Whether or not to return an empty list depends on which case of get_shares we are interested in. |
---|
3294 | + return []#share] |
---|
3295 | |
---|
3296 | mockget_shares.side_effect = call_get_shares |
---|
3297 | |
---|
3298 | hunk ./src/allmydata/test/test_backends.py 222 |
---|
3299 | mockmake_dirs.side_effect = call_make_dirs |
---|
3300 | |
---|
3301 | def call_rename(src, dst): |
---|
3302 | - self.failUnlessReallyEqual(src, shareincomingname) |
---|
3303 | - self.failUnlessReallyEqual(dst, sharefname) |
---|
3304 | + self.failUnlessReallyEqual(src, shareincomingname) |
---|
3305 | + self.failUnlessReallyEqual(dst, sharefname) |
---|
3306 | |
---|
3307 | mockrename.side_effect = call_rename |
---|
3308 | |
---|
3309 | hunk ./src/allmydata/test/test_backends.py 233 |
---|
3310 | mockexists.side_effect = call_exists |
---|
3311 | |
---|
3312 | # Now begin the test. |
---|
3313 | + |
---|
3314 | + # XXX (0) ??? Fail unless something is not properly set-up? |
---|
3315 | alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock()) |
---|
3316 | hunk ./src/allmydata/test/test_backends.py 236 |
---|
3317 | + |
---|
3318 | + # XXX (1) Inspect incoming and fail unless the sharenum is listed there. |
---|
3319 | + alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock()) |
---|
3320 | + |
---|
3321 | + self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,))) |
---|
3322 | + # XXX (2) Test that no bucketwriter results from a remote_allocate_buckets |
---|
3323 | + # with the same si, until BucketWriter.remote_close() has been called. |
---|
3324 | + # self.failIf(bsa) |
---|
3325 | + |
---|
3326 | + # XXX (3) Inspect final and fail unless there's nothing there. |
---|
3327 | bs[0].remote_write(0, 'a') |
---|
3328 | hunk ./src/allmydata/test/test_backends.py 247 |
---|
3329 | + # XXX (4a) Inspect final and fail unless share 0 is there. |
---|
3330 | + # XXX (4b) Inspect incoming and fail unless share 0 is NOT there. |
---|
3331 | self.failUnlessReallyEqual(fobj.buffer, share_file_data) |
---|
3332 | spaceint = self.s.allocated_size() |
---|
3333 | self.failUnlessReallyEqual(spaceint, 1) |
---|
3334 | hunk ./src/allmydata/test/test_backends.py 253 |
---|
3335 | |
---|
3336 | + # If there's something in self.alreadygot prior to remote_close() then fail. |
---|
3337 | bs[0].remote_close() |
---|
3338 | |
---|
3339 | # What happens when there's not enough space for the client's request? |
---|
3340 | hunk ./src/allmydata/test/test_backends.py 260 |
---|
3341 | # XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock()) |
---|
3342 | |
---|
3343 | # Now test the allocated_size method. |
---|
3344 | - #self.failIf(mockexists.called, mockexists.call_args_list) |
---|
3345 | + # self.failIf(mockexists.called, mockexists.call_args_list) |
---|
3346 | #self.failIf(mockmake_dirs.called, mockmake_dirs.call_args_list) |
---|
3347 | #self.failIf(mockrename.called, mockrename.call_args_list) |
---|
3348 | #self.failIf(mockstat.called, mockstat.call_args_list) |
---|
3349 | } |
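(Aside: the checkpoint12 patch above changes the allocation loop from `sharenums - alreadygot` to `(sharenums - alreadygot) - incoming`, so shares that are already stored or already mid-upload are skipped. A minimal standalone sketch of that set arithmetic, with illustrative values rather than the real server state:)

```python
# Sketch of the share-selection logic from remote_allocate_buckets:
# skip shares already in final storage (alreadygot) and shares whose
# upload is in progress (incoming); allocate writers only for the rest.
sharenums = set([0, 1, 2, 3])   # shares the client wants to upload
alreadygot = set([1])           # shares already in final storage
incoming = set([2])             # shares currently being uploaded

to_allocate = (sharenums - alreadygot) - incoming
print(sorted(to_allocate))      # [0, 3]
```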
---|
[fix inconsistent naming of storage_index vs storageindex in storage/server.py
wilcoxjg@gmail.com**20110710195139
 Ignore-this: 3b05cf549f3374f2c891159a8d4015aa
] {
hunk ./src/allmydata/storage/server.py 220
share.add_or_renew_lease(lease_info)

# fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
- incoming = self.backend.get_incoming(storageindex)
+ incoming = self.backend.get_incoming(storage_index)

for shnum in ((sharenums - alreadygot) - incoming):
if (not limited) or (remaining_space >= max_space_per_bucket):
hunk ./src/allmydata/storage/server.py 323
self.add_latency("get", time.time() - start)
return bucketreaders

- def remote_get_incoming(self, storageindex):
- incoming_share_set = self.backend.get_incoming(storageindex)
+ def remote_get_incoming(self, storage_index):
+ incoming_share_set = self.backend.get_incoming(storage_index)
return incoming_share_set

hunk ./src/allmydata/storage/server.py 327
- def get_leases(self, storageindex):
+ def get_leases(self, storage_index):
"""Provide an iterator that yields all of the leases attached to this
bucket. Each lease is returned as a LeaseInfo instance.

hunk ./src/allmydata/storage/server.py 337
# since all shares get the same lease data, we just grab the leases
# from the first share
try:
- shnum, filename = self._get_shares(storageindex).next()
+ shnum, filename = self._get_shares(storage_index).next()
sf = ShareFile(filename)
return sf.get_leases()
except StopIteration:
replace ./src/allmydata/storage/server.py [A-Za-z_0-9] storage_index storageindex
}
[adding comments to clarify what I'm about to do.
wilcoxjg@gmail.com**20110710220623
 Ignore-this: 44f97633c3eac1047660272e2308dd7c
] {
hunk ./src/allmydata/storage/backends/das/core.py 8

import os, re, weakref, struct, time

-from foolscap.api import Referenceable
+#from foolscap.api import Referenceable
from twisted.application import service

from zope.interface import implements
hunk ./src/allmydata/storage/backends/das/core.py 12
-from allmydata.interfaces import RIStorageServer, IStatsProducer, IShareStore
+from allmydata.interfaces import IStatsProducer, IShareStore# XXX, RIStorageServer
from allmydata.util import fileutil, idlib, log, time_format
import allmydata # for __full_version__

hunk ./src/allmydata/storage/server.py 219
alreadygot.add(share.shnum)
share.add_or_renew_lease(lease_info)

- # fill incoming with all shares that are incoming use a set operation since there's no need to operate on individual pieces
+ # fill incoming with all shares that are incoming use a set operation
+ # since there's no need to operate on individual pieces
incoming = self.backend.get_incoming(storageindex)

for shnum in ((sharenums - alreadygot) - incoming):
hunk ./src/allmydata/test/test_backends.py 245
# with the same si, until BucketWriter.remote_close() has been called.
# self.failIf(bsa)

- # XXX (3) Inspect final and fail unless there's nothing there.
bs[0].remote_write(0, 'a')
hunk ./src/allmydata/test/test_backends.py 246
- # XXX (4a) Inspect final and fail unless share 0 is there.
- # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.
self.failUnlessReallyEqual(fobj.buffer, share_file_data)
spaceint = self.s.allocated_size()
self.failUnlessReallyEqual(spaceint, 1)
hunk ./src/allmydata/test/test_backends.py 250

- # If there's something in self.alreadygot prior to remote_close() then fail.
+ # XXX (3) Inspect final and fail unless there's nothing there.
bs[0].remote_close()
hunk ./src/allmydata/test/test_backends.py 252
+ # XXX (4a) Inspect final and fail unless share 0 is there.
+ # XXX (4b) Inspect incoming and fail unless share 0 is NOT there.

# What happens when there's not enough space for the client's request?
# XXX Need to uncomment! alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 2, mock.Mock())
}
[branching back, no longer attempting to mock inside TestServerFSBackend
wilcoxjg@gmail.com**20110711190849
 Ignore-this: e72c9560f8d05f1f93d46c91d2354df0
] {
hunk ./src/allmydata/storage/backends/das/core.py 75
self.lease_checker.setServiceParent(self)

def get_incoming(self, storageindex):
- return set((1,))
-
- def get_available_space(self):
- if self.readonly:
- return 0
- return fileutil.get_available_space(self.storedir, self.reserved_space)
+ """Return the set of incoming shnums."""
+ return set(os.listdir(self.incomingdir))

def get_shares(self, storage_index):
"""Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/backends/das/core.py 90
# Commonly caused by there being no shares at all.
pass

+ def get_available_space(self):
+ if self.readonly:
+ return 0
+ return fileutil.get_available_space(self.storedir, self.reserved_space)
+
def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
hunk ./src/allmydata/test/test_backends.py 27

testnodeid = 'testnodeidxxxxxxxxxx'
tempdir = 'teststoredir'
-sharedirfinalname = os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
-sharedirincomingname = os.path.join(tempdir, 'shares', 'incoming', 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+basedir = os.path.join(tempdir, 'shares')
+baseincdir = os.path.join(basedir, 'incoming')
+sharedirfinalname = os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
+sharedirincomingname = os.path.join(baseincdir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a')
shareincomingname = os.path.join(sharedirincomingname, '0')
sharefname = os.path.join(sharedirfinalname, '0')

hunk ./src/allmydata/test/test_backends.py 142
mockmake_dirs, mockrename):
""" Write a new share. """

- def call_listdir(dirname):
- self.failUnlessReallyEqual(dirname, sharedirfinalname)
- raise OSError(2, "No such file or directory: '%s'" % os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
-
- mocklistdir.side_effect = call_listdir
-
- def call_isdir(dirname):
- #XXX Should there be any other tests here?
- self.failUnlessReallyEqual(dirname, sharedirfinalname)
- return True
-
- mockisdir.side_effect = call_isdir
-
- def call_mkdir(dirname, permissions):
- if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
- self.Fail
- else:
- return True
-
- mockmkdir.side_effect = call_mkdir
-
- def call_get_available_space(storedir, reserved_space):
- self.failUnlessReallyEqual(storedir, tempdir)
- return 1
-
- mockget_available_space.side_effect = call_get_available_space
-
- mocktime.return_value = 0
class MockShare:
def __init__(self):
self.shnum = 1
hunk ./src/allmydata/test/test_backends.py 152
self.failUnlessReallyEqual(lease_info.owner_num, 0)
self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
-

share = MockShare()
hunk ./src/allmydata/test/test_backends.py 154
- def call_get_shares(storageindex):
- #XXX Whether or not to return an empty list depends on which case of get_shares we are interested in.
- return []#share]
-
- mockget_shares.side_effect = call_get_shares

class MockFile:
def __init__(self):
hunk ./src/allmydata/test/test_backends.py 176
def tell(self):
return self.pos

-
fobj = MockFile()
hunk ./src/allmydata/test/test_backends.py 177
+
+ directories = {}
+ def call_listdir(dirname):
+ if dirname not in directories:
+ raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
+ else:
+ return directories[dirname].get_contents()
+
+ mocklistdir.side_effect = call_listdir
+
+ class MockDir:
+ def __init__(self, dirname):
+ self.name = dirname
+ self.contents = []
+
+ def get_contents(self):
+ return self.contents
+
+ def call_isdir(dirname):
+ #XXX Should there be any other tests here?
+ self.failUnlessReallyEqual(dirname, sharedirfinalname)
+ return True
+
+ mockisdir.side_effect = call_isdir
+
+ def call_mkdir(dirname, permissions):
+ if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
+ self.Fail
+ if dirname in directories:
+ raise OSError(17, "File exists: '%s'" % dirname)
+ self.Fail
+ elif dirname not in directories:
+ directories[dirname] = MockDir(dirname)
+ return True
+
+ mockmkdir.side_effect = call_mkdir
+
+ def call_get_available_space(storedir, reserved_space):
+ self.failUnlessReallyEqual(storedir, tempdir)
+ return 1
+
+ mockget_available_space.side_effect = call_get_available_space
+
+ mocktime.return_value = 0
+ def call_get_shares(storageindex):
+ #XXX Whether or not to return an empty list depends on which case of get_shares we are interested in.
+ return []#share]
+
+ mockget_shares.side_effect = call_get_shares
+
def call_open(fname, mode):
self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
return fobj
}
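(Aside: the mocking strategy above, later abandoned, hinges on `mock.patch` plus `side_effect`: each patched filesystem call is routed to a local fake backed by in-memory state. A minimal self-contained sketch of the pattern, outside the Tahoe test class, with illustrative directory contents:)

```python
import os
from unittest import mock  # the standalone `mock` package in 2011; in the stdlib since Python 3.3

# In-memory stand-in for the filesystem, as in the test's `directories` dict.
directories = {'teststoredir/shares': ['or']}

def call_listdir(dirname):
    # Fake os.listdir: answer from the dict, or raise ENOENT like the real call.
    if dirname not in directories:
        raise OSError(2, "No such file or directory: '%s'" % dirname)
    return directories[dirname]

with mock.patch('os.listdir') as mocklistdir:
    mocklistdir.side_effect = call_listdir
    print(os.listdir('teststoredir/shares'))  # ['or']
    try:
        os.listdir('missing')
    except OSError as e:
        print(e.errno)  # 2
```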
---|
[checkpoint12 TestServerFSBackend no longer mocks filesystem
wilcoxjg@gmail.com**20110711193357
 Ignore-this: 48654a6c0eb02cf1e97e62fe24920b5f
] {
hunk ./src/allmydata/storage/backends/das/core.py 23
create_mutable_sharefile
from allmydata.storage.immutable import BucketWriter, BucketReader
from allmydata.storage.crawler import FSBucketCountingCrawler
+from allmydata.util.hashutil import constant_time_compare
from allmydata.storage.backends.das.expirer import FSLeaseCheckingCrawler

from zope.interface import implements
hunk ./src/allmydata/storage/backends/das/core.py 28

+# storage/
+# storage/shares/incoming
+# incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
+# be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
+# storage/shares/$START/$STORAGEINDEX
+# storage/shares/$START/$STORAGEINDEX/$SHARENUM
+
+# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
+# base-32 chars).
# $SHARENUM matches this regex:
NUM_RE=re.compile("^[0-9]+$")

hunk ./src/allmydata/test/test_backends.py 126
testbackend = DASCore(tempdir, expiration_policy)
self.s = StorageServer(testnodeid, backend=DASCore(tempdir, expiration_policy) )

- @mock.patch('allmydata.util.fileutil.rename')
- @mock.patch('allmydata.util.fileutil.make_dirs')
- @mock.patch('os.path.exists')
- @mock.patch('os.stat')
- @mock.patch('allmydata.storage.backends.das.core.DASCore.get_shares')
- @mock.patch('allmydata.util.fileutil.get_available_space')
@mock.patch('time.time')
hunk ./src/allmydata/test/test_backends.py 127
- @mock.patch('os.mkdir')
- @mock.patch('__builtin__.open')
- @mock.patch('os.listdir')
- @mock.patch('os.path.isdir')
- def test_write_share(self, mockisdir, mocklistdir, mockopen, mockmkdir, mocktime,\
- mockget_available_space, mockget_shares, mockstat, mockexists, \
- mockmake_dirs, mockrename):
+ def test_write_share(self, mocktime):
""" Write a new share. """

class MockShare:
hunk ./src/allmydata/test/test_backends.py 143

share = MockShare()

- class MockFile:
- def __init__(self):
- self.buffer = ''
- self.pos = 0
- def write(self, instring):
- begin = self.pos
- padlen = begin - len(self.buffer)
- if padlen > 0:
- self.buffer += '\x00' * padlen
- end = self.pos + len(instring)
- self.buffer = self.buffer[:begin]+instring+self.buffer[end:]
- self.pos = end
- def close(self):
- pass
- def seek(self, pos):
- self.pos = pos
- def read(self, numberbytes):
- return self.buffer[self.pos:self.pos+numberbytes]
- def tell(self):
- return self.pos
-
- fobj = MockFile()
-
- directories = {}
- def call_listdir(dirname):
- if dirname not in directories:
- raise OSError(2, "No such file or directory: '%s'" % os.path.join(basedir, 'or', 'orsxg5dtorxxeylhmvpws3temv4a'))
- else:
- return directories[dirname].get_contents()
-
- mocklistdir.side_effect = call_listdir
-
- class MockDir:
- def __init__(self, dirname):
- self.name = dirname
- self.contents = []
-
- def get_contents(self):
- return self.contents
-
- def call_isdir(dirname):
- #XXX Should there be any other tests here?
- self.failUnlessReallyEqual(dirname, sharedirfinalname)
- return True
-
- mockisdir.side_effect = call_isdir
-
- def call_mkdir(dirname, permissions):
- if dirname not in [sharedirfinalname, os.path.join('teststoredir', 'shares', 'or')] or permissions != 511:
- self.Fail
- if dirname in directories:
- raise OSError(17, "File exists: '%s'" % dirname)
- self.Fail
- elif dirname not in directories:
- directories[dirname] = MockDir(dirname)
- return True
-
- mockmkdir.side_effect = call_mkdir
-
- def call_get_available_space(storedir, reserved_space):
- self.failUnlessReallyEqual(storedir, tempdir)
- return 1
-
- mockget_available_space.side_effect = call_get_available_space
-
- mocktime.return_value = 0
- def call_get_shares(storageindex):
- #XXX Whether or not to return an empty list depends on which case of get_shares we are interested in.
- return []#share]
-
- mockget_shares.side_effect = call_get_shares
-
- def call_open(fname, mode):
- self.failUnlessReallyEqual(fname, os.path.join(tempdir, 'shares', 'or', 'orsxg5dtorxxeylhmvpws3temv4a', '0' ))
- return fobj
-
- mockopen.side_effect = call_open
-
- def call_make_dirs(dname):
- self.failUnlessReallyEqual(dname, sharedirfinalname)
-
- mockmake_dirs.side_effect = call_make_dirs
-
- def call_rename(src, dst):
- self.failUnlessReallyEqual(src, shareincomingname)
- self.failUnlessReallyEqual(dst, sharefname)
-
- mockrename.side_effect = call_rename
-
- def call_exists(fname):
- self.failUnlessReallyEqual(fname, sharefname)
-
- mockexists.side_effect = call_exists
-
# Now begin the test.

# XXX (0) ??? Fail unless something is not properly set-up?
}
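(Aside: the layout comment in the patch above says `$START` is the first 10 bits, i.e. 2 base-32 characters, of `$STORAGEINDEX`. A rough sketch of deriving such a prefix, using the standard RFC 3548 lowercase base-32 alphabet as an approximation of Tahoe's `storage_index_to_dir` helper — that equivalence is an assumption here:)

```python
import base64

def si_to_start(storage_index_bytes):
    # Lowercased, unpadded base-32 of the storage index; the first two
    # characters encode the first 10 bits and name the $START directory.
    b32 = base64.b32encode(storage_index_bytes).decode('ascii').lower().rstrip('=')
    return b32[:2]

# The test fixture's storage index 'teststorage_index' lands in the 'or' bucket,
# matching the 'or/orsxg5dtorxxeylhmvpws3temv4a' paths used throughout the tests.
print(si_to_start(b'teststorage_index'))  # 'or'
```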
---|
[JACP
wilcoxjg@gmail.com**20110711194407
 Ignore-this: b54745de777c4bb58d68d708f010bbb
] {
hunk ./src/allmydata/storage/backends/das/core.py 86

def get_incoming(self, storageindex):
"""Return the set of incoming shnums."""
- return set(os.listdir(self.incomingdir))
+ try:
+ incominglist = os.listdir(self.incomingdir)
+ print "incominglist: ", incominglist
+ return set(incominglist)
+ except OSError:
+ # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
+ pass

def get_shares(self, storage_index):
"""Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
hunk ./src/allmydata/storage/server.py 17
from allmydata.storage.mutable import MutableShareFile, EmptyShare, \
create_mutable_sharefile

-# storage/
-# storage/shares/incoming
-# incoming/ holds temp dirs named $START/$STORAGEINDEX/$SHARENUM which will
-# be moved to storage/shares/$START/$STORAGEINDEX/$SHARENUM upon success
-# storage/shares/$START/$STORAGEINDEX
-# storage/shares/$START/$STORAGEINDEX/$SHARENUM
-
-# Where "$START" denotes the first 10 bits worth of $STORAGEINDEX (that's 2
-# base-32 chars).
-
-
class StorageServer(service.MultiService, Referenceable):
implements(RIStorageServer, IStatsProducer)
name = 'storage'
}
[testing get incoming
wilcoxjg@gmail.com**20110711210224
 Ignore-this: 279ee530a7d1daff3c30421d9e3a2161
] {
hunk ./src/allmydata/storage/backends/das/core.py 87
def get_incoming(self, storageindex):
"""Return the set of incoming shnums."""
try:
- incominglist = os.listdir(self.incomingdir)
+ incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
+ incominglist = os.listdir(incomingsharesdir)
print "incominglist: ", incominglist
return set(incominglist)
except OSError:
hunk ./src/allmydata/storage/backends/das/core.py 92
- # XXX I'd like to make this more specific. Commonly caused by there being no shares at all.
- pass
-
+ # XXX I'd like to make this more specific. If there are no shares at all.
+ return set()
+
def get_shares(self, storage_index):
"""Return a list of the ImmutableShare objects that correspond to the passed storage_index."""
finalstoragedir = os.path.join(self.sharedir, storage_index_to_dir(storage_index))
hunk ./src/allmydata/test/test_backends.py 149
alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())

# XXX (1) Inspect incoming and fail unless the sharenum is listed there.
+ self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
alreadygota, bsa = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())

hunk ./src/allmydata/test/test_backends.py 152
- self.failUnlessEqual(self.s.remote_get_incoming('teststorage_index'), set((0,)))
# XXX (2) Test that no bucketwriter results from a remote_allocate_buckets
# with the same si, until BucketWriter.remote_close() has been called.
# self.failIf(bsa)
}
---|
[ImmutableShareFile does not know its StorageIndex
wilcoxjg@gmail.com**20110711211424
Ignore-this: 595de5c2781b607e1c9ebf6f64a2898a
] {
hunk ./src/allmydata/storage/backends/das/core.py 112
return 0
return fileutil.get_available_space(self.storedir, self.reserved_space)

- def make_bucket_writer(self, storage_index, shnum, max_space_per_bucket, lease_info, canary):
- immsh = ImmutableShare(self.sharedir, storage_index, shnum, max_size=max_space_per_bucket, create=True)
+ def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
+ finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
+ incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
+ immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
return bw

hunk ./src/allmydata/storage/backends/das/core.py 155
LEASE_SIZE = struct.calcsize(">L32s32sL")
sharetype = "immutable"

- def __init__(self, sharedir, storageindex, shnum, max_size=None, create=False):
+ def __init__(self, finalhome, incominghome, max_size=None, create=False):
""" If max_size is not None then I won't allow more than
max_size to be written to me. If create=True then max_size
must not be None. """
}
[get_incoming correctly reports the 0 share after it has arrived
wilcoxjg@gmail.com**20110712025157
Ignore-this: 893b2df6e41391567fffc85e4799bb0b
] {
hunk ./src/allmydata/storage/backends/das/core.py 1
+import os, re, weakref, struct, time, stat
+
from allmydata.interfaces import IStorageBackend
from allmydata.storage.backends.base import Backend
from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
hunk ./src/allmydata/storage/backends/das/core.py 8
from allmydata.util.assertutil import precondition

-import os, re, weakref, struct, time
-
#from foolscap.api import Referenceable
from twisted.application import service

hunk ./src/allmydata/storage/backends/das/core.py 89
try:
incomingsharesdir = os.path.join(self.incomingdir, storage_index_to_dir(storageindex))
incominglist = os.listdir(incomingsharesdir)
- print "incominglist: ", incominglist
- return set(incominglist)
+ incomingshnums = [int(x) for x in incominglist]
+ return set(incomingshnums)
except OSError:
# XXX I'd like to make this more specific. If there are no shares at all.
return set()
hunk ./src/allmydata/storage/backends/das/core.py 113
return fileutil.get_available_space(self.storedir, self.reserved_space)

def make_bucket_writer(self, storageindex, shnum, max_space_per_bucket, lease_info, canary):
- finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), shnum)
- incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), shnum)
- immsh = ImmutableShare(self, finalhome, incominghome, max_size=max_space_per_bucket, create=True)
+ finalhome = os.path.join(self.sharedir, storage_index_to_dir(storageindex), str(shnum))
+ incominghome = os.path.join(self.sharedir,'incoming', storage_index_to_dir(storageindex), str(shnum))
+ immsh = ImmutableShare(finalhome, incominghome, max_size=max_space_per_bucket, create=True)
bw = BucketWriter(self.ss, immsh, max_space_per_bucket, lease_info, canary)
return bw

hunk ./src/allmydata/storage/backends/das/core.py 160
max_size to be written to me. If create=True then max_size
must not be None. """
precondition((max_size is not None) or (not create), max_size, create)
- self.shnum = shnum
- self.storage_index = storageindex
- self.fname = os.path.join(sharedir, storage_index_to_dir(storageindex), str(shnum))
self._max_size = max_size
hunk ./src/allmydata/storage/backends/das/core.py 161
- self.incomingdir = os.path.join(sharedir, 'incoming')
- si_dir = storage_index_to_dir(storageindex)
- self.incominghome = os.path.join(self.incomingdir, si_dir, "%d" % shnum)
- #XXX self.fname and self.finalhome need to be resolve/merged.
- self.finalhome = os.path.join(sharedir, si_dir, "%d" % shnum)
+ self.incominghome = incominghome
+ self.finalhome = finalhome
if create:
# touch the file, so later callers will see that we're working on
# it. Also construct the metadata.
hunk ./src/allmydata/storage/backends/das/core.py 166
- assert not os.path.exists(self.fname)
- fileutil.make_dirs(os.path.dirname(self.fname))
- f = open(self.fname, 'wb')
+ assert not os.path.exists(self.finalhome)
+ fileutil.make_dirs(os.path.dirname(self.incominghome))
+ f = open(self.incominghome, 'wb')
# The second field -- the four-byte share data length -- is no
# longer used as of Tahoe v1.3.0, but we continue to write it in
# there in case someone downgrades a storage server from >=
hunk ./src/allmydata/storage/backends/das/core.py 183
self._lease_offset = max_size + 0x0c
self._num_leases = 0
else:
- f = open(self.fname, 'rb')
- filesize = os.path.getsize(self.fname)
+ f = open(self.finalhome, 'rb')
+ filesize = os.path.getsize(self.finalhome)
(version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
f.close()
if version != 1:
hunk ./src/allmydata/storage/backends/das/core.py 189
msg = "sharefile %s had version %d but we wanted 1" % \
- (self.fname, version)
+ (self.finalhome, version)
raise UnknownImmutableContainerVersionError(msg)
self._num_leases = num_leases
self._lease_offset = filesize - (num_leases * self.LEASE_SIZE)
hunk ./src/allmydata/storage/backends/das/core.py 225
pass

def stat(self):
- return os.stat(self.finalhome)[os.stat.ST_SIZE]
+ return os.stat(self.finalhome)[stat.ST_SIZE]
+ #filelen = os.stat(self.finalhome)[stat.ST_SIZE]

def get_shnum(self):
return self.shnum
hunk ./src/allmydata/storage/backends/das/core.py 232

def unlink(self):
- os.unlink(self.fname)
+ os.unlink(self.finalhome)

def read_share_data(self, offset, length):
precondition(offset >= 0)
hunk ./src/allmydata/storage/backends/das/core.py 239
# Reads beyond the end of the data are truncated. Reads that start
# beyond the end of the data return an empty string.
seekpos = self._data_offset+offset
- fsize = os.path.getsize(self.fname)
+ fsize = os.path.getsize(self.finalhome)
actuallength = max(0, min(length, fsize-seekpos))
if actuallength == 0:
return ""
hunk ./src/allmydata/storage/backends/das/core.py 243
- f = open(self.fname, 'rb')
+ f = open(self.finalhome, 'rb')
f.seek(seekpos)
return f.read(actuallength)

hunk ./src/allmydata/storage/backends/das/core.py 252
precondition(offset >= 0, offset)
if self._max_size is not None and offset+length > self._max_size:
raise DataTooLargeError(self._max_size, offset, length)
- f = open(self.fname, 'rb+')
+ f = open(self.incominghome, 'rb+')
real_offset = self._data_offset+offset
f.seek(real_offset)
assert f.tell() == real_offset
hunk ./src/allmydata/storage/backends/das/core.py 279

def get_leases(self):
"""Yields a LeaseInfo instance for all leases."""
- f = open(self.fname, 'rb')
+ f = open(self.finalhome, 'rb')
(version, unused, num_leases) = struct.unpack(">LLL", f.read(0xc))
f.seek(self._lease_offset)
for i in range(num_leases):
hunk ./src/allmydata/storage/backends/das/core.py 288
yield LeaseInfo().from_immutable_data(data)

def add_lease(self, lease_info):
- f = open(self.fname, 'rb+')
+ f = open(self.incominghome, 'rb+')
num_leases = self._read_num_leases(f)
self._write_lease_record(f, num_leases, lease_info)
self._write_num_leases(f, num_leases+1)
hunk ./src/allmydata/storage/backends/das/core.py 301
if new_expire_time > lease.expiration_time:
# yes
lease.expiration_time = new_expire_time
- f = open(self.fname, 'rb+')
+ f = open(self.finalhome, 'rb+')
self._write_lease_record(f, i, lease)
f.close()
return
hunk ./src/allmydata/storage/backends/das/core.py 336
# the same order as they were added, so that if we crash while
# doing this, we won't lose any non-cancelled leases.
leases = [l for l in leases if l] # remove the cancelled leases
- f = open(self.fname, 'rb+')
+ f = open(self.finalhome, 'rb+')
for i,lease in enumerate(leases):
self._write_lease_record(f, i, lease)
self._write_num_leases(f, len(leases))
hunk ./src/allmydata/storage/backends/das/core.py 344
f.close()
space_freed = self.LEASE_SIZE * num_leases_removed
if not len(leases):
- space_freed += os.stat(self.fname)[stat.ST_SIZE]
+ space_freed += os.stat(self.finalhome)[stat.ST_SIZE]
self.unlink()
return space_freed
hunk ./src/allmydata/test/test_backends.py 129
@mock.patch('time.time')
def test_write_share(self, mocktime):
""" Write a new share. """
-
- class MockShare:
- def __init__(self):
- self.shnum = 1
-
- def add_or_renew_lease(elf, lease_info):
- self.failUnlessReallyEqual(lease_info.renew_secret, renew_secret)
- self.failUnlessReallyEqual(lease_info.cancel_secret, cancel_secret)
- self.failUnlessReallyEqual(lease_info.owner_num, 0)
- self.failUnlessReallyEqual(lease_info.expiration_time, mocktime() + 31*24*60*60)
- self.failUnlessReallyEqual(lease_info.nodeid, testnodeid)
-
- share = MockShare()
-
# Now begin the test.

# XXX (0) ??? Fail unless something is not properly set-up?
hunk ./src/allmydata/test/test_backends.py 143
# self.failIf(bsa)

bs[0].remote_write(0, 'a')
- self.failUnlessReallyEqual(fobj.buffer, share_file_data)
+ #self.failUnlessReallyEqual(fobj.buffer, share_file_data)
spaceint = self.s.allocated_size()
self.failUnlessReallyEqual(spaceint, 1)

hunk ./src/allmydata/test/test_backends.py 161
#self.failIf(mockrename.called, mockrename.call_args_list)
#self.failIf(mockstat.called, mockstat.call_args_list)

+ def test_handle_incoming(self):
+ incomingset = self.s.backend.get_incoming('teststorage_index')
+ self.failUnlessReallyEqual(incomingset, set())
+
+ alreadygot, bs = self.s.remote_allocate_buckets('teststorage_index', 'x'*32, 'y'*32, set((0,)), 1, mock.Mock())
+
+ incomingset = self.s.backend.get_incoming('teststorage_index')
+ self.failUnlessReallyEqual(incomingset, set((0,)))
+
+ bs[0].remote_close()
+ self.failUnlessReallyEqual(incomingset, set())
+
@mock.patch('os.path.exists')
@mock.patch('os.path.getsize')
@mock.patch('__builtin__.open')
hunk ./src/allmydata/test/test_backends.py 223
self.failUnlessReallyEqual(b.remote_read(datalen+1, 3), '')


-
class TestBackendConstruction(unittest.TestCase, ReallyEqualMixin):
@mock.patch('time.time')
@mock.patch('os.mkdir')
hunk ./src/allmydata/test/test_backends.py 271
DASCore('teststoredir', expiration_policy)

self.failIf(mocklistdir.called, mocklistdir.call_args_list)
+
}

Context:

[add Protovis.js-based download-status timeline visualization
Brian Warner <warner@lothar.com>**20110629222606
Ignore-this: 477ccef5c51b30e246f5b6e04ab4a127

provide status overlap info on the webapi t=json output, add decode/decrypt
rate tooltips, add zoomin/zoomout buttons
]
[add more download-status data, fix tests
Brian Warner <warner@lothar.com>**20110629222555
Ignore-this: e9e0b7e0163f1e95858aa646b9b17b8c
]
[prepare for viz: improve DownloadStatus events
Brian Warner <warner@lothar.com>**20110629222542
Ignore-this: 16d0bde6b734bb501aa6f1174b2b57be

consolidate IDownloadStatusHandlingConsumer stuff into DownloadNode
]
[docs: fix error in crypto specification that was noticed by Taylor R Campbell <campbell+tahoe@mumble.net>
zooko@zooko.com**20110629185711
Ignore-this: b921ed60c1c8ba3c390737fbcbe47a67
]
[setup.py: don't make bin/tahoe.pyscript executable. fixes #1347
david-sarah@jacaranda.org**20110130235809
Ignore-this: 3454c8b5d9c2c77ace03de3ef2d9398a
]
[Makefile: remove targets relating to 'setup.py check_auto_deps' which no longer exists. fixes #1345
david-sarah@jacaranda.org**20110626054124
Ignore-this: abb864427a1b91bd10d5132b4589fd90
]
[Makefile: add 'make check' as an alias for 'make test'. Also remove an unnecessary dependency of 'test' on 'build' and 'src/allmydata/_version.py'. fixes #1344
david-sarah@jacaranda.org**20110623205528
Ignore-this: c63e23146c39195de52fb17c7c49b2da
]
[Rename test_package_initialization.py to (much shorter) test_import.py .
Brian Warner <warner@lothar.com>**20110611190234
Ignore-this: 3eb3dbac73600eeff5cfa6b65d65822

The former name was making my 'ls' listings hard to read, by forcing them
down to just two columns.
]
[tests: fix tests to accomodate [20110611153758-92b7f-0ba5e4726fb6318dac28fb762a6512a003f4c430]
zooko@zooko.com**20110611163741
Ignore-this: 64073a5f39e7937e8e5e1314c1a302d1
Apparently none of the two authors (stercor, terrell), three reviewers (warner, davidsarah, terrell), or one committer (me) actually ran the tests. This is presumably due to #20.
fixes #1412
]
[wui: right-align the size column in the WUI
zooko@zooko.com**20110611153758
Ignore-this: 492bdaf4373c96f59f90581c7daf7cd7
Thanks to Ted "stercor" Rolle Jr. and Terrell Russell.
fixes #1412
]
[docs: three minor fixes
zooko@zooko.com**20110610121656
Ignore-this: fec96579eb95aceb2ad5fc01a814c8a2
CREDITS for arc for stats tweak
fix link to .zip file in quickstart.rst (thanks to ChosenOne for noticing)
English usage tweak
]
[docs/running.rst: fix stray HTML (not .rst) link noticed by ChosenOne.
david-sarah@jacaranda.org**20110609223719
Ignore-this: fc50ac9c94792dcac6f1067df8ac0d4a
]
[server.py: get_latencies now reports percentiles _only_ if there are sufficient observations for the interpretation of the percentile to be unambiguous.
wilcoxjg@gmail.com**20110527120135
Ignore-this: 2e7029764bffc60e26f471d7c2b6611e
interfaces.py: modified the return type of RIStatsProvider.get_stats to allow for None as a return value
NEWS.rst, stats.py: documentation of change to get_latencies
stats.rst: now documents percentile modification in get_latencies
test_storage.py: test_latencies now expects None in output categories that contain too few samples for the associated percentile to be unambiguously reported.
fixes #1392
]
[docs: revert link in relnotes.txt from NEWS.rst to NEWS, since the former did not exist at revision 5000.
david-sarah@jacaranda.org**20110517011214
Ignore-this: 6a5be6e70241e3ec0575641f64343df7
]
[docs: convert NEWS to NEWS.rst and change all references to it.
david-sarah@jacaranda.org**20110517010255
Ignore-this: a820b93ea10577c77e9c8206dbfe770d
]
[docs: remove out-of-date docs/testgrid/introducer.furl and containing directory. fixes #1404
david-sarah@jacaranda.org**20110512140559
Ignore-this: 784548fc5367fac5450df1c46890876d
]
[scripts/common.py: don't assume that the default alias is always 'tahoe' (it is, but the API of get_alias doesn't say so). refs #1342
david-sarah@jacaranda.org**20110130164923
Ignore-this: a271e77ce81d84bb4c43645b891d92eb
]
[setup: don't catch all Exception from check_requirement(), but only PackagingError and ImportError
zooko@zooko.com**20110128142006
Ignore-this: 57d4bc9298b711e4bc9dc832c75295de
I noticed this because I had accidentally inserted a bug which caused AssertionError to be raised from check_requirement().
]
[M-x whitespace-cleanup
zooko@zooko.com**20110510193653
Ignore-this: dea02f831298c0f65ad096960e7df5c7
]
[docs: fix typo in running.rst, thanks to arch_o_median
zooko@zooko.com**20110510193633
Ignore-this: ca06de166a46abbc61140513918e79e8
]
[relnotes.txt: don't claim to work on Cygwin (which has been untested for some time). refs #1342
david-sarah@jacaranda.org**20110204204902
Ignore-this: 85ef118a48453d93fa4cddc32d65b25b
]
[relnotes.txt: forseeable -> foreseeable. refs #1342
david-sarah@jacaranda.org**20110204204116
Ignore-this: 746debc4d82f4031ebf75ab4031b3a9
]
[replace remaining .html docs with .rst docs
zooko@zooko.com**20110510191650
Ignore-this: d557d960a986d4ac8216d1677d236399
Remove install.html (long since deprecated).
Also replace some obsolete references to install.html with references to quickstart.rst.
Fix some broken internal references within docs/historical/historical_known_issues.txt.
Thanks to Ravi Pinjala and Patrick McDonald.
refs #1227
]
[docs: FTP-and-SFTP.rst: fix a minor error and update the information about which version of Twisted fixes #1297
zooko@zooko.com**20110428055232
Ignore-this: b63cfb4ebdbe32fb3b5f885255db4d39
]
[munin tahoe_files plugin: fix incorrect file count
francois@ctrlaltdel.ch**20110428055312
Ignore-this: 334ba49a0bbd93b4a7b06a25697aba34
fixes #1391
]
[corrected "k must never be smaller than N" to "k must never be greater than N"
secorp@allmydata.org**20110425010308
Ignore-this: 233129505d6c70860087f22541805eac
]
[Fix a test failure in test_package_initialization on Python 2.4.x due to exceptions being stringified differently than in later versions of Python. refs #1389
david-sarah@jacaranda.org**20110411190738
Ignore-this: 7847d26bc117c328c679f08a7baee519
]
[tests: add test for including the ImportError message and traceback entry in the summary of errors from importing dependencies. refs #1389
david-sarah@jacaranda.org**20110410155844
Ignore-this: fbecdbeb0d06a0f875fe8d4030aabafa
]
[allmydata/__init__.py: preserve the message and last traceback entry (file, line number, function, and source line) of ImportErrors in the package versions string. fixes #1389
david-sarah@jacaranda.org**20110410155705
Ignore-this: 2f87b8b327906cf8bfca9440a0904900
]
[remove unused variable detected by pyflakes
zooko@zooko.com**20110407172231
Ignore-this: 7344652d5e0720af822070d91f03daf9
]
[allmydata/__init__.py: Nicer reporting of unparseable version numbers in dependencies. fixes #1388
david-sarah@jacaranda.org**20110401202750
Ignore-this: 9c6bd599259d2405e1caadbb3e0d8c7f
]
[update FTP-and-SFTP.rst: the necessary patch is included in Twisted-10.1
Brian Warner <warner@lothar.com>**20110325232511
Ignore-this: d5307faa6900f143193bfbe14e0f01a
]
[control.py: remove all uses of s.get_serverid()
warner@lothar.com**20110227011203
Ignore-this: f80a787953bd7fa3d40e828bde00e855
]
[web: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011159
Ignore-this: a9347d9cf6436537a47edc6efde9f8be
]
[immutable/downloader/fetcher.py: remove all get_serverid() calls
warner@lothar.com**20110227011156
Ignore-this: fb5ef018ade1749348b546ec24f7f09a
]
[immutable/downloader/fetcher.py: fix diversity bug in server-response handling
warner@lothar.com**20110227011153
Ignore-this: bcd62232c9159371ae8a16ff63d22c1b

When blocks terminate (either COMPLETE or CORRUPT/DEAD/BADSEGNUM), the
_shares_from_server dict was being popped incorrectly (using shnum as the
index instead of serverid). I'm still thinking through the consequences of
this bug. It was probably benign and really hard to detect. I think it would
cause us to incorrectly believe that we're pulling too many shares from a
server, and thus prefer a different server rather than asking for a second
share from the first server. The diversity code is intended to spread out the
number of shares simultaneously being requested from each server, but with
this bug, it might be spreading out the total number of shares requested at
all, not just simultaneously. (note that SegmentFetcher is scoped to a single
segment, so the effect doesn't last very long).
]
[immutable/downloader/share.py: reduce get_serverid(), one left, update ext deps
warner@lothar.com**20110227011150
Ignore-this: d8d56dd8e7b280792b40105e13664554

test_download.py: create+check MyShare instances better, make sure they share
Server objects, now that finder.py cares
]
[immutable/downloader/finder.py: reduce use of get_serverid(), one left
warner@lothar.com**20110227011146
Ignore-this: 5785be173b491ae8a78faf5142892020
]
[immutable/offloaded.py: reduce use of get_serverid() a bit more
warner@lothar.com**20110227011142
Ignore-this: b48acc1b2ae1b311da7f3ba4ffba38f
]
[immutable/upload.py: reduce use of get_serverid()
warner@lothar.com**20110227011138
Ignore-this: ffdd7ff32bca890782119a6e9f1495f6
]
[immutable/checker.py: remove some uses of s.get_serverid(), not all
warner@lothar.com**20110227011134
Ignore-this: e480a37efa9e94e8016d826c492f626e
]
[add remaining get_* methods to storage_client.Server, NoNetworkServer, and
warner@lothar.com**20110227011132
Ignore-this: 6078279ddf42b179996a4b53bee8c421
MockIServer stubs
]
[upload.py: rearrange _make_trackers a bit, no behavior changes
warner@lothar.com**20110227011128
Ignore-this: 296d4819e2af452b107177aef6ebb40f
]
[happinessutil.py: finally rename merge_peers to merge_servers
warner@lothar.com**20110227011124
Ignore-this: c8cd381fea1dd888899cb71e4f86de6e
]
[test_upload.py: factor out FakeServerTracker
warner@lothar.com**20110227011120
Ignore-this: 6c182cba90e908221099472cc159325b
]
[test_upload.py: server-vs-tracker cleanup
warner@lothar.com**20110227011115
Ignore-this: 2915133be1a3ba456e8603885437e03
]
[happinessutil.py: server-vs-tracker cleanup
warner@lothar.com**20110227011111
Ignore-this: b856c84033562d7d718cae7cb01085a9
]
[upload.py: more tracker-vs-server cleanup
warner@lothar.com**20110227011107
Ignore-this: bb75ed2afef55e47c085b35def2de315
]
[upload.py: fix var names to avoid confusion between 'trackers' and 'servers'
warner@lothar.com**20110227011103
Ignore-this: 5d5e3415b7d2732d92f42413c25d205d
]
[refactor: s/peer/server/ in immutable/upload, happinessutil.py, test_upload
warner@lothar.com**20110227011100
Ignore-this: 7ea858755cbe5896ac212a925840fe68

No behavioral changes, just updating variable/method names and log messages.
The effects outside these three files should be minimal: some exception
messages changed (to say "server" instead of "peer"), and some internal class
names were changed. A few things still use "peer" to minimize external
changes, like UploadResults.timings["peer_selection"] and
happinessutil.merge_peers, which can be changed later.
]
[storage_client.py: clean up test_add_server/test_add_descriptor, remove .test_servers
warner@lothar.com**20110227011056
Ignore-this: efad933e78179d3d5fdcd6d1ef2b19cc
]
[test_client.py, upload.py:: remove KiB/MiB/etc constants, and other dead code
warner@lothar.com**20110227011051
Ignore-this: dc83c5794c2afc4f81e592f689c0dc2d
]
[test: increase timeout on a network test because Francois's ARM machine hit that timeout
zooko@zooko.com**20110317165909
Ignore-this: 380c345cdcbd196268ca5b65664ac85b
I'm skeptical that the test was proceeding correctly but ran out of time. It seems more likely that it had gotten hung. But if we raise the timeout to an even more extravagant number then we can be even more certain that the test was never going to finish.
]
[docs/configuration.rst: add a "Frontend Configuration" section
Brian Warner <warner@lothar.com>**20110222014323
Ignore-this: 657018aa501fe4f0efef9851628444ca

this points to docs/frontends/*.rst, which were previously underlinked
]
[web/filenode.py: avoid calling req.finish() on closed HTTP connections. Closes #1366
"Brian Warner <warner@lothar.com>"**20110221061544
Ignore-this: 799d4de19933f2309b3c0c19a63bb888
]
[Add unit tests for cross_check_pkg_resources_versus_import, and a regression test for ref #1355. This requires a little refactoring to make it testable.
david-sarah@jacaranda.org**20110221015817
Ignore-this: 51d181698f8c20d3aca58b057e9c475a
]
[allmydata/__init__.py: .name was used in place of the correct .__name__ when printing an exception. Also, robustify string formatting by using %r instead of %s in some places. fixes #1355.
david-sarah@jacaranda.org**20110221020125
Ignore-this: b0744ed58f161bf188e037bad077fc48
]
[Refactor StorageFarmBroker handling of servers
Brian Warner <warner@lothar.com>**20110221015804
Ignore-this: 842144ed92f5717699b8f580eab32a51

Pass around IServer instance instead of (peerid, rref) tuple. Replace
"descriptor" with "server". Other replacements:

get_all_servers -> get_connected_servers/get_known_servers
get_servers_for_index -> get_servers_for_psi (now returns IServers)

This change still needs to be pushed further down: lots of code is now
getting the IServer and then distributing (peerid, rref) internally.
Instead, it ought to distribute the IServer internally and delay
extracting a serverid or rref until the last moment.

no_network.py was updated to retain parallelism.
]
[TAG allmydata-tahoe-1.8.2
warner@lothar.com**20110131020101]
Patch bundle hash:
a52fe37e2ebd452af957b1f9376edfb6a68d8a76