#999 closed enhancement (fixed)
support multiple storage backends, including amazon s3
Reported by: | zooko | Owned by: | davidsarah |
---|---|---|---|
Priority: | major | Milestone: | eventually |
Component: | code-storage | Version: | n/a |
Keywords: | s3-backend storage | Cc: | wilcoxjg@…, mk.fraggod@…, amontero@… |
Launchpad Bug: |
Description (last modified by amontero)
The focus of this ticket is (now) adapting the existing codebase to use multiple backends, rather than supporting any particular backend.

We already have one backend -- the filesystem backend -- which I think should be a plugin in the same sense that the others will be plugins (i.e. other code in tahoe-lafs can interact with a filesystem plugin without caring very much about how or where it is storing its files -- otherwise it doesn't seem very extensible). If you accept this, then we'd need to figure out what a backend plugin should look like.

There is backend-independent logic in the current server implementation that we wouldn't want to duplicate in every other backend implementation. To address this, we could start by refactoring the existing code that reads or writes shares on disk to use a local backend implementation supporting an IStorageProvider interface (probably a fairly simplistic filesystem-ish API). (This involves changing the code in src/allmydata/storage/server.py that reads from local disk in its _iter_share_files() method, and also the code in storage/shares.py, storage/immutable.py, and storage/mutable.py that writes shares to local disk.) At this point all the existing tests should still pass, since we haven't actually changed the behaviour.

Then we have to add the ability to configure new storage providers. This involves figuring out how to map user configuration choices to what actually happens when a node is started, and how the credentials needed to log into a particular storage backend should be specified. The skeletal RIStorageServer would instantiate its IStorageProvider based on what the user configured, and use it to write/read data, get statistics, and so on. Naturally, all of this would require a decent amount of documentation and testing, too.

Once we have all of this worked out, the rest of this project (probably to be handled in other tickets) would be identifying what other backends we'd want in tahoe-lafs, then documenting, implementing, and testing them. We already have Amazon S3 and Rackspace as targets -- users of tahoe-lafs will probably have their own suggestions, and more backends will come up with more research.
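For illustration, a minimal sketch of what such an IStorageProvider interface could look like, assuming the zope.interface style used elsewhere in tahoe-lafs; IStorageProvider is named in this ticket, but the methods below are invented to illustrate a "simplistic filesystem-ish API":

```python
# Hypothetical sketch only: the method names are illustrative, not part of
# any patch attached to this ticket.
from zope.interface import Interface

class IStorageProvider(Interface):
    """A simplistic filesystem-ish API implemented by each storage backend."""

    def get_shares(storage_index):
        """Return an iterable of the stored shares for this storage index."""

    def make_share(storage_index, shnum, allocated_size):
        """Allocate space for a new share and return a writer for it."""

    def get_available_space():
        """Return the number of bytes available, or None if not known."""
```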
Attachments (68)
Change History (220)
comment:1 Changed at 2010-03-16T16:03:35Z by zooko
comment:2 Changed at 2010-03-24T04:52:20Z by kevan
(This is an email I sent to zooko a while ago with my thoughts on how this should be implemented.)
First, I'll summarize, to make sure that I understand what you had in mind. Please correct me if you disagree with any of this.
The "redundant array of inexpensive clouds" idea means extending the current storage server in tahoe-lafs to support storage backends that aren't what we have now (writing shares to the local filesystem). Well actually, the redundant array of inexpensive clouds idea means doing that, then implementing plugins for popular existing cloud storage services -- Amazon S3 and Rackspace are two that you've mentioned, but there are probably others (if we end up going through with this, I'll probably email tahoe-dev so I can get an idea of what else is out there/what else people want to see supported, in addition to my own research).
The benefit (or at least the benefit that seems clear to me from your explanation -- perhaps there are others that are more obvious if you run a big tahoe-lafs installation like allmydata.com, or if you're more familiar with tahoe-lafs than I am) is decoupling the ability of a tahoe-lafs node to store files from its physical filesystem. So if, say, allmydata.com were to start running tahoe-lafs nodes using S3 as a backend, and their grid was full, they could create more space on the grid by buying more S3 buckets, rather than upgrading physical servers or adding new servers (I've never used S3, but I would bet that it is easier to buy more S3 buckets than to upgrade servers). Or, if you wanted to create a grid without purchasing a bunch of servers, you could run a bunch of nodes on one machine (I was thinking VMware images, but then I started wondering whether it was even necessary to have that level of separation between tahoe-lafs nodes -- is it? but that's not really on topic), each mapping to a different S3 bucket or buckets.
Am I missing anything (aside from more examples)?
It seems like -- at least for S3 -- you could already sort of do this. There are projects like s3fs, which provide a FUSE interface to an S3 bucket (though the last release of it is more than a year old; it seems like there should be other projects like that, too) (edit: this is actually wrong -- I just hadn't found the Google Code project, which is at http://code.google.com/p/s3fs/). Using that, you could mount your S3 bucket somewhere in the filesystem of your server, then kajigger the basedir of the tahoe-lafs node so that it rests in that area of the filesystem, or otherwise configure the tahoe-lafs node to save files there. This requires more work than what we'd eventually want with "redundant array of inexpensive clouds", of course, and (depending on how well FUSE or other S3 interfaces play) may only work on tahoe-lafs nodes running one Unix or another, but if an operator got it working, it seems like they'd have most of the benefit outlined above without any further work on my/our part.
(not that I mind working on this, of course, but I figured it would be worthwhile to mention that)
In any case, I think implementing this would come down to two basic parts.
The first part would be adapting the existing codebase to use multiple backends.
We already have one backend -- the filesystem backend -- which I think should be a plugin in the same sense that the others will be plugins (i.e. other code in tahoe-lafs can interact with a filesystem plugin without caring very much about how or where it is storing its files -- otherwise it doesn't seem very extensible). If you accept this, then we'd need to figure out what a backend plugin should look like.

Maybe we can make each plugin implement RIStorageServer, and leave it at that. Then we might not need to do very much work on the existing server to make it work with the rest of the (new) system. However, it's possible that there is backend-independent logic in the current server implementation that we wouldn't want to duplicate in every other backend implementation. To address this, we could instead make a sort of backend-agnostic storage server that implements RIStorageServer, then make another interface for backends to implement, say IStorageProvider. The skeletal RIStorageServer would instantiate its IStorageProvider based on what the user configured, and use it to write/read data, get statistics, and so on. Then IStorageProvider would be a fairly simplistic filesystem-ish API.
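For illustration, a hypothetical sketch of the backend-agnostic server described here; remote_get_buckets follows the RIStorageServer naming convention, and the other names are invented:

```python
# Hypothetical sketch: backend-independent logic lives in the server, which
# delegates raw share access to whatever IStorageProvider it was given.
class StorageServer(object):  # would declare that it implements RIStorageServer
    def __init__(self, provider):
        self.provider = provider  # any IStorageProvider implementation

    def remote_get_buckets(self, storage_index):
        # stats, leases, accounting, etc. would live at this layer; finding
        # and reading shares is delegated to the provider
        return dict((share.shnum, share)
                    for share in self.provider.get_shares(storage_index))
```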
The other part of preparation would be figuring out how to map user configuration choices to what actually happens when a node is started. Also, we'd want to figure out how (if?) we need to do anything special with the credentials that users might need to log in to their storage backend. I'll have a better idea of how I'd implement this once I look at the way it works for other things that users configure.
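One way the configuration mapping might look at node startup -- a hypothetical sketch, with all option names, helpers, and classes invented for illustration:

```python
# Hypothetical sketch: choose and construct a storage provider from the
# node's configuration. None of these names come from an actual patch.
def create_storage_provider(config, storedir):
    backend = config.get_config("storage", "backend", default="disk")
    if backend == "disk":
        return DiskProvider(storedir)
    elif backend == "s3":
        # credentials would live in a separate private file, not in tahoe.cfg
        creds = read_s3_credentials(config.get_config("storage", "s3.credentials_file"))
        return S3Provider(config.get_config("storage", "s3.url"), creds)
    raise ValueError("unknown storage backend: %r" % (backend,))
```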
Naturally, all of this would require a decent amount of documentation and testing, too.
(I'm open to other ideas, of course -- these are just what came to my mind)
Once we have all of this worked out, the rest of this project would be identifying what other backends we'd want in tahoe-lafs, then documenting, implementing, and testing those. We already have Amazon S3 and Rackspace as targets -- users of tahoe-lafs will probably have their own suggestions, and more backends will come up with more research.
comment:3 Changed at 2010-03-31T16:48:51Z by davidsarah
- Description modified (diff)
- Keywords backend s3 added
- Summary changed from amazon s3 backend to support multiple storage backends, including amazon s3
Generalizing this to include support for multiple backends (since I don't think we want to do it in a way that would only support S3 and local disk).
comment:5 Changed at 2010-03-31T17:17:57Z by davidsarah
- Description modified (diff)
Update description to reflect kevan's suggested approach.
comment:6 Changed at 2011-02-23T18:31:25Z by zooko
- Owner set to zooko
- Status changed from new to assigned
Changed at 2011-03-22T05:34:38Z by arch_o_median
Changed at 2011-03-25T20:41:34Z by arch_o_median
comment:7 Changed at 2011-04-06T20:41:29Z by zooko
Here is an incomplete patch for others (arc) to look at or improve.
Changed at 2011-04-06T20:41:41Z by zooko
Changed at 2011-04-06T21:00:11Z by zooko
comment:8 Changed at 2011-06-22T00:06:25Z by arch_o_median
- Owner changed from zooko to arch_o_median
- Status changed from assigned to new
Changed at 2011-06-24T20:32:00Z by arch_o_median
Implements tests of read and write for the nullbackend
Changed at 2011-06-28T20:24:26Z by arch_o_median
Changed at 2011-07-06T19:08:50Z by arch_o_median
backing myself up, some comments cleaned in interfaces, new tests in test_backends
Changed at 2011-07-06T20:07:36Z by arch_o_median
tiny change, now tests that allocated returns correct value
Changed at 2011-07-06T22:31:09Z by arch_o_median
The null backend test is useful for testing what happens when there's no effective limit on the backend
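For context, a hypothetical sketch of what a "null" backend means here: writes are discarded and no available-space limit is reported, so server logic can be exercised without any effective storage limit. All names are invented:

```python
# Hypothetical sketch of a null backend: accepts and discards everything.
class NullProvider(object):
    def get_available_space(self):
        return None  # None meaning "no effective limit"

    def make_share(self, storage_index, shnum, allocated_size):
        return NullShareWriter()

class NullShareWriter(object):
    def write(self, offset, data):
        pass  # discard the data

    def close(self):
        pass
```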
Changed at 2011-07-10T19:55:45Z by arch_o_median
Renamed all storage_index (word tokens) to storageindex in storage/server.py
Changed at 2011-07-12T02:52:35Z by arch_o_median
Changed at 2011-07-12T06:11:10Z by arch_o_median
Changed at 2011-07-13T06:06:01Z by arch_o_median
comment:9 Changed at 2011-07-13T06:07:08Z by arch_o_median
OK, jacp15 contains a test that (almost) completely covers remote_allocate_buckets with the new backend. We should review this patch's contents before writing more tests.
comment:10 Changed at 2011-07-13T15:45:17Z by davidsarah
- Keywords review-needed added
- Milestone changed from undecided to soon
- Owner changed from arch_o_median to davidsarah
- Status changed from new to assigned
I'll review this.
comment:11 Changed at 2011-07-13T18:19:39Z by arch_o_median
- Cc wilcoxjg@… added
- Keywords review-needed removed
- Owner changed from davidsarah to arch_o_median
- Status changed from assigned to new
Changed at 2011-07-14T00:31:09Z by zooko
Changed at 2011-07-14T21:24:15Z by zooko
Changed at 2011-07-15T19:16:16Z by zooko
Changed at 2011-07-20T06:10:25Z by zooko
comment:12 follow-up: ↓ 13 Changed at 2011-07-20T16:55:00Z by davidsarah
Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:3? Currently, storage directories with Unicode paths are intended to be supported on Windows.
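To make the concern concrete -- a minimal illustration, not taken from any patch here: FilePath does not coerce its path, so whether derived paths stay Unicode depends on what the caller passed in.

```python
# Minimal illustration of the Unicode-path question; the paths are made up.
from twisted.python.filepath import FilePath

storedir = FilePath(u"C:\\tahoe\\storage")  # a Unicode path, as on Windows
sharedir = storedir.child(u"shares")        # stays Unicode
print(sharedir.path)
```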
comment:13 in reply to: ↑ 12 ; follow-up: ↓ 14 Changed at 2011-07-20T20:17:23Z by arch_o_median
Replying to davidsarah:
Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:3? Currently, storage directories with Unicode paths are intended to be supported on Windows.
OK... I guess that I should look into the twisted project's testing framework to determine what they know about this issue...
I'm currently snooping for leads here: http://twistedmatrix.com/trac/ticket/4736
comment:14 in reply to: ↑ 13 ; follow-up: ↓ 15 Changed at 2011-07-20T20:23:04Z by arch_o_median
Replying to arch_o_median:
Replying to davidsarah:
Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:3? Currently, storage directories with Unicode paths are intended to be supported on Windows.
OK... I guess that I should look into the twisted project's testing framework to determine what they know about this issue...
I'm currently snooping for leads here: http://twistedmatrix.com/trac/ticket/4736
So it seems like there may be (but probably there is not) an issue regarding Windows path representations to users versus to "OS" APIs. Snooping here:
comment:15 in reply to: ↑ 14 Changed at 2011-07-20T20:27:29Z by arch_o_median
Replying to arch_o_median:
Replying to arch_o_median:
Replying to davidsarah:
Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:3? Currently, storage directories with Unicode paths are intended to be supported on Windows.
OK... I guess that I should look into the twisted project's testing framework to determine what they know about this issue...
I'm currently snooping for leads here: http://twistedmatrix.com/trac/ticket/4736
So it seems like there may be (but probably there is not) an issue regarding Windows path representations to users versus to "OS" APIs. Snooping here:
(Is replying to myself bad form?) OK, so I can't tell how 2366 is (or is not) resolved. Should I get a Twisted login so I can ask about it on that ticket? I await direction.
comment:16 Changed at 2011-07-21T19:52:44Z by zooko
I did some investigation about non-ASCII filename handling in filepath and in Tahoe-LAFS and posted my notes on Twisted #5203.
Changed at 2011-07-22T07:03:25Z by arch_o_median
Changed at 2011-07-22T20:32:40Z by arch_o_median
Changed at 2011-07-23T03:19:05Z by arch_o_median
comment:17 Changed at 2011-07-25T20:39:34Z by arch_o_median
After some chatting with zooko and warner in IRC, I've tentatively decided to use composition to inform the base Crawler object about the backend it is associated with. I'm not sure, but I think passing the whole <backend>Core object might be appropriate.
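A hypothetical sketch of that composition -- the generic crawler is handed its backend rather than assuming local disk; the method names below are invented:

```python
# Hypothetical sketch: the Crawler learns about its backend by composition.
class ShareCrawler(object):
    def __init__(self, backend, statefp):
        self.backend = backend  # e.g. a <backend>Core object, as discussed
        self.statefp = statefp  # FilePath where crawler state is persisted

    def process_prefix(self, prefix):
        # share enumeration is delegated to the backend instead of os.listdir
        for storage_index in self.backend.get_shares_in_prefix(prefix):
            self.process_shareset(storage_index)

    def process_shareset(self, storage_index):
        pass  # subclass hook: examine the shares at this storage index
```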
comment:18 Changed at 2011-07-26T04:10:47Z by Zancas
- Owner changed from arch_o_median to Zancas
Changed at 2011-07-27T08:05:16Z by Zancas
comment:19 Changed at 2011-07-27T23:05:55Z by Zancas
My current test suite contains several tests that Zooko calls "transparent box". I need to decide whether they are appropriate:
1. remote_allocate_buckets populates incoming with shnum(s)
2. an attempt to allocate the same share (same ss) does _not_ create a new bucketwriter
3. test allocated size
4. together remote_write, remote_close, get_shares, and read_share_data behave
Since I am altering the location (from server to backend/core) of some of this functionality, and since I am altering the mechanism by which the filesystem is manipulated (to FilePath)... I think all of these tests are necessary.
It would be nice if the tests were designed to ensure the proper behavior independent of the underlying storage medium... but I think I need to assume a filesystem-like interface for at least (1,2, and 4), probably (3) as well...
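A hypothetical sketch of what a transparent-box test for item 3 might look like; remote_allocate_buckets is the real server method under discussion, but NullBackend, FakeCanary, and the constructor wiring are stand-ins:

```python
# Hypothetical sketch of a transparent-box test for allocated size (item 3).
def test_allocated_size(self):
    server = StorageServer(NullBackend(), "test-node-id")  # invented wiring
    already_got, writers = server.remote_allocate_buckets(
        "si1", "renew-secret", "cancel-secret",
        sharenums=set([0]), allocated_size=75, canary=FakeCanary())
    self.failUnlessEqual(server.allocated_size(), 75)
```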
Changed at 2011-07-28T07:23:47Z by Zancas
comment:20 Changed at 2011-07-29T02:31:31Z by Zancas
I'm confused about leases. When I look at the constructor for an immutable share file in a 'pristine' repository (or in my latest version, for that matter), I see that in the "create" clause of the constructor a Python string representation of a big-endian '0' is used for the number of leases.
http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/src/allmydata/storage/immutable.py#L63
This is confusing because in my test vector data (created some time ago) I have '1' as the initial number of leases. My guess is that I somehow got a bum test-vector value, but it'd be nice to hear from an architect that immutable share files really should start life with '0' leases!
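For reference, a minimal illustration of the detail being described (the header layout is assumed, not copied from immutable.py):

```python
# The "create" clause writes a big-endian 32-bit zero as the lease count.
import struct

num_leases = 0
lease_count_field = struct.pack(">L", num_leases)  # '\x00\x00\x00\x00'
```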
Changed at 2011-07-29T04:39:33Z by Zancas
Patch passes allmydata.test.test_backends.TestServerAndFSBackend.test_write_and_read_share
comment:21 Changed at 2011-07-29T14:24:35Z by zooko
Cool! Will review.
comment:22 Changed at 2011-07-30T04:23:07Z by Zancas
- Version changed from 1.6.0 to n/a
comment:23 Changed at 2011-08-11T04:40:10Z by Zancas
- Owner changed from Zancas to zancas
comment:24 Changed at 2011-08-29T16:45:15Z by zancas
Ticket 1465 more succinctly organizes the same code contained in these patches.
Changed at 2011-09-01T03:33:27Z by zooko
comment:25 Changed at 2011-09-01T03:36:21Z by zooko
I added attachment:backends-configuration-docs.darcs.patch which contains documentation of the configuration options for the backends feature. I like Brian Warner's approach to development where he writes the docs first, even before the tests. (He writes tests second.) I encourage anyone working on this ticket to read (and possibly improve/fix/extend) these docs!
comment:26 follow-up: ↓ 28 Changed at 2011-09-02T01:44:21Z by davidsarah
Review of backends-configuration-docs.darcs.patch:
s3.rst:
- Add a short introduction saying what S3 is and why anyone might want to use it.
- It's a bit inconsistent that the value of the backend option is uppercase "S3", but the other option names are lowercase "s3_*". Also, I would make it "s3.*", since that's similar to the use of "." to group other related options.
- Should the s3_url option include the scheme name, i.e. defaulting to http://s3.amazonaws.com ? We might want to support https in future (although there would be more to configure if we check certificates).
- In the description of s3_max_space, copy the paragraph starting "This string contains a number" from disk.rst rather than referring to it.
- "enabling ``s3_max_space`` causes an extra S3 usage query to be sent for each share upload, causing the upload process to run slightly slower and incur more S3 request charges."
Each space query could be amortized over several uploads, using an estimate of the used space in-between. (That wouldn't be accurate if there are several storage servers accessing the same bucket, but it would be accurate enough if the maximum number of such servers is limited.) Even if we don't implement that right away, I'm not sure that this performance issue needs to go in s3.rst.
disk.rst:
- "Storing Shares in local filesystem" -> "Storing Shares on a Local Filesystem"
- use backend = disk, not backend = local filesystem, and say that it is the default.
configuration.rst:
- "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."
- "including how to limit the space that will be consumed" -> "including how to reserve a minimum amount of free space"
comment:27 Changed at 2011-09-02T04:47:04Z by zooko
I closed the subsidiary ticket #1465 as "fixed". The current patch set for this ticket as of this writing is attachment:20110829passespyflakes.darcs.patch (from that ticket) plus attachment:backends-configuration-docs.darcs.patch.
Changed at 2011-09-15T02:50:08Z by davidsarah
This is just a "flat" recording of my refactoring of pluggable backends. I'll do a better recording tomorrow, and explain the refactoring.
Changed at 2011-09-17T02:13:03Z by davidsarah
This is still just a flat recording (a lot more changes to tests were needed than I anticipated).
Changed at 2011-09-19T20:33:29Z by davidsarah
Bleeding edge pluggable backends code from David-Sarah. refs #999
Changed at 2011-09-19T23:38:51Z by davidsarah
Rerecording of pluggable-backends-davidsarah-v3.darcs.patch that should fix the darcs performance problem when applied to trunk.
Changed at 2011-09-20T03:42:59Z by davidsarah
Work-in-progress, includes fix to bug involving BucketWriter. refs #999
comment:28 in reply to: ↑ 26 ; follow-ups: ↓ 29 ↓ 31 Changed at 2011-09-20T17:04:34Z by zancas
Replying to davidsarah:
Review of backends-configuration-docs.darcs.patch:
s3.rst:
- Add a short introduction saying what S3 is and why anyone might want to use it.
- It's a bit inconsistent that the value of the backend option is uppercase "S3", but the other option names are lowercase "s3_*". Also, I would make it "s3.*", since that's similar to the use of "." to group other related options.
- Should the s3_url option include the scheme name, i.e. defaulting to http://s3.amazonaws.com ? We might want to support https in future (although there would be more to configure if we check certificates).
- In the description of s3_max_space, copy the paragraph starting "This string contains a number" from disk.rst rather than referring to it.
- "enabling ``s3_max_space`` causes an extra S3 usage query to be sent for each share upload, causing the upload process to run slightly slower and incur more S3 request charges."
Each space query could be amortized over several uploads, using an estimate of the used space in-between. (That wouldn't be accurate if there are several storage servers accessing the same bucket, but it would be accurate enough if the maximum number of such servers is limited.) Even if we don't implement that right away, I'm not sure that this performance issue needs to go in s3.rst.
disk.rst:
- "Storing Shares in local filesystem" -> "Storing Shares on a Local Filesystem"
- use backend = disk, not backend = local filesystem, and say that it is the default.
configuration.rst:
- "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."
- "including how to limit the space that will be consumed" -> "including how to reserve a minimum amount of free space"
- currently clients _are_ aware of backend type.
Changed at 2011-09-20T17:26:01Z by davidsarah
docs: document the configuration options for the new backends scheme. This takes into account ticket:999#comment:26 and is rerecorded to avoid darcs context problems.
comment:29 in reply to: ↑ 28 Changed at 2011-09-20T17:44:31Z by zooko
Replying to zancas:
- currently clients _are_ aware of backend type.
They are? I don't think so. How would they find out about the backend type?
comment:30 Changed at 2011-09-20T17:51:49Z by zooko
attachment:backends-configuration-docs-v2.darcs.patch looks good to me. One thing I would change is to remove the "Issues" section about the costs of querying S3 objects and the effects on our crawler/lease-renewal scheme. I'm not sure that this branch will eventually land without a lease-checker implemented, so that part is making a statement that might be wrong. Also I'm not really sure the costs of querying S3 objects are worth mentioning. The current S3 pricing has 10,000 GET requests for $0.01. Let's remove that documentation for now and add in documentation when we understand better what the actual limitations or costs will be.
comment:31 in reply to: ↑ 28 ; follow-up: ↓ 32 Changed at 2011-09-20T19:53:04Z by davidsarah
Replying to zancas:
Replying to davidsarah:
configuration.rst:
- "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."
- currently clients _are_ aware of backend type.
The doc meant that client nodes need not be aware of backend type. Although the current hack to wire up a StorageServer to a backend in pluggable-backends-davidsarah-v5.darcs.patch is in allmydata/client.py, that code isn't actually run by clients, it is run only when setting up a storage server.
comment:32 in reply to: ↑ 31 Changed at 2011-09-20T19:57:02Z by davidsarah
Replying to davidsarah:
Replying to zancas:
Replying to davidsarah:
configuration.rst:
- "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."
- currently clients _are_ aware of backend type.
The doc meant that client nodes need not be aware of backend type.
Ugh, I should never use the term "node" :-/. I meant the code that acts as a storage protocol client.
Changed at 2011-09-21T03:21:58Z by davidsarah
v6. Tests are looking in much better shape now -- still some problems with path vs FilePath and other stale assumptions in the test framework, but the disk backend basically works now.
Changed at 2011-09-21T15:54:50Z by davidsarah
Add --trace-exceptions option to trace raised exceptions on stderr. refs #999
comment:33 Changed at 2011-09-22T15:40:59Z by davidsarah
Josh wrote, re: pluggable-backends-davidsarah-v8.darcs.patch:
I think the test_crawlers failure stems from ShareCrawler being passed a FilePath object in its constructor where it expects a string literal to use in an old-style call to open (specifically in its "load_state" method). I'm not certain yet, but I think I'll stop here for the night.
No, load_state uses pickle.loads(self.statefp.getContent()) which is correct. The state handling is a red herring for the test_crawlers failure, I think.
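The pattern in question, in isolation (getContent/setContent are real FilePath methods; the state dict and filename are made up):

```python
# Crawler state persisted by pickling to a FilePath, as load_state does.
import pickle
from twisted.python.filepath import FilePath

statefp = FilePath("lease_crawler.state")
statefp.setContent(pickle.dumps({"last-cycle-finished": None}))
state = pickle.loads(statefp.getContent())
```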
comment:34 follow-up: ↓ 35 Changed at 2011-09-22T15:48:02Z by davidsarah
In v9, allmydata.test.test_storage.LeaseCrawler.test_basic is hanging due to an infinite recursion in pickle.py. Use
bin/tahoe --trace-exceptions debug trial --rterror allmydata.test.test_storage.LeaseCrawler.test_basic
(with trace-exceptions-option.darcs.patch applied) to see the recursion. I'm on the case...
comment:35 in reply to: ↑ 34 Changed at 2011-09-22T16:09:36Z by davidsarah
Replying to davidsarah:
In v9, allmydata.test.test_storage.LeaseCrawler.test_basic is hanging due to an infinite recursion in pickle.py.
That was another red herring; there was an innocuous exception in pickle.py that was happening in each iteration of whatever other code is livelocking.
Changed at 2011-09-22T18:38:53Z by davidsarah
Fix most of the crawler tests. Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
Changed at 2011-09-23T04:20:00Z by davidsarah
Includes a fix for iterating over a dict while removing entries from it in mutable/publish.py, some cosmetic changes, and a start on the S3 backend.
Changed at 2011-09-27T07:47:54Z by davidsarah
Includes fixes to test_status_bad_disk_stats and test_no_st_blocks in test_storage.py, and more work on the S3 backend.
Changed at 2011-09-27T07:48:49Z by davidsarah
Work in progress for asyncifying the backend interface (necessary to call txaws methods that return Deferreds). This is incomplete so lots of tests fail. refs #999
comment:36 Changed at 2011-09-28T00:09:58Z by davidsarah
In v13, test_storage.LeaseCrawler.test_share_corruption fails. However this is a test that is known to have race conditions -- it used to fail when logging was enabled (#923), and we tried to fix that in 3b1b0147a867759c, but in a way that in retrospect didn't really address the cause of the race condition. The problem is that it's trying to check for a particular instantaneous state of the lease crawler while it is running, which is inherently race-prone.
I suggest we not worry about this test for the current LAE iteration.
Changed at 2011-09-28T01:45:53Z by davidsarah
This does not include the asyncification changes from v14, but does include a couple of fixes for failures in test_system.
comment:37 Changed at 2011-09-28T09:24:27Z by zancas
Huh... weird, I can't apply v15...
0 /home/arc/sandbox/working 550 $ darcs apply pluggable-backends-davidsarah-v15.darcs.patch
darcs failed: Bad patch bundle!
2 /home/arc/sandbox/working 551 $
Changed at 2011-09-29T04:26:51Z by davidsarah
Differences, just in the S3 backend, between v13a and v16.
Changed at 2011-09-29T05:25:30Z by zooko
Changed at 2011-09-29T05:53:00Z by zooko
Changed at 2011-09-29T06:14:14Z by zooko
Changed at 2011-09-29T08:24:10Z by davidsarah
Completes the splitting of IStoredShare into IShareForReading and IShareForWriting. Does not include configuration changes.
Changed at 2011-09-29T18:33:41Z by davidsarah
Includes backend configuration (rerecorded from zooko's patch), and other minor fixes.
Changed at 2011-09-29T20:29:16Z by zooko
Changed at 2011-09-29T21:27:35Z by davidsarah
Include missing files for real and mock S3 backends. Also some fixes to tests, scripts/debug.py, and config parsing.
comment:38 Changed at 2011-09-29T23:51:43Z by david-sarah@…
comment:39 Changed at 2011-09-29T23:51:44Z by david-sarah@…
comment:40 Changed at 2011-09-29T23:51:45Z by david-sarah@…
comment:41 Changed at 2011-09-29T23:58:30Z by david-sarah@…
comment:42 Changed at 2011-09-30T00:15:11Z by david-sarah@…
comment:43 Changed at 2011-09-30T00:15:11Z by david-sarah@…
comment:44 Changed at 2011-09-30T02:19:02Z by david-sarah@…
Changed at 2011-09-30T06:05:43Z by zooko
comment:45 Changed at 2011-09-30T21:28:44Z by david-sarah@…
comment:46 Changed at 2011-10-04T01:12:02Z by david-sarah@…
comment:47 Changed at 2011-10-04T01:12:05Z by david-sarah@…
comment:48 Changed at 2011-10-04T01:12:05Z by david-sarah@…
comment:49 Changed at 2011-10-07T15:44:01Z by davidsarah
Re: pluggable-backends-davidsarah-v20.darcs.patch, I made a mistake in recording it that will cause a conflict with the ticket999-S3-backend branch. I'll attach a fixed version.
comment:50 Changed at 2011-10-07T19:39:49Z by david-sarah@…
comment:51 Changed at 2011-10-07T19:39:50Z by david-sarah@…
comment:52 Changed at 2011-10-07T19:39:51Z by david-sarah@…
comment:53 Changed at 2011-10-07T19:39:52Z by david-sarah@…
comment:54 Changed at 2011-10-07T19:39:53Z by david-sarah@…
comment:55 Changed at 2011-10-07T19:39:54Z by david-sarah@…
comment:56 Changed at 2011-10-07T19:39:55Z by david-sarah@…
comment:57 Changed at 2011-10-07T19:39:56Z by david-sarah@…
comment:58 Changed at 2011-10-07T19:39:57Z by david-sarah@…
comment:59 Changed at 2011-10-07T19:39:58Z by david-sarah@…
comment:60 Changed at 2011-10-07T19:39:59Z by david-sarah@…
comment:61 Changed at 2011-10-07T19:39:59Z by david-sarah@…
comment:62 Changed at 2011-10-07T19:59:24Z by david-sarah@…
comment:63 Changed at 2011-10-07T20:02:16Z by davidsarah
Please ignore pluggable-backends-davidsarah-v20.darcs.patch; the equivalent of that patch is on the ticket999-S3-backend branch now.
comment:64 Changed at 2011-10-09T23:25:13Z by david-sarah@…
comment:65 Changed at 2011-10-10T00:22:44Z by davidsarah
[5415/ticket999-S3-backend] fixes all but one of the tests in test_mutable.py.
comment:66 Changed at 2011-10-10T18:15:16Z by david-sarah@…
comment:67 Changed at 2011-10-10T19:19:47Z by david-sarah@…
comment:68 Changed at 2011-10-10T20:07:49Z by david-sarah@…
comment:69 Changed at 2011-10-10T20:10:57Z by david-sarah@…
comment:70 Changed at 2011-10-10T20:48:02Z by david-sarah@…
comment:71 Changed at 2011-10-10T20:48:02Z by david-sarah@…
comment:72 Changed at 2011-10-10T20:48:03Z by david-sarah@…
comment:73 Changed at 2011-10-10T23:17:29Z by david-sarah@…
comment:74 Changed at 2011-10-10T23:17:30Z by david-sarah@…
comment:75 Changed at 2011-10-10T23:17:31Z by david-sarah@…
comment:76 Changed at 2011-10-11T00:32:41Z by david-sarah@…
comment:77 Changed at 2011-10-11T04:44:30Z by david-sarah@…
comment:78 Changed at 2011-10-11T04:54:21Z by david-sarah@…
comment:79 Changed at 2011-10-11T04:59:26Z by david-sarah@…
comment:80 Changed at 2011-10-11T05:16:34Z by david-sarah@…
comment:81 Changed at 2011-10-11T05:20:45Z by david-sarah@…
comment:82 Changed at 2011-10-12T21:47:44Z by david-sarah@…
comment:83 Changed at 2011-10-12T21:47:47Z by david-sarah@…
comment:84 Changed at 2011-10-12T21:47:49Z by david-sarah@…
comment:85 Changed at 2011-10-12T21:47:50Z by david-sarah@…
comment:86 Changed at 2011-10-12T21:47:52Z by david-sarah@…
comment:87 Changed at 2011-10-12T21:47:53Z by david-sarah@…
comment:88 Changed at 2011-10-12T21:47:54Z by david-sarah@…
comment:89 Changed at 2011-10-12T21:47:56Z by david-sarah@…
comment:90 Changed at 2011-10-12T21:47:57Z by david-sarah@…
comment:91 Changed at 2011-10-12T21:47:58Z by david-sarah@…
comment:92 Changed at 2011-10-12T23:43:38Z by david-sarah@…
comment:93 Changed at 2011-10-12T23:43:40Z by david-sarah@…
comment:94 Changed at 2011-10-12T23:43:41Z by david-sarah@…
comment:95 Changed at 2011-10-12T23:43:42Z by david-sarah@…
comment:96 Changed at 2011-10-12T23:43:42Z by david-sarah@…
comment:97 Changed at 2011-10-13T03:53:24Z by david-sarah@…
comment:98 Changed at 2011-10-13T03:53:25Z by david-sarah@…
comment:99 Changed at 2011-10-13T05:08:45Z by david-sarah@…
comment:100 Changed at 2011-10-13T22:30:09Z by david-sarah@…
comment:101 Changed at 2011-10-13T23:30:32Z by david-sarah@…
comment:102 Changed at 2011-10-13T23:30:33Z by david-sarah@…
comment:103 Changed at 2011-10-13T23:30:33Z by david-sarah@…
comment:104 Changed at 2011-10-13T23:37:20Z by david-sarah@…
comment:105 Changed at 2011-10-13T23:44:17Z by david-sarah@…
comment:106 Changed at 2011-10-14T03:01:00Z by david-sarah@…
comment:107 Changed at 2011-10-14T06:21:15Z by david-sarah@…
comment:108 Changed at 2011-10-16T01:43:11Z by david-sarah@…
comment:109 Changed at 2011-10-16T03:53:15Z by david-sarah@…
comment:110 Changed at 2011-10-16T03:53:16Z by david-sarah@…
comment:111 Changed at 2011-10-16T04:45:11Z by david-sarah@…
comment:112 Changed at 2011-10-16T04:45:12Z by david-sarah@…
comment:113 Changed at 2011-10-18T06:47:04Z by david-sarah@…
comment:114 Changed at 2011-10-18T06:47:08Z by david-sarah@…
comment:115 Changed at 2011-10-18T06:47:09Z by david-sarah@…
comment:116 Changed at 2011-10-18T06:47:10Z by david-sarah@…
comment:117 Changed at 2011-10-18T17:30:39Z by davidsarah
In [5479/ticket999-S3-backend], there's also a fix to a preexisting bug in test_storage.LeaseCrawler.test_unpredictable_future, where it was checking the s["estimated-remaining-cycle"]["space-recovered"] key twice, rather than both that key and s["estimated-current-cycle"]["space-recovered"] as intended.
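An illustration of the fix being described (the surrounding test code is hypothetical):

```python
# Before the fix, the same key was checked twice; the intent was to check
# space-recovered for both the remaining and current cycle estimates.
def check_space_recovered(s):
    for cycle in ("estimated-remaining-cycle", "estimated-current-cycle"):
        recovered = s[cycle]["space-recovered"]
        assert recovered is not None
```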
comment:118 Changed at 2011-10-18T18:35:12Z by david-sarah@…
comment:119 Changed at 2011-10-18T23:40:47Z by david-sarah@…
comment:120 Changed at 2011-10-19T06:19:29Z by david-sarah@…
comment:121 Changed at 2011-10-20T03:08:45Z by david-sarah@…
comment:122 Changed at 2011-10-20T03:08:47Z by david-sarah@…
comment:123 Changed at 2011-10-20T03:08:49Z by david-sarah@…
comment:124 Changed at 2011-10-20T03:08:53Z by david-sarah@…
comment:125 Changed at 2011-10-20T11:17:59Z by david-sarah@…
comment:126 Changed at 2011-10-20T11:18:00Z by david-sarah@…
comment:127 Changed at 2011-10-20T11:18:01Z by david-sarah@…
comment:128 Changed at 2011-10-20T11:56:25Z by david-sarah@…
comment:129 Changed at 2011-10-20T11:56:26Z by david-sarah@…
comment:130 Changed at 2011-10-20T11:56:27Z by david-sarah@…
comment:131 Changed at 2011-10-20T11:56:28Z by david-sarah@…
comment:132 Changed at 2011-10-20T17:35:41Z by david-sarah@…
comment:133 Changed at 2011-10-20T17:42:43Z by davidsarah
- Keywords gsoc removed
- Milestone changed from soon to 1.10.0
comment:134 Changed at 2011-10-21T00:18:57Z by david-sarah@…
comment:135 Changed at 2011-10-21T00:18:58Z by david-sarah@…
comment:136 Changed at 2011-10-21T01:11:39Z by david-sarah@…
comment:137 Changed at 2011-10-21T01:11:40Z by david-sarah@…
comment:138 Changed at 2011-10-21T01:52:43Z by david-sarah@…
comment:139 Changed at 2011-10-21T01:52:44Z by david-sarah@…
comment:140 Changed at 2011-10-21T03:22:38Z by david-sarah@…
comment:141 Changed at 2011-10-21T03:43:38Z by david-sarah@…
comment:142 Changed at 2011-10-21T04:42:32Z by david-sarah@…
comment:143 Changed at 2011-10-22T04:58:36Z by david-sarah@…
comment:144 Changed at 2011-10-24T18:31:36Z by david-sarah@…
comment:145 Changed at 2011-10-24T18:31:40Z by david-sarah@…
comment:146 Changed at 2011-10-25T10:10:05Z by david-sarah@…
comment:147 Changed at 2011-10-28T18:18:24Z by zancas
- Owner changed from zancas to davidsarah
comment:148 Changed at 2011-12-16T16:17:27Z by davidsarah
- Resolution set to fixed
- Status changed from new to closed
Further work on this functionality will be in ticket #1569.
comment:149 Changed at 2011-12-16T16:17:51Z by davidsarah
- Keywords s3-backend storage added; backend s3 removed
comment:150 Changed at 2012-03-31T23:58:14Z by davidsarah
- Milestone changed from 1.11.0 to eventually
comment:151 Changed at 2012-09-13T17:46:21Z by mk.fg
- Cc mk.fraggod@… added
comment:152 Changed at 2019-09-08T22:55:09Z by amontero
- Cc amontero@… added
- Description modified (diff)
See the RAIC diagram.