#1863 closed defect (invalid)
Work-Around for > ~150MB files on Least Authority TLoS3
| Reported by: | nejucomo | Owned by: | davidsarah |
|---|---|---|---|
| Priority: | normal | Milestone: | undecided |
| Component: | unknown | Version: | 1.9.2 |
| Keywords: | lae usability large workaround | Cc: | |
| Launchpad Bug: | | | |
Description
Background:
TLoS3 (Tahoe-LAFS on S3) currently has a deficiency where uploads of large files (> ~150MB) fail. There is a ticket on the support site explaining that the fix is already implemented but not yet deployed.
Until the fix is deployed, users with large files need a work-around. This ticket collects folk wisdom about such work-arounds; please add recipes which you have successfully used in the comments. One possible recipe is sketched below.
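As a starting point, here is a minimal, untested sketch of a split-and-upload recipe. It assumes a configured tahoe client and a writable directory cap in a (hypothetical) TAHOE_DIRCAP environment variable; the file name and chunk size are placeholders, not values from this ticket:

```sh
#!/bin/sh
set -e

BIG=bigfile.iso          # placeholder: the > ~150MB file to upload
CHUNK=100m               # stay safely under the ~150MB limit
DIRCAP="$TAHOE_DIRCAP"   # hypothetical: writable directory cap for the pieces

# 1. Split the file into fixed-size chunks: bigfile.iso.aa, .ab, ...
split -b "$CHUNK" "$BIG" "$BIG."

# 2. Upload each chunk into the directory under its own name.
for piece in "$BIG".??; do
    tahoe put "$piece" "$DIRCAP/$piece"
done

# 3. To restore, fetch the chunks in sorted order and concatenate:
#    for piece in $(tahoe ls "$DIRCAP" | sort); do
#        tahoe get "$DIRCAP/$piece" >> restored.iso
#    done
```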
Change History (5)
comment:1 Changed at 2012-11-18T07:48:35Z by nejucomo
comment:2 Changed at 2012-11-18T13:02:38Z by zooko
Fortunately LeastAuthority.com now has graphs of memory usage on all customer storage servers! So I can go look at that and see what effect your 200 MB upload had on your storage server. (The problem with large uploads is all about RAM usage in the storage server.)
Unfortunately, I currently don't have the password to LeastAuthority.com's graphs, so I'll have to wait until another member of the LeastAuthority.com team wakes up. ☺
comment:3 Changed at 2012-11-19T23:19:12Z by nejucomo
I'm moving away from the strategy of using trac tickets for "work-arounds" or "recipes": in the former case, the work-arounds can go on the original bug ticket; in the latter, there is no clear criterion for closing the ticket.
In both cases if there's a large enough need, the work-around/recipe should probably have a wiki page.
Therefore I propose closing this ticket after we link to the ticket outlining the original issue and fix.
comment:4 Changed at 2012-11-19T23:21:31Z by nejucomo
- Resolution set to invalid
- Status changed from new to closed
comment:5 Changed at 2012-11-19T23:24:03Z by davidsarah
Related tickets:
- #1638 (S3 backend: Upload of large files consumes memory > twice the size of the file). This is closed because the cloud backend fixes it, and the S3 backend will never be merged.
- #1786 (cloud backend: limit memory usage)
- #1796 (refuse to upload/download a mutable file if it cannot be done in the available memory)
- #1819 (cloud backend: merge to trunk)
This "workaround" ticket may be unnecessary:
I just attempted to make a demo so that I could develop and paste a find and split based workaround, but it appears that it succeeded to upload a 200MB file:
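For reference, the kind of find-and-split recipe this comment had in mind might have looked like the following untested sketch. The source directory, size threshold, and use of the default "tahoe:" alias are assumptions, not taken from a verified transcript:

```sh
#!/bin/sh
# Hypothetical sketch: find files above the ~150MB limit under an
# example directory, split each into 100MB chunks, and upload the
# chunks under the default "tahoe:" alias.
find ~/to-upload -type f -size +150M | while read -r f; do
    split -b 100m "$f" "$f.part."
    for piece in "$f.part."??; do
        tahoe put "$piece" "tahoe:$(basename "$piece")"
    done
done
```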