Opened at 2012-12-05T03:32:23Z
Closed at 2020-10-30T12:35:44Z
#1885 closed defect (wontfix)
cloud backend: redundant reads of chunks from cloud when downloading large files
Reported by: | davidsarah | Owned by: | |
---|---|---|---|
Priority: | normal | Milestone: | 1.15.0 |
Component: | code-storage | Version: | 1.9.2 |
Keywords: | cloud-backend cache download performance | Cc: | |
Launchpad Bug: | | | |
Description (last modified by daira)
I uploaded a 7.7 MiB video as an MDMF file using the cloud backend on S3 (as of 1819-cloud-merge/022796fb), and then downloaded it. Tailing the storage server with flogtool, I saw that it was reading the same chunks from the cloud multiple times during the download. That suggests that the chunk cache is not operating well enough.
The file was being downloaded by playing it as a video in Chromium; I don't think that makes a difference.
Update: this also applies to immutable files if they are large enough.
Change History (11)
comment:1 Changed at 2012-12-05T03:37:28Z by davidsarah
During the upload and download, the server memory usage didn't go above 50 MiB according to the statmover graph.
comment:2 Changed at 2012-12-05T03:43:09Z by davidsarah
Same behaviour for a straight download, rather than playing a video. Each chunk seems to get read 5 times, and the first chunk (containing the header) many more times.
comment:3 Changed at 2013-05-24T22:12:10Z by daira
- Description modified (diff)
- Summary changed from cloud backend: redundant reads of chunks from S3 when downloading large MDMF file to cloud backend: redundant reads of chunks from cloud when downloading large MDMF file
comment:4 Changed at 2013-05-24T22:13:15Z by daira
- Description modified (diff)
comment:5 Changed at 2013-05-28T16:01:45Z by daira
- Description modified (diff)
- Keywords mdmf removed
- Summary changed from cloud backend: redundant reads of chunks from cloud when downloading large MDMF file to cloud backend: redundant reads of chunks from cloud when downloading large files
I changed ChunkCache to use a true LRU replacement policy, and that seems to have fixed this problem. (LRU is not often used because keeping track of ages can be inefficient for a large cache, but here we only need a cache of a few elements. In practice 5 chunks seems to be sufficient for the file sizes I've tested; I will investigate later whether that is enough for larger files.)
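For illustration, here is a minimal sketch of a small, true-LRU chunk cache of the kind described above; the class name, the `fetch_chunk` callable, and the 5-entry default are assumptions for the example, not the actual code on the 1819-cloud-merge branch:

```python
from collections import OrderedDict

class LRUChunkCache(object):
    """Small chunk cache with true LRU eviction (sketch, not the branch code)."""

    def __init__(self, fetch_chunk, capacity=5):
        self._fetch_chunk = fetch_chunk   # callable: chunk_index -> chunk bytes
        self._capacity = capacity
        self._chunks = OrderedDict()      # chunk_index -> bytes, oldest first

    def get(self, chunk_index):
        if chunk_index in self._chunks:
            # Hit: re-insert so this chunk becomes the most recently used.
            data = self._chunks.pop(chunk_index)
            self._chunks[chunk_index] = data
            return data
        # Miss: fetch the chunk from the cloud store and cache it.
        data = self._fetch_chunk(chunk_index)
        self._chunks[chunk_index] = data
        if len(self._chunks) > self._capacity:
            # Evict the least recently used chunk (the oldest entry).
            self._chunks.popitem(last=False)
        return data
```

Because the cache only ever holds a handful of chunks, the bookkeeping cost of strict LRU is negligible here.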
comment:6 Changed at 2013-05-28T16:17:19Z by daira
Hmm, that's an improvement, but the immutable downloader is not able to max out my downstream bandwidth -- each HTTP request finishes before the next one starts, so we're not getting any pipelining. (I am getting ~1 MiB/s and should be getting ~1.8 MiB/s.)
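One way to overlap the cloud round trips is to keep the request for chunk i+1 in flight while chunk i is being consumed. A hedged sketch using Twisted, with hypothetical `fetch_chunk` (returns a Deferred firing with the chunk bytes) and `consume` callables; this is not the downloader's actual API:

```python
from twisted.internet import defer

@defer.inlineCallbacks
def read_chunks_pipelined(fetch_chunk, consume, num_chunks):
    """Deliver chunks 0..num_chunks-1 in order, keeping one cloud request
    in flight ahead of the chunk currently being consumed."""
    if num_chunks <= 0:
        return
    pending = fetch_chunk(0)
    for i in range(num_chunks):
        # Issue the next request before waiting on the current one, so the
        # two HTTP round trips overlap instead of running back-to-back.
        next_pending = fetch_chunk(i + 1) if i + 1 < num_chunks else None
        data = yield pending
        consume(i, data)
        pending = next_pending
```

A deeper pipeline (more than one outstanding request) would follow the same pattern with a small queue of pending Deferreds.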
comment:7 Changed at 2013-07-22T20:48:41Z by daira
- Milestone changed from 1.11.0 to 1.12.0
comment:8 Changed at 2016-03-22T05:02:25Z by warner
- Milestone changed from 1.12.0 to 1.13.0
Milestone renamed
comment:9 Changed at 2016-06-28T18:17:14Z by warner
- Milestone changed from 1.13.0 to 1.14.0
renaming milestone
comment:10 Changed at 2020-06-30T14:45:13Z by exarkun
- Milestone changed from 1.14.0 to 1.15.0
Moving open issues out of closed milestones.
comment:11 Changed at 2020-10-30T12:35:44Z by exarkun
- Resolution set to wontfix
- Status changed from new to closed
The established line of development on the "cloud backend" branch has been abandoned. This ticket is being closed as part of a batch-ticket cleanup for "cloud backend"-related tickets.
If this is a bug, it is probably genuinely no longer relevant. The "cloud backend" branch is too large and unwieldy to ever be merged into the main line of development (particularly now that the Python 3 porting effort is significantly underway).
If this is a feature, it may be relevant to some future efforts - if they are sufficiently similar to the "cloud backend" effort - but I am still closing it because there are no immediate plans for a new development effort in such a direction.
Tickets related to the "leasedb" are included in this set because the "leasedb" code is in the "cloud backend" branch and fairly well intertwined with the "cloud backend". If there is interest in changing the lease implementation at some future time, then that effort will essentially have to be restarted as well.