Opened at 2008-06-02T23:21:05Z
Closed at 2008-06-03T00:14:20Z
#439 closed defect (fixed)
don't write corrupt >12GiB files
Reported by: | warner | Owned by: | warner |
---|---|---|---|
Priority: | critical | Milestone: | 1.1.0 |
Component: | code-encoding | Version: | 1.0.0 |
Keywords: | | Cc: | |
Launchpad Bug: |
Description
I suspect that an attempt to write a file larger than 12GiB will result in a corrupted file, because `self._data_size` (i.e. the share size) in WriteBucketProxy overflows the 4-byte space reserved for it. #346 is about removing that limit, but in the interim we need an assert or a precondition that makes sure we don't appear to succeed when we have in fact failed.
I'm marking this as critical because it can cause data loss: you think you've uploaded the file, you get a read-cap for it, but then you can't read it back.
A precondition() in WriteBucketProxy.__init__ would be sufficient.
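A minimal sketch of such a check follows. The helper name `check_share_fields` and the field layout are illustrative assumptions based on this description, not the actual Tahoe code; the real WriteBucketProxy stores data_size and several offsets, each in a 4-byte big-endian field.

```python
import struct

MAX_UINT32 = 2**32 - 1  # largest value a 4-byte field can hold


class FileTooLargeError(Exception):
    pass


def check_share_fields(data_size, offsets):
    """Reject shares whose size or offsets would overflow a 4-byte field.

    `offsets` is a dict mapping field names to byte offsets. Returns the
    packed data_size field once everything has been validated.
    """
    for name, value in [("data_size", data_size)] + sorted(offsets.items()):
        if value > MAX_UINT32:
            raise FileTooLargeError(
                "%s=%d does not fit in 4 bytes" % (name, value))
    # once validated, packing cannot silently truncate
    return struct.pack(">L", data_size)


# a value just under the limit packs fine; one over it is rejected
check_share_fields(2**32 - 1, {})
try:
    check_share_fields(2**32, {"plaintext_hash_tree": 2**32})
except FileTooLargeError:
    pass
```

The point of raising before packing is that `struct.pack(">L", ...)` is the last line of defense; failing early with a descriptive error is what turns silent corruption into an upload-time failure.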
Change History (1)
comment:1 Changed at 2008-06-03T00:14:20Z by warner
- Resolution set to fixed
- Status changed from new to closed
Fixed by 8c37b8e3af2f4d1b. I'm not sure what the exact limit is, but the new FileTooLargeError will be raised if the shares are too big for any of the fields (data_size or any of the offsets) to fit in their 4-byte containers.
I think the actual size limit (i.e. the largest file you can upload) for k=3 is 12875464371 bytes, which is about 9.4MB short of 12GiB. The new assertion rejects all files larger than this. I don't actually know whether you could upload a file of this size (is there some other limitation lurking in there?).
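That figure can be sanity-checked with a little arithmetic. With k=3, each of the 3 needed shares has 4-byte offset fields, so the ceiling with zero overhead would be exactly 3 × 2^32 = 12GiB; the shortfall is per-share overhead (hashes, block hash trees, and so on). The overhead figure below is inferred from the numbers in this comment, not read out of the code.

```python
K = 3                      # shares needed to reconstruct the file
FIELD_LIMIT = 2**32        # bytes addressable by a 4-byte offset field
limit = 12875464371        # reported maximum file size for k=3

# with no per-share overhead, k full shares would hold exactly 12 GiB
assert K * FIELD_LIMIT == 12 * 2**30

# the real limit falls short of 12 GiB by the total per-share overhead
shortfall = 12 * 2**30 - limit
print(shortfall)           # 9437517 bytes, roughly 9.4 MB
```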
We still need something to prevent a client of the helper from trying to upload a file this large, since attempting it would just waste time: the check that 8c37b8e3af2f4d1b adds covers only native uploads, so when a helper is used the error will be raised only after the client has transferred all the ciphertext.