#1937 new defect

back up the content of a file even if the content changes without changing mtime

Reported by: zooko Owned by:
Priority: normal Milestone: undecided
Component: code Version: 1.9.2
Keywords: tahoe-backup reliability preservation Cc:
Launchpad Bug:

Description

From pipermail/tahoe-dev/2008-September/000809.html.

If an application writes to a file twice in quick succession, then the operating system may give that file the same mtime value both times. mtime granularity varies between OSes and filesystems, and is often coarser than you would wish:

  • Linux/ext3 - 1 sec [¹]
  • Linux/ext4 - nominally 1 nanosec [¹]; actually 1 millisec (observed by my experiment just now on linux 3.2, ext4; see the probe sketch below)
  • FreeBSD/UFS - 1 sec [¹]
  • Mac - 1 sec [¹]
  • Windows/FAT - 2 sec, no timezone, and when DST changes it is off by one hour until the next reboot [¹]
  • Windows/NTFS - 100 nanosec [¹]; possibly actually 1.6 microsec [²]?
  • Windows/* - mtime isn't necessarily updated until the filehandle is closed [¹, ²]

¹ http://www.infosec.jmu.edu/documents/jmu-infosec-tr-2009-002.pdf
² http://msdn.microsoft.com/en-us/library/windows/desktop/ms724290%28v=vs.85%29.aspx
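Since the effective granularity can differ from the nominal one (as the ext4 row shows), here is a small probe, assuming Python 3 on a POSIX-ish system, that estimates it by rewriting a temporary file in a tight loop and recording the smallest nonzero gap between successive mtimes:

```python
import os
import tempfile

def observe_mtime_granularity(samples=100000):
    """Estimate the effective mtime granularity, in seconds, of the
    filesystem holding the temp directory: rewrite a file repeatedly
    and record the smallest nonzero gap between successive mtimes.
    Returns None if the mtime never changed during the run."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    smallest = None
    prev = None
    try:
        for _ in range(samples):
            with open(path, "wb") as f:
                f.write(b"x")
            mtime = os.stat(path).st_mtime_ns
            if prev is not None and mtime != prev:
                gap = mtime - prev
                if smallest is None or gap < smallest:
                    smallest = gap
            prev = mtime
    finally:
        os.remove(path)
    return None if smallest is None else smallest / 1e9

if __name__ == "__main__":
    # On linux/ext4 this prints roughly 0.001, matching the note above.
    print(observe_mtime_granularity())
```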

Note that FAT is the standard filesystem for removable media (isn't it?), so it is actually very common.

Now the problem is, what happens if

  1. an application writes some data, D1, into a file, and the timestamp gets updated to T1, and then
  2. tahoe backup reads D1, and then
  3. the app writes some new data, D2, and the timestamp doesn't get updated, because steps 2 and 3 happened within the filesystem's granularity?

What happens is that tahoe backup has saved D1, but from then on it will never save D2: it falsely believes it has already done so, because the file's timestamp is still T1. If this were to happen in practice, the effect for the user would be that when they go to read the file from Tahoe-LAFS, they find the previous version of its contents (D1) rather than the most recent version (D2). This unfortunate user would probably have no way to figure out what happened, and would justly blame Tahoe-LAFS for being unreliable.
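To make the failure mode concrete, here is a minimal sketch of the kind of mtime-plus-size skip check a backup tool performs; the backupdb mapping and its record format are hypothetical, not tahoe backup's actual code:

```python
import os

def should_upload(path, backupdb):
    """Hypothetical mtime-plus-size skip check. backupdb is assumed to
    map path -> (mtime, size) as recorded at the last upload."""
    st = os.stat(path)
    prev = backupdb.get(path)
    if prev == (st.st_mtime, st.st_size):
        # Looks unchanged. But if D2 was written within the mtime
        # granularity window after D1 was read, this is a false
        # negative and D2 is silently never backed up.
        return False
    return True
```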

The same problem can happen if the timestamp of a file gets reset to an earlier value, for example by the unix touch -t command, or by the system clock being moved backwards. (The system clock getting moved happens surprisingly often in the wild.)

A user can avoid this problem by passing --ignore-timestamps to tahoe backup, which will cause that run of tahoe backup to reupload every file. That is very expensive in terms of time, disk, and CPU usage (even if the files get deduplicated by the servers).

Change History (3)

comment:1 Changed at 2013-03-27T18:39:10Z by zooko

Here's a proposed solution which avoids the failure of preservation due to the race condition. This solution does not address the problem due to timestamps getting reset, e.g. by touch -t or by the system clock getting moved.

Let G be the local filesystem's worst-case Granularity in seconds, times some fudge factor, such as 2. So if the filesystem is FAT, let G = 4; if it is ext4, let G = 0.002; if it is NTFS, let G = 0.004; otherwise let G = 2.
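A minimal sketch of that selection, assuming the caller can already name the filesystem type as a string (detecting it is out of scope here); the constants and the helper name are hypothetical:

```python
# Hypothetical table of worst-case mtime granularities in seconds,
# matching the figures in comment:1 (note the NTFS entry follows the
# G = 0.004 figure above rather than the 100 ns / 1.6 us estimates).
FUDGE = 2
WORST_CASE_GRANULARITY = {
    "fat": 2.0,
    "ext4": 0.001,
    "ntfs": 0.002,
}

def fudge_window(fstype, default_granularity=1.0):
    """Return G for the given filesystem type string."""
    return FUDGE * WORST_CASE_GRANULARITY.get(fstype, default_granularity)
```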

When tahoe backup examines a file, if the file's current mtime is within G seconds of the current time, then don't read its contents at that time, but instead delay for G seconds and then try again.
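Here is a sketch of that delay-and-retry check, assuming the local clock and the filesystem's mtime clock agree closely (the proposal implicitly requires this):

```python
import os
import time

def wait_until_mtime_settles(path, G):
    """Sketch of comment:1's rule: if the file's current mtime is
    within G seconds of the current time, delay G seconds and examine
    the file again, until its mtime is at least G seconds old."""
    while True:
        st = os.stat(path)
        if time.time() - st.st_mtime >= G:
            return st
        time.sleep(G)
```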

comment:2 follow-up: Changed at 2013-03-28T04:15:32Z by daira

If we use the approach of comment:1, then I suggest using a fixed G = 4s instead of trying to guess what the timestamp granularity is. Also, after the file has been uploaded we should check the mtime again, in case it was modified while we were reading it.
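A sketch of that post-upload re-check; the upload callable and the return convention are assumptions for illustration, not tahoe backup's actual interface:

```python
import os

def upload_and_recheck(path, upload):
    """Sketch of comment:2's suggestion: stat the file before and after
    uploading; if the mtime or size changed while we were reading it,
    report the file as dirty so a later pass re-uploads it."""
    before = os.stat(path)
    filecap = upload(path)  # hypothetical callable returning a file cap
    after = os.stat(path)
    unchanged = (after.st_mtime_ns == before.st_mtime_ns
                 and after.st_size == before.st_size)
    return filecap, unchanged
```

With the fixed G suggested here, the pre-read check above reduces to wait_until_mtime_settles(path, 4.0).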

Short of making a shadow copy on filesystems that support it, it's not possible to get a completely consistent snapshot of a filesystem that is being modified using POSIX APIs.


comment:3 in reply to: ↑ 2 Changed at 2013-03-28T12:59:44Z by zooko

Replying to daira:

> If we use the approach of comment:1, then I suggest using a fixed G = 4s instead of trying to guess what the timestamp granularity is.

+1

> Also, after the file has been uploaded we should check the mtime again, in case it was modified while we were reading it.
>
> Short of making a shadow copy on filesystems that support it, it's not possible to get a completely consistent snapshot of a filesystem that is being modified using POSIX APIs.

Hm, I think this is a separate issue. The problem that this ticket seeks to address is that different-contents-same-mtime can lead to data loss. The issue you raise in this comment is, I think, #427.
