#671 new defect

bring back sizelimit (i.e. max consumed, not min free) — at Version 10

Reported by: zooko
Owned by: davidsarah
Priority: major
Milestone: soon
Component: code-nodeadmin
Version: 1.3.0
Keywords: usability statistics sftp docs
Cc: frederik.braun+tahoe@…
Launchpad Bug:

Description (last modified by zooko)

We used to have a sizelimit option which did a recursive examination of the storage directory at startup, calculated approximately how much disk space was used, and refused to accept new shares if accepting them would push usage over the limit. #34 shows when it was implemented. It was later removed because the startup scan took a long time -- about 30 minutes on the allmydata.com storage servers, during which the servers remained unavailable to clients -- and because it was replaced by the reserved_space configuration, which is very fast to check and satisfied the requirements of the allmydata.com storage servers.

This ticket is to reintroduce sizelimit because some users want it. This might mean that the storage server doesn't start serving clients until it finishes the disk space inspection at startup.

Note that sizelimit would impose a maximum limit on the amount of space consumed by the node's storage/shares/ directory, whereas reserved_space imposes a minimum limit on the amount of remaining available disk space. In general, reserved_space can be implemented by asking the OS for filesystem stats, whereas sizelimit must be implemented by tracking the node's own usage and accumulating the sizes over time.
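To make the distinction concrete, here is a minimal sketch of the two checks (the function names and signatures are illustrative, not Tahoe's actual API): reserved_space is a single OS call, while sizelimit needs a usage figure the node has tracked itself.

```python
import os

def free_space_ok(storedir, reserved_space):
    """reserved_space check: ask the OS for filesystem stats (fast, O(1))."""
    st = os.statvfs(storedir)
    available = st.f_bavail * st.f_frsize  # bytes available to non-root users
    return available > reserved_space

def size_limit_ok(consumed_bytes, share_size, sizelimit):
    """sizelimit check: compare the node's own tracked usage, plus the
    incoming share, against the configured cap."""
    return consumed_bytes + share_size <= sizelimit
```

The asymmetry is the whole point of this ticket: `consumed_bytes` has no cheap source of truth, which is why the original implementation resorted to a 30-minute recursive scan.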

To close this ticket, you do *not* need to implement some sort of interleaving of inspecting disk space and serving clients.

To close this ticket, you MUST NOT implement any sort of automatic deletion of shares to get back under the sizelimit if you find yourself over it (for example, if the user lowered the sizelimit after the node had already filled to the old limit). You SHOULD, however, log a warning message when you detect this condition.

Change History (10)

comment:1 Changed at 2009-11-30T21:43:47Z by warner

  • Description modified (diff)
  • Summary changed from sizelimit to bring back sizelimit (i.e. max consumed, not min free)

(updated description)

Note that any sizelimit code is allowed to speed things up by remembering state from one run to the next. The old code did the slow recursive-traversal sharewalk to handle the (important) case where this state was inaccurate or unavailable (i.e. when shares had been deleted by some external process, or to handle the local-fs-level overhead that accounts for the difference between what /bin/ls and /bin/df each report). But we could trade off accuracy for speed: it should be acceptable to just ensure that the sizelimit is eventually approximately correct.

A modern implementation should probably use the "share crawler" mechanism, doing a stat on each share, and adding up the results. It can store state in the normal crawler stash, probably in the form of a single total-bytes value per prefixdir. The do-I-have-space test should use max(last-pass, current-pass), to handle the fact that the current-pass value will be low while the prefixdir is being scanned. The crawler would replace this state on each pass, so any stale information would go away within a few hours or days.
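A rough sketch of that per-prefixdir accounting, assuming hypothetical helper names rather than the real share-crawler API: the crawler stores one total-bytes value per prefixdir from the last completed pass, and the do-I-have-space test takes the max of the last-pass and current-pass values so a half-scanned pass never undercounts.

```python
import os

def prefixdir_usage(prefixdir):
    """One crawler step: stat each share file in a prefixdir and sum the
    sizes (illustrative helper, not the real ShareCrawler interface)."""
    total = 0
    for name in os.listdir(prefixdir):
        total += os.stat(os.path.join(prefixdir, name)).st_size
    return total

def space_consumed(last_pass, current_pass):
    """last_pass/current_pass map prefixdir -> bytes. max() compensates
    for the current pass being low while a scan is still in progress."""
    prefixes = set(last_pass) | set(current_pass)
    return sum(max(last_pass.get(p, 0), current_pass.get(p, 0))
               for p in prefixes)

def have_space(last_pass, current_pass, share_size, sizelimit):
    return space_consumed(last_pass, current_pass) + share_size <= sizelimit
```

Because each pass overwrites the stored totals, stale state self-corrects within one crawl cycle, which is the "eventually approximately correct" tradeoff described above.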

Ideally, the server code should also keep track of new shares that were written into each prefixdir, and add the sizes of those shares to the state value, but only until the next crawler pass had swung by and seen the new shares. You'd also want to do something similar with shares that were deleted (by the lease expirer). To accomplish this, you'd want to make a ShareCrawler subclass that tracks this extra space in a per-prefixdir dict, and have the storage-server/lease-expirer notify it every time a share was created or deleted. The ShareCrawler subclass is in the right position to know when the crawler has reached a bucket.
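The notify-and-reconcile scheme above might look something like this (a sketch only; the method names and the real ShareCrawler hooks are assumptions, not Tahoe's actual interfaces):

```python
class SpaceTrackingCrawler:
    """Tracks per-prefixdir usage between crawler passes. The storage
    server and lease expirer report share writes/deletes; once the
    crawler re-measures a prefixdir, that measurement becomes
    authoritative and the interim adjustment is discarded."""

    def __init__(self):
        self.pass_totals = {}   # prefixdir -> bytes seen by last crawl
        self.adjustments = {}   # prefixdir -> bytes added/removed since

    def share_added(self, prefixdir, nbytes):
        # called by the storage server when a new share is written
        self.adjustments[prefixdir] = self.adjustments.get(prefixdir, 0) + nbytes

    def share_deleted(self, prefixdir, nbytes):
        # called by the lease expirer when a share is removed
        self.adjustments[prefixdir] = self.adjustments.get(prefixdir, 0) - nbytes

    def finished_prefixdir(self, prefixdir, measured_bytes):
        # the crawler has reached this bucket and re-measured it
        self.pass_totals[prefixdir] = measured_bytes
        self.adjustments.pop(prefixdir, None)

    def total_consumed(self):
        return (sum(self.pass_totals.values())
                + sum(self.adjustments.values()))
```

This keeps the estimate accurate between passes without ever blocking startup on a full scan.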

Doing this with the crawler would also have the nice side-effect of balancing fast startup with accurate size limiting. Even though this ticket has been defined as not requiring such a feature, I'm sure users would appreciate it.

comment:2 Changed at 2009-12-13T05:18:15Z by zooko

  • Milestone changed from 1.6.0 to eventually

Brian: did you intend to put this into Milestone 1.6? I assume not, so I'm moving it to eventually. Apologies if you meant to put it here and feel free to move it back.

comment:3 Changed at 2010-12-30T22:53:31Z by davidsarah

  • Keywords usability statistics sftp added

#1285 asks for the df command on a Tahoe filesystem mounted over SFTP to show some estimate for the space used on a grid (as well as the space available). However, by default we shouldn't slow down the startup process of storage servers in order to achieve that.

Note that on a conventional filesystem, the total size of files corresponds roughly to the amount of space used (ignoring per-file overhead). On a Tahoe filesystem, the latter is usually greater than the former by the expansion factor, N/k. However, if the encoding parameters have changed, or if different gateways are using different parameters, then dividing the total space used by the current N/k on a given gateway would lead to an inaccurate estimate of total file size.
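As a worked example of why this estimate can drift (assuming Tahoe's default 3-of-10 encoding; the function is illustrative, not part of Tahoe):

```python
def estimated_file_size(space_used, k, n):
    """Naive estimate: divide total share space by the expansion factor
    N/k. Accurate only if every share was written with these parameters."""
    return space_used * k / n

# With k=3, N=10, the expansion factor is 10/3, so 10 MB of shares
# corresponds to roughly 3 MB of file data. If those shares were in
# fact written at 1-of-4 (expansion 4x), the true file size would be
# 2.5 MB, and the 3-of-10 estimate overshoots.
```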

Both the total file size and the total space usage are potentially interesting. If we are periodically crawling all shares as this ticket suggests, then it is not significantly more difficult to compute both (under the assumption that N shares are stored for each file, which is true if the shares are optimally balanced).

OTOH, perhaps the total size of files and the total space usage are just not important enough to do all this work to compute them, given that storing shares on a separate filesystem is sufficient to achieve the goal of limiting total space usage.

OTGH, long-term preservation is improved by occasionally crawling all shares to ensure that they can still be read. (That requires actually reading the shares rather than just the metadata, though.)

comment:4 Changed at 2011-10-11T03:05:36Z by davidsarah

See also #940 (share-crawler should estimate+display space-used).

comment:5 Changed at 2012-10-25T20:49:52Z by davidsarah

Our current plan is to support this using the leasedb.

comment:6 Changed at 2012-10-25T20:50:06Z by davidsarah

  • Milestone changed from eventually to 1.11.0
  • Owner set to davidsarah
  • Status changed from new to assigned

comment:7 Changed at 2012-10-25T22:01:28Z by ChosenOne

  • Cc frederik.braun+tahoe@… added

comment:8 Changed at 2012-12-14T20:21:19Z by zooko

The next step is to implement #1836, then we can use that to implement this ticket!

comment:9 Changed at 2012-12-14T20:27:30Z by zooko

#1043 was a duplicate of this.

comment:10 Changed at 2013-07-04T17:21:16Z by zooko

  • Description modified (diff)