Changes between Initial Version and Version 1 of Ticket #1834


Timestamp: 2012-10-30T23:14:43Z
Author: zooko
Comment: (none)

Legend: [unmodified] lines appear in both versions; [removed] lines appear only in the initial version; [added] lines appear only in v1.
  • Ticket #1834 – Description

    [unmodified] * Delete shares that have lost all their leases (by cancellation or expiry);

    [removed] I propose that this be done instead by the storage server maintaining a persistent set of shares to be deleted. When the lease-updating step (which, in #666, is synchronous and fast) has identified a share that has no more leases, the share's id gets added to the persistent set of shares to delete. A long-running, persistent, duty-cycle-limited process deletes those shares from the backend and removes their ids from the set of shares-to-delete. This is cleaner and more efficient than using a crawler, which has to visit ''all'' shares and which never stops twitching, since this has to visit only shares that have been marked as to-delete, and it quiesces when there is nothing to delete. (#1833)

    [added] I propose that this be done instead by the storage server maintaining a persistent set of shares to be deleted. When the lease-updating step (which, in #666, is synchronous and fast) has identified a share that has no more leases, the share's id gets added to the persistent set of shares to delete. A long-running, persistent, duty-cycle-limited process deletes those shares from the backend and removes their ids from the set of shares-to-delete. This is cleaner and more efficient than using a crawler, which has to visit ''all'' shares and which never stops twitching, since this has to visit only shares that have been marked as to-delete, and it quiesces when there is nothing to delete. (#1833 — storage server deletes garbage shares itself instead of waiting for crawler to notice them)
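
    To make the proposed mechanism concrete, here is a minimal sketch of the persistent shares-to-delete set and the duty-cycle-limited deleter. This is an illustration under assumptions, not the design adopted in #1833: the shares_to_delete table, the share_path_for helper, and the duty-cycle arithmetic are all hypothetical.

{{{#!python
import os
import sqlite3
import time

def open_db(path):
    # Hypothetical shares-to-delete table; not the real leasedb schema.
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS shares_to_delete"
               " (storage_index TEXT, shnum INTEGER,"
               "  PRIMARY KEY (storage_index, shnum))")
    return db

def mark_share_for_deletion(db, storage_index, shnum):
    # Called by the fast, synchronous lease-updating step (#666) when it
    # identifies a share that has no more leases.
    db.execute("INSERT OR IGNORE INTO shares_to_delete VALUES (?, ?)",
               (storage_index, shnum))
    db.commit()

def deletion_loop(db, share_path_for, duty_cycle=0.1, batch=10):
    # Long-running deleter: visits only shares that have been marked,
    # and quiesces when there is nothing to delete.
    while True:
        rows = db.execute(
            "SELECT storage_index, shnum FROM shares_to_delete LIMIT ?",
            (batch,)).fetchall()
        if not rows:
            time.sleep(60)  # quiescent: the set is empty
            continue
        started = time.monotonic()
        for si, shnum in rows:
            try:
                os.remove(share_path_for(si, shnum))  # delete from backend
            except FileNotFoundError:
                pass  # already gone; still drop it from the set
            db.execute("DELETE FROM shares_to_delete"
                       " WHERE storage_index = ? AND shnum = ?", (si, shnum))
        db.commit()
        busy = time.monotonic() - started
        # Sleep so deletion work occupies roughly duty_cycle of wall-clock
        # time, e.g. 10% when duty_cycle=0.1.
        time.sleep(busy * (1 - duty_cycle) / duty_cycle)
}}}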

    [unmodified] * Discover newly added shares that the operator copied into the backend without notifying the storage server;

    [removed] I propose that we stop supporting this method of moving shares around. If we stop supporting this, that would leave two options if you want to add a share to a server:

    [added] I propose that we stop supporting this use case. It can be replaced by some combination of: 1. requiring you to run a tahoe-lafs storage client tool (a share migration tool) to upload the shares through the server instead of copying the shares directly into the backend, 2. various kludgy workarounds, 3. a new tool for registering specific storage indexes in the leasedb after you've added the shares directly into the backend, or 4. simply requiring that the operator manually trigger the crawler to start instead of expecting the crawler to run continuously. (#1835 — stop grovelling the whole storage backend looking for externally-added shares to add a lease to)
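
    As a sketch of what option 3 above might look like, here is a hypothetical one-shot registration tool. The shares and leases table layouts, the leasedb.sqlite filename, the 31-day starter lease, and the one-file-per-share-number directory layout are all assumptions made for illustration, not Tahoe-LAFS's actual schema or tooling.

{{{#!python
import os
import sqlite3
import sys
import time

def register_share(db, storage_index, shnum, share_file):
    # Record the share in the (hypothetical) leasedb tables, and give it a
    # starter lease so it is not immediately treated as garbage.
    size = os.stat(share_file).st_size
    db.execute("INSERT OR REPLACE INTO shares"
               " (storage_index, shnum, size) VALUES (?, ?, ?)",
               (storage_index, shnum, size))
    db.execute("INSERT INTO leases (storage_index, shnum, expiration_time)"
               " VALUES (?, ?, ?)",
               (storage_index, shnum, int(time.time()) + 31 * 24 * 3600))

if __name__ == "__main__":
    # usage: register_shares.py STORAGE_INDEX SHAREDIR
    db = sqlite3.connect("leasedb.sqlite")
    storage_index, sharedir = sys.argv[1], sys.argv[2]
    for name in os.listdir(sharedir):  # assumes one file per share number
        register_share(db, storage_index, int(name),
                       os.path.join(sharedir, name))
    db.commit()
}}}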

    [removed] 1. Send it through the front door —
    [added] * Count how many shares you have;

    [removed] I'm going to create this ticket in order to get a ticket number (probably #1834) that other tickets can reference, then come back and write more of this Description...

    [added] This can be nicely replaced by leasedb (a simple SQL "COUNT" query), and the functionality can also be extended to compute the aggregate sizes of data in addition to the mere number of objects, which would be very useful for customers of LeastAuthority.com (who pay per byte), among others. (#1836 — stop crawling share files in order to figure out how many shares you have)
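
    For example, assuming the same hypothetical shares table (with a size column) as in the sketches above, both the count and the aggregate size fall out of a single query:

{{{#!python
import sqlite3

def share_stats(db):
    # One query replaces a full crawl over every share file: the number of
    # shares and their total size come straight out of the leasedb.
    return db.execute(
        "SELECT COUNT(*), COALESCE(SUM(size), 0) FROM shares").fetchone()

db = sqlite3.connect("leasedb.sqlite")
count, total_bytes = share_stats(db)
print("%d shares, %d bytes" % (count, total_bytes))
}}}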