[tahoe-dev] One Grid to Rule Them All

Callme Whatiwant nejucomo at gmail.com
Fri Jun 28 04:51:15 UTC 2013


On Thu, Jun 27, 2013 at 8:57 PM, Callme Whatiwant <nejucomo at gmail.com> wrote:

> Here's a set of trade-offs that I currently find intriguing:
>
> Non-Global Use Case: The One Grid technology is opt-in.  By default
> tahoe-lafs behaves similarly to its current incarnation.  The One Grid
> implementation could conceivably even be a separate code base, interface,
> process, etc...
>
> Storage Management Policy: The design is "hands off" and does not alter
> share placement, accounting, garbage collection, etc...
>
> Efficiency / Latency / Reliability: We sacrifice all of these in favor of
> the other trade-offs.
>
> Incentives: Grid Universalist fanatics run extra software / infrastructure
> to facilitate The One Grid because of their ideological brainwashing in the
> education camps.
>
> Mental Models: "It's just like normal Tahoe-LAFS, except whenever you
> create a new cap, after doing so, the One Grid technology publishes details
> of how to find appropriate storage servers to the magic global lookup
> mechanism in the sky; when you request a cap, the One Grid technology finds
> the appropriate storage servers, then it feeds the result to normal
> Tahoe-LAFS to fetch or update the content."
>
> Implementation Cost: The design is "hands off" so that it can be developed
> semi-independently from Tahoe-LAFS.  A failure to release would not affect
> mainstream Tahoe-LAFS users, and neither would its bugs, runtime costs,
> etc...
>
>
> Given those trade-offs, it sounds like the implementation:
>
> a. Lives in a separate codebase from tahoe trunk.  One option is a fork,
> but I'm kind of inclined to have a separate codebase and separate process
> and separate web interface which then talks to a stable version of mainline
> Tahoe.  We might advocate for minimal changes to mainline Tahoe to
> facilitate this, such as new web api requests of the form:
>
> "Please give me the list of storage servers currently storing $CAP on your
> grid."
>
> -and:
>
> "Please fetch/update $CAP using this list of storage servers: [...]"
>
>
It looks like I'm about 4 years behind the wave on this one:
https://tahoe-lafs.org/trac/tahoe-lafs/ticket/573
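
For concreteness, here is roughly what those two hypothetical requests might
look like from a client's point of view.  Nothing below exists in the current
webapi; the t=locate-servers and servers= parameters (and the cap itself) are
invented purely to illustrate the shape of the proposal:

    import requests  # third-party HTTP library, used here only for brevity

    GATEWAY = "http://127.0.0.1:3456"    # a local mainline Tahoe-LAFS gateway
    cap = "URI:CHK:aaaa:bbbb:3:10:1024"  # placeholder read cap, not a real one

    # Hypothetical request 1:
    # "Please give me the list of storage servers currently storing $CAP."
    resp = requests.get(GATEWAY + "/uri/" + cap,
                        params={"t": "locate-servers"})
    server_ids = resp.json()             # e.g. ["v0-aaaa...", "v0-bbbb..."]

    # Hypothetical request 2:
    # "Please fetch/update $CAP using this list of storage servers."
    resp = requests.get(GATEWAY + "/uri/" + cap,
                        params={"servers": ",".join(server_ids)})
    content = resp.content

If mainline Tahoe grew something like this, the One Grid wrapper would never
need to touch share placement or accounting itself; it would just shuttle
server lists in and out.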




> b. Implements a DHT which maps $CAP to [list of storage servers].
>
> c. Provides a webapi that looks just like mainstream Tahoe-LAFS, but
> whenever a new cap is created (by delegating to a mainline gateway), it
> publishes the cap into the DHT, and whenever retrieving a cap, it looks up
> storage servers from the DHT, then delegates to a mainline gateway.
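
In the same spirit, here is a very rough sketch of how (b) and (c) might hang
together inside such a wrapper process.  Everything here is made up for
illustration: the in-memory Dht class merely stands in for a real distributed
hash table, and the two gateway parameters are the same invented ones as
above:

    import hashlib
    import requests

    GATEWAY = "http://127.0.0.1:3456"   # the local mainline gateway we delegate to

    class Dht:
        """Stand-in for whatever Kademlia-style DHT backs The One Grid."""
        def __init__(self):
            self._table = {}            # a real DHT would be distributed, of course
        def put(self, key, values):
            self._table.setdefault(key, set()).update(values)
        def get(self, key):
            return sorted(self._table.get(key, ()))

    def dht_key(cap):
        # Any fixed-size key derived from the cap would do; hashing is just a
        # mechanical choice here.
        return hashlib.sha256(cap.encode("ascii")).hexdigest()

    def publish(dht, cap):
        # Write path for (c): after a cap is created via the mainline gateway,
        # ask which servers hold it (hypothetical request 1 above) and record
        # that mapping in the DHT.
        resp = requests.get(GATEWAY + "/uri/" + cap,
                            params={"t": "locate-servers"})
        dht.put(dht_key(cap), resp.json())

    def fetch(dht, cap):
        # Read path for (c): look the cap up in the DHT, then hand the
        # resulting server list back to the gateway (hypothetical request 2).
        servers = dht.get(dht_key(cap))
        resp = requests.get(GATEWAY + "/uri/" + cap,
                            params={"servers": ",".join(servers)})
        return resp.content

The wrapper's own webapi would then just be these two helpers wrapped around
the normal create/fetch operations, which is why it could plausibly live
entirely outside tahoe trunk.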
>
>
> Thoughts?
>
> Regards,
> Nathan
>
> """
> One Grid to rule them all, One DHT to find them,
> One Grid to transfer all data and in the Local Node erasure-decode and
> decrypt them.
> """
>
>
>
> On Thu, Jun 27, 2013 at 8:41 PM, Callme Whatiwant <nejucomo at gmail.com> wrote:
>
>> Dear Distributed Secure Storage fans,
>>
>> The time has come to shed our conspiratorial pretense of being nothing
>> but small, disparate bands of neighborly do-gooders sharing storage with
>> their friends.  It is time to reveal to the world our true goal of
>> world domination and announce our intent to create The One Grid to Rule
>> Them All!
>>
>> What on earth is he talking about, you may be asking?  I'm talking about
>> extending the Boring Old Web (BOW) with the ossm-sauce that is Tahoe-LAFS
>> so that we can spring our despotic vision of provider-independent security
>> on the unsuspecting subjects of our new world order!
>>
>> Capabilities should be universally shareable in the same contexts as URLs
>> in general!  This is our vision!  Dissent on this matter is heretical (but
>> we will still openly accept mailing list posts, ticket submission and
>> herding, documentation help, patches, and any other contributions from such
>> counter-revolutionaries).
>>
>>
>> Ok, enough thespianism:
>>
>> I personally want to be able to email or tweet or inscribe on papyrus a
>> URL containing a read cap, and anyone who sees that and has Tahoe-LAFS
>> version Glorious Future installed should have a reasonable chance to
>> retrieve the content.
>>
>> How could this be designed and implemented?  There are myriad trade-offs
>> to consider:
>>
>> Non-global use case:  A fair number of users probably want a *non-global
>> grid*, such as for their own enterprise or collective, so it would be nice
>> to avoid dumping more complexity on them.  On the other hand, if the
>> features were opt-in, that would add configuration complexity.
>>
>> Storage Management Policy: Some schemes would automate share placement
>> using some fancy DHT-related technology, but that would keep individual
>> users and storage operators from deciding where their data lives.
>>
>> Efficiency / Latency / Reliability:  Some schemes would add a separate
>> global resolution system, but this adds round trips (hurting latency) and
>> new failure modes (hurting reliability).
>>
>> Incentives: A non-global grid often has "natural incentives" (same
>> company, same friend group, etc.).  A global system has different incentive
>> issues.  See [1].
>>
>> Mental Models: Some schemes may be hard to understand, making it harder for
>> users to anticipate the effects of their choices, or to know whom or what
>> they are relying upon, and for which features.
>>
>> Implementation Cost: Some schemes may be complex to implement, increasing
>> development time, the chance of bugs, etc...
>>
>>
>> There are probably many other trade-offs I've failed to account for; I just
>> want to get the ball rolling on this.  Let's be really clear about the
>> costs of the various approaches.
>>
>> As a concrete step, I propose a new ticket keyword: "globalcaps" for any
>> ticket related to making capabilities globally usable.
>>
>>
>> Regards,
>> Comrade Nathan
>> Grid Universalist
>>
>> References:
>>
>> [1] The Tahoe-LAFS community has a great awareness of incentive issues.
>>  This is a good starting page:
>>
>> https://tahoe-lafs.org/trac/tahoe-lafs/wiki/Ostrom
>>
>
>

