[tahoe-dev] How Tahoe-LAFS fails to scale up and how to fix it (Re: Starvation amidst plenty)
Shawn Willden
shawn at willden.org
Sun Sep 26 05:02:07 UTC 2010
On Fri, Sep 24, 2010 at 10:07 PM, Ravi Pinjala <ravi at p-static.net> wrote:
> Another (possibly even sillier) question: Is there a performance
> reason not to generate as many shares as possible, and only upload as
> many unique shares as we can to different hosts? This would be a
> completely different allocation strategy than what Tahoe uses now, but
> it might be more reliable. It'd also use as much space as possible,
> though, and the space usage wouldn't be very predictable, so actually
> upon reflection this isn't that great an idea. Still worth mentioning
> though, I think.
>
The space usage could be controlled by also automatically varying K. Rather
than specifying K and M directly, you could define an expansion factor to use
(e.g. 3.33). Then M would be set to the number of active nodes and
K = M / 3.33 (rounded). Happiness would probably be specified as a multiple
of K (e.g. 2.1, rounded up). So for a grid with 10 active nodes, K=3, H=7 and
M=10, but for a grid with 50 active nodes, K=15, H=32 and M=50. There might
be a better way to choose H.
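To make the tuning concrete, here is a minimal sketch in plain Python (not
Tahoe-LAFS code; the function name and the exact rounding rules are my
assumptions) of how K, H and M could be derived:

    import math

    def choose_parameters(active_nodes, expansion=3.33, happiness_multiple=2.1):
        """Derive (K, H, M) from the grid size and an expansion factor,
        instead of fixing K and M in the config. Rounding rules assumed."""
        m = active_nodes                     # one share per active node
        k = max(1, round(m / expansion))     # shares needed to reconstruct
        h = min(m, math.ceil(k * happiness_multiple))  # servers-of-happiness
        return k, h, m

    print(choose_parameters(10))  # -> (3, 7, 10)
    print(choose_parameters(50))  # -> (15, 32, 50)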
However, none of this addresses the current issue with the volunteer grid,
which is that a small number of servers provide the bulk of the storage
space. Because that number is less than the default H (7), and all of the
smaller servers are full, people using the default H cannot upload to the
volunteer grid, even though nearly 20 servers are active in the grid.
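For anyone unfamiliar with the failure mode, here is a rough sketch of why
those uploads are refused. This is my simplification: the real
servers-of-happiness criterion is a maximum-matching computation over share
placements, but with full servers it degenerates to a simple count.

    def upload_would_succeed(writable_servers, h):
        """Simplified happiness check: an upload is refused unless shares
        can be placed on at least H distinct writable servers."""
        return len(writable_servers) >= h

    # Roughly the volunteer-grid situation: ~20 servers are active,
    # but only a handful (hypothetically, 5) still have free space.
    print(upload_would_succeed(range(5), h=7))  # False: upload fails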
--
Shawn