[tahoe-dev] 1.6.0 When storing shares (from another machine), Python consumes --> 100% of local (storage node) CPU

Zooko O'Whielacronx zookog at gmail.com
Mon Mar 8 20:35:44 PST 2010


On Mon, Mar 8, 2010 at 2:01 PM, Brian Warner <warner at lothar.com> wrote:
> I've identified an O(n**2) CPU/string-allocation misbehavior in Foolscap
> when receiving large strings. This could explain the memory and CPU
> problems in Tahoe storage servers when you're uploading large mutable
> files to them (where "large" means segments that are more than about
> 10MB, which for the default 3-of-10 means filesizes above 30MB). In
> particular, uploading a 100MB mutable file at 3-of-10, leading to 33MB
> blocks, appears to take about three minutes to unpack, using 100MB in
> the process (on each server).
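
[A hypothetical illustration of the class of bug described above, not
Foolscap's actual code: a receive-side parser that consumes tokens by
re-slicing its buffer copies the remaining bytes on every token, so total
work grows as O(n**2) in the buffer size; advancing an offset instead is
O(n). Function names and sizes here are made up for the sketch.]

```python
def parse_quadratic(data, chunk=4096):
    """Consume fixed-size tokens by re-slicing the buffer each time.
    Each slice copies the whole remainder, so total copying is
    O(n**2) in len(data)."""
    tokens = 0
    buf = data
    while buf:
        buf = buf[chunk:]   # copies len(buf) - chunk bytes every pass
        tokens += 1
    return tokens

def parse_linear(data, chunk=4096):
    """Consume the same tokens by advancing an offset: no copying,
    O(n) total."""
    tokens = 0
    offset = 0
    while offset < len(data):
        offset += chunk
        tokens += 1
    return tokens

# 1 MB here for brevity; the blocks reported above were ~33 MB, where
# the quadratic copying becomes minutes of CPU time.
data = b"x" * (1024 * 1024)
assert parse_quadratic(data) == parse_linear(data)
```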

So this could explain Jody Harris's CPU-usage problem (reported in
this thread), since he uses mutable files for his backups. Jody, was
the "largish" file that you were uploading mutable? What was its size
(order of magnitude)?

This would also explain Stott's report of CPU-usage problems when
uploading a large mutable file, in the initial problem report of #962.

> I still don't have an explanation for reports of slowdowns and large
> memory consumption for large *immutable* files.

What reports?

> http://foolscap.lothar.com/trac/ticket/149 has some more details. The
> fix for this will be inside Foolscap's receive-side token parser, and
> Tahoe won't even notice.

Do you think we can try to get this Foolscap fix into Ubuntu Lucid?

Hm, it looks like this might be related to #383. (See also #327.)

Regards,

Zooko

http://allmydata.org/trac/tahoe-lafs/ticket/327 # performance
measurement of directories
http://allmydata.org/trac/tahoe-lafs/ticket/383 # large directories
take a long time to modify
http://allmydata.org/trac/tahoe-lafs/ticket/962 # performance problems
testing on a 45-disk, dual-core 3.3 GHz, 4 GB memory box
