Changes between Initial Version and Version 1 of Performance/Old


Timestamp:
2011-04-11T16:21:46Z
Author:
zooko
Comment:

copy Performance to Performance/Old

This is an archived copy of the old Performance page. It contains some data
that should be of interest to a hacker who is investigating performance, but
it is not of interest to a user of modern versions of Tahoe-LAFS, since those
are so different from the old versions that this data was about.

Some basic notes on performance:

DISCLAIMER: the memory footprint measurements documented on this page and
graphed (see the hyperlinks below) are based on !VmSize in Linux. !VmSize
almost certainly doesn't correlate with what you care about. For example, it
doesn't correlate very well at all with whether your server will go into swap
thrash, or how much RAM you need to provision for your server, or, well,
anything that you care about. Yes, in case it isn't clear, I (Zooko) consider
this measurement to be useless. Please see ticket #227, in which I go into
more detail about this.
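
For reference, !VmSize and RSS are read from the Linux /proc interface. Here
is a minimal sketch of taking such a sample; it is illustrative only, not the
munin plugin that produced the graphs below.

{{{
# Minimal sketch: read VmSize and VmRSS for the current process from
# /proc/self/status (Linux).  Illustrative only; not the munin plugin
# behind the graphs on this page.
def read_memory_usage():
    usage = {}
    for line in open("/proc/self/status"):
        if line.startswith(("VmSize:", "VmRSS:")):
            name, value, unit = line.split()
            usage[name.rstrip(":")] = int(value)  # reported in kB
    return usage

print(read_memory_usage())
}}}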

== Memory Footprint ==

We try to keep the Tahoe memory footprint low by continuously monitoring the
memory consumed by common operations like upload and download.

For each currently active upload or download, we never handle more than a
single segment of data at a time. This serves to keep the data-driven
footprint down to something like 4MB or 5MB per active upload/download.
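
To illustrate why the data-driven footprint stays bounded, here is a minimal
sketch of segment-at-a-time processing; the segment size and function names
are placeholders for this example, not Tahoe's actual encoder API.

{{{
# Sketch of segment-at-a-time processing: only one segment is in memory at
# any moment, so peak memory is bounded by the segment size (plus encoding
# expansion), not by the file size.  SEGMENT_SIZE and process_segment are
# placeholders, not Tahoe's real encoder API.
SEGMENT_SIZE = 1024 * 1024

def upload_file(f, process_segment):
    while True:
        segment = f.read(SEGMENT_SIZE)
        if not segment:
            break
        process_segment(segment)  # encrypt, erasure-code, send, then discard
}}}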

Some other notes on memory footprint:

 * importing sqlite (for the share-lease database) raised the static
   footprint by 6MB, going from 24.3MB to 31.5MB (as evidenced by the munin
   graph from 2007-08-29 to 2007-09-02).

 * importing nevow and twisted.web (for the web interface) raised the static
   footprint by about 3MB (from 12.8MB to 15.7MB).

 * importing pycryptopp (which began on 2007-11-09) raised the static
   footprint (on a 32-bit machine) by about 6MB (from 19MB to 25MB). The
   64-bit machine footprint rose by 17MB (from 122MB to 139MB).

The
[http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_memstats.html 32-bit memory usage graph]
shows our static memory footprint on a 32-bit machine (starting a node but not
doing anything with it) to be about 24MB. Uploading one file at a time gets
the node to about 29MB. (We only process one segment at a time, so peak memory
consumption is reached once the file is a few MB in size and does not grow
beyond that.) Uploading multiple files at once would increase this.

We also have a
[http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_memstats_64.html 64-bit memory usage graph], which currently shows a disturbingly large static footprint.
We've determined that simply importing a few of our support libraries (such
as Twisted) accounts for most of this expansion, before the node is even
started. The cause of this is still being investigated: we can think of
plenty of reasons for it to be 2x, but the results show something closer to
6x.
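
One way to investigate is to sample !VmSize before and after each import; a
rough sketch (the module list is just an example of libraries to suspect):

{{{
# Sketch: measure how much each import grows VmSize.  The module list is only
# an example; substitute whichever libraries are under suspicion.
def vmsize_kb():
    for line in open("/proc/self/status"):
        if line.startswith("VmSize:"):
            return int(line.split()[1])

for name in ["twisted.internet.reactor", "nevow", "pycryptopp"]:
    before = vmsize_kb()
    __import__(name)
    print("%-25s +%d kB" % (name, vmsize_kb() - before))
}}}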

== Network Speed ==

=== Test Results ===

With a 3-server testnet in colo and an uploading node at home (on a DSL line
that gets about 78kBps upstream and has a 14ms ping time to colo), release
0.5.1-34 takes 820ms-900ms per 1kB file uploaded (80-90s for 100 files, 819s
for 1000 files). The DSL results are occasionally worse than usual, when the
owner of the DSL line is using it for other purposes while a test is taking
place.

'scp' of 3.3kB files (simulating the 3.3x expansion) takes 8.3s for 100 files
and 79s for 1000 files: about 80ms each.

Doing the same uploads locally on my laptop (both the uploading node and the
storage nodes are local) takes 46s for 100 1kB files and 369s for 1000 files.

Small files seem to be limited by a per-file overhead; large files are
limited by the link speed.
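
These observations fit a simple linear model: a fixed per-file overhead plus
the time to push the FEC-expanded bytes over the link. A rough sketch using
the approximate numbers quoted above (not authoritative figures):

{{{
# Rough model: upload time = per-file overhead + (expanded bytes / upstream
# rate).  The constants are the approximate figures quoted on this page.
PER_FILE_OVERHEAD = 0.85       # seconds (820ms-900ms measured per 1kB file)
UPSTREAM_BYTES_PER_SEC = 78e3  # ~78kBps DSL upstream
EXPANSION = 3.3                # FEC expansion factor

def predicted_upload_seconds(size_bytes):
    return PER_FILE_OVERHEAD + (size_bytes * EXPANSION) / UPSTREAM_BYTES_PER_SEC

print(predicted_upload_seconds(1e3))   # ~0.9s: dominated by per-file overhead
print(predicted_upload_seconds(10e6))  # ~424s: dominated by link speed
}}}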

The munin
[http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_delay.html delay graph] and
[http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_rate.html rate graph] show these Ax+B numbers (transfer rate "A" and per-file delay "B") for a node in colo and a node behind a DSL line.

The
[http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_delay_rtt.html delay*RTT graph] shows this per-file delay as a multiple of the average round-trip
time between the client node and the testnet. Much of the work done to upload
a file involves waiting for messages to make a round trip, so expressing the
per-file delay in units of RTT helps to compare the observed performance
against the predicted value.
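
As a worked example of that normalization, using approximate figures that
appear elsewhere on this page (14ms RTT, about 9 sequential round trips per
share, roughly 850ms measured per small file):

{{{
# Worked example using the approximate figures quoted on this page.
rtt = 0.014                      # 14ms round-trip time to colo
per_file_delay = 0.85            # ~850ms measured per small file
print(per_file_delay / rtt)      # ~61: total per-file delay, in RTTs
print(9 * rtt / per_file_delay)  # ~0.15: the ~9 protocol round trips alone
}}}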

=== Mutable Files ===

Tahoe's mutable files (sometimes known as "SSK" files) are encoded
differently from the immutable ones (aka "CHK" files). Creating these mutable
file slots currently (in release 0.7.0) requires generating an RSA keypair.
[http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_SSK_creation.html This graph]
tracks the amount of time it takes to perform this step.
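
A minimal sketch of that kind of measurement, assuming pycryptopp's
rsa.generate interface and an illustrative 2048-bit key size; this is not the
code that feeds the graph:

{{{
# Sketch: time RSA keypair generation, the dominant cost of creating a
# mutable-file slot.  The 2048-bit size is illustrative; this is not the
# code that feeds the graph above.
import time
from pycryptopp.publickey import rsa

start = time.time()
signing_key = rsa.generate(2048)
print("keypair generation took %.2f seconds" % (time.time() - start))
}}}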

There is also per-file overhead for upload and download, just like with CHK
files, mostly involving the queries to find out which servers are holding
which versions of the file. The
[http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_delay_SSK.html mutable-file delay graph]
shows this "B" per-file latency value.

The "A" transfer rate for SSK files is also tracked in this
[http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_speedstats_rate_SSK.html SSK rate graph].

=== Roundtrips ===

The 0.5.1 release requires about 9 roundtrips for each share it uploads. The
upload algorithm sends data to all shareholders in parallel, but these 9
phases are done sequentially. The phases are:

 1. allocate_buckets
 2. send_subshare (once per segment)
 3. send_plaintext_hash_tree
 4. send_crypttext_hash_tree
 5. send_subshare_hash_trees
 6. send_share_hash_trees
 7. send_UEB
 8. close
 9. dirnode update

We need to keep the send_subshare calls sequential (to keep our memory
footprint down), and we need a barrier between the close and the dirnode
update (for robustness and clarity), but the others could be pipelined. At 9
roundtrips and a 14ms RTT, 9*14ms = 126ms is spent just waiting, which
accounts for about 15% of the measured upload time.

Pipelining steps 2-8 (using the attached pipeline-sends.diff patch) does
indeed seem to bring the time-per-file down from 900ms to about 800ms,
although the results aren't conclusive.
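
The idea behind the patch, sketched with Twisted/Foolscap-style remote calls;
the method names simply mirror the phase list above, and this is an
illustration of the pipelining approach rather than the patch itself:

{{{
# Sketch of the pipelining idea: keep the per-segment data sends sequential
# (so only one segment is in memory at a time), but send the remaining small
# messages back-to-back and wait for them together instead of paying one
# round trip per message.  Method names mirror the phase list above; this is
# an illustration, not the actual patch.
from twisted.internet import defer

def upload_share_pipelined(rref, segments, hashes, ueb):
    d = defer.succeed(None)
    for i, seg in enumerate(segments):
        # sequential: wait for each segment to be acknowledged before the next
        d.addCallback(lambda _, i=i, seg=seg:
                      rref.callRemote("send_subshare", i, seg))

    def _send_trailers(_):
        trailers = [
            rref.callRemote("send_plaintext_hash_tree", hashes["plaintext"]),
            rref.callRemote("send_crypttext_hash_tree", hashes["crypttext"]),
            rref.callRemote("send_subshare_hash_trees", hashes["subshare"]),
            rref.callRemote("send_share_hash_trees", hashes["share"]),
            rref.callRemote("send_UEB", ueb),
            rref.callRemote("close"),
        ]
        return defer.gatherResults(trailers)  # one wait instead of six
    d.addCallback(_send_trailers)
    return d  # the dirnode update still waits for this Deferred to fire
}}}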

With the pipeline-sends patch, my uploads take A+B*size time, where A is
790ms and B is 1/(23.4kBps). 3.3/B gives the same speed that basic 'scp'
gets, which ought to be my upstream bandwidth. This suggests that the main
limitations on upload speed are the constant per-file overhead and the FEC
expansion factor.
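
A quick check of that arithmetic:

{{{
# Quick check of the numbers above.
B = 1 / 23.4e3        # seconds per plaintext byte
expansion = 3.3       # FEC expansion factor
print(expansion / B)  # ~77e3 bytes/sec, close to the 78kBps DSL upstream
}}}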

== Storage Servers ==

== System Load ==

The source:src/allmydata/test/check_load.py tool can be used to generate
random upload/download traffic, to see how much load a Tahoe grid imposes on
its hosts.
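
A hedged sketch of the kind of traffic mix used in the tests below (mostly
downloads, exponentially distributed file sizes, a short pause between
operations), assuming a webapi-style gateway that accepts PUT /uri uploads
and GET /uri/<cap> downloads; it is an illustration, not check_load.py
itself:

{{{
# Hedged sketch of a load generator like the one described below: 80% reads,
# 20% writes, exponentially distributed file sizes, a short pause between
# operations.  It assumes a webapi-style gateway (PUT /uri to upload, GET
# /uri/<cap> to download) and is not the real check_load.py.
import os, random, time
from urllib.request import Request, urlopen

GATEWAY = "http://127.0.0.1:3456"
MEAN_SIZE = 10 * 1000   # 10kB mean file size (as in test one)
DELAY = 0.1             # 100ms between requests
caps = []

while True:
    if caps and random.random() < 0.8:          # 80% downloads
        cap = random.choice(caps)
        urlopen(GATEWAY + "/uri/" + cap).read()
    else:                                       # 20% uploads
        size = int(random.expovariate(1.0 / MEAN_SIZE))
        req = Request(GATEWAY + "/uri", data=os.urandom(size), method="PUT")
        caps.append(urlopen(req).read().decode("ascii").strip())
    time.sleep(DELAY)
}}}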

=== test one: 10kB mean file size ===

Preliminary results on the Allmydata test grid (14 storage servers spread
across four machines, each a roughly 3GHz P4, plus two web servers): we used
three check_load.py clients running with a 100ms delay between requests, an
80%-download/20%-upload traffic mix, and file sizes distributed exponentially
with a mean of 10kB. These three clients got about 8-15kBps downloaded and
2.5kBps uploaded, doing about one download per second and 0.25 uploads per
second. These traffic rates were higher at the beginning of the test (when
the directories were smaller and thus faster to traverse).

The storage servers were minimally loaded. Each storage node was consuming
about 9% of its CPU at the start of the test, 5% at the end. These nodes were
receiving about 50kbps throughout, and sending 50kbps initially (increasing
to 150kbps as the dirnodes got larger). Memory usage was trivial, about 35MB
!VmSize per node and 25MB RSS. The load average on a 4-node box was about
0.3.

The two machines serving as web servers (performing all encryption, hashing,
and erasure-coding) were the most heavily loaded. The clients distributed
their requests randomly between the two web servers. Each server averaged
60%-80% CPU usage. Memory consumption was minor: 37MB !VmSize and 29MB RSS on
one server, 45MB/33MB on the other. Load average grew from about 0.6 at the
start of the test to about 0.8 at the end. Outbound network traffic
(including both client-side plaintext and server-side shares) was about
600Kbps for the whole test, while the inbound traffic started at 200Kbps and
rose to about 1Mbps at the end.

=== test two: 1MB mean file size ===

Same environment as before, but the mean file size was set to 1MB instead of
10kB.

{{{
clients: 2MBps down, 340kBps up, 1.37 fps down, .36 fps up
tahoecs2: 60% CPU, 14Mbps out, 11Mbps in, load avg .74  (web server)
tahoecs1: 78% CPU, 7Mbps out, 17Mbps in, load avg .91  (web server)
tahoebs4: 26% CPU, 4.7Mbps out, 3Mbps in, load avg .50  (storage server)
tahoebs5: 34% CPU, 4.5Mbps out, 3Mbps in  (storage server)
}}}

Load is about the same as before, but of course the bandwidths are larger.
For this file size, the per-file overhead seems to be more of a limiting
factor than per-byte overhead.

=== test three: 80% upload, 20% download, 1MB mean file size ===

Same environment as test two, but 80% of the operations are uploads.

{{{
clients: 150kBps down, 680kBps up, .14 fps down, .67 fps up
tahoecs1: 62% CPU, 11Mbps out, 2.9Mbps in, load avg .85
tahoecs2: 57% CPU, 10Mbps out, 4Mbps in, load avg .76
tahoebs4: 16% CPU, 700kBps out, 5.4Mbps in, load avg 0.4ish
tahoebs5: 21%, 870kBps out, 5.1Mbps in, load avg about 0.35
}}}

Overall throughput is about half that of the download-heavy case. Either
uploading files or modifying the dirnodes appears to be more expensive than
downloading. The CPU usage on the web servers was lower, suggesting that the
expense might be in round trips rather than actual computation.

=== initial conclusions ===

So far, Tahoe is scaling as designed: the client nodes are the ones doing
most of the work, since these are the easiest to scale. In a deployment where
central machines are doing the encoding work, CPU on those machines will be
the first bottleneck. Profiling can be used to determine how the upload
process might be optimized: we don't yet know whether encryption, hashing, or
encoding is the primary CPU consumer. We can change the upload/download ratio
to examine upload and download separately.

Deploying large networks in which clients are not doing their own encoding
will require provisioning sufficient CPU on the central encoding machines.
Storage servers use minimal CPU, so having every storage server also act as a
web/encoding server is a natural approach.