.. -*- coding: utf-8-with-signature -*-

====================
Servers of Happiness
====================

When you upload a file to a Tahoe-LAFS grid, you expect that it will
stay there for a while, and that it will do so even if a few of the
peers on the grid stop working, or if something else goes wrong. An
upload health metric helps to make sure that this actually happens.
An upload health metric is a test that looks at a file on a Tahoe-LAFS
grid and says whether or not that file is healthy; that is, whether it
is distributed on the grid in such a way as to ensure that it will
probably survive in good enough shape to be recoverable, even if a few
things go wrong between the time of the test and the time that it is
recovered. Our current upload health metric for immutable files is called
'servers-of-happiness'; its predecessor was called 'shares-of-happiness'.

shares-of-happiness used the number of encoded shares generated by a
file upload to say whether or not it was healthy. If there were more
shares than a user-configurable threshold, the file was reported to be
healthy; otherwise, it was reported to be unhealthy. In normal
situations, the upload process would distribute shares fairly evenly
over the peers in the grid, and in that case shares-of-happiness
worked fine. However, because it only considered the number of shares,
and not where they were on the grid, it could not detect situations
where a file was unhealthy because most or all of the shares generated
from the file were stored on one or two peers.

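For illustration, the old test amounted to little more than a count; this
is a minimal sketch with invented names, not Tahoe-LAFS's actual code::

    # shares-of-happiness (the old metric): healthy iff enough shares
    # exist, regardless of where they live.
    def shares_of_happiness(share_to_peer, threshold):
        # share_to_peer maps each share number to the peer holding it.
        return len(share_to_peer) >= threshold

    # A layout with ten shares crammed onto a single peer still passes,
    # which is exactly the weakness described above.
    layout = {sharenum: "peer-0" for sharenum in range(10)}
    assert shares_of_happiness(layout, threshold=7)
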
servers-of-happiness addresses this by extending the share-focused
upload health metric to also consider the location of the shares on the
grid. servers-of-happiness looks at the mapping of peers to the shares
that they hold, and compares the cardinality of the largest happy subset
of those peers to a user-configurable threshold. A happy subset of peers
has the property that any k (where k is as in k-of-n encoding) peers
within the subset can reconstruct the source file. This definition of
file health provides a stronger assurance of file availability over
time; with 3-of-10 encoding, and happy=7, a healthy file is still
guaranteed to be available even if 4 peers fail, because at least
7 - 4 = 3 = k peers of the happy subset remain.

Measuring Servers of Happiness
==============================

We calculate servers-of-happiness by computing a matching on a
bipartite graph that is related to the layout of shares on the grid.
One set of vertices is the peers on the grid, and one set of vertices is
the shares. An edge connects a peer and a share if the peer will (or
does, for existing shares) hold the share. The size of the maximum
matching on this graph is used as the happiness of the upload; as
explained below, it is a lower bound on the size of the largest happy
peer set that exists for the upload.

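The following is a minimal, self-contained sketch of that calculation. It
is not the Tahoe-LAFS implementation, and the names are invented for
illustration; it finds a maximum matching between peers and shares with
the classic augmenting-path technique and reports its size::

    def servers_of_happiness(peers_to_shares):
        # peers_to_shares maps each peer to the set of share numbers it
        # holds (or will hold).  Returns the size of a maximum bipartite
        # matching, i.e. the happiness of the layout.
        share_to_peer = {}          # current matching: share -> peer

        def augment(peer, seen):
            # Try to match `peer`, re-matching other peers if needed.
            for share in peers_to_shares[peer]:
                if share in seen:
                    continue
                seen.add(share)
                other = share_to_peer.get(share)
                if other is None or augment(other, seen):
                    share_to_peer[share] = peer
                    return True
            return False

        happiness = 0
        for peer in peers_to_shares:
            if augment(peer, set()):
                happiness += 1
        return happiness

    # With 3-of-10 encoding and happy=7: ten shares spread over ten
    # peers, one share each, give a matching of size 10, so the layout
    # is comfortably happy.
    spread_out = {"peer%d" % i: {i} for i in range(10)}
    assert servers_of_happiness(spread_out) == 10
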
First, note that a bipartite matching of size n corresponds to a happy
subset of size n. This is because a bipartite matching of size n implies
that there are n peers such that each peer holds a share that no other
peer holds. Then any k of those peers collectively hold k distinct
shares, and can restore the file.

A bipartite matching of size n is not necessary for a happy subset of
size n, however (so it is not correct to say that the size of the
maximum matching on this graph is the size of the largest happy subset
of peers that exists for the upload). For example, consider a file with
k = 3, and suppose that each peer holds all three of the shares needed
to reconstruct it. Then, since any peer from the original upload can
restore the file, if there are 10 peers holding shares, and the
happiness threshold is 7, the upload should be declared happy, because
there is a happy subset of size 10, and 10 > 7. However, since a maximum
matching on the bipartite graph related to this layout has only 3 edges,
Tahoe-LAFS declares the upload unhealthy. Though such a layout is not
actually unhealthy, it is inefficient; for k = 3, with n peers each
holding every share, it corresponds to an expansion factor of n (10x in
this example). Layouts that are declared healthy by the bipartite graph
matching approach have the property that they correspond to uploads that
are either already relatively efficient in their utilization of space,
or can be made to be so by deleting shares; and that place all of the
shares that they generate, enabling redistribution of shares later
without having to re-encode the file. Also, it is computationally
reasonable to compute a maximum matching in a bipartite graph, and there
are well-studied algorithms to do that.

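The degenerate layout above is easy to check mechanically. This sketch
uses the third-party 'networkx' library (not a Tahoe-LAFS dependency; it
is used here only to double-check the example) to confirm that ten peers
each holding the same three shares yield a maximum matching of size 3::

    import networkx as nx
    from networkx.algorithms import bipartite

    graph = nx.Graph()
    peers = ["peer%d" % i for i in range(10)]
    shares = ["share%d" % j for j in range(3)]
    graph.add_nodes_from(peers, bipartite=0)
    graph.add_nodes_from(shares, bipartite=1)
    # Every peer holds every one of the three shares.
    graph.add_edges_from((p, s) for p in peers for s in shares)

    matching = bipartite.maximum_matching(graph, top_nodes=peers)
    # The returned dict contains both endpoints of every matched edge,
    # so the matching itself has half that many entries.
    assert len(matching) // 2 == 3
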
Issues
======

The uploader is good at detecting unhealthy upload layouts, but it
doesn't always know how to make an unhealthy upload into a healthy
upload if it is possible to do so; it attempts to redistribute shares to
achieve happiness, but only in certain circumstances. The redistribution
algorithm isn't optimal, either, so even in these cases it will not
always find a happy layout if one can be arrived at through
redistribution. We are investigating improvements to address these
issues.

We don't use servers-of-happiness for mutable files yet; this fix will
likely come in Tahoe-LAFS version 1.13.


============================
Upload Strategy of Happiness
============================

As mentioned above, the uploader is good at detecting instances that do
not pass the servers-of-happiness test, but the share distribution
algorithm is not always successful in instances where happiness can be
achieved. A new placement algorithm designed to pass the
servers-of-happiness test, titled 'Upload Strategy of Happiness', is
meant to fix these instances where the uploader is unable to achieve
happiness.

Calculating Share Placements
============================

We calculate share placement like so (a Python sketch of these steps
appears after the rationale below):

0. Start with an ordered list of servers. Maybe *2N* of them.

1. Query all servers for existing shares.

1a. Query remaining space from all servers. Every server that has
    enough free space is considered "readwrite" and every server with too
    little space is "readonly".

2. Construct a bipartite graph G1 of *readonly* servers to pre-existing
   shares, where an edge exists between an arbitrary readonly server S and an
   arbitrary share T if and only if S currently holds T.

3. Calculate a maximum matching graph of G1 (a set of S->T edges that has or
   is-tied-for the highest "happiness score"). There is a clever efficient
   algorithm for this, named "Ford-Fulkerson". There may be more than one
   maximum matching for this graph; we choose one of them arbitrarily, but
   prefer earlier servers. Call this particular placement M1. The placement
   maps shares to servers, where each share appears at most once, and each
   server appears at most once.

4. Construct a bipartite graph G2 of readwrite servers to pre-existing
   shares. Then remove any edge (from G2) that uses a server or a share found
   in M1. Let an edge exist between server S and share T if and only if S
   already holds T.

5. Calculate a maximum matching graph of G2, call this M2, again preferring
   earlier servers.

6. Construct a bipartite graph G3 of (only readwrite) servers to
   shares (some shares may already exist on a server). Then remove
   (from G3) any servers and shares used in M1 or M2 (note that we
   retain servers/shares that were in G1/G2 but *not* in the M1/M2
   subsets).

7. Calculate a maximum matching graph of G3, call this M3, preferring earlier
   servers. The final placement table is the union of M1+M2+M3.

8. Renew the shares on their respective servers from M1 and M2.

9. Upload share T to server S if an edge exists between S and T in M3.

10. If any placements from step 9 fail, mark that server as read-only. Go back
    to step 2 (since we may discover a server is/has-become read-only, or has
    failed, during step 9).

Rationale (Step 4): when we see pre-existing shares on read-only servers, we
prefer to rely upon those (rather than the ones on read-write servers), so we
can maybe use the read-write servers for new shares. If we picked the
read-write server's share, then we couldn't re-use that server for new ones
(we only rely upon each server for one share, more or less).

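A condensed sketch of steps 2-7 in Python follows. It is illustrative
only, not the Tahoe-LAFS implementation: the helper names are invented,
servers and shares are plain identifiers, and a simple augmenting-path
matcher stands in for the Ford-Fulkerson-style matching of step 3
(preferring earlier servers simply by trying them first)::

    def _maximum_matching(candidate_edges, servers_in_order):
        # candidate_edges maps server -> set of shares it may be matched
        # to.  Returns a dict share -> server describing one maximum
        # matching; earlier servers are preferred because they are tried
        # first.
        matched = {}                       # share -> server

        def augment(server, seen):
            for share in sorted(candidate_edges.get(server, ())):
                if share in seen:
                    continue
                seen.add(share)
                holder = matched.get(share)
                if holder is None or augment(holder, seen):
                    matched[share] = server
                    return True
            return False

        for server in servers_in_order:
            augment(server, set())
        return matched

    def calculate_placements(servers, readonly, existing, all_shares):
        # servers:    ordered list of server ids (step 0)
        # readonly:   set of servers with too little space (step 1a)
        # existing:   dict server -> set of shares it already holds (step 1)
        # all_shares: set of all share numbers for this file
        # Returns a dict share -> server, the union M1 + M2 + M3.
        readwrite = [s for s in servers if s not in readonly]

        # Steps 2-3: match read-only servers to shares they already hold.
        g1 = {s: existing.get(s, set()) for s in servers if s in readonly}
        m1 = _maximum_matching(g1, servers)

        # Steps 4-5: match read-write servers to their pre-existing
        # shares, minus any share already used in M1 (the servers in M1
        # are read-only, so they are already excluded from readwrite).
        used_shares = set(m1)
        g2 = {s: existing.get(s, set()) - used_shares for s in readwrite}
        m2 = _maximum_matching(g2, servers)

        # Steps 6-7: match the remaining read-write servers to the
        # remaining shares (any such share may go to any such server).
        used_servers = set(m1.values()) | set(m2.values())
        used_shares |= set(m2)
        g3 = {s: all_shares - used_shares
              for s in readwrite if s not in used_servers}
        m3 = _maximum_matching(g3, servers)

        placements = {}
        placements.update(m1)   # renewed in place (step 8)
        placements.update(m2)   # renewed in place (step 8)
        placements.update(m3)   # actually uploaded (step 9)
        return placements

Steps 8-10 (renewing, uploading, and retrying after failures) then operate
on the returned placement table.
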
Properties of Upload Strategy of Happiness
==========================================

The size of the maximum bipartite matching is bounded by the size of the
smaller set of vertices. Therefore, when the set of servers is smaller than
the set of shares, the matching will leave some shares without a placement.
In this case the remaining shares are distributed as evenly as possible
across the set of writable servers.

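A sketch of that leveling step (a hypothetical helper, not the real
uploader code): shares that received no placement from the matching are
dealt out round-robin over the writable servers::

    from itertools import cycle

    def distribute_leftovers(unplaced_shares, writable_servers):
        # Deal the shares the matching could not place out to the
        # writable servers as evenly as possible, one at a time.
        placements = {server: set() for server in writable_servers}
        for share, server in zip(sorted(unplaced_shares),
                                 cycle(writable_servers)):
            placements[server].add(share)
        return placements

    # Ten shares over three writable servers: each ends up with 3 or 4.
    extra = distribute_leftovers(range(10), ["s1", "s2", "s3"])
    assert sorted(len(s) for s in extra.values()) == [3, 3, 4]
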
If the servers-of-happiness criteria can be met, the upload strategy of
happiness guarantees that shares will be placed on at least H distinct
servers. During file repair, if the set of servers is larger than N, the
algorithm will only attempt to spread shares over N distinct servers. For
both initial file upload and file repair, N should be viewed as the maximum
number of distinct servers shares can be placed on, and H as the minimum
number. The uploader will fail if the number of distinct servers is less
than H, and it will never attempt to exceed N.
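
The constraints in this paragraph amount to a simple check. This is a
minimal sketch, assuming 'placements' maps each share to the single
server it was placed on; the function name and error handling are
illustrative, not Tahoe-LAFS's actual API::

    def check_upload_constraints(placements, h, n):
        distinct_servers = set(placements.values())
        # The strategy never spreads shares over more than N servers.
        assert len(distinct_servers) <= n
        # The upload fails outright if fewer than H distinct servers
        # ended up holding shares.
        if len(distinct_servers) < h:
            raise RuntimeError(
                "unhappy upload: %d distinct servers, need at least %d"
                % (len(distinct_servers), h))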