'''Q: What is special about Tahoe-LAFS? Why should anyone care about it instead of [http://tahoe-lafs.org/trac/tahoe/wiki/RelatedProjects#OtherProjects other distributed storage systems]?'''

A1: Tahoe-LAFS is the first Free !Software/Open Source storage technology to offer ''provider-independent security''. ''Provider-independent security'' means that the integrity and confidentiality of your files are guaranteed by mathematics computed on the client side and are independent of the servers, which may be owned and operated by someone else. To learn more, read [http://tahoe-lafs.org/source/tahoe/trunk/docs/about.html our one-page explanation].

A2: Tahoe-LAFS provides extremely reliable, fault-tolerant storage. Even if you do not need its security properties, you might want to use Tahoe-LAFS as an extremely reliable storage system. (Tahoe-LAFS's security features do an excellent job of staying out of your way when you don't need them.)

'''Q: "Erasure-coding"? What's that?'''

A: You know how with RAID-5 you can lose any one drive and still recover? There is also something called RAID-6, where you can lose any two drives and still recover. Erasure coding is the generalization of this pattern: you get to configure how many drives you could lose and still recover. Tahoe-LAFS is typically configured to upload each file to 10 different drives, of which you can lose any 7 and still recover the entire file. This gives radically better reliability than comparable RAID setups, at a cost of only 3.3 times the storage space that a single copy takes. (This technique is also known as "forward error correction" and as an "information dispersal algorithm".)
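
If you want to play with the arithmetic behind this k-of-N scheme, here is a minimal sketch (this is not Tahoe-LAFS code, and the 5% per-drive failure probability is only an illustrative assumption) that computes the expansion factor, how many lost drives can be tolerated, and the chance that a file survives when drives fail independently:

{{{
#!python
# A minimal sketch of the k-of-N arithmetic described above: expansion factor,
# how many drives you can lose, and the chance the file survives if each drive
# fails independently with probability p (the 5% figure is illustrative only).
from math import comb

def erasure_stats(k, n, p_fail):
    expansion = n / k                     # total stored data vs. original size
    tolerated = n - k                     # shares (drives) you can afford to lose
    # The file survives if at least k of the n shares survive (binomial sum).
    p_survive = sum(comb(n, i) * (1 - p_fail) ** i * p_fail ** (n - i)
                    for i in range(k, n + 1))
    return expansion, tolerated, p_survive

for k, n in [(3, 10), (1, 2), (2, 3)]:    # Tahoe default, RAID-1-like, RAID-5-like
    exp, tol, p = erasure_stats(k, n, p_fail=0.05)
    print(f"{k}-of-{n}: expansion {exp:.2f}x, tolerates {tol} lost drives, "
          f"P(file survives) = {p:.6f}")
}}}

For the default 3-of-10 encoding this reports a 3.33x expansion and tolerance of 7 lost drives, matching the numbers above.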

'''[=#Q3_disable_encryption Q3]: Is there a way to disable the encryption phase and just use the encoding on the actual content? Won't that save a lot of CPU cycles?'''

A: There isn't currently a way to disable or skip the encryption phase, but if you watch the status page on your local Tahoe-LAFS node during an upload, you'll see that the encryption time is orders (yes, plural) of magnitude smaller than the upload time, so there isn't much performance to be gained by skipping the encryption. We prefer "secure by default", so without a compelling reason to allow insecure operation, our plan is to leave encryption turned on all the time.

'''Q: Where should I look for current documentation about the Tahoe-LAFS protocols?'''

A: http://tahoe-lafs.org/source/tahoe/trunk/docs/architecture.rst

'''Q: Does Tahoe-LAFS work on embedded devices such as a [http://www.pogoplug.com PogoPlug] or an [http://openwrt.org OpenWRT] router?'''

A: Yes! François Deppierraz contributes [http://tahoe-lafs.org/buildbot/builders/FranXois%20lenny-armv5tel a buildbot] which shows that Tahoe-LAFS builds and all the unit tests pass on his Intel SS4000-E NAS box running under Debian Squeeze. Zandr Milewski [http://tahoe-lafs.org/pipermail/tahoe-dev/2009-November/003157.html reported] that it took him only an hour to build, install, and test Tahoe-LAFS on a !PogoPlug.

'''Q: Does Tahoe-LAFS work on Windows?'''

A: Yes. Follow [http://tahoe-lafs.org/source/tahoe-lafs/trunk/docs/quickstart.html the standard quickstart instructions] to get Tahoe-LAFS running on Windows. (There was also an "Allmydata Windows client", but it is not actively maintained at the moment and relied on some components that are not open-source.)

'''Q: Does Tahoe-LAFS work on Mac OS X?'''

A: Yes. Follow [http://tahoe-lafs.org/source/tahoe-lafs/trunk/docs/quickstart.html the standard quickstart instructions] on Mac OS X and you will get a working command-line tool, just as on other Unixes.

'''Q: Can there be more than one storage folder on a storage node? So if a storage server contains 3 drives without RAID, can it use all 3 for storage?'''

A: Not directly. Each storage server has a single "base directory", which we abbreviate as $BASEDIR. The server keeps all of its shares in a subdirectory named $BASEDIR/storage/shares/ . (Note that you can symlink this to whatever you want: you can run most of the node from one place and store all the shares somewhere else.) Since there's only one such subdirectory, you can only use one filesystem per node.

On the other hand, shares are stored in a set of 1024 subdirectories of that one, named $BASEDIR/storage/shares/aa/, $BASEDIR/storage/shares/ab/, etc. If you were to symlink the first third of these to one filesystem, the next third to a second filesystem, and so on (hopefully with a script; see the sketch below), then you'd get about a third of the shares stored on each disk. The "how much space is available" and space-reservation tools would be confused, but basically everything else should work normally.
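
Here is a rough sketch of such a script (it is not part of Tahoe-LAFS; the lowercase base-32 alphabet, the {{{/mnt/disk1}}}..{{{/mnt/disk3}}} mount points, and the basedir path are assumptions you would adjust for your setup). It spreads the 1024 prefix directories round-robin across three filesystems. Run it on an empty storage directory, since it does not migrate existing shares, and remember that the space-reporting tools will still be confused:

{{{
#!python
# Spread Tahoe-LAFS share-prefix directories across several disks by symlinking
# each two-character prefix directory to a real directory on one of the disks.
# ALPHABET, BASEDIR, and DISKS below are assumptions; adjust them for your node.
import itertools, os

ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"       # assumed base-32 alphabet (32*32 = 1024 prefixes)
BASEDIR = os.path.expanduser("~/.tahoe")            # placeholder node base directory
SHARES = os.path.join(BASEDIR, "storage", "shares")
DISKS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]  # placeholder mount points

os.makedirs(SHARES, exist_ok=True)
prefixes = ["".join(p) for p in itertools.product(ALPHABET, repeat=2)]
for i, prefix in enumerate(prefixes):
    target = os.path.join(DISKS[i % len(DISKS)], "tahoe-shares", prefix)
    os.makedirs(target, exist_ok=True)
    link = os.path.join(SHARES, prefix)
    if not os.path.lexists(link):
        os.symlink(target, link)    # shares whose prefix matches this directory land on that disk
}}}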

'''Q: Would it make sense to just use RAID-0 and let Tahoe-LAFS deal with the redundancy?'''

A: The Allmydata grid didn't bother with RAID at all: each Tahoe-LAFS storage server node used a single spindle. The "RAID and/or Tahoe-LAFS" question depends upon how much you trust RAID vs. how much you trust Tahoe-LAFS, and on how expensive the different forms of repair would be. Tahoe-LAFS can correctly be thought of as a form of "application-level RAID", with more flexibility than the usual RAID 0/4/5 styles (I think RAID-1 is equivalent to 1-of-2 encoding, and RAID-5 is like 2-of-3).

Using RAID to achieve your redundancy gets you fairly fast repair, because it's all handled by a controller that sits right on top of the raw drive. Tahoe-LAFS's repair is a lot slower, because it is driven by a client that examines one file at a time, and because there are a lot of network round trips for each file. A repair of a 1TB RAID-5 array can easily finish in a day; if that 1TB is filled with a million Tahoe-LAFS files, the repair could take a month. On the other hand, many RAID configurations degrade significantly when a drive is lost, while Tahoe-LAFS's read performance is nearly unaffected, so repair events may be infrequent enough to let them happen quietly in the background without caring much about how long they take.

The optimal choice is a complicated one. Given inputs of:
 * how much data will be stored, and how it changes over time (inflow rate, churn)
 * expected drive failure rates (both single-sector errors and complete failures)
 * server/datacenter layout, inter/intra-colo bandwidth, and costs
 * drive/hardware costs
it becomes a tradeoff between money (the number of Tahoe-LAFS storage nodes, what sort of RAID [if any] you use for them, how many disks that means, how much those disks cost, how many computers you need to host them, and how much bandwidth you spend doing upload/download/repair), bandwidth costs, read/write performance, and the probability of file loss due to failures happening faster than repair.

In addition, Tahoe-LAFS's current repair code is not particularly clever: it doesn't put the new shares in exactly the right places, so you can easily get shares doubled up and not distributed as evenly as if you'd done a single upload. This is being tracked in ticket #610.

'''Q: Suppose I have a file of 100GB and 2 storage nodes, each with 75GB available. Will I be able to store the file, or does it have to fit within the space of a single node?'''

A: The ability to store the file will depend upon how you set the encoding parameters: you get to choose the tradeoff between expansion (how much space gets used) and reliability. The default settings are "3-of-10" (very conservative), which means the file is encoded into 10 shares, any 3 of which are sufficient to reconstruct it. That means each share is 1/3rd the size of the original file (plus a small overhead, less than 0.5% for large files). For your 100GB file, that means ten shares of about 33GB each, which would not fit: each server could hold two shares, but all ten could not be placed, so the upload would return an error. But you could set the encoding to 2-of-2, which would give you two 50GB shares, and it would happily put one share on each server. That would store the file, but it wouldn't give you any redundancy: a failure of either server would prevent you from recovering the file. You could also set the encoding to 4-of-6, which would generate six 25GB shares and put three on each server. This would still be vulnerable to either server being down (since neither server has enough shares to reconstruct the whole file by itself), but it would tolerate damage to an individual share (if only one share file were damaged, there would still be five other shares, and we only need four). A lot of disk errors affect only a single file, so there's some benefit to this even if you're still vulnerable to a full disk/server failure.
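
Here is a back-of-the-envelope sketch of that arithmetic (not Tahoe-LAFS code; it ignores the sub-0.5% per-share overhead and assumes shares are spread as evenly as the servers' free space allows):

{{{
#!python
# Check which encodings let a 100 GB file fit on two servers with 75 GB free each,
# and whether the file would survive the loss of one server.
def check(k, n, file_gb=100.0, servers=2, free_gb=75.0):
    share_gb = file_gb / k                      # each share is 1/k of the file
    per_server_cap = int(free_gb // share_gb)   # how many shares fit on one server
    placeable = per_server_cap * servers >= n   # can all n shares find a home?
    evenly = -(-n // servers)                   # ceil(n/servers): shares per server if spread evenly
    survives = placeable and (n - evenly) >= k  # enough shares left if one server dies?
    print(f"{k}-of-{n}: share {share_gb:.1f} GB, up to {per_server_cap} shares/server, "
          f"all {n} shares placeable: {placeable}, survives losing one server: {survives}")

for k, n in [(3, 10), (2, 2), (4, 6)]:          # the three encodings discussed above
    check(k, n)
}}}

It reports that 3-of-10 cannot be placed at all, that 2-of-2 fits but cannot survive the loss of either server, and that 4-of-6 fits with three shares per server, matching the discussion above.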

'''Q: Do I need to shut down all clients/servers to add a storage node?'''

A: No. You can add or remove clients or servers any time you like. The central "introducer" is responsible for telling clients and servers about each other, and it acts as a simple publish-subscribe hub, so everything is very dynamic. Clients re-evaluate the list of available servers each time they do an upload. This is great for long-term servers, but can be a bit surprising in the short term: if you've just started your client and upload a file before it has had a chance to connect to all of the servers, your file may be stored on a small subset of the servers, with less reliability than you wanted. We're still working on a good way to prevent this while retaining the dynamic server-discovery properties (probably in the form of a client-side configuration statement that lists all the servers you expect to connect to, so that the client can refuse to upload until it is connected to at least those). A list like that might require a client restart when you want to add to the "required" list, but we could implement such a feature without a restart requirement too.

'''Q: If I had 3 locations, each with 5 storage nodes, could I configure the grid to ensure a file is written to each location so that I could handle all servers at a particular location going down?'''

A: Not directly. We have tickets about that (#467, #302), but it's deeper than it looks and we haven't come to a conclusion on how to build it.

The current system will try to distribute the shares as widely as possible, using a different pseudo-random permutation for each file, but it is completely unaware of server properties like "location". If you have more free servers than shares, it will only put one share on any given server, but you might wind up with more shares in one location than in the others. For example, if you have 15 servers in three locations A:1/2/3/4/5, B:6/7/8/9/10, C:11/12/13/14/15, and use the default 3-of-10 encoding, your worst case is winding up with shares on servers 1/2/3/4/5/6/7/8/9/10 and not using location C at all. The most ''likely'' case is that you'll wind up with 3 or 4 shares in each location, but there's nothing in the system to enforce that: it just shuffles all the servers into a ring, starts at 0, and assigns shares to servers around and around the ring until all the shares have a home. There's some math we could do to estimate the probability of outcomes like this but, since 15-choose-10 is only 3003, it's easier to just enumerate the possibilities:
 * (3, 3, 4): 1500
 * (2, 4, 4): 750
 * (2, 3, 5): 600
 * (1, 4, 5): 150
 * (0, 5, 5): 3
 * total: 3003
So you've got a 50% chance of the ideal distribution and about a 1-in-1000 chance of the worst-case distribution.
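
If you want to check those counts, here is a small brute-force sketch (not Tahoe-LAFS code; it assumes, as the answer above does, that every 10-server subset is equally likely and that each chosen server gets exactly one share):

{{{
#!python
# Enumerate every way to place one share on 10 of the 15 servers (5 per location)
# and tally how the 10 shares split across the three locations.
from itertools import combinations
from collections import Counter

servers = [(loc, i) for loc in "ABC" for i in range(5)]   # 3 locations x 5 servers
tally = Counter()
for chosen in combinations(servers, 10):
    per_location = Counter(loc for loc, _ in chosen)
    split = tuple(sorted(per_location.get(loc, 0) for loc in "ABC"))
    tally[split] += 1

for split, count in sorted(tally.items(), key=lambda kv: -kv[1]):
    print(split, count)
print("total:", sum(tally.values()))                      # 3003 = 15 choose 10
}}}

It prints the same tallies as the list above: 1500, 750, 600, 150, and 3 out of 3003.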

'''Q: Is it possible to modify a mutable file by "patching" it? If I have a file stored and I want to update a section in the middle, is that possible, or would the file need to be downloaded, patched, and re-uploaded?'''

A: Not at present. We've only implemented "Small Distributed Mutable Files" (SDMF) so far, which have the property that the whole file must be downloaded or uploaded at once. We have plans for "medium" MDMF files, which will fix this. MDMF files are broken into segments (the default segment size is 128KiB), and you only have to replace the segments that are dirtied by the write, so changing a single byte would only require uploading N/k*128KiB, or about 427KiB for the default 3-of-10 encoding. Kevan Carstensen is spending his summer implementing MDMF, thanks to the sponsorship of Google Summer of Code. Ticket #393 is tracking this work.

'''Q: How can Tahoe-LAFS ensure that every node ID is unique?'''

A: The node ID is randomly generated, so there is no way to guarantee its uniqueness. However, the ID is long enough that the probability of two randomly generated IDs colliding is negligible.

'''Q: If I upload the same file again and again, Tahoe-LAFS gives me the same capability string. How does Tahoe-LAFS identify that the client is the same when I upload files multiple times? Is it based on the node ID?'''

A: For immutable files this is true. The capability string is derived from two pieces of information: the content of the file and the "convergence secret". By default, the convergence secret is randomly generated by the node when it first starts up, then stored and re-used after that. So the same file content uploaded from the same node will always produce the same cap string. Uploading the file from a different node with a different convergence secret would result in a different cap string, and in a second copy of the file's contents being stored in the grid; there's no way to tell that the two stored files are the same, because they're encrypted with different keys.

'''Q: When I stop a node and start it again, will it have the same node ID as before?'''

A: Yes. The node ID is stored in the my_nodeid file in your tahoe directory.

'''Q: If I move the client node's base directory to a different machine and start the client there, will the node have the same node ID as it did on the previous machine?'''

A: Yes, as long as you move that my_nodeid file along with it.

'''Q: Is it possible to run multiple introducers on the same grid?'''

A: Faruque Sarker has been working on this as a Google Summer of Code project. His changes are due to be integrated in Tahoe-LAFS v1.9.0. For more information, please take a look at ticket #68.

'''Q: Will this thing only run when I tell it to?'''

A: Yes. First of all, it doesn't run except when you tell it to: you start it with {{{tahoe start}}} and stop it with {{{tahoe stop}}}. Secondly, the software doesn't act as a server unless you configure it to do so (it isn't like peer-to-peer software which automatically acts as a server as well as a client). Thirdly, the client doesn't do anything except in response to the user starting an upload or a download (it doesn't do anything automatically or in the background).