# cabs
A Content Addressable Blob Store for Node.
This implements something similar to the object store from git (the `.git/objects` folder). By default, blobs stored by cabs are sha256-hashed and stored in subdirectories to avoid putting too many files in a single directory. Both the hashing algorithm and the depth of the directories are configurable.
## write
```js
var writeStream = Cabs.write(path[, hashFunction][, blockSize]);
```
Pipe in a blob stream (e.g. `fs.createReadStream`), and get back objects for the various pieces stored. Each object has a hash of the block, as well as the starting and ending locations within the stream. You may optionally pass a string naming the hash function to use (defaults to `sha256`) and a number for the block size (defaults to 5 MB).
To learn more about hashing algorithm tradeoffs read the comments on this issue.
## read
```js
var readStream = Cabs.read(path);
```
Pipe in the objects from `write`; get back a readable stream of the blob.
## example
```js
var fs = require('fs');
var Cabs = require('cabs');

/** stream a movie into cabs, store hashes in hashes.json **/
fs.createReadStream('./movie.mp4')
  .pipe(Cabs.write('./store'))
  .pipe(fs.createWriteStream('./hashes.json'));

/** later, to retrieve the movie, stream the hashes into cabs **/
fs.createReadStream('./hashes.json')
  .pipe(Cabs.read('./store'))
  .pipe(fs.createWriteStream('./movie-copy.mp4'));
```
## Low Level Class
You also have access to the base `Cabs` class located at `cabs.Cabs`. Initialize it with a location, or optionally an options object with a string naming the hash function to use (defaults to `sha256`), the block size limit (defaults to 5 MB), and the depth of folders to use.
```js
var store = new cabs.Cabs('./location');
// or
var store = new cabs.Cabs({
  path: './location',
  hashFunction: 'sha256',
  limit: 5 * 1024 * 1024,
  depth: 3
});

store.write(buffer, callback);
// stores buffer, callback is called with the hash

store.read(hash, callback);
// calls the callback with the blob

store.rm(hash, callback);
// removes the file with the given hash

store.destroy(callback);
// deletes all the files related to the store,
// just a shortcut to rimraf so beware

store.readStream();
// same as Cabs.read

store.writeStream();
// same as Cabs.write

store.has(hash, callback);
// calls callback with true if it exists, otherwise false

store.check(hash, callback);
// similar to has but throws an error if the file doesn't exist
// or its hash doesn't match its address hash

store.writeFile(stream, callback);
// write to a single file on disk. Will only ever emit a single string:
// the hash for the combined file you streamed in.
// Unlike writeStream, which chunks a big file into multiple smaller ones
// which can be handled in memory, this method buffers to disk.
```