| Revision 208 (by dpavlin, 2009/11/21 13:45:54)
Sack - sharding memory hash in perl
The main design goal is to provide an interactive environment for querying
perl hashes which are bigger than the memory of a single machine.
It is implemented using TCP sockets between perl processes.
This allows horizontal scalability both on multi-core machines
and across the network to additional machines.
Reading data into the hash is done using any perl module which
returns a perl hash and supports offset and limit to select just a
subset of data (this is required to create disjoint shards).
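Any loader with that shape will do; a minimal sketch of such an input module, assuming a hypothetical load_input function and record layout (the real project uses modules like WebPAC::Input::ISI):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical input routine: returns a hash of records, honouring
# offset and limit so that each shard receives a disjoint subset.
sub load_input {
    my (%args) = @_;
    my ($offset, $limit) = @args{qw(offset limit)};

    # Stand-in for a real data source (invented records for illustration)
    my @all = map { { id => $_, title => "record $_" } } 1 .. 100;

    my %shard;
    foreach my $i ($offset .. $offset + $limit - 1) {
        last if $i > $#all;
        $shard{ $all[$i]{id} } = $all[$i];
    }
    return \%shard;
}

# Two calls with adjacent offset/limit windows never overlap
my $shard_a = load_input( offset => 0,  limit => 10 );
my $shard_b = load_input( offset => 10, limit => 10 );
```

Because the windows are disjoint, every record lands on exactly one shard, which is what makes the later merge step correct.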
Views are small perl snippets which are called for each record
on each shard with the record available in $rec. Views write their
data into the $out hash, which is automatically merged into the output.
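For example, a view that counts records per year might look like the snippet below. The field name year and the driver loop that feeds records to the view are assumptions for illustration; only the $rec / $out convention comes from Sack itself:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A view snippet: reads the current record from $rec and
# accumulates its result into $out (field 'year' is hypothetical)
my $view = sub {
    my ($rec, $out) = @_;
    $out->{count}{ $rec->{year} }++;
};

# Simulate one shard calling the view once per record
my $out = {};
$view->( $_, $out ) for (
    { year => 2008 },
    { year => 2009 },
    { year => 2009 },
);
```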
You can influence the default shard merge by adding + (plus sign)
to the name of a key to indicate that the key => value pairs below it
should have their values summed when combining shards.
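A minimal sketch of what the + hint implies when two shard outputs meet. The merge routine here is an illustration of the rule, not Sack's actual implementation:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Two shard outputs using a '+' key to request summing on merge
my $shard1 = { 'count+' => { 2009 => 2 } };
my $shard2 = { 'count+' => { 2009 => 3 } };

# Illustration of the merge rule: values under a key whose name
# contains '+' are summed; all other keys are simply overwritten
sub merge_shards {
    my ($merged, $shard) = @_;
    foreach my $key (keys %$shard) {
        foreach my $val (keys %{ $shard->{$key} }) {
            if ($key =~ /\+/) {
                $merged->{$key}{$val} += $shard->{$key}{$val};
            } else {
                $merged->{$key}{$val} = $shard->{$key}{$val};
            }
        }
    }
}

my $merged = {};
merge_shards( $merged, $_ ) for ( $shard1, $shard2 );
```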
If you have long field names, add # to the name of the key above the
value which you want to turn into an integer. This will reduce
memory usage on the master node.
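One way to picture the saving: the long string value is interned once and a small integer travels to the master instead. The interning table below is a guess at the mechanism, purely for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustration only: map each distinct string value to a small
# integer, so repeated long strings cost one table entry apiece
my %intern;
my $next_id = 0;

sub intern_value {
    my ($value) = @_;
    $intern{$value} = $next_id++ unless exists $intern{$value};
    return $intern{$value};
}

my $id1 = intern_value('Very long field name repeated many times');
my $id2 = intern_value('Very long field name repeated many times');
my $id3 = intern_value('Another field name');
```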
1. create cloud definition

   etc/cloud-name      IP addresses of nodes, similar to /etc/hosts
   etc/cloud-name.ssh  ssh configuration (user, compression, etc.)

2. shard data

   ./bin/shards.pl (hard-coded to use WebPAC::Input::ISI for now)

3. start server

4. start repl

5. start locally for development
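Step 1 boils down to two small text files. A hypothetical example (the cloud name, addresses, and ssh options are invented for illustration; the ssh file uses standard ssh_config directives):

```
# etc/mycloud -- one node address per line, /etc/hosts style
192.168.1.10
192.168.1.11

# etc/mycloud.ssh -- ssh options applied to all nodes
User dpavlin
Compression yes
```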