Syncing with bitpocket – a flexible, open source alternative to Dropbox

This continues from my previous post on the various online storage/sync solutions available today.

I’ve been a Dropbox (and Box, and Google Drive) user for a while now, and like it for its convenience. It is easy to set up and use, and lets you keep multiple devices in sync with next to no effort. However, I’ve always had some concerns over privacy and security. In light of a recent attack on one of these service providers, I started wondering how safe my files and accounts really are (not just with Dropbox, but with any online storage solution, including a home-brewed one).

I also have some concerns regarding the privacy of my documents. Say, I’ve got some sensitive data uploaded to an online storage service. Who’s to say these documents are safe from data mining, or (god forbid) human eyes? (I’m not pointing fingers at any individual storage provider here. Some may respect your privacy, others may not.) Many people would be extremely wary of the possibility of information harvesting (even if it is completely anonymized and automated) and/or leakage.

Then of course, there are some less critical, but nevertheless important limitations:

  1. Only x GB of (free) storage space. One can always upgrade to a paid package, but I don’t want to pay for 50 GB of storage when I’m only going to use 10 GB in the foreseeable future. Some services do provide a large amount of storage space for free, but most of them still charge you once your bandwidth usage exceeds a fraction of that amount.
  2. No support for multiple profiles. You have to put EVERYTHING you want to sync under one single top-level folder. This may not be a suitable or acceptable restriction in all situations.
  3. Lack of flexibility – you don’t get to move your repository around if you need to. Once you subscribe to a service, you’re locked into using their storage infrastructure exclusively.

Not all of the limitations I’ve described are present in any single service, and not all of them will matter to everybody. These are just a few issues that set me off on a personal quest to find a better alternative.

There are actually quite a few ways of setting up your own personal online storage and sync solution, one whose security is limited only by how well you configure it. But the most visible benefit over any existing service is the flexibility –

  1. to use a storage infrastructure of your choice, and
  2. to manage multiple profiles.

The rest of this post documents my experiments with one such solution, named bitpocket. It performs two-way sync by using a wrapper script to run rsync twice (a pull from the master followed by a push back to it). It can also detect and correctly propagate file deletions. It does have one limitation in that it doesn’t handle conflict resolution, so consider yourself warned. (Unison is supposedly capable of this, but that is another post ;-).)
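To get a feel for what the wrapper is doing, here is a minimal, purely illustrative two-pass rsync sketch. This is not bitpocket’s actual code; the host and paths are placeholders for my own setup:

#!/usr/bin/env bash
# Illustration only -- NOT bitpocket's implementation.
# Placeholders: adjust MASTER and LOCAL to your own setup.
MASTER="ec2-user@example-master:/home/ec2-user/syncroot/Documents"
LOCAL="$HOME/Documents"

# Pass 1 (pull): bring down changes made on the master since the last sync.
rsync -auvz "$MASTER/" "$LOCAL/"

# Pass 2 (push): send local changes back up to the master.
rsync -auvz "$LOCAL/" "$MASTER/"

A naive two-pass rsync like this can’t tell a deletion on one side from a new file on the other, so it would quietly resurrect deleted files. As I understand it, bitpocket keeps a record of the previous state under the .bitpocket directory and uses it to work out which files were deleted; that bookkeeping is what lets it propagate deletions correctly.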

The basic setup instructions are right on the project landing page. Follow them and you’re all set. I’ll elaborate on two things here –

  1. how to do a multi-profile setup, and
  2. how to alleviate the problem of repeated remote lockouts when multiple slaves always try to sync at the same time.

Multiple profiles

I’ve got two folders on my laptop that I want to sync:

  1. /home/aditya/scripts
  2. /home/aditya/Documents

I want these two folder profiles to be self-contained, without requiring the tracking to be done at a common parent. Following the instructions on the project page, I did a bitpocket init inside each of the above folders. On the master side (I’m running an EC2 micro instance with a 64-bit Amazon Linux AMI), I’ve got one folder, /home/ec2-user/syncroot, under which I want to track all synced profiles. So in the config file of each profile folder on the slave machine, I set the REMOTE_PATH variable as follows:

  1. For /home/aditya/scripts
    REMOTE_PATH="/home/ec2-user/syncroot/scripts"
  2. For /home/aditya/Documents
    REMOTE_PATH="/home/ec2-user/syncroot/Documents"

That’s it! You can manage as many profiles as you want, with each slave deciding where to keep its local copy of each profile.
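For reference, the whole slave-side setup for my two profiles boils down to something like the sketch below. The config file location and any variable names other than REMOTE_PATH are from my setup and may differ with your bitpocket version, and the EC2 hostname is a placeholder, so treat this as a template rather than something to paste verbatim:

# On the slave (my laptop), once per profile:
cd /home/aditya/scripts   && bitpocket init   # pass host/path arguments here if your version expects them
cd /home/aditya/Documents && bitpocket init

# Then, in /home/aditya/scripts/.bitpocket/config:
REMOTE_HOST="ec2-user@my-ec2-host"            # placeholder host
REMOTE_PATH="/home/ec2-user/syncroot/scripts"

# And in /home/aditya/Documents/.bitpocket/config:
REMOTE_HOST="ec2-user@my-ec2-host"            # placeholder host
REMOTE_PATH="/home/ec2-user/syncroot/Documents"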

Preventing remote lockouts

Say all your slaves are configured to sync their system clocks from a network time source. They are then in sync with each other, often to the second (or finer). Now, if every slave’s cron job is set to run at 5-minute intervals, all the slaves attempt to connect to the master at exactly the same time. The first one to establish a connection starts syncing, and all the others get locked out. This happens on every cron run. The problem is exacerbated by the fact that even a no-change sync takes at least a few seconds, and the lockout is in force for that entire duration. We’re left with a very inefficient system that can sync ONLY one slave per cron run. If one slave is on a network with consistently lower latency to the master than all the others, the others basically never get a chance to connect! Even when that is not the case, each slave’s odds of getting in on any given cron run are only 1 in N (for N slaves). Not good.

One way to alleviate this (though not entirely) is to introduce a random delay, shorter than the cron interval, between the moment cron fires and the moment the connection is actually attempted. Over several cron runs, this evens out each slave’s odds of running into a remote lockout. Local lockouts are not a problem: bitpocket uses a locking mechanism to prevent two local processes from syncing the same tracked directory at the same time, and if a new process encounters a lock on a tracked directory (meaning the previously spawned process hasn’t finished syncing yet), it simply exits. The random delay is introduced as shown below (assuming a cron frequency of 5 minutes):

#!/usr/bin/env bash

# Usage: bpsync <tracked-directory>
cd "$1" || exit 1
PIDFILE="$1/.bitpocket/run.pid"

# Sleep for a random 0-299 seconds so that the slaves don't all hit the
# master at the same instant (assumes a 5-minute cron interval).
sleep $(( RANDOM % 300 ))

# If the previously spawned sync for this directory is still running, bail out.
if [ -e "${PIDFILE}" ] && ps -u "$USER" -f | grep -q "[ ]$(cat "${PIDFILE}")[ ]"; then
  echo "Already running."
  exit 99
fi

# The previous process is dead, so no local lock should remain at this point.
# Removing it corrects for an unclean shutdown.
rm -rf .bitpocket/tmp/lock

/usr/bin/bitpocket cron &

echo $! > "${PIDFILE}"
chmod 644 "${PIDFILE}"

That’s it! Assuming you’ve saved this file in /usr/bin/bpsync, edit your crontab entries like so, and you’re done:

*/5 * * * *     bpsync ~/Documents
*/5 * * * *     bpsync ~/scripts

Happy syncing!

EDIT: I ran into trouble with stale server-side locks preventing further syncs with any slave. This happens when a slave disconnects mid-sync for whatever reason. Lock cleanup is currently the responsibility of the slave process that created it. There is no mechanism on the server to detect and expire stale locks (See https://github.com/sickill/bitpocket/issues/16). This issue needs to be fixed before this syncing tool can be left to run indefinitely, without supervision.

EDIT #2: One quick way to dispose of stale master locks is to periodically run a little script on the server that checks each sync directory for open files (i.e. whether some machine is currently mid-sync). If none are found, it simply deletes the leftover lock file. The script and the corresponding crontab entry are below:

#!/bin/bash

# Expire stale bitpocket locks on the master: if no process has any file open
# under a sync directory, no sync is in progress, so a leftover lock is stale.
cd ~/syncroot || exit 1
for DIR in *; do
  OUT=$(/usr/sbin/lsof +D "$DIR")
  if [ -z "$OUT" ]; then
    rm -rf "$DIR/.bitpocket/tmp/lock"
  fi
done

And the crontab entry on the master (assuming the script is saved as /usr/bin/cleanup.sh):

*/5 * * * * /usr/bin/cleanup.sh

  1. #1 by deajan on August 20, 2013 - 5:23 pm

    Hello,
    Using bitpocket inspired me to write my own implementation of a bidirectional sync system under Linux, which is called Osync. Maybe you could have a look 🙂
    http://www.netpower.fr/osync

  2. #2 by Aditya Mukhopadhyay on August 20, 2013 - 9:12 pm

    This looks good! I will try it out.

  3. #3 by Aditya Mukhopadhyay on October 7, 2013 - 10:28 pm

    Tried osync today, and liked it very much. It is superior to bitpocket for the following reasons:
    1. Resumes previous interrupted transactions. The absence of this feature was a severe flaw in bitpocket, which could result in lost files in certain situations.
    2. Soft deletes for master and slave
    3. Intelligent locking during transfer (I had to implement this separately with bitpocket).
    4. ssh/rsync compress
    5. Preserves acl/xattrs
    6. Terminates (and cleans up) undead transfers.
    7. Conflict resolution (though this is very basic at the moment).
    8. Pre-run/Post-run hooks
    9. Email alerts
    10. Check remote availability before attempting sync. With bitpocket, I had implemented this separately.

    I think the transfer resume feature is really awesome, making this a better choice than unison even.
