Rclone vs Movebot: The Showdown
Why would I use Movebot instead of Rclone?
It’s a question the team here at Couchdrop gets asked on a weekly basis when chatting with customers about Movebot. And it’s always a good challenge for our sales team.
Rclone is a mighty tool to have in the tool belt for transferring files between clouds. But when it comes to migrations and complex data moves, we believe Movebot is just better.
Hear me out.
Terminal or a Browser
Off the bat, Rclone is terminal-based. Most techies and developers are at home in the terminal, but you can’t deny that the terminal falls short in several areas.
For ease of use and reporting, a web browser is simply better. Movebot is completely browser-based, so it can be configured, monitored, and managed from virtually anywhere. The graphs are great. The config is easy. And securely managing hundreds of migration projects or transfers is very, very simple.
There is no need to consult the man pages, dig through config files, or scan your bash history to figure out how you last used it. Just point your browser at https://admin.movebot.io and you are away laughing.
Even though I occasionally use Rclone (I’ll admit to that), I always have to check my notes or Google to remember how on earth I configured it last. With Movebot, it’s all web-based, so no obscure command-line options are needed.
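For context, a bare-bones Rclone run looks something like this. The remote names (`s3-src`, `drive-dst`) and paths are hypothetical examples, not anything from a real setup:

```shell
# Define remotes interactively (credentials end up in rclone.conf on this host)
rclone config

# Sanity-check the source before moving anything
rclone lsd s3-src:my-bucket

# Copy with progress output and 8 parallel transfers
rclone copy s3-src:my-bucket drive-dst:backup --progress --transfers 8
```

Perfectly workable, but every remote, flag, and path lives in your head (or your notes) rather than in a UI.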
Hosting not included
Personally I am a strong advocate for SaaS services that just work, but I spend a good amount of time in an SSH session on a VM somewhere in the cloud. This inevitably means there are a few small VMs lying around in DigitalOcean or AWS with my name on them. These utility VMs get used for ad-hoc tasks and become pet servers of doom.
Since Rclone is run from the terminal, a VM or server somewhere is required. Inevitably, those personal pet servers end up being used for migration projects, purely because they are convenient.
When dealing with data moves, this is a bad idea. Rclone requires that credentials be stored in local configuration files, and data is copied into memory, in some cases swapped out to disk during the sync process, or stored locally in temp files.
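You can see this for yourself; Rclone will happily tell you where that credentials file lives:

```shell
# Prints the location of the config file holding remote credentials,
# typically ~/.config/rclone/rclone.conf
rclone config file
```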
Think about that for a second.
Is it really a good idea to have a server with customer credentials and data lying around? Generally speaking, no. This is just asking for trouble, and the risk increases if you are performing these sorts of transfers frequently and for multiple clients.
While we all try to keep a tidy ship by removing old config, deleting temp files, and potentially even trashing the VM after the move, life is not so predictable, and the chance of forgetting or missing something in the cleanup process is high. To make things worse, migrations have a tendency to last months.
With Movebot, infrastructure and security are baked in. The VMs running the migration are temporary and are terminated once the transfer has run. They are dedicated to your migration project, with no shared resources, and they are hardened, with disk-level encryption and a modern architecture. There is also no SSH access. The servers (aka moveworkers) are locked down: you as a Movebot customer have no access to them, and neither does our engineering team. Best of all, you don’t have to provision anything.
Cost
On the face of it, spinning up a VM in Azure or AWS might sound cheaper than using Movebot. But it’s not. Really, it’s not.
All the major cloud providers charge for data egress, and in most cases they charge like a wounded bull. If you are moving 10TB of data through a beefy VM in AWS, Amazon is going to clip the ticket for every GB you move, currently at around $0.09 per GB. Add the VM cost, the setup and config time, the cost of starting from scratch after you forget to detach your terminal, and the rising blood pressure, then factor in your time for monitoring, and the true total cost gets up there.
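To put a rough number on that, here is a back-of-the-envelope calculation using the indicative $0.09/GB rate above (actual AWS pricing varies by region and volume tier):

```shell
# Rough AWS egress cost for a 10 TB move at ~$0.09/GB (indicative rate only)
TB=10
GB=$((TB * 1024))    # 10 TB = 10240 GB
awk -v gb="$GB" 'BEGIN { printf "Egress: $%.2f\n", gb * 0.09 }'
# → Egress: $921.60
```

Call it roughly $900 in egress alone, before the VM and your own time are counted.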
Movebot, on the other hand, charges a flat rate per GB. It scales to get the maximum performance out of your source and destination at no additional cost, and it’s as simple as click, click, click.
There are also no unexpected charges. Cloud providers are notorious for billing surprises, and charges you did not anticipate can crop up when using the likes of AWS for ad-hoc workloads.
While in some cases it might seem cheaper to use Rclone and “do it yourself”, the reality is that it’s not.
The niggly parts of a migration
Security, cost, and terminals vs browsers aside, the real benefit to Movebot comes down to the finer details of a migration.
For those who have tried it, transferring data out of one system into another is fraught with issues.
Filename characters are supported in one system but not another, Google Docs can’t be moved to SharePoint as-is, filesystems have path-depth limits, and permission models differ.
The list goes on.
Movebot has been built not only to transfer data blazingly fast, but also to deal with all the niggly issues for you and to highlight the ones that need intervention before you migrate. The remaining failures are tracked, reported, and surfaced so that you can deal with them and no files get missed.
Movebot is also built to manage the migration and transfer of multiple buckets, team drives, users, and shared content. Migrating an entire organisation with Rclone gets complicated very quickly; Movebot solves this problem with projects, built for easy mapping and configuration of multiple transfers.
And finally, there is the reporting and scan functionality.
Rclone provides no reporting. While this is fine for small transfers, with migrations, reporting is key.
With Movebot, everything is tracked for you, and you can pull detailed logs and pretty graphs from a browser for up to 90 days after the migration has finished.
This is extremely important, not just from an operational perspective, but also for providing an audit and report for your customer/end user at the end of the project.
Before the migration even kicks off, you can run a scan, which provides detailed insights into what the dataset looks like, what issues are going to crop up, and how long the move will take.
So when should I use Rclone?
Rclone is perfect for transferring small amounts of simple data across things like S3 buckets, single folders, or user accounts.
The barrier to entry is relatively low; for simple transfers it can be easy to configure and is, on the face of it, free.
But for migrations and larger transfers of more complex and structured data, use a tool built for that purpose.