The open-source persistence built into Redis takes snapshots of the data every X seconds. But:

  • Between two snapshots you can lose data.
  • Data is stored in a local file. If you want to save it somewhere else (e.g. S3), you must do it yourself.

What it does

It opens a connection to Redis and configures it to notify on key changes. You can set up a Redis pattern to choose which keys to watch.
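The write-up doesn't show the exact configuration the tool applies, but Redis's built-in keyspace-notification mechanism works roughly like this (the `session:*` pattern is just an example):

```shell
# Enable keyspace event notifications: K = keyspace channel,
# E = keyevent channel, A = all command classes.
redis-cli CONFIG SET notify-keyspace-events KEA

# Subscribe to change events for keys matching a pattern in database 0.
redis-cli PSUBSCRIBE "__keyspace@0__:session:*"
```

Each matching change then arrives as a message naming the key and the command (e.g. `set`) that modified it.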

It also opens two or more concurrent connections to an S3 (or compatible) bucket.

Whenever a watched key changes, it uploads the new data to S3.

It also provides a mechanism to pre-load an empty Redis instance with the data from S3.
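The project's actual restore code isn't shown; a minimal sketch of the pre-load step, with `list_objects`, `get_object`, and `redis_set` as stand-ins for the S3 client and the Redis `SET` call (all names here are hypothetical):

```python
import asyncio

def object_to_redis_key(prefix: str, object_key: str) -> str:
    """Map a bucket object key like 'backup/session:42' back to 'session:42'."""
    return object_key[len(prefix):] if object_key.startswith(prefix) else object_key

async def restore(list_objects, get_object, redis_set, prefix: str = "backup/"):
    """Walk every object under `prefix` and SET it into the empty instance."""
    restored = 0
    for object_key in await list_objects(prefix):
        value = await get_object(object_key)
        await redis_set(object_to_redis_key(prefix, object_key), value)
        restored += 1
    return restored
```

Injecting the three callables keeps the restore loop independent of any particular S3 or Redis client library.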

How I built it

It was built with Python asyncio, with care taken to keep concurrency high and the footprint small.

A public Docker image is also available.

Challenges I ran into

The main challenges were:

  • Performance. Python is not known for speed, but the asyncio module works well for I/O-bound operations.
  • Maintaining upload order to S3. When a key changes several times in a short period, it's important to preserve the order of the uploads.
  • Keeping it lightweight.
  • An embedded versioning system: for buckets that don't have the S3 "versioning" flag enabled, RDD can version each object it uploads by using timestamps.
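The tool's internals aren't shown, but the last two challenges can be sketched with stdlib asyncio alone: one queue and one worker task per key keeps uploads concurrent across keys yet serial within a key, and a timestamp suffix versions each object when the bucket can't. All names here (`OrderedUploader`, `versioned_object_key`, the injected `upload` callable) are hypothetical:

```python
import asyncio
import time

def versioned_object_key(key, now=None):
    """Append a microsecond timestamp so successive uploads never collide."""
    ts = int((time.time() if now is None else now) * 1_000_000)
    return f"{key}.{ts}"

class OrderedUploader:
    """One worker task per key: concurrent across keys, FIFO within a key."""

    def __init__(self, upload):
        self._upload = upload  # async callable: (object_key, value)
        self._queues = {}
        self._workers = {}

    async def _worker(self, key):
        queue = self._queues[key]
        while True:
            value = await queue.get()
            await self._upload(versioned_object_key(key), value)
            queue.task_done()

    def submit(self, key, value):
        """Enqueue a changed value; changes to the same key stay ordered."""
        if key not in self._queues:
            self._queues[key] = asyncio.Queue()
            self._workers[key] = asyncio.create_task(self._worker(key))
        self._queues[key].put_nowait(value)

    async def drain(self):
        """Wait until every queued upload has completed."""
        for queue in self._queues.values():
            await queue.join()
```

Because each key's queue is consumed by a single task, a burst of changes to one key is uploaded strictly in arrival order, while unrelated keys still upload in parallel.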

Accomplishments that I'm proud of

I'm using it in some production environments with very good results.

This project lets me do without a conventional database. Why use Redis as a cache if Redis can be the final database?

What I learned

A lot about Redis's event system for change notifications.

What's next for Redis Realtime Backup

I would like to add more backends and support more write operations than just "SET".

Built With
