Low-Cost Unlimited Server Storage using S3 and JuiceFS

Add an unlimited external file system to your server for just $6/TB/month

Unless you self-host at home on your own NAS, storage can quickly become expensive.

Storage options offered by your cloud provider are often limited and expensive.

So how should you best self-host data-intensive applications like Immich or Nextcloud?

In this article, you will learn how to set up S3-compatible object storage as an external file system, extending your server to practically unlimited storage.

Using S3 (object storage) as an external file system

At first, this may sound unconventional, but it actually offers some considerable benefits:

  • S3-compatible storage can be had very cheaply: Backblaze B2 runs you $6/month for 1 TB, with data transfers of up to 3x your stored data included each month

  • Object storage is usually highly available and reliable, much more so than storage that is included with your server

  • Data transfer speeds are very high

  • You do not need to pre-purchase a fixed amount of storage; you pay exactly for what you use.

  • You can store practically unlimited amounts of data due to the nature of object storage
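
A quick example with the rate above: storing 2 TB on Backblaze B2 would cost about $12/month, with up to 6 TB of monthly outbound transfer included.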

You might expect that keeping the storage outside the datacenter your server runs in would add so much latency that your self-hosted apps become practically unusable, but this isn't the case.

JuiceFS seems to cache and buffer data quite well, such that I can use Immich (a photo library app) without any latency/loading issues.

The Setup

Backblaze B2 seems to be the best provider for this, being very affordable and reliable. You can create an account here; take care to choose the right region (closest to your server), as it cannot be changed afterward.

In the following, I'll assume you're running an Ubuntu server that is otherwise already set up. If not, DigitalOcean has some good guides on this. Refer to this article for a server provider recommendation.

Installation

Install JuiceFS with this single command on your server:

# run the one-command installer
curl -sSL https://d.juicefs.com/install | sh -
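
Once the installer finishes, you can quickly verify that the binary is in place (the install script puts it in /usr/local/bin, which is also the path used by the systemd unit later in this article):

# check that JuiceFS is installed and print its version
juicefs version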

Connecting to Backblaze

We first need to connect JuiceFS to your B2 bucket. In the Backblaze dashboard, create a new bucket. You can leave the default options, but select encryption if you'd like; it will be managed by Backblaze, so you'll have to trust them with the keys. Alternatively, you can encrypt data with your own key in JuiceFS, but I won't cover that further here.

Then go to “Application Keys”. You'll have to generate a master key first to be able to generate other keys, but you won't need to use it. Afterward, generate a new application key and give it access to your bucket. Leave the page open, as you'll need the credentials, which are only shown once.

Now we're ready to connect JuiceFS to our bucket:

juicefs format \
    --storage s3 \
    --bucket https://s3.eu-central-003.backblazeb2.com/bucket \
    --access-key key-id-here \
    --secret-key secret-key-here \
    sqlite3://myjfs.db \
    myjfs

This will create a new JuiceFS instance, connected to your chosen Backblaze bucket. Take note of the following:

  • Change eu-central-003 in the URL to your bucket's region (you'll find it under Endpoint in the bucket overview) and bucket to your bucket name

  • We'll use SQLite to store file metadata for simplicity. For self-hosting purposes, it should be very capable (for instance, used with PocketBase, it can handle 10,000 simultaneous connections)

  • the SQLite database file will be created in your current directory (you can change this via the URL). In the rest of the article, we'll assume this is your home directory (~)

  • You do not need to worry about backing up your SQLite file — JuiceFS will automatically back it up to the bucket once a day
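
Before moving on, you can sanity-check the new volume with juicefs status; run it from the directory containing myjfs.db so the metadata URL resolves:

# print information about the freshly formatted volume
juicefs status sqlite3://myjfs.db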

Automatically mounting JuiceFS

Now that we've connected to Backblaze, we still need to mount it as a file system. We'll do this using a systemd service, which lets you manage the mount easily and mount it automatically on boot.

[Unit]
Description=JuiceFS Mount Service
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/juicefs mount sqlite3:///home/yourusername/myjfs.db /home/yourusername/jfs
Restart=always

[Install]
WantedBy=default.target

This service mounts JuiceFS to ~/jfs.

  • Change yourusername to your Linux user. If you did not create the SQLite file in your home directory as mentioned above, correct the paths accordingly

  • This is executed as root so that all users have access to ~/jfs. This makes it easy to work with Docker Compose and other deployment solutions
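
If you want more control over local caching (helpful for latency-sensitive apps such as Immich), juicefs mount accepts cache-related flags. Here's a sketch of an extended ExecStart line; the sizes are example values in MiB and the cache directory is an assumption, so adjust both to your disk:

ExecStart=/usr/local/bin/juicefs mount \
    --cache-dir /var/jfsCache \
    --cache-size 102400 \
    --buffer-size 600 \
    sqlite3:///home/yourusername/myjfs.db /home/yourusername/jfs

With these example values, JuiceFS keeps up to a 100 GiB local cache and a 600 MiB read/write buffer.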

Save the unit file to /etc/systemd/system/juicefs-mount.service and run sudo systemctl daemon-reload so systemd picks up the file. Now you can manage it like any other systemd service: to get started, run sudo systemctl enable --now juicefs-mount to start the service (mount JuiceFS) and enable it to run on boot automatically.
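
For reference, here is the full sequence of commands (the unit name matches the file name chosen above):

# reload systemd so it picks up the new unit file
sudo systemctl daemon-reload

# mount JuiceFS now and start it automatically on every boot
sudo systemctl enable --now juicefs-mount

# confirm the service is running and the directory is mounted
systemctl status juicefs-mount
df -h /home/yourusername/jfs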

Usage Tips

You're all set and can use your ~/jfs like any other directory now. Almost, at least. Here are some tips and pitfalls you should avoid:

  • If you use it with Docker Compose, you can store data there easily by using a bind mount into ~/jfs (instead of a named volume); see the Compose sketch at the end of this article

  • Do not mount or put database directories inside ~/jfs. It will not work (reliably, if at all). Storing documents and image directories works fine.

  • Databases probably won't outgrow the storage that came with your server, but to back them up onto JuiceFS, use a cron script. Here's an example for MariaDB:

# save this as a bash script, for example ~/mysql-backup.sh

# --- start of file

#!/bin/bash
# change into your Docker Compose project directory (adjust this example path),
# so that .env and `docker compose ps` resolve correctly when run from cron
cd /home/ubuntu/nextcloud || exit 1

# load env variables (e.g. MYSQL_ROOT_PASSWORD) from .env
source .env

# Set the database name and credentials
DB_NAME=nextcloud
DB_USER=root
# the password is not hard-coded here; it comes from .env via source
DB_PASS=$MYSQL_ROOT_PASSWORD

# Set the backup directory (on JuiceFS, outside Docker)
BACKUP_DIR=/home/ubuntu/jfs/nextcloud-backup
mkdir -p "$BACKUP_DIR"

# Set the backup file name with a timestamp
BACKUP_FILE="$DB_NAME-$(date +%Y%m%d-%H%M%S).sql"

# Dump the database from the db container and write the dump to JuiceFS
docker exec -i "$(docker compose ps -q db)" mysqldump -u "$DB_USER" --password="$DB_PASS" "$DB_NAME" > "$BACKUP_DIR/$BACKUP_FILE"

# --- end of file

# make it runnable
chmod +x ~/mysql-backup.sh

# save it to the root crontab
# for example: add the line
# 0 3 * * * /home/youruser/mysql-backup.sh
sudo crontab -e
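
As referenced in the Docker Compose tip above, here is a minimal sketch of a bind mount into the JuiceFS directory. The service name, image, and paths are placeholders; the volumes entry is the relevant part:

services:
  app:                    # placeholder service name
    image: nginx:alpine   # placeholder image standing in for your actual app
    volumes:
      # bind mount a directory on JuiceFS instead of using a named volume
      - /home/yourusername/jfs/app-data:/data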