How I Finally Got My Supabase Daily Backups Working on Backblaze B2
I have a project running on Supabase's free plan. It has been live for about four months — small app, but real data. Sales records, inventories, user entries. The kind of stuff that doesn't grow back if it disappears.
I've always believed one thing about software: the backend logic and the frontend logic can crash, and you can rebuild them. But the database must always be backed up, because once it goes, it's gone.
So I wrote a GitHub Actions YAML file to automatically dump my Supabase database every day and ship it to Backblaze B2 for storage. That was the plan. The problem? It had been failing silently for months.
Every single day: failed. Failed. Failed.
Until I decided to actually sit down and fix it.
The Two Problems I Found
Issue 1 — PostgreSQL Version Mismatch
The first issue was a PostgreSQL version mismatch.
Supabase runs PostgreSQL 17. But when GitHub Actions installs the PostgreSQL client by default, it pulls version 16. So pg_dump v16 was trying to connect to a v17 server — and it refused.
The fix was straightforward: explicitly install postgresql-client-17 in the workflow instead of using the default.
- name: Install PostgreSQL client
  run: |
    sudo apt-get update
    sudo apt-get install -y gnupg curl
    curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc \
      | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg
    echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" \
      | sudo tee /etc/apt/sources.list.d/pgdg.list
    sudo apt-get update
    sudo apt-get install -y postgresql-client-17
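A quick way to confirm the fix took effect is to compare client and server versions. This is just a sanity check, with placeholder connection details for the server-side query:

# Client version installed by the step above
pg_dump --version          # should now report pg_dump (PostgreSQL) 17.x

# Server version (placeholder connection string; expects PGPASSWORD to be set)
psql "postgresql://postgres.<project-ref>@aws-0-<region>.pooler.supabase.com:6543/postgres" -c "SHOW server_version;"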
Issue 2 — IPv6 vs IPv4 (The Sneaky One)
After fixing the version mismatch, there was another twist.
When you open your Supabase dashboard and grab the direct connection string, it uses IPv6 by default. But GitHub Actions runners only support IPv4. The result is a connection error that asks: "Is the server accepting TCP connections?" — which looks like a password problem but isn't. It's purely a networking configuration issue.
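One way to convince yourself it really is networking and not credentials is to check what the direct host resolves to (placeholder hostname below). If the A lookup comes back empty while the AAAA lookup returns an address, an IPv4-only runner simply has nothing it can connect to:

dig +short A    db.<project-ref>.supabase.co    # empty on an IPv6-only host
dig +short AAAA db.<project-ref>.supabase.co    # returns the IPv6 address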
If you're on a Supabase paid plan, you can enable an IPv4 add-on directly. On the free plan, the workaround is to use the Transaction Pooler instead of the direct connection or the session pooler.
The changes:
| Setting | Direct Connection | Transaction Pooler |
|---|---|---|
| Host | Direct host | Pooler host (different) |
| Port | 5432 | 6543 |
| Username | `postgres` | `postgres.<project-ref>` |
The username must have your Supabase project ID appended to it. You'll find all of this under Project Settings → Database → Connection string → Transaction pooler in your Supabase dashboard.
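To make the difference concrete, the two connection strings look roughly like this; the project ref, region, and password are placeholders, so copy the real values from your dashboard rather than from here:

# Direct connection (resolves to IPv6 by default on the free plan)
postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres

# Transaction pooler (reachable over IPv4)
postgresql://postgres.<project-ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres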
The Full Working YAML
Once both issues were sorted, I stored all the connection values as GitHub Secrets and updated the workflow. I also switched from the AWS CLI to the Backblaze B2 CLI directly — simpler and no unnecessary dependencies.
name: Daily Supabase Backup to Backblaze B2

on:
  schedule:
    - cron: '0 2 * * *'  # Runs daily at 2:00 AM UTC. You can change this
  workflow_dispatch:  # Allows manual trigger from the GitHub Actions UI

jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - name: Install PostgreSQL client
        run: |
          sudo apt-get update
          sudo apt-get install -y gnupg curl
          curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc \
            | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg
          echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" \
            | sudo tee /etc/apt/sources.list.d/pgdg.list
          sudo apt-get update
          sudo apt-get install -y postgresql-client-17

      - name: Dump Supabase database
        env:
          PGPASSWORD: ${{ secrets.SUPABASE_DB_PASSWORD }}
        run: |
          TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S)
          FILENAME="supabase_backup_${TIMESTAMP}.sql.gz"
          echo "FILENAME=$FILENAME" >> $GITHUB_ENV
          pg_dump \
            --host=${{ secrets.SUPABASE_DB_HOST }} \
            --port=${{ secrets.SUPABASE_DB_PORT }} \
            --username=${{ secrets.SUPABASE_DB_USER }} \
            --dbname=${{ secrets.SUPABASE_DB_NAME }} \
            --no-owner \
            --no-acl \
            --format=plain \
            | gzip > "$FILENAME"

      - name: Install B2 CLI
        run: pip install b2

      - name: Upload to Backblaze B2
        env:
          B2_APPLICATION_KEY_ID: ${{ secrets.B2_APPLICATION_KEY_ID }}
          B2_APPLICATION_KEY: ${{ secrets.B2_APPLICATION_KEY }}
        run: |
          b2 authorize-account "$B2_APPLICATION_KEY_ID" "$B2_APPLICATION_KEY"
          b2 upload-file ${{ secrets.B2_BUCKET_NAME }} "$FILENAME" "backups/$FILENAME"

      - name: Clean up local dump file
        run: rm -f "$FILENAME"

      - name: Delete backups older than 30 days
        env:
          B2_APPLICATION_KEY_ID: ${{ secrets.B2_APPLICATION_KEY_ID }}
          B2_APPLICATION_KEY: ${{ secrets.B2_APPLICATION_KEY }}
        run: |
          b2 authorize-account "$B2_APPLICATION_KEY_ID" "$B2_APPLICATION_KEY"
          CUTOFF=$(date -d '30 days ago' +%s)000
          b2 ls --json ${{ secrets.B2_BUCKET_NAME }} backups/ | \
          python3 -c "
          import sys, json
          files = json.load(sys.stdin)
          cutoff = $CUTOFF
          for f in files:
              if f['uploadTimestamp'] < cutoff:
                  print(f['fileName'])
          " | while read fname; do
            b2 hide-file ${{ secrets.B2_BUCKET_NAME }} "$fname"
          done

GitHub Secrets to Configure
You'll need to add these secrets to your repository under Settings → Secrets and variables → Actions:
| Secret | What it is |
|---|---|
| `SUPABASE_DB_HOST` | Transaction pooler host from the Supabase dashboard |
| `SUPABASE_DB_PORT` | Transaction pooler port (6543) |
| `SUPABASE_DB_USER` | Database user with the project ref appended (`postgres.<project-ref>`) |
| `SUPABASE_DB_NAME` | Your database name (usually `postgres`) |
| `SUPABASE_DB_PASSWORD` | Your database password |
| `B2_APPLICATION_KEY_ID` | From Backblaze B2 app keys |
| `B2_APPLICATION_KEY` | From Backblaze B2 app keys |
| `B2_BUCKET_NAME` | Your Backblaze bucket name |
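If you prefer the GitHub CLI over the settings page, something like this also works; the values below are placeholders, and the last command kicks off a manual run once everything is in place:

gh secret set SUPABASE_DB_HOST --body "aws-0-<region>.pooler.supabase.com"
gh secret set SUPABASE_DB_PORT --body "6543"
gh secret set SUPABASE_DB_USER --body "postgres.<project-ref>"
gh secret set SUPABASE_DB_NAME --body "postgres"
gh secret set SUPABASE_DB_PASSWORD --body "<your-db-password>"
gh secret set B2_APPLICATION_KEY_ID --body "<key-id>"
gh secret set B2_APPLICATION_KEY --body "<application-key>"
gh secret set B2_BUCKET_NAME --body "<bucket-name>"

# Trigger a run without waiting for the 2:00 AM schedule
gh workflow run "Daily Supabase Backup to Backblaze B2"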
The Result
When I ran the workflow after fixing everything, it went through cleanly and error-free this time. Before the fix, the backup file was always around 20 bytes, which is basically an empty dump (just the pg_dump header, no actual data). After the fix, the compressed backup came in at 383 KB.
I extracted it, opened it, and everything was there: the schema, the sales records, all of it. The beautiful thing is that it now runs every single day without supervision. That was a good feeling.
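For completeness, restoring one of these dumps is just a matter of downloading, unpacking, and replaying it with psql. This is a minimal sketch, assuming the same old-style B2 CLI commands used above, an example file name, and placeholder connection details; restoring into a live Supabase project can conflict with the schemas it manages, so a scratch database is the safer target:

# Fetch and unpack a backup (example file name)
b2 authorize-account "<key-id>" "<application-key>"
b2 download-file-by-name <bucket-name> backups/supabase_backup_2025-01-01_02-00-00.sql.gz backup.sql.gz
gunzip backup.sql.gz

# Replay the plain-SQL dump against the target database
psql "postgresql://postgres.<project-ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres" -f backup.sql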
A Note on Object Storage
I used Backblaze B2 because it's what I already use for my media and files. But this workflow works with any S3-compatible object storage — AWS S3, DigitalOcean Spaces, Cloudflare R2. The only thing you'd change is the CLI tool and its configuration. The pg_dump step and the GitHub Secrets pattern stay exactly the same.
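As a rough sketch of that swap, the upload step's shell could look like this for an S3-compatible provider; the endpoint URL, bucket name, and credentials are placeholders you'd wire up to your own secrets:

# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY come from your provider's credentials
aws s3 cp "$FILENAME" "s3://<your-bucket>/backups/$FILENAME" \
  --endpoint-url "https://<your-provider-endpoint>"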