Every server has two layers. The heavy layer is your database dumps, media uploads, and user-generated content — stuff that needs S3 buckets and proper backup tools. Then there’s the light layer. The .env files, Nginx configs, firewall rules, cron jobs, SSL settings, and small databases that hold everything together.
Most people back up the heavy layer and completely ignore the light layer. I know because I was one of them.
I lost a .env file during a server migration once. The app had 12 environment variables — API keys, JWT secrets, SMTP credentials, third-party tokens. Took me three hours to reconstruct everything from various dashboards, email threads, and password managers. Some of the keys I had to regenerate entirely, which meant updating three other services that depended on them.
The fix turned out to be stupid simple: a shell script, a cron job, and a free private GitLab repo. Total setup time was about 15 minutes. It’s been running for months now without a single issue.
Why Git Works for This
Git is made for code, not backups. But for lightweight server data, it has some genuine advantages over traditional backup tools:
You get full version history. Every backup is a commit. You can see exactly what changed and when. If someone modified an Nginx config three weeks ago and broke something, git log will tell you.
It’s free. GitLab gives you 5GB per project on the free tier. Your config files, small databases, and environment variables won’t use even 1% of that.
Off-server storage with zero setup. No S3 credentials, no Restic configuration, no cloud storage billing. Just git push.
Every developer already knows how to restore from it. git clone and copy files back. No special tooling needed.
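For example, once the setup below is in place, both inspection and recovery are one-liners (the file paths here are hypothetical):

# What changed in this config, and when?
git log --oneline -- nginx/mysite.conf
# Show the diff from the most recent change
git log -p -1 -- nginx/mysite.conf
# Disaster recovery: clone and copy back
git clone git@gitlab-backup:your-username/server-backups.git
sudo cp server-backups/nginx/mysite.conf /etc/nginx/sites-available/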
The tradeoff is that Git doesn’t handle large binary files well. Once any single file crosses ~100MB, switch to a proper tool. For everything under that threshold, this is hard to beat.
What Gets Backed Up
Here’s what I include in my automated backup. All of these are small, text-based (or small binary), and change infrequently — the exact profile where Git works well.
Application secrets — .env files with API keys, JWT secrets, database credentials, SMTP settings. These are the files you can’t reconstruct without digging through dashboards and old emails.
Web server configs — Nginx site configs, SSL-related settings, and the main nginx.conf. Recreating these from scratch is tedious, especially when you’ve tuned proxy headers, caching rules, and rate limits over time.
Small databases — SQLite files under ~50MB. If your app uses SQLite (like Strapi with the default config), the entire database is one file that fits comfortably in a Git repo.
Process management — PM2 ecosystem dumps, systemd service files, or whatever keeps your apps running after a reboot.
System configs — cron jobs, fail2ban rules, UFW firewall rules, SSH config (not private keys — never back those up to a remote repo).
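Most of these don't live in one convenient directory, so part of the script's job is exporting them first. A sketch of what that can look like (the subdirectory names are my own convention, and $REPO_DIR matches the variable in the script below):

# Dump current state into the repo before committing
crontab -l > "$REPO_DIR/system/crontab.txt"              # cron jobs
sudo cp /etc/nginx/sites-available/* "$REPO_DIR/nginx/"  # site configs
sudo ufw status verbose > "$REPO_DIR/system/ufw.txt"     # firewall rules
pm2 save && cp ~/.pm2/dump.pm2 "$REPO_DIR/system/"       # PM2 process list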
Setup
1. Create a Private GitLab Repo
Create a new project on GitLab. Call it something like server-backups (or include your project's name). It must be private — you're pushing files that contain secrets.
Generate a dedicated SSH key on your server:
ssh-keygen -t ed25519 -C "deploy@backup" -f ~/.ssh/id_backup
cat ~/.ssh/id_backup.pub
Add the public key as a Deploy Key in your GitLab repo with write access enabled (GitLab's documentation on deploy keys walks through this). Then configure SSH to use this specific key for backup operations:
# Add to ~/.ssh/config
Host gitlab-backup
  HostName gitlab.com
  User git
  IdentityFile ~/.ssh/id_backup
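Before cloning, confirm GitLab accepts the key:

# Should respond with a GitLab welcome message if the key works
ssh -T gitlab-backup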
Clone the repo:
# set your git identity if needed:
# git config --global user.name "Backup Bot"
# git config --global user.email "backup@example.com"
mkdir -p ~/backups
git clone git@gitlab-backup:your-username/server-backups.git ~/backups
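How you lay out the repo is up to you. Mine looks roughly like this (my own convention; the script below expects the db/ directory):

server-backups/
├── db/        # SQLite snapshots and .env copies
├── nginx/     # web server site configs
└── system/    # crontab, firewall rules, PM2 dump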
2. The Backup Script
This is the actual script I run on my servers. It collects everything into the git repo, commits if anything changed, and pushes to GitLab.
mkdir -p ~/scripts
vi ~/scripts/backup.sh
#!/bin/bash
set -e
# SQLite source(s) - safe backup via sqlite3 .backup
SQLITE_SOURCES=(
"/path-to/data.db"
)
# Plain file source(s) - copied as-is (e.g. .env, configs)
FILE_SOURCES=(
"/path-to/.env"
)
# Destination paths
BACKUP_DIR="/path-to/server-backups/db"
REPO_DIR="/path-to/server-backups"
mkdir -p "$BACKUP_DIR"
# Backup SQLite files (WAL-safe)
for SOURCE in "${SQLITE_SOURCES[@]}"; do
if [ ! -f "$SOURCE" ]; then
echo "SQLite source not found: $SOURCE"
exit 1
fi
sqlite3 "$SOURCE" ".backup '$BACKUP_DIR/$(basename "$SOURCE")'"
done
# Backup plain files
for SOURCE in "${FILE_SOURCES[@]}"; do
if [ ! -f "$SOURCE" ]; then
echo "File source not found: $SOURCE"
exit 1
fi
cp -p "$SOURCE" "$BACKUP_DIR/$(basename "$SOURCE")"
done
cd "$REPO_DIR"
git add .
# Only commit if there are changes
if git diff --staged --quiet; then
echo "No changes to backup"
exit 0
fi
git commit -m "Backup $(date +%Y-%m-%d_%H-%M)"
git push origin main
echo "Backup completed: $(date)"
Make it executable:
chmod +x ~/scripts/backup.sh
3. Test It
~/scripts/backup.sh
You should see "Backup completed:" followed by a timestamp, or "No changes to backup" if you run it a second time. Check the repo on GitLab to confirm your files are there.
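While you're at it, verify the SQLite snapshot is a valid database (path hypothetical, assuming the sqlite3 CLI is installed):

sqlite3 /path-to/server-backups/db/data.db "PRAGMA integrity_check;"
# prints "ok" for a healthy copy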
4. Schedule with Cron
crontab -e
Add:
# path to backup.sh; append output to a log so you can check it later
0 */6 * * * /home/deploy/scripts/backup.sh >> /home/deploy/scripts/backup.log 2>&1
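After the first scheduled run, confirm the job actually fired (the syslog path assumes Debian/Ubuntu):

cat ~/scripts/backup.log
grep backup.sh /var/log/syslog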
When to Switch to Something Heavier
This stops working when your SQLite file exceeds ~100MB (Git bloats with large binaries), when you need continuous real-time backup (look at Litestream — it streams SQLite changes to S3), or when you need encryption you control.
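For reference, a minimal Litestream config is itself only a few lines (bucket name and paths are placeholders):

# litestream.yml
dbs:
  - path: /path-to/data.db
    replicas:
      - url: s3://your-bucket/data-db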
For the heavier cases, Restic with B2 or S3 is the standard answer. But for a single database file and a .env, a shell script and a free GitLab repo are all you need.