
Fix "Database Backup Failed: No Space Left"

Common Error Messages:
• "No space left on device"
• "Cannot allocate memory"
• "Disk full while backing up database"
• "mysqldump: Error 28: No space left on device"
• "pg_dump: error: could not write to output file"

What This Error Means

The "no space left on device" error occurs when your backup process runs out of disk space. This can happen at different stages:

Immediate Fix

Step 1: Check Available Disk Space

First, identify which partition is full:

# Check disk usage for all mounted filesystems
df -h

# Check specific directory where backup is being written
df -h /path/to/backup/directory

# Check inode usage (sometimes inodes are exhausted)
df -i

# Find largest directories
du -sh /* | sort -hr | head -10
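
Before starting the dump, it also helps to compare free space against the size of the data you are about to export. A rough estimate, assuming standard client tools are available (table sizes from information_schema overstate an uncompressed dump slightly, so they give a safe upper bound):

# MySQL: approximate size of each schema, in MB
mysql -u root -p -e "SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
  FROM information_schema.tables
 GROUP BY table_schema;"

# PostgreSQL: size of a single database
psql -U postgres -c "SELECT pg_size_pretty(pg_database_size('database_name'));"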

Step 2: Free Up Disk Space Immediately

Quick cleanup to free enough space for the backup to continue:

# Clean system logs (be careful!)
sudo journalctl --vacuum-time=3d   # Keep only 3 days of logs
sudo truncate -s 0 /var/log/syslog
sudo truncate -s 0 /var/log/messages

# Remove old backup files
find /backup/directory -name "*.sql" -mtime +7 -delete
find /backup/directory -name "*.dump" -mtime +7 -delete
find /backup/directory -name "*.tar.gz" -mtime +14 -delete

# Clean temporary files
sudo rm -rf /tmp/*
sudo rm -rf /var/tmp/*

# Clean package manager cache
sudo apt clean       # Ubuntu/Debian
sudo yum clean all   # CentOS/RHEL
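
The find commands above delete immediately. A cautious variant is to preview the matches first, then re-run with -delete once the list looks right:

# Dry run: list what would be removed
find /backup/directory -name "*.sql" -mtime +7 -print

# When satisfied, repeat with -delete
find /backup/directory -name "*.sql" -mtime +7 -delete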

Step 3: Use Compression During Backup

Reduce backup file size with compression:

# MySQL with compression
mysqldump -u root -p database_name | gzip > backup.sql.gz

# PostgreSQL with compression
pg_dump -U postgres -h localhost database_name | gzip > backup.sql.gz

# MongoDB with compression
mongodump --db database_name --gzip --archive=backup.archive.gz

# For very large databases, use streaming compression
mysqldump -u root -p --single-transaction database_name | \
  gzip -c > backup_$(date +%Y%m%d).sql.gz
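
One caveat with piped backups: by default the shell reports the exit status of the last command in the pipeline, so a dump that dies halfway through a full disk can still leave a valid-looking .gz file. A minimal guard, assuming bash:

#!/bin/bash
set -o pipefail   # fail if any stage of the pipeline fails

mysqldump -u root -p --single-transaction database_name | \
  gzip -c > backup.sql.gz || { echo "Backup failed" >&2; exit 1; }

# Verify the archive is not truncated
gzip -t backup.sql.gz && echo "Backup archive OK"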

Step 4: Backup to Different Location

Use a different partition or remote location:

# Backup to different mount point
mysqldump -u root -p database_name > /mnt/external/backup.sql

# Backup directly to remote server via SSH
mysqldump -u root -p database_name | \
  ssh user@remote-server "gzip > /remote/backup/db_backup.sql.gz"

# Backup to network storage
mysqldump -u root -p database_name > /nfs/backup/backup.sql

# Stream backup to AWS S3
mysqldump -u root -p database_name | \
  aws s3 cp - s3://your-bucket/backups/backup.sql

Root Causes

1. Insufficient Backup Storage Planning

Most space issues stem from poor capacity planning: the backup partition is sized for the database as it is today, while full dumps and retained copies keep growing with the data.
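
One concrete safeguard is a pre-flight check that refuses to start the backup unless the target partition has enough headroom. A minimal sketch, assuming GNU df; BACKUP_DIR and REQUIRED_BYTES are placeholders you would set from the size estimates in Step 1:

#!/bin/bash
BACKUP_DIR="/backup/mysql"                      # adjust to your backup partition
REQUIRED_BYTES=$((20 * 1024 * 1024 * 1024))     # e.g. 20 GB, estimated from Step 1

# Available bytes on the filesystem holding BACKUP_DIR
AVAILABLE=$(df --output=avail -B1 "$BACKUP_DIR" | tail -n 1)

if [ "$AVAILABLE" -lt "$REQUIRED_BYTES" ]; then
  echo "Refusing to back up: only $AVAILABLE bytes free in $BACKUP_DIR" >&2
  exit 1
fi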

2. Log Files Consuming Space

# Check database log sizes

# MySQL
du -sh /var/lib/mysql/mysql-bin.*
du -sh /var/log/mysql/*

# PostgreSQL
du -sh /var/lib/postgresql/*/main/pg_log/*
du -sh /var/lib/postgresql/*/main/pg_wal/*

# MongoDB
du -sh /var/lib/mongodb/*.log
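
If MySQL binary logs are the culprit, purge them through the server rather than deleting the files by hand, so the binlog index stays consistent. The retention values here are illustrative:

# Drop binary logs older than 7 days
mysql -u root -p -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"

# MySQL 8.0: cap binlog retention going forward (7 days, in seconds)
mysql -u root -p -e "SET GLOBAL binlog_expire_logs_seconds = 604800;"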

3. Temporary File Usage

Some backup operations create large temporary files:

# Check temp space usage during backup
watch -n 5 'df -h /tmp'

# Find deleted but still open files
sudo lsof +L1
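
If lsof +L1 shows a large deleted file that a long-running process still holds open, the space is not returned until that process exits. One way to reclaim it without a restart is to truncate the file through /proc; the PID and file-descriptor number below are placeholders you would read from the lsof output:

# Note the PID and FD columns in the lsof output (e.g. PID 1234, FD 5)
sudo lsof +L1

# Truncate the still-open deleted file to zero bytes
sudo truncate -s 0 /proc/1234/fd/5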

How to Prevent This

1. Implement Automated Cleanup

#!/bin/bash
# Cleanup script for old backups

BACKUP_DIR="/backup/mysql"
RETENTION_DAYS=7

# Remove backups older than retention period
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR" -name "*.dump" -mtime +$RETENTION_DAYS -delete

# Rotate large log files
logrotate /etc/logrotate.d/mysql-server

# Run as a daily cron job:
# 0 2 * * * /usr/local/bin/backup-cleanup.sh
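
The script assumes an existing logrotate policy at /etc/logrotate.d/mysql-server. If your distribution does not ship one, a minimal stanza might look like this (paths and retention are assumptions to adapt):

/var/log/mysql/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}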

2. Set Up Disk Space Monitoring

#!/bin/bash
# Alert when any /dev filesystem is more than 85% full
THRESHOLD=85

for usage in $(df -h | grep -E '^/dev' | awk '{print $5}' | sed 's/%//'); do
  if [ "$usage" -gt "$THRESHOLD" ]; then
    df -h | mail -s "Disk Space Warning" admin@yourcompany.com
    break   # one alert per run is enough
  fi
done

# Add to crontab to check every hour:
# 0 * * * * /usr/local/bin/disk-monitor.sh

3. Use Incremental Backups

# MySQL: full backup that records the binary log position,
# then rotate logs so subsequent binlogs form the incrementals
mysqldump --master-data=2 --single-transaction database_name > full_backup.sql
mysql -e "FLUSH LOGS;"

# PostgreSQL: WAL-based base backup
pg_basebackup -D /backup/postgresql/base -Ft -z

# MongoDB: include the oplog for point-in-time consistency
mongodump --oplog --gzip --archive=/backup/mongo/incremental.gz
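
For MySQL, the actual incrementals are the binary logs written after the full dump. One way to collect them continuously is mysqlbinlog's remote raw mode; the host, user, and starting log file below are placeholders you would take from SHOW BINARY LOGS:

# Stream binary logs from the server to the backup host as they are written
mysqlbinlog --read-from-remote-server --host=db-host --user=backup_user \
  --password --raw --stop-never mysql-bin.000042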

4. Optimize Backup Strategy

# Use selective backups for large databases

# MySQL - skip certain tables
mysqldump --ignore-table=database.large_log_table database_name

# PostgreSQL - schema only for large tables
pg_dump --schema-only --table=large_table database_name

# Compress on the fly with better algorithms
mysqldump database_name | zstd -3 > backup.sql.zst   # Better compression ratio
mysqldump database_name | lz4 > backup.sql.lz4       # Faster compression
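
gzip is single-threaded, so on multi-core hosts a parallel compressor can cut backup time substantially. A brief example, assuming pigz is installed:

# pigz: parallel gzip; output stays .gz-compatible
mysqldump database_name | pigz > backup.sql.gz

# zstd can also use all cores with -T0
mysqldump database_name | zstd -3 -T0 > backup.sql.zst
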
Prevention Tip: Always reserve at least 20% of your backup partition for unexpected database growth. Set up alerts when disk usage exceeds 80% to avoid emergency situations.

Never Run Out of Backup Space Again

Backuro automatically manages backup storage with intelligent compression, automated cleanup, and cloud storage options. Monitor space usage and get alerts before issues occur.

Get Smart Backup Management