Fix MySQL Binary Log Corruption
Common Error Messages:
• "Got fatal error 1236 from master when reading data from binary log"
• "Binlog has bad magic number"
• "Error reading packet from server: log event entry exceeded max_allowed_packet"
• "Slave SQL thread stopped because of an error"
• "Got fatal error 1236 from master when reading data from binary log"
• "Binlog has bad magic number"
• "Error reading packet from server: log event entry exceeded max_allowed_packet"
• "Slave SQL thread stopped because of an error"
What This Error Means
MySQL binary log corruption occurs when binary log files become unreadable or contain invalid data. This breaks replication and can affect:
- Master-slave replication chains
- Point-in-time recovery capabilities
- Backup consistency
- Database cluster synchronization
Immediate Fix
Step 1: Identify Corrupted Binary Log
First, determine which binary log is corrupted:
-- Check replication status
SHOW SLAVE STATUS\G
-- Look for:
-- Master_Log_File: mysql-bin.000123
-- Read_Master_Log_Pos: 4567890
-- Last_SQL_Error: (error message)
-- Check binary log files on master
SHOW BINARY LOGS;
# Test a specific binary log from the shell (not the MySQL prompt)
mysqlbinlog /var/lib/mysql/mysql-bin.000123
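If binlog checksums are enabled (binlog_checksum=CRC32 has been the default since MySQL 5.6.6), mysqlbinlog can also verify them while scanning, which pinpoints the first damaged event. A minimal sketch, assuming the log sits in the default data directory:
# Verify event checksums; a non-zero exit status means a damaged event was hit
mysqlbinlog --verify-binlog-checksum /var/lib/mysql/mysql-bin.000123 > /dev/null
echo "exit status: $?"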
Step 2: Skip Corrupted Event (Temporary Fix)
If replication is broken, temporarily skip the problematic event:
# On the slave server, connect to MySQL from the shell
mysql -u root -p
-- Stop slave
STOP SLAVE;
-- Skip one event
SET GLOBAL sql_slave_skip_counter = 1;
-- Or for multiple events (be careful!)
SET GLOBAL sql_slave_skip_counter = 10;
-- Start slave
START SLAVE;
-- Check status
SHOW SLAVE STATUS\G
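Note that sql_slave_skip_counter is not supported when GTID-based replication (gtid_mode=ON) is in use. The usual workaround there is to commit an empty transaction for the offending GTID; a sketch, where the UUID and sequence number are placeholders taken from Last_SQL_Error / Executed_Gtid_Set:
# Skip one transaction under GTID by committing an empty transaction for its GTID
mysql -u root -p <<'SQL'
STOP SLAVE;
SET gtid_next = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:42';
BEGIN;
COMMIT;
SET gtid_next = 'AUTOMATIC';
START SLAVE;
SQL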
Step 3: Recover Using mysqlbinlog
For data recovery from partially corrupted logs:
# Extract readable events from the corrupted log (--force-read skips events mysqlbinlog cannot parse)
mysqlbinlog --force-if-open --force-read --start-position=4 \
--stop-position=1234567 /var/lib/mysql/mysql-bin.000123 > recovery.sql
# Apply to database (review first!)
mysql -u root -p database_name < recovery.sql
# For remote binary log extraction
mysqlbinlog --read-from-remote-server --host=master-server \
--user=repl_user --password=repl_pass \
--start-position=4 mysql-bin.000123 > recovery.sql
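Before applying recovery.sql, it is worth decoding the row-based events into readable pseudo-SQL so you can review exactly what would be replayed; a sketch against the same log file:
# Decode row events into commented pseudo-SQL for review (output can be large)
mysqlbinlog --base64-output=decode-rows --verbose \
/var/lib/mysql/mysql-bin.000123 | less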
Step 4: Reset Replication (If Necessary)
If corruption is extensive, you may need to rebuild replication:
-- On MASTER: Reset binary logs (DANGER: Will lose all binary logs)
RESET MASTER;
-- Get new master position
SHOW MASTER STATUS;
-- On SLAVE: Reset and reconfigure
STOP SLAVE;
RESET SLAVE ALL;
-- Reconfigure slave with new master position
CHANGE MASTER TO
MASTER_HOST='master-server',
MASTER_USER='repl_user',
MASTER_PASSWORD='repl_pass',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=4;
START SLAVE;
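If the slave's data may already have diverged, new coordinates alone are not enough; the safer path is to re-seed the slave from a fresh, consistent dump of the master. A sketch using mysqldump (credentials are placeholders; on MySQL 8.0.26+ the option is spelled --source-data):
# On the master: consistent dump that records the binlog coordinates as a comment
mysqldump -u root -p --single-transaction --master-data=2 --all-databases > full.sql
# The recorded coordinates appear as a commented CHANGE MASTER TO line in the dump
grep -m 1 "CHANGE MASTER TO" full.sql
# On the slave: load the dump, then run CHANGE MASTER TO with those coordinates
mysql -u root -p < full.sql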
Root Causes
1. Disk Issues
Hardware and environment problems are the most common cause; a quick check is sketched after this list:
- Disk full during log writing
- Bad sectors on storage device
- Sudden power loss or system crash
- Network interruption during remote logging
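The quick check below rules out the most common culprits: free space on the data directory and recent kernel I/O errors (paths are the usual defaults):
# Check free space on the MySQL data directory
df -h /var/lib/mysql
# Look for recent I/O or filesystem errors reported by the kernel (may need root)
dmesg -T | grep -iE "i/o error|ext4|xfs" | tail -n 20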
2. Configuration Problems
# Check critical settings in my.cnf
[mysqld]
sync_binlog=1 # Force sync after each commit
innodb_flush_log_at_trx_commit=1 # Flush logs on transaction commit
binlog_cache_size=1M # Adequate cache size
max_binlog_size=1G # Reasonable log rotation size
expire_logs_days=7 # Automatic cleanup
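These values can also be checked on the running server (my.cnf edits only take effect after a restart); a minimal check:
# Show the live values of the durability-related settings
mysql -u root -p -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('sync_binlog','innodb_flush_log_at_trx_commit','binlog_cache_size','max_binlog_size');"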
3. Replication Lag and Timeouts
-- Monitor replication lag (watch Seconds_Behind_Master)
SHOW SLAVE STATUS\G
# Adjust timeout settings in my.cnf
slave_net_timeout=60
slave_transaction_retries=10
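Both settings are dynamic, so they can also be applied at runtime without a restart (keep them in my.cnf as well so they survive one); a sketch:
# Apply the replication timeouts to the running server
mysql -u root -p -e "SET GLOBAL slave_net_timeout = 60; SET GLOBAL slave_transaction_retries = 10;"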
How to Prevent This
1. Implement Robust Configuration
# Add to my.cnf for reliability
[mysqld]
# Binary logging settings
log-bin=mysql-bin
binlog_format=ROW
sync_binlog=1
binlog_cache_size=32M
max_binlog_size=1G
expire_logs_days=7 # MySQL 8.0: use binlog_expire_logs_seconds instead
# InnoDB settings for durability
innodb_flush_log_at_trx_commit=1
innodb_support_xa=1 # deprecated in 5.7, removed in 8.0 (XA support is always enabled there)
# Replication settings
slave_net_timeout=60
slave_transaction_retries=10
log_slave_updates=1
2. Monitor Binary Log Health
#!/bin/bash
# Daily binary log health check script
MYSQL_USER="root"
MYSQL_PASS="your_password"
# Check for replication errors
mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -e "SHOW SLAVE STATUS\G" | grep "Last_SQL_Error"
# Verify binary log integrity (-N suppresses the column header)
for binlog in $(mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -N -e "SHOW BINARY LOGS" | awk '{print $1}' | grep mysql-bin); do
    if ! mysqlbinlog /var/lib/mysql/"$binlog" > /dev/null 2>&1; then
        echo "ALERT: Binary log $binlog is corrupted!"
    fi
done
3. Implement Backup Strategy
#!/bin/bash
# Regular binary log backup script
BACKUP_DIR="/backup/mysql-binlogs"
MYSQL_DIR="/var/lib/mysql"
# Flush logs so the current binary log is rotated and closed before copying
mysql -u root -p -e "FLUSH LOGS;"
# Copy previous logs to backup
cp "$MYSQL_DIR"/mysql-bin.[0-9]* "$BACKUP_DIR"/
# Keep only recent backups
find "$BACKUP_DIR" -name "mysql-bin.*" -mtime +7 -delete
4. Use GTID for Better Recovery
# Enable GTID in my.cnf (MySQL 5.6+)
[mysqld]
gtid_mode=ON
enforce_gtid_consistency=ON
log_slave_updates=ON
# GTID makes recovery much easier:
# CHANGE MASTER TO MASTER_AUTO_POSITION=1;
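With GTID enabled, re-pointing a replica no longer requires file names and positions; a sketch using the same placeholder host and credentials as above:
# Reconfigure the slave to use GTID auto-positioning
mysql -u root -p <<'SQL'
STOP SLAVE;
CHANGE MASTER TO
MASTER_HOST='master-server',
MASTER_USER='repl_user',
MASTER_PASSWORD='repl_pass',
MASTER_AUTO_POSITION=1;
START SLAVE;
SQL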
Critical Warning: Always test recovery procedures on non-production systems first. Binary log corruption can lead to data loss if not handled properly. Consider stopping writes to the master during critical recovery operations.
Prevent MySQL Binary Log Issues
Backuro monitors MySQL binary log health and automatically handles replication issues. Protect your data with intelligent backup strategies and real-time corruption detection.