High server loads when a backup runs

Whenever a backup runs it causes the load averages to go up to 4+ and as high as 7. I have moved the cron jobs around to every ten minutes so the high loads will not be too bad, but I would like to figure out why this happens so I can go back to my normal backup schedule.

When a backup runs, it requires a process (rdiff-backup) to read the current set of files for a server and compare it to the most recent backup (and the ones before it). This is very I/O heavy. I’m not sure there’s any way to just reduce the CPU impact of each run.
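For reference, that backup step boils down to something roughly like the following (the paths here are illustrative, not your actual panel layout):

    rdiff-backup /srv/servers/survival /backups/survival

rdiff-backup has to walk every file under the source directory and compare it against the existing backup chain, which is where the heavy disk activity comes from.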

That said, you shouldn’t back up every ten minutes (choose a longer time between backups). That level of granularity is impractical for modern Minecraft, where there’s just way too much content to check, and you’re putting a pretty excessive strain on your system for very little potential gain.
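As a rough illustration, an hourly backup for one server would be a crontab entry along these lines (the script path is hypothetical):

    # run once an hour, at the top of the hour
    0 * * * * /usr/local/bin/backup-survival.sh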

Before I added another server the backups were not doing this; loads were only getting up to 2, never 4+, so my guess is I have an issue with a file or something else?

Two servers with one cron job each, running at the same time, means a doubling of processor use. Each backup job spawns one instance of rsync, so seeing your processor load double isn’t that strange.

I would also recommend a slower backup interval than you are currently using, and making sure each backup runs at a separate time.
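For example, with two servers you could stagger the jobs so they never overlap (the script names are hypothetical):

    # Survival at the top of the hour, Creative at half past
    0  * * * * /usr/local/bin/backup-survival.sh
    30 * * * * /usr/local/bin/backup-creative.sh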

You can also configure the cron job manually. Then you could have one job backing up every server (by comparing the contents of everything in the “server” directory). This again would mean more work when restoring only one server, or that you restore every server on a restore.
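A minimal sketch of that kind of manual job, assuming all of your servers live under one directory such as /srv/servers, would be a small script driven by a single crontab entry:

    #!/bin/sh
    # back up every server directory into a matching rdiff-backup target
    for dir in /srv/servers/*/; do
        name=$(basename "$dir")
        rdiff-backup "$dir" "/backups/$name"
    done

One crontab line then covers everything, at the cost of the restore being all-or-nothing, as noted above.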

I would recommend, again, a slower backup regimen, on a per-server basis.

I only have one cron job running every ten minutes. I had each server running a backup at least every hour but can’t anymore. My i5 3.6GHz with 32GB of RAM is not doing the same job it was doing before I had issues with one of my servers [Survival]. The Survival server occasionally fails when making an archive.

This likely has less to do with your specs, and more to do with archives potentially failing if the files being archived are accessed/modified as they’re being read.

This is a general limitation of file I/O, so the longer an archive takes, the more likely it is to occur. The more backups running at the same time, the more I/O multitasking must be done. In other words, the more you’re backing up/archiving at the same time, the more likely there will be such a failure.

Needless to say, ten-minute backups leave an even smaller window for an archive to complete at optimal speed without competing for resources.


The only way to guarantee an archive completes cleanly is to make sure the server is stopped, so tar, running in the background, will never throw “tar: file changed as we read it” errors (the error you’re probably hitting but not seeing).

I know nobody would be interested in shutting down their server for each archive, so it’s not required… but it is also, most likely, the explanation for the failures here.
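If you want to confirm that this is what’s happening, GNU tar exits with status 1 when files changed while they were being archived, so a wrapper along these lines (the paths are hypothetical) would at least log the failures instead of letting them vanish in the background:

    #!/bin/sh
    # Hypothetical paths; adjust to your panel's layout.
    tar -czf "/backups/survival-$(date +%F-%H%M).tar.gz" -C /srv/servers survival
    # GNU tar exits with 1 when files changed while being archived
    if [ $? -eq 1 ]; then
        echo "$(date): files changed while the Survival archive was being written" >> /var/log/backup-warnings.log
    fi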