This likely has less to do with your specs and more to do with archives failing when the files being archived are accessed or modified while they’re being read.
This is a general limitation of file I/O: the longer an archive takes, the more likely it is to occur, and the more backups run at the same time, the more I/O contention there is. In other words, the more you back up or archive at once, the more likely such a failure becomes.
Needless to say, ten-minute backup intervals leave an even smaller window for an archive to complete at full speed without competing for resources.
The only way to guarantee an archive completes cleanly is to ensure the server is stopped; then tar will never throw “tar: file changed as we read it” errors. Since tar runs in the background, that’s the error you’re not seeing.
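For what it’s worth, GNU tar signals this condition through its exit status: it exits 1 when some files differed (i.e., changed while being read). A wrapper script can detect that and retry. This is only a minimal sketch; the paths, directory names, and one-retry policy here are illustrative assumptions, not part of any particular backup plugin.

```shell
#!/bin/sh
# Sketch: detect the "file changed as we read it" case via GNU tar's
# exit status (1 = some files differ) and retry the archive once.
# /tmp/demo-world and /tmp/world-backup.tar.gz are hypothetical paths.

backup() {
  tar -czf /tmp/world-backup.tar.gz -C /tmp/demo-world .
  status=$?
  if [ "$status" -eq 1 ]; then
    echo "archive raced with a write; retrying once"
    tar -czf /tmp/world-backup.tar.gz -C /tmp/demo-world .
    status=$?
  fi
  return "$status"
}

# Demo data so the script is self-contained.
mkdir -p /tmp/demo-world
echo data > /tmp/demo-world/level.dat

backup && echo "backup ok"
```

This doesn’t make the race impossible, of course; it only notices when tar itself reports it, which is still better than a background job failing silently.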
I know nobody wants to shut down their server for every archive, so it isn’t required, but it is most likely the explanation for the failures here.