Yes and no.
I’m in the middle of a massive backend rewrite, which uses some of the newer Ceph features to mitigate this, buuuuuuttttt Ceph’s hard limits on what it can move and PHP’s limits are separate matters.
1) PHP memory on most hosts tends to crap out at around 500 MB. Less on shared hosting, obviously.
2) PHP has a hard 2 GB limit on file sizes. https://docs.aws.amazon.com/aws-sdk-php/guide/latest/faq.html
3) The new SDK’s way of doing multipart uploads is a bear, and not as logical as the old way, which is saying something (there’s a rough sketch of the new way after this list). https://blogs.aws.amazon.com/php/post/Tx7PFHT4OJRJ42/Uploading-Archives-to-Amazon-Glacier-from-PHP
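For the curious, this is roughly what the new way looks like with the SDK’s MultipartUploader. It’s a sketch, not the plugin’s actual code: I’m assuming the v3-style API here, and the endpoint, credentials, bucket, key, and file path are all placeholders.

```php
<?php
// Rough sketch of the newer SDK's MultipartUploader (not the plugin's real code).
// Endpoint, credentials, bucket, key, and file path are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$client = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'endpoint'    => 'https://objects.example.com', // any S3-compatible (Ceph) endpoint
    'credentials' => [
        'key'    => 'YOUR_KEY',
        'secret' => 'YOUR_SECRET',
    ],
]);

// The uploader reads the file from disk in parts, so the upload side never
// needs the whole zip in memory -- but the zip still has to exist first.
$uploader = new MultipartUploader($client, '/tmp/site-backup.zip', [
    'bucket'    => 'my-backups',
    'key'       => 'site-backup.zip',
    'part_size' => 10 * 1024 * 1024, // 10 MB parts
]);

try {
    $uploader->upload();
    echo "Upload complete.\n";
} catch (MultipartUploadException $e) {
    echo 'Upload failed: ' . $e->getMessage() . "\n";
}
```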
So when you break it down, the real issue with large backups and how this plugin works is item #1: PHP will choke while making the zip long before we ever hit the upload part, and that’s what I don’t have a great fix for. Even if I tell PHP to split the zip into its own multipart chunks, it STILL uses a lot of memory.
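To make that concrete, here’s the kind of thing I mean (a throwaway sketch with made-up paths, not what the plugin actually runs). ZipArchive just queues files up and then does the real reading and compressing on close(), and peak memory climbs fast on a big uploads folder:

```php
<?php
// Throwaway illustration of the zip choke point. Paths are made up.
$base    = '/home/user/example.com/';
$zipPath = '/tmp/backup.zip';
$files   = glob($base . 'wp-content/uploads/*/*/*.*');

echo 'memory_limit: ' . ini_get('memory_limit') . "\n";

$zip = new ZipArchive();
if ($zip->open($zipPath, ZipArchive::CREATE | ZipArchive::OVERWRITE) !== true) {
    die("Could not create $zipPath\n");
}

foreach ($files as $file) {
    if (is_file($file)) {
        // addFile() only registers the file; nothing is read yet.
        $zip->addFile($file, substr($file, strlen($base)));
    }
}

// The actual reading and compressing happens here, and on a big site this is
// where memory (and execution time) tends to blow past shared-hosting limits.
$zip->close();

echo 'Peak memory: ' . round(memory_get_peak_usage(true) / 1048576) . " MB\n";
```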
The ultimate solution would be to build a VaultPress-esque back-end, OR a totally boto-rsync-style one that would stream uploads ‘live.’ Both are beyond my skill set at the moment. The boto route is likely to happen faster, at the DreamHost server level, and would be a great thing for a much bigger backup of the WHOLE site (not just WP). And … I kind of think that would be better for people on many levels, y’know?
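If you’re wondering what ‘stream it live’ would even look like from PHP, it’s roughly this: pipe an archive straight into a multipart upload instead of writing a zip to disk first. A totally untested sketch with made-up names, and whether the uploader really only buffers one part at a time from a pipe is my assumption, not something I’ve verified:

```php
<?php
// Hand-wavy sketch of streaming a whole-site tarball straight into storage.
// Paths, bucket, and endpoint are made up; the "only one part in memory at a
// time" behavior for a non-seekable pipe is an assumption, not tested.
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$client = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'endpoint'    => 'https://objects.example.com',
    'credentials' => ['key' => 'YOUR_KEY', 'secret' => 'YOUR_SECRET'],
]);

// tar writes to stdout; PHP reads it as a stream instead of building a file.
$stream = popen('tar -cz -C /home/user example.com', 'r');

$uploader = new MultipartUploader($client, $stream, [
    'bucket'    => 'my-backups',
    'key'       => 'whole-site.tar.gz',
    'part_size' => 10 * 1024 * 1024,
]);

try {
    $uploader->upload();
    echo "Streamed upload complete.\n";
} catch (MultipartUploadException $e) {
    echo 'Upload failed: ' . $e->getMessage() . "\n";
} finally {
    pclose($stream);
}
```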
Which may be why a Panel <-> WP interface is also on my massive to-do list. “Panel, run a whole domain backup.” Affirmative, Captain.