There is sufficient storage and it’s a single backup.
Hello @nimonogi, is your backup completing, still in progress, or are these errors preventing it from completing?
This backup is for the files:
[18-Feb-2025 19:59:03] Backup archive created.
[18-Feb-2025 19:59:03] Archive size is 95.61 GB.
[18-Feb-2025 19:59:03] 373781 Files with 95.34 GB in Archive.
[18-Feb-2025 19:59:03] 1. Trying to send backup file to S3 Service …
[18-Feb-2025 19:59:03] Connected to S3 Bucket "bucketname" in s3name
[18-Feb-2025 19:59:03] Checking for not aborted multipart Uploads …
[18-Feb-2025 19:59:03] Starting upload to S3 Service …
[18-Feb-2025 20:04:10] WARNING: Signal "SIGTERM" (Termination signal) is sent to script!
[18-Feb-2025 20:24:14] WARNING: Signal "SIGTERM" (Termination signal) is sent to script!
[18-Feb-2025 20:44:21] WARNING: Signal "SIGTERM" (Termination signal) is sent to script!
[18-Feb-2025 21:04:26] WARNING: Signal "SIGTERM" (Termination signal) is sent to script!
[18-Feb-2025 21:24:32] WARNING: Signal "SIGTERM" (Termination signal) is sent to script!
[18-Feb-2025 21:44:34] WARNING: Signal "SIGTERM" (Termination signal) is sent to script!
[18-Feb-2025 22:04:39] WARNING: Signal "SIGTERM" (Termination signal) is sent to script!
[19-Feb-2025 08:15:15] ERROR: Aborted, because no progress for one hour!
This is for the database:
[18-Feb-2025 00:00:54] Backup archive created.
[18-Feb-2025 00:00:54] Archive size is 599.94 MB.
[18-Feb-2025 00:00:54] 3 Files with 599.93 MB in Archive.
[18-Feb-2025 00:00:54] 1. Trying to send backup file to S3 Service …
[18-Feb-2025 00:00:54] Connected to S3 Bucket "bucketname" in <LocationConstraint xmlns="http://s3.amazonaws.com/doc/2006-03-01/">s3name
[18-Feb-2025 00:00:54] Checking for not aborted multipart Uploads …
[18-Feb-2025 00:00:54] Starting upload to S3 Service …
[18-Feb-2025 00:03:00] Backup transferred to https://URLs.tar.
[18-Feb-2025 00:03:01] Job done in 175 seconds.
Please note that I am using Hetzner’s S3.
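One thing worth checking after an aborted run like the one above: failed multipart uploads can leave stale, incomplete parts sitting in the bucket. A minimal cleanup sketch with boto3, in the spirit of the log's "Checking for not aborted multipart Uploads" step (the endpoint, bucket name, and credentials are placeholders, not taken from this thread):

```python
# Sketch: list and abort leftover multipart uploads so aborted backup jobs
# don't keep occupying bucket storage. All names below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://fsn1.your-objectstorage.com",  # example Hetzner-style endpoint
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

for upload in s3.list_multipart_uploads(Bucket="bucketname").get("Uploads", []):
    s3.abort_multipart_upload(
        Bucket="bucketname",
        Key=upload["Key"],
        UploadId=upload["UploadId"],
    )
```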
As per Hetzner’s website, they have the following limits:
- Up to 8 kB of metadata per object
- Up to 5 GB per object in a single PUT operation
- Up to 5 GB per object part in a multi-part upload
- Up to 10,000 parts in a multi-part upload
- Up to 5 TB per object
- Up to 256 active parallel (TCP) sessions per source IP
- Up to 750 requests/s per source IP
- Up to 750 requests/s per Bucket
- Up to 10 Gbit/s per Bucket (read or write)
- Up to 100 TB per Bucket
- Up to 50,000,000 objects per Bucket
- Up to 200 S3 credentials across all projects
- Up to 100 Buckets across all projects
Shall I disable “Destination supports multipart” in the plugin?
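For context, a quick sanity check of the 95.61 GB archive against those limits (a minimal sketch; the 100 MB part size is an assumed example, not a plugin setting):

```python
# Back-of-the-envelope check of the 95.61 GB archive against Hetzner's limits.
GB = 1024**3
archive_size = int(95.61 * GB)

single_put_limit = 5 * GB   # max object size in a single PUT
max_part_size = 5 * GB      # max size of one multipart part
max_parts = 10_000          # max parts per multipart upload

# A single PUT cannot carry this archive, so multipart is required:
print(archive_size > single_put_limit)          # True

# Multipart handles it comfortably; e.g. with 100 MB parts:
part_size = 100 * 1024**2
parts_needed = -(-archive_size // part_size)    # ceiling division
print(parts_needed)                             # 980, well under 10,000
```

If this reading of the limits is right, disabling multipart would likely make things worse: with a 5 GB cap on a single PUT, a 95 GB object can only be uploaded via multipart.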
Hello @nimonogi, I passed the technical details you posted above to our developer. The developer said:
- Up to 5 GB per object in a single PUT operation
- Up to 5 GB per object part in a multi-part upload
These two points mean the cloud operator doesn’t allow files that large. Let me know if there’s anything else I can help you with.
Which big files? Aren’t the backups transferred in pieces? Your response doesn’t make much sense to me.
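For reference, a standard S3 multipart upload does split the file into pieces, so the per-part limit alone shouldn't block a 95 GB archive; whether the plugin's transfer stalls for another reason is the open question. A minimal boto3 sketch (endpoint, bucket, and file names are placeholders, not from this thread):

```python
# Minimal sketch of a multipart upload to an S3-compatible store.
# Endpoint, bucket, and credentials are illustrative placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://fsn1.your-objectstorage.com",  # example Hetzner-style endpoint
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

# Force multipart above 100 MB and send 100 MB parts: a ~95 GB archive
# becomes ~980 parts, each far below the 5 GB per-part cap.
config = TransferConfig(
    multipart_threshold=100 * 1024**2,
    multipart_chunksize=100 * 1024**2,
)

s3.upload_file("backup.tar", "bucketname", "backup.tar", Config=config)
```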
@nimonogi Are you using the new default uncompressed TAR archive? If so, install the plugin helper to ZIP the backup (as it was before this ugly v5 release).
https://backwpup.com/docs/how-to-backup-in-zip-and-tar-gz-in-backwpup-v5/
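To illustrate the difference in plain Python (this uses the standard tarfile module, not BackWPup’s own helper; the path is a placeholder):

```python
# Illustration only: uncompressed TAR (the v5 default) vs. a gzip-compressed
# archive. Compression can shrink the transfer considerably for text-heavy
# WordPress content. "wp-content" is a placeholder path.
import tarfile

with tarfile.open("backup.tar", "w") as tar:        # uncompressed
    tar.add("wp-content")

with tarfile.open("backup.tar.gz", "w:gz") as tar:  # gzip-compressed
    tar.add("wp-content")
```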