• Resolved nfleischer

    (@nfleischer)


    Hi

    I am trying to back up a webshop.
    I have used UpdraftPlus for other installations without any problems.
    Here, however, an upload folder of 460 MB and over 6,000 files seems to make the backup fail, or at least halt.
    Once I unchecked the upload folder I got a successful backup, although only the database files were uploaded to AWS; before that, nothing had reached the AWS bucket at all.

    I would really like to get a full backup running, one that also includes the upload folder.

    I have had the backup running since yesterday, and my assumption now is that it has halted or failed.

    Without any in-depth knowledge, it would seem that the many files cause the backup to time out. It then resumes but appears to fail again; on this run it has resumed around 19 times.

    Furthermore, when I try to get the latest log file from the web interface, it only shows 14 runs, but the log file I retrieve over FTP from the UpdraftPlus folder says 19.

    Is there any way to extend the time-out period to, say, forever, or do I have to look elsewhere for a solution?

    Will you need the log files?

    Best regards

    Nikolaj

    http://wordpress.org/extend/plugins/updraftplus/

Viewing 15 replies - 1 through 15 (of 25 total)
  • Hi Nikolaj,

    Yes, full log please (of a failed run, not a successful one).

    460 MB is big, but whether it’s too big depends on your web hosting. We’ve seen 3 GB work, IIRC.

    The number of times it needs to resume depends on the speed, including of the upload method. Basically it runs again as many times as needed until it’s done, as long as something useful is still happening.
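    As an illustration only (a simplified Python sketch, not the plugin's actual PHP code), the resume-until-done behaviour works roughly like this: each scheduled run does as much work as it can, and another run follows only while measurable progress is still being made:

```python
def run_backup_chunk(state):
    """Hypothetical worker: performs up to one time-slice of backup work.

    Returns the updated state; state['done'] flags completion."""
    state['files_done'] += min(100, state['total'] - state['files_done'])
    state['done'] = state['files_done'] >= state['total']
    return state

def resume_until_done(state, max_stalls=2):
    """Keep re-running the worker as long as something useful happens.

    If consecutive runs make no progress, give up rather than loop forever.
    """
    stalls = 0
    while not state['done'] and stalls < max_stalls:
        before = state['files_done']
        state = run_backup_chunk(state)
        if state['files_done'] == before:   # nothing useful happened
            stalls += 1
        else:                               # progress made: keep going
            stalls = 0
    return state

state = resume_until_done({'files_done': 0, 'total': 350, 'done': False})
print(state['files_done'])  # prints 350
```

    The key point is the stall counter: the real plugin similarly stops rescheduling once runs stop achieving anything, which is why a backup can "halt" rather than retry indefinitely.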

    David

    Thread Starter nfleischer

    (@nfleischer)

    Nope, it says that I don’t have access. pastebin.com is a good place to paste logs.

    Thread Starter nfleischer

    (@nfleischer)

    Ah yes, thank you.

    I see the problem. It’s a toxic combination of a huge number of tiny files and a web host providing low resources.

    I need to tweak the algorithm to try to pack things into the zip file earlier (i.e. smaller batches) to avoid hitting the limits.
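    A rough sketch of that idea, in Python rather than the plugin's PHP, with illustrative limits (`BATCH_BYTES` and `TIME_LIMIT` are assumed values, not UpdraftPlus settings): flush small batches into the archive, so completed work is already on disk if a run is killed by the execution limit.

```python
import os
import time
import zipfile

TIME_LIMIT = 50          # assumed PHP-style execution budget, in seconds
BATCH_BYTES = 5 * 2**20  # flush after ~5 MB instead of a larger batch

def add_in_small_batches(zip_path, files, time_limit=TIME_LIMIT):
    """Add files to a zip in small batches, checking the time budget."""
    start = time.monotonic()
    batch, batch_size, added = [], 0, 0

    def flush(paths):
        # Close and re-open the archive per batch, so everything written
        # so far survives even if a later batch never completes.
        with zipfile.ZipFile(zip_path, 'a', zipfile.ZIP_DEFLATED) as zf:
            for p in paths:
                zf.write(p)

    for path in files:
        batch.append(path)
        batch_size += os.path.getsize(path)
        if batch_size >= BATCH_BYTES:
            flush(batch)
            added += len(batch)
            batch, batch_size = [], 0
            if time.monotonic() - start > time_limit:
                return added  # out of time: a later resumption continues
    if batch:                 # flush the final partial batch
        flush(batch)
        added += len(batch)
    return added
```

    Smaller batches mean each individual write is cheap enough to fit inside the remaining time allowance, at the cost of re-opening the archive more often.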

    I’ll get back to you when a version is ready – hopefully today.

    David

    Thread Starter nfleischer

    (@nfleischer)

    Hi David

    Excellent! Thank you so much!
    I’m definitely considering the migration tool you offer, as I move 1-3 sites each month, especially with this kind of support.

    Nikolaj

    It suddenly occurs to me that some changes in the development version may in fact *already* overcome your problem… (or may not)…

    Please update to the dev version and then post a new log (after an hour or so of starting the backup, unless it completes sooner):
    http://updraftplus.com/faqs/devversion

    Best wishes,
    David

    Thread Starter nfleischer

    (@nfleischer)

    Hi David

    About 1h15min into the first backup with the dev version.

    I can’t tell whether it is stuck or still working. I hope you can 🙂

    Let me know how to proceed.

    Thank you
    /Nikolaj

    Hi Nikolaj,

    No, that run is having the same problem.

    Looking more closely at the log file, I realised that an existing part of the algorithm is meant to detect the timing/resource situation that is occurring in yours, but a subtlety was causing it to fail.

    Please update to the development version – it should show as version 1.6.27 or later:
    http://updraftplus.com/faqs/devversion

    David

    Sorry – I spotted a logic error in the latest tweak – if you’d started updating, then please do so again, and verify that you’ve got version 1.6.29 or later.

    Thread Starter nfleischer

    (@nfleischer)

    Hi David

    I have updated to 1.6.29 and started a backup.
    I’ll let it run for an hour or so and share the logfile.

    Thanks for your continued support.

    Nikolaj

    Thread Starter nfleischer

    (@nfleischer)

    Hi David

    I have tried 1.6.29; it had been running for 90 minutes but now seems to have stopped.

    What would be the next steps?

    Regards
    Nikolaj

    http://pastebin.com/L5mGWH9Y

    It got further before, though not far enough, and it’s not getting particularly close yet. The problem is two-fold: 1) your server is very slow, and 2) the limit on the time PHP is allowed to run is low. It’s not yet clear that PHP, with the resources on that server, can create a zip of that many files within that time limit, however the algorithm is sliced and diced. The symptoms point to the server being short of available memory rather than CPU speed: some initial operations are very fast, but later ones very slow. That slow-down suggests the server is swapping memory to disk and back, all of which uses up the precious limited time allowance.

    However, having said that, the above log suggests an obvious algorithmic tweak to allow more shots at it. Please update + try again (http://updraftplus.com/faqs/devversion – 1.6.30).

    My thoughts at this point, though, is that the web hosting constraints make it impossible. Look at the last couple of lines in the log:

    0011.361 (9) Adding batch to zip file: over 1.5 seconds have passed since the last write (15.6 Mb, 337 (337) files added so far); re-opening (prior size: 125430.1 Kb)

    That means that an attempt was made to add 15.6 MB, spread over 300 files, to an existing 125 MB zip file. At that point, 11 seconds had elapsed, so in theory there were still 39 left to perform that operation, one we’d expect to take about 4 seconds. That’s why I suspect swapping: when severe, it can freeze up processes for ages.

    Anyway – please do try again and post the new log…

    Thread Starter nfleischer

    (@nfleischer)

    Hi David

    I think this is the same as before.

    Is there anything I can do other than to do a manual backup of the upload folder?

    Thanks

    Nikolaj

    log

    Hi Nikolaj,

    It depends what your timelines are.

    The feedback you’re giving is helping us to improve – the latest log shows that we’ve overcome a couple of glitches in the algorithm, and are reaching the “stop” point much quicker than before. That doesn’t help you much, since there’s still a “stop” point.

    This is basically as far as it gets:
    0002.222 (6) Adding batch to zip file: over 20.8 Mb added on this batch (22.5 Mb, 3127 files batched, 282 (282) added so far); re-opening (prior size: 129709 Kb)

    2 seconds gone, so 48 seconds remain to add that 20.8 MB of data, spread across 282 files, to the existing 129 MB zip file. That is so easy, performance-wise, that I’m rather stumped as to why it can’t do it. It can’t be that you’re out of disk space, because we seem to be able to rack up another 129 MB every time we do a run.

    You could ask your web hosting company to raise your max_execution_time for PHP – from what I’ve seen, 90 or more is typical; your 50 is very low. That should make a huge difference. It looks like we’ve hit the limits of what can be done currently in that setup.
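    For reference, where the host allows PHP overrides at all, the limit is typically raised via php.ini or a per-directory override file; the value below is an example, and the exact mechanism varies by host:

```ini
; php.ini (or a user-level .user.ini, where the host permits overrides)
max_execution_time = 90

; On Apache running PHP as mod_php, the .htaccess equivalent would be:
;   php_value max_execution_time 90
```

    If none of these are honoured, only the hosting company can change the limit server-side.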

    David


The topic ‘Backup will not finish – "big" upload folder’ is closed to new replies.