WordPress.org


W3 Total Cache
[resolved] CloudFront: Object Already Exists (Failed Transfer) (14 posts)

  1. sahaskatta
    Member
    Posted 3 years ago #

    Hey Everyone,
    If anyone has had this problem or found a solution, any help would be appreciated.

    I've been using W3TC and it's been amazing, but for the past month or two, I've been having some really odd issues.

    When I upload a new image through the native WP media uploader, a problem occurs with W3TC. The image gets uploaded to my server just fine. It also gets uploaded to my S3 Bucket for CloudFront just fine too.

    I know this is true because when I upload a brand-new file with a new filename, it appears on my site at example.com/.../image.jpg, and static.example.com/.../image.jpg (served from Amazon) works as well.

    Unfortunately, despite the successful upload, the item immediately appears in the Unsuccessful File Transfer Queue. The media links in the article aren't re-written from example.com to static.example.com either.

    However, links to older media appear just fine. Clearing the queue, clearing the caches, and repeating that process each time fixes the issue in the newest articles.

    P.S. I've completely removed W3TC and reinstalled it. Even used the reset settings feature and started from scratch to debug. Didn't help.

  2. smoochyTweet
    Member
    Posted 3 years ago #

    OMG, I've had this problem for so long now! Thought it was just me. I was bored, googled it this morning, and this came up.

    Did you find a fix yet?

    I mostly post photos, and only past posts have the S3 links; the new ones don't. I also have the upload list error showing too.

    If anyone knows a fix, please share!

  3. shortster
    Member
    Posted 3 years ago #

    Took me ages to figure out this was the problem! No solution yet though.

    I use Amazon S3. All resources that are uploaded by auto-upload and which are already on the CDN get the error 'Object Already Exists' in the Unsuccessful file transfer queue. When this happens, the resource URLs won't point to the CDN anymore.

    Clearing the Unsuccessful file transfer queue resolves this. So my temporary solution is to disable the changed files auto-upload in the CDN tab.
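    The behavior described above can be sketched as follows. This is a hedged illustration in Python, not the plugin's actual PHP source; the function and result names are hypothetical, but the decision mirrors what the thread (and the patch later in it) describes.

```python
import hashlib

# Hedged sketch of the upload decision this thread describes (not the actual
# W3TC source): if the file already exists on S3 with an identical hash, the
# transfer was reported as an error, which is why perfectly good files pile
# up in the "Unsuccessful file transfer queue".
RESULT_OK = "ok"
RESULT_ERROR = "error"

def upload_decision(local_bytes, remote_hash):
    """Return (result, message) for a would-be CDN transfer."""
    local_hash = hashlib.md5(local_bytes).hexdigest()
    if remote_hash is None:
        # Object is not on S3 yet: upload it.
        return RESULT_OK, "uploaded"
    if local_hash == remote_hash:
        # Pre-patch behavior: an identical remote object counted as a failure.
        return RESULT_ERROR, "Object already exists"
    return RESULT_OK, "re-uploaded, hash changed"
```

    Clearing the queue works because it discards these spurious "errors"; disabling auto-upload just avoids re-checking files that are already on the CDN.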

  4. ronblaisdell
    Member
    Posted 3 years ago #

    I saw the same issue, so on the CDN settings page I changed the options and selected "Force over-writing of existing files" - no problems since then.

  5. shortster
    Member
    Posted 3 years ago #

    This will help, but it also results in many more file transfers than necessary - not an optimal solution in terms of CDN costs.

  6. sahaskatta
    Member
    Posted 3 years ago #

    @shortster,

    I do have that issue with the auto-uploads as well, but disabling it doesn't solve my issue. If a user uploads a photo into the media library from a post, then the issue occurs.

    The file gets pushed to Amazon properly, but it still shows up as an error in the "file transfer queue". As a result (I'm guessing), the links aren't rewritten to the CDN properly. Clearing the cache and the queue each time solves the issue.

    @ronblaisdell,
    That is a potential fix, but not a good idea for heavy-traffic sites. It will slow down your server, since it will try uploading files several times. Also, since S3 charges for PUTs as well, you will be charged more.

  7. ronblaisdell
    Member
    Posted 3 years ago #

    @sahaskatta -- I agree, not optimal, but it does stop the problem.

    Of course, since the problem for you comes from the upload of new content to the CDN - this should solve the problem, since it is a PUT.

  8. sahaskatta
    Member
    Posted 3 years ago #

    Good point, I'll give that a shot!

    I was still deciding whether to keep using this or to switch to WP Super Cache plus a plugin I found for uploading WP media to S3/CloudFront.

  9. ronblaisdell
    Member
    Posted 3 years ago #

    @sahaskatta --

    I think it is the features beyond adding a CDN that make this more usable.

    The ability to minify and combine JS & CSS into one file (of each type), to automatically handle the S3/CloudFront "gzip" support issue, and to add multiple CNAMEs for your distributions to handle JS and CSS (and make download slipstreaming possible).

    At least for me, it is much more than just adding CDN distribution. And it has helped radically improve both our PageSpeed and YSlow scores - plus reduce load times.

  10. sahaskatta
    Member
    Posted 3 years ago #

    @ronblaisdell

    Yup, you are absolutely correct. I use tools like YUI Compressor or online tools to minify my JS and CSS as much as possible, into as few files as possible.

    I try to follow the Google or Yahoo page speed guidelines to do it manually, so I generally pick and choose some features from W3TC and do the rest by hand.

    But in case there was any miscommunication to anyone looking to use W3TC: it is an AWESOME plugin. Even if you don't have heavy traffic, it significantly speeds up websites. At times, I feel like either W3TC or WP Super Cache should be built into core. (A very simple version at the least.)

  11. JMParsons
    Member
    Posted 3 years ago #

    I was concerned about upgrading to WP3 because of W3 Total Cache compatibility, since I believe it's essential with my S3 setup. I did it anyway and stumbled across the same "object exists" situation.

    From poking around the code and doing some testing, I noticed that changing the metadata for an attachment triggers an upload. I've been using my own script where I adjust the metadata, then upload the file; now just updating the metadata triggers the file upload.

    The attachment metadata is essentially an array of image sizes, so if you have custom thumbnail sizes and whatnot, they are stored in this attachment metadata and serialized.

    In the W3_Plugin_Cdn class there are new hooks (actions/filters). I think what is happening is that when an image is uploaded, the subsequent metadata update triggers another upload. It loads each image present in the metadata, in my case 8 different sizes. The time to upload an image is definitely noticeable now, with the image-processing and upload routine running twice; my PHP console shows 16 upload actions for 8 images.
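    A toy model of that double upload (hedged, in Python rather than the plugin's PHP; the function name is hypothetical):

```python
# Toy model (not WordPress/W3TC code) of the double upload described above:
# one CDN push fires on the initial upload, and a second fires when the
# attachment metadata (the serialized array of image sizes) is updated.
def cdn_pushes_for_new_image(size_names):
    pushes = []
    # Hook 1: fires when the original file and its sizes are first uploaded.
    pushes.extend(size_names)
    # Hook 2: fires again when the attachment metadata is saved,
    # re-queuing every size listed in it.
    pushes.extend(size_names)
    return pushes

sizes = ["thumbnail", "medium", "large"] + ["custom-%d" % i for i in range(5)]
print(len(cdn_pushes_for_new_image(sizes)))  # 8 sizes -> 16 upload actions
```

    That matches the 16 upload actions for 8 images reported above.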

    Since I haven't had the time to crack it open (other than skimming it and throwing in some breakpoints), I have resorted to just using force over-writing. I toggle it off during custom or theme file uploads so I don't overwrite files and then have to invalidate (force a CDN purge of) my folders. I might try to make a temporary fix for it... but between toggling force over-write and dealing with the upload time, it hasn't been much of a problem.

  12. sahaskatta
    Member
    Posted 3 years ago #

    @JMParsons

    Thanks for that update. I actually didn't have luck with force-overwrite mode; it still ended up with new uploads in the failed queue.

    I will have to give tweaking the meta data a shot, which will hopefully work.

    P.S. The developer of this plugin is looking into the situation. He is doing some tests with my current configuration. Hopefully he will find a proper fix soon!

  13. Frederick Townes
    Member
    Plugin Author

    Posted 3 years ago #

    I believe that the development release does a better job with uploads using a new version of the available libraries for S3.

  14. chrishecker
    Member
    Posted 2 years ago #

    This was hosing me as well, so I fixed it. I don't know why the S3.php code was deciding that it was an error not to force an upload to an already-existing file with a matching hash, but that seems silly.

    Chris

    === modified file 'wp-content/plugins/w3-total-cache/lib/W3/Cdn/S3.php'
    --- wp-content/plugins/w3-total-cache/lib/W3/Cdn/S3.php 2011-04-30 06:07:42 +0000
    +++ wp-content/plugins/w3-total-cache/lib/W3/Cdn/S3.php 2011-05-08 05:52:57 +0000
    @@ -128,7 +128,9 @@
                     $s3_hash = (isset($info['hash']) ? $info['hash'] : '');
    
                     if ($hash === $s3_hash) {
    -                    return $this->get_result($local_path, $remote_path, W3TC_CDN_RESULT_ERROR, 'Object already exists');
    +                    return $this->get_result($local_path, $remote_path, W3TC_CDN_RESULT_OK, 'Already exists, okay.');
    +                } else {
    +                    return $this->get_result($local_path, $remote_path, W3TC_CDN_RESULT_ERROR, 'Object already exists with different hash');
                     }
                 }
             }

    I assume the string doesn't have to be "OK" for a W3TC_CDN_RESULT_OK, so I made it more descriptive.

Topic Closed

This topic has been closed to new replies.
