    Alex Furr

    (@alexfurr)


    Hi,
    We’ve recently started using the Delicious Brains WP Offload S3 plugin (https://en-gb.wordpress.org/plugins/amazon-s3-and-cloudfront/) to move all our media to an S3 bucket.

    I’ve noticed that when cloning a site, the media items are created in the database, but the objects don’t get created in the bucket. In fact, I’ve had to manually create the sites/blogID folder and copy the content across.

    Is this a known issue, or is it fixed with the pro version?
    Thanks again for the excellent plugin.
    BW,
    Alex

    Plugin Author Never Settle

    (@neversettle)

    Hi Alex,

    The Cloner is necessarily agnostic to, and unaware of, any additional CDN, caching, or other external tools. WP Offload S3 (and all similar plugins) still relies on the out-of-the-box WP Media Library; it just adds an extra layer that copies content to a CDN (S3, for example) and then maintains redirect patterns so media is served from the correct locations via the core WP redirect engine. However, you can always deactivate those plugins and still have your full Media Library (assuming you don’t configure them to auto-delete the originals).

    There wouldn’t really be any reasonable way for Cloner to manage or “reach into” S3 buckets during cloning. Everything it does happens at a layer before that, and additional services on top, like WP Offload S3, are expected to handle their piece of it within the WP architecture.

    In many cases it “just works”. On WPEngine’s platform with their EverCache system, for example, the media for new clones is picked up without problems from its new location in the normal WP filesystem structure and pushed to the CDN, and all the links are dynamically corrected by the system’s hooks into WP’s redirect engine.

    We can make a pretty good guess at what’s breaking in your case with WP Offload. To understand it, let’s look at the normal flow on any site where WP Offload S3 is active, even one not using Cloner:

    1. WP Media Library is used in the normal way.

    2. Media files are still uploaded to and saved by WP on the WP server under its normal /wp-content/uploads directory.

    3. WP Offload S3 then picks them up periodically, pushes them to S3, and manages the redirect pattern for each file uploaded to S3 so it points to the bucket URL. (A site can have a mix of S3 URLs and regular WP URLs for media files active at the same time.) This is why WP Offload S3 relies so heavily on cron: it has to constantly keep pushing media files as they come in. A minimal sketch of this pattern follows the list.
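
    To make step 3 concrete, here’s a minimal sketch of the URL-rewriting pattern such plugins use. This is purely illustrative, not WP Offload S3’s actual code; the meta key and bucket URL are hypothetical:

    add_filter( 'wp_get_attachment_url', function( $url, $attachment_id ) {
    	// Hypothetical meta flag that an offload plugin might set after a
    	// successful push to S3 (this key is made up for illustration).
    	$s3_key = get_post_meta( $attachment_id, '_example_s3_key', true );
    	if ( $s3_key ) {
    		// Serve this file from the bucket instead of /wp-content/uploads.
    		return 'https://example-bucket.s3.amazonaws.com/' . $s3_key;
    	}
    	// Files not yet offloaded keep their regular WP URL.
    	return $url;
    }, 10, 2 );

    The file itself still lives in the Media Library; only the served URL changes, which is why deactivating the offload plugin falls back gracefully to the originals.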

    By default, media files are left on the WP server forever, although there’s a setting to have them removed. This setting (removing the originals) is generally a pretty bad idea though… what if you want to change CDNs? Or migrate the site to a different host? The easiest (and in some cases only) way to do those kinds of things is to keep the originals intact in the normal WP structure.

    The most robust and risk-free option requires a little extra work per Clone and can be streamlined using some cloning best practices:

    1. Don’t use a live site and domain as the template/source site. Always use a dedicated site like template.site.com as the source.

    2. This allows you to keep key post-clone plugins deactivated on the template. For example, Domain Mapping (back when you had to have a separate plugin for that) and CDN/caching/security plugins, which are very site-specific. WP Offload S3 falls in this category: site-specific settings like S3 bucket mappings do NOT clone well and should NOT be active on the template site. Caching plugins are another great example; cloning a whole cache for a site is a waste, and the cache is stale and doesn’t apply after the clone is complete. Plugins like WP Offload S3 that are highly site-specific should not be active on, or cloned from, the template.

    3. After Cloning the template site with normal media copy settings, all the media will be in the new site’s media directory.

    4. After cloning, activate WP Offload S3 at the individual site level and configure it properly for the unique bucket that site should use. WP Offload S3 will kick in, push the assets to S3, and take over from there. A hedged sketch of scripting this step follows the list.
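
    If you want to script step 4 rather than click through wp-admin, a rough sketch (assuming multisite, and that the plugin’s main file path below matches your install) could look like this:

    require_once ABSPATH . 'wp-admin/includes/plugin.php';

    // $new_site_id is assumed to hold the ID of the freshly cloned site.
    switch_to_blog( $new_site_id );
    // Assumed plugin path -- verify it against your wp-content/plugins directory.
    activate_plugin( 'amazon-s3-and-cloudfront/wordpress-s3.php' );
    restore_current_blog();

    The bucket mapping itself is still best configured per site in the plugin’s settings, since (as above) those settings don’t clone well.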

    We highly recommend using a unique bucket for every site and taking the little extra time to set that up per site after cloning. The biggest advantage is that you segregate both total storage and bandwidth usage at the bucket level per site, and can easily monitor and assess costs per site (per bucket) in case those costs need to be invoiced to a client or billed separately.

    Another option that I would be curious about: I’m not sure if this will work or not (it depends on how your bucket(s) are set up and configured on the source site, as well as the directory level at which WP Offload S3 syncs with S3), but it might also be possible just to go into the clone site and re-trigger the upload. IF (and it’s a big IF) WP Offload S3 manages a bucket from the root /wp-content/, or even wp-content/uploads/ or wp-content/uploads/sites/, it’s possible that the clone could re-use the existing bucket configuration from the source site and just sync all its content up. In fact, given enough time for the schedule to kick in, it might even self-heal, get those files up there on its own, and just start working. BUT if WP Offload S3 only maps a specific site to a specific bucket, like wp-content/uploads/sites/20 > Bucket ABC, then this will not be possible for the clone, and the workflow above is necessary. In any case it is not an “Access Denied” or credentials issue: Amazon S3 also kicks out that error when a file does NOT exist.
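
    If you want to verify which case you’re in, a quick check with the AWS SDK for PHP can distinguish a missing object from a real permissions problem. The bucket, region, and key below are placeholders:

    use Aws\S3\S3Client;

    $s3 = new S3Client( [
    	'region'  => 'eu-west-1', // Placeholder -- use your bucket's region.
    	'version' => 'latest',
    ] );

    // S3 can report "Access Denied" for a key that simply doesn't exist,
    // so check existence directly (placeholder bucket and key).
    if ( ! $s3->doesObjectExist( 'example-bucket', 'uploads/sites/20/2018/01/image.jpg' ) ) {
    	error_log( 'Object is missing from the bucket -- a re-sync is needed, not new credentials.' );
    }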

    Hopefully this sheds some light on the way things work and what the options are.

    Let us know if you have any other questions!

    Thread Starter Alex Furr

    (@alexfurr)

    Hi,
    Thanks for the comprehensive reply and pointers. You’ve highlighted the problem we have: we’ve configured WP Offload S3 to remove the originals after offloading, mainly because we have over 50 GB of media across several network sites and it was proving costly to keep on our host (Kinsta).

    I think an appropriate solution would be for us to trigger a copy-content step once the site has been cloned, since manually creating the folder and copying the content within the AWS console itself fixed the issue.

    Therefore, is there a hook we can use that runs AFTER the cloning of a site is complete? Ideally something that returns both the original site ID and the new site ID.

    Best wishes,
    Alex

    Plugin Author Never Settle

    (@neversettle)

    Hi Alex,

    Yes, you can use the ns_cloner_process_finish action and access the original and new site ids like so:

    add_action( 'ns_cloner_process_finish', function() {
    	$source_id = ns_cloner_request()->get( 'source_id' ); // Original site id.
    	$target_id = ns_cloner_request()->get( 'target_id' ); // New site id.
    	// Do something here.
    } );
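
    If you then want that hook to copy the source site’s objects to the new site’s prefix (since you’re removing the local originals), here’s a rough sketch using the AWS SDK for PHP. The bucket name, region, and the uploads/sites/{id}/ key layout are assumptions; adjust them to match your WP Offload S3 configuration:

    use Aws\S3\S3Client;

    add_action( 'ns_cloner_process_finish', function() {
    	$source_id = ns_cloner_request()->get( 'source_id' ); // Original site id.
    	$target_id = ns_cloner_request()->get( 'target_id' ); // New site id.

    	$s3 = new S3Client( [
    		'region'  => 'eu-west-1', // Placeholder -- use your bucket's region.
    		'version' => 'latest',
    	] );

    	$bucket        = 'example-bucket';              // Placeholder bucket name.
    	$source_prefix = "uploads/sites/{$source_id}/"; // Assumed key layout.
    	$target_prefix = "uploads/sites/{$target_id}/";

    	// Page through every object under the source site's prefix and copy
    	// it to the same relative path under the new site's prefix.
    	$pages = $s3->getPaginator( 'ListObjectsV2', [
    		'Bucket' => $bucket,
    		'Prefix' => $source_prefix,
    	] );
    	foreach ( $pages as $page ) {
    		foreach ( (array) $page->get( 'Contents' ) as $object ) {
    			$s3->copyObject( [
    				'Bucket'     => $bucket,
    				'CopySource' => $bucket . '/' . $object['Key'], // Keys with special characters may need URL-encoding.
    				'Key'        => $target_prefix . substr( $object['Key'], strlen( $source_prefix ) ),
    			] );
    		}
    	}
    }, 20 ); // Later priority so other finish handlers run first (a judgment call).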

    Thread Starter Alex Furr

    (@alexfurr)

    Brilliant! That’s fantastic, thank you.

The topic ‘Copying objects from S3’ is closed to new replies.