Support » Plugin: Fast Velocity Minify » [request] paths inside css/js should stay relative

  • Resolved phloo

    (@phloo)


    First: great plugin and helper, I use it on many sites.
    But for our staging setup with a load balancer it no longer works.

    Reason: FVM writes absolute paths, including the domain name of the site it was generated on, instead of using relative paths. Probably because the files are moved to the upload directory.

    Problem: if you deploy the site to another domain (from staging to live), the paths still point to the old domain, including the staging hostname. You can purge the cache and regenerate on the production site, but that stops working behind a load balancer that uses multiple instances/nodes of that deployment.

    Suggestion: get rid of absolute paths or remove the protocol/domain combination inside the generated cache files.

  • Plugin Author Raul P.

    (@alignak)

    @phloo relative paths make sense in some situations, but for FVM the paths need to be absolute because sometimes we need to rewrite the CDN URL inside the CSS files, for example.

    Nevertheless, changing to relative paths won’t solve your problem after a cache purge, which, by the way, happens automatically in certain situations (updates, theme changes, settings changes, etc.).

    With either relative or absolute paths, the browser will request the exact same URL, so the problem you are describing is related to the filesystem you are using.

    On a distributed autoscaling setup for WordPress, you need a common storage point shared between all instances (Amazon EFS, Google Cloud Filestore, Azure Files, an NFS mount, GlusterFS, etc.), in the same way you need a database that is common to all instances.

    If you have individual nodes/instances with their own disks, and your load balancer does, for example, round robin… then each request hits a different node, and a cache purge will only purge the cache on one of the nodes, without purging the others.

    If you have just deployed 2 instances and purge only once, the database will increment the FVM-generated file names for all instances (so file names will match everywhere, which is what we want)… however, the newly generated FVM file will only be created on the instance that received the “uncached” request.

    If you don’t use any page cache on any instance, FVM will always regenerate the missing files as you go. So you hit server 1 and it creates a local file; you hit server 2, the file is missing there, and therefore it is created on the second server.

    If you use something like ElastiCache, a page cache at the load balancer level, reverse proxy caching, etc… you may be serving cached requests from other instances, and while the load balancer may be requesting static content from multiple instances/servers, FVM will never generate the missing files on those servers, because they never received an uncached PHP request that would actually regenerate the missing files.

    FVM file names are dynamic, built from timestamps and hashes, so you cannot simply pre-deploy them to multiple file servers. File names will change periodically as needed, hence, as I said, if you need scaling and HA you also need a shared file server.

    The only drawback is that Amazon EFS is going to be slow unless you reserve a lot of storage (average performance is only about 50 KB/s per GB), which will then add to your costs.

    If you need HA and you have a small site, it’s much cheaper to do a failover setup with a larger-than-needed instance than to use autoscaling.

    If you really need autoscaling because even the largest instance is not enough, then you need a common mount point for files.

    You could create a shared mount point just for the FVM cache and adjust the cache path in the plugin settings.

    For example, if you are on Amazon you can mount a 1 GB EFS file system at /yourpath/wp-content/fvm-cache/ and then add that path to the plugin settings.
    That way, your load balancer will always read those files, regardless of which instance it’s calling.
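
    A rough sketch of what that could look like on each instance, assuming a plain NFSv4 mount of EFS (the filesystem ID, region and path below are placeholders):

        # create the cache directory on every instance
        sudo mkdir -p /yourpath/wp-content/fvm-cache

        # mount the shared EFS volume via NFSv4
        sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
          fs-0123456789abcdef0.efs.eu-central-1.amazonaws.com:/ /yourpath/wp-content/fvm-cache

        # persist the mount across reboots
        echo 'fs-0123456789abcdef0.efs.eu-central-1.amazonaws.com:/ /yourpath/wp-content/fvm-cache nfs4 nfsvers=4.1,_netdev 0 0' | sudo tee -a /etc/fstab

    With that in place, every instance reads and writes the same FVM cache files, so a purge or regeneration on any node is visible to all of them.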

    phloo

    (@phloo)

    hey Raul, wow, that was quick and a very informative reply.

    I have to say that I am not the one setting up the deployment system, and we are currently in the process of changing it again because of too many hiccups lately (FVM, for example, after adding the load balancer).

    I will forward your reply to the IT guys and hope they can use your information, because I don’t want to switch to a big plugin like W3 Total Cache or something else.

    Thanks a lot and have a nice weekend!

    Plugin Author Raul P.

    (@alignak)

    Just a few more notes, which happen frequently:

    a) when migrating from staging to live, make sure to always update the URLs in the database, else FVM (and other plugins) will pick up the old URL from it (see the sketch after these notes).

    b) if you use a page cache plugin like W3TC, you will have the exact same problem. Cache files will be created per server (same as FVM) on a load balancer setup. That means you will end up with different cached versions on different servers, thus losing consistency.

    The recommended solution for load balancing is common shared storage (as explained).
    Alternatively, use one big server with failover, where the second takes over if the first becomes unavailable.
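
    For note a), if WP-CLI is available on the target server, the URL update could look something like this (the staging and live domains are placeholders):

        # preview which database rows would change
        wp search-replace 'https://staging.example.com' 'https://www.example.com' --all-tables --dry-run

        # apply the replacement, then purge/regenerate the FVM cache from the plugin settings
        wp search-replace 'https://staging.example.com' 'https://www.example.com' --all-tables

    Unlike a plain text replace on an SQL dump, wp search-replace also handles URLs stored inside serialized data.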

    phloo

    (@phloo)

    a) yes, we do update the URLs before we import the SQL dump
    b) we have currently switched to it, and the URLs work because it does not use the domain name in the paths, so the files generated on server1 (staging) are used on server2 (production) after moving the container there.
    c) I agree – another solution would help more – that’s why we are rethinking the whole staging and server setup again

    thanks!

    Plugin Author Raul P.

    (@alignak)

    b) we have currently switched to it, and the URLs work because it does not use the domain name in the paths, so the files generated on server1 (staging) are used on server2 (production) after moving the container there.

    I think the URLs are only “working” because W3TC generates filenames which don’t change, even if you purge the cache.

    If you pre-generate the cache and deploy it to all servers, those files will exist forever on all of them. If you purge W3TC via wp-admin through the load balancer, it will only purge the cache on one server, while preserving the cache on the second and third servers.

    You could verify that by logging in via SFTP to all servers and then purging the caches in W3TC… you would probably see that the cache was actually only purged on one server (look at the timestamps for different generation times, or check whether the cache files have been deleted or not).
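
    One way to check this, assuming SSH access to the nodes and W3TC’s default disk cache location, is to list the cache directory on each server right after a purge (hostnames and paths are placeholders):

        # compare page cache contents on each node right after purging via wp-admin
        for host in web1.example.com web2.example.com; do
          echo "== $host =="
          ssh "$host" 'ls -lt /var/www/html/wp-content/cache/page_enhanced | head -n 5'
        done

    If the purge only reached one node, the timestamps (or the presence of the files) will differ between servers.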

    This is similar to FVM, except that FVM is not a cache plugin, and it won’t work if you are requesting a cached page (cache files you deployed from staging prevent FVM from regenerating the missing files, because PHP never runs).

    Load balancing can cause all sorts of inconsistencies with caches, unless you use a common storage mount point.

    Any file operation, even an image upload, will fail to create the file on all servers, unless you have a plugin that uploads images to a specific storage point, like an S3 bucket for example.

    You may not even notice it, because files can be cached on the CDN edge nearest to you (depending on what you use)… hence SEO issues arise later, such as images not being found by Google, when in fact you can see them with your own eyes.

    Your tech person should know what I am talking about, so just point him/her here if they have any questions.

    Thanks

    phloo

    (@phloo)

    We use a different approach: the production site is always a fresh 100% clone of the staging site, meaning it gets built every time we deploy the whole site. No changes are made on the production site, so it’s fine if the minified files are pre-generated on the dev site, as long as all URLs match the current server request.
