
[Plugin: NGINX Manager] Purge cache on 10 proxies

  • I think I found a solution; I changed this function:

    /**
     * Purge the URL on every reverse proxy; if $feed is true, purge the
     * related feed page as well. Also used by the external script to
     * purge the homepage.
     *
     * @param string $url
     * @param bool   $feed
     * @return void
     */
    function purgeUrl($url, $feed = true) {
        // Placeholder addresses – replace with the IPs of your reverse proxies.
        $serverList = array("257.257.257.257", "257.257.257.258", "257.257.257.259");
        foreach ($serverList as $ip) {
            $this->log("- [" . $ip . "] Purging URL | " . $url);

            $parse = parse_url($url);

            $_url_purge = $parse['scheme'] . '://' . $ip . '/purge' . $parse['path'];
            if (isset($parse['query'])) {
                $_url_purge .= '?' . $parse['query'];
            }

            $this->_do_remote_get($_url_purge);

            if ($feed) {
                $feed_string = (substr($parse['path'], -1) != '/') ? "/feed/" : "feed/";
                $this->_do_remote_get($parse['scheme'] . '://' . $ip . '/purge' . $parse['path'] . $feed_string);
            }
        }
    }

    Plugin Author rukbat

    @rukbat

    Thx nicloay.

    We’ll consider your report for future releases…

    Plugin Author Hpatoio

    @hpatoio

    Hello nicloay.

    It’s better if you set up a shared cache for all your RPs (a memcached server, for instance).

    Sharing the cache has several known pros:

    • if resource A was cached by RP1, RP4 can also reply with the cached resource; this reduces the number of requests that are passed to the backend
    • the total disk/memory space needed for the cache is smaller
    • having only one cache is good when you have to debug
    • lastly, you only need to invalidate the cache in one place

    The NGINX configuration for this setup is even easier than yours, because all the RPs share the same cache key.

    Let me know if you need more details or help.
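    A minimal sketch of what this shared-cache setup could look like, assuming the standard ngx_http_memcached_module is compiled in; the addresses, the backend name, and the cache key below are illustrative assumptions, not the plugin's actual configuration:

    ```nginx
    # Hypothetical fragment: every RP queries the same memcached server first,
    # and only falls back to the backend on a cache miss.
    location / {
        set            $memcached_key "$scheme$request_uri";  # same key on every RP
        memcached_pass 10.0.0.5:11211;                        # the shared cache box
        error_page     404 502 504 = @backend;                # miss -> go upstream
    }

    location @backend {
        proxy_pass http://backend.example.com;
    }
    ```

    One caveat: NGINX's memcached module only reads from the cache, so the backend (or a helper script) has to populate memcached itself, which is an extra moving part compared to a per-proxy proxy_cache.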

    Hpatoio,

    Thanks for your suggestion.
    I use reverse proxies because my site can be DDoSed, and I use cheap VPSes as RPs to distribute the load across several boxes.
    If I use one shared cache, all traffic will come to one machine.

    Please correct me if I’m wrong:
    Assume I’m facing a 500 MB/s attack with 5 reverse proxies; each proxy then handles 100 MB/s. But with one shared cache, I would still get 100 MB/s per host plus 500 MB/s hitting the shared cache box.

    Anyway, the current implementation works well; thanks again for a great job. Just one remark: I need to rewrite the purgeUrl function to send the purge requests to the RPs in parallel.
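    Since each purge is just an HTTP GET, one way to parallelize (a rough sketch, not the plugin's code; the page URL and proxy IPs are made-up placeholders) is to build the per-proxy purge URLs the same way purgeUrl does and hand them to `xargs -P`:

    ```shell
    # Hypothetical sketch: derive one purge URL per reverse proxy from a page
    # URL, mirroring what parse_url() does in the plugin, then fire them in
    # parallel. IPs and the page URL are placeholders.
    PAGE_URL="http://example.com/2012/05/some-post/?preview=1"

    SCHEME=${PAGE_URL%%://*}        # "http"
    REST=${PAGE_URL#*://}           # host + path + query
    PATH_Q=/${REST#*/}              # "/2012/05/some-post/?preview=1"

    for IP in 192.0.2.10 192.0.2.11 192.0.2.12; do
      echo "$SCHEME://$IP/purge$PATH_Q"
    done
    # | xargs -n1 -P10 curl -s -o /dev/null   # uncomment to purge in parallel
    ```

    Piping the URL list through `xargs -P10` runs up to ten curl processes at once instead of purging each proxy sequentially.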

    Plugin Author Hpatoio

    @hpatoio

    Are all your RPs in the same LAN, or are they spread around the planet?

    No, I’m not sure they are in the same LAN; I bought them from different providers, and I think they are in different data centers around the world.

    And even if all the RPs were on one LAN, it would be impossible to mitigate a 2 GByte/s attack.
    But since we don’t have a centralized cache, we just need to deploy more RPs.

    Plugin Author Hpatoio

    @hpatoio

    If this is your network setup, then what you are doing is good.

    PS: I’m just curious what kind of website you are running to be afraid of a 2 GByte/s DDoS attack…

    It’s just a test project, for fun. But I’m planning to use this architecture for a news site.

  • The topic ‘[Plugin: NGINX Manager] Purge cache on 10 proxies’ is closed to new replies.