Support » Plugin: Autoptimize » right now Autoptimize invalidate cache plugins

  • Hi! First, I love this plugin; it’s the best one I’ve tried for optimizing the scripts on a page. Together with a caching plugin it works much, much better than W3 Total Cache alone. But…

    Because Autoptimize creates so many differently named files for the cached scripts and styles, it conflicts with caching plugins like WP Super Cache or W3 Total Cache.
    Because it has to regenerate the file almost every time the page is accessed, the advantage of caching the files is lost.
    I saw in the code that the author builds the hash for the file name from the content of the scripts, using md5().

    md5() is broken here; I don’t know why, but over time it produces new hashes for the same string. I’ve seen this in a couple of plugins I’m developing that also use hashes to cache results. I would use sha1(), which works better and is even a little faster.

    I’ve also seen that the plugin builds the hashes from the files’ content. I would use the file URL for external JS and the content only when the script is inline. With these changes to the JavaScript minifier I went from 2.3–5.7 seconds (the higher figure is when it has to recreate the cache file, which happens seemingly at random) to 0.9–1.2 seconds. We are talking about a 1.23 MB page with a lot of scripts.
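
    As a sketch of what I mean (a hypothetical helper, not the actual Autoptimize code), the cache key could be built from the external script URLs plus only the inline code:

```php
<?php
// Hypothetical sketch of the proposed scheme: hash external scripts by
// URL (cheap and stable across requests) and inline scripts by content.
function autoptimize_cache_key($external_urls, $inline_snippets) {
	$namejs = '';
	foreach ($external_urls as $url) {
		$namejs .= strtok($url, '?'); // drop any ?ver=... query string
	}
	foreach ($inline_snippets as $code) {
		$namejs .= $code; // inline JS has no URL, so use the code itself
	}
	return sha1($namejs);
}
```

    The same set of script URLs and inline snippets always produces the same key, so the cached file name only changes when a script is added, removed, or its inline code is edited.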

    OK, now I can show you an example; my site is this one:
    http://www.arturoemilio.es (I also use it for live development, so if it’s down or looks weird, just come back later.)

    This is the speed I got at random (usually more like 1.6 to 2 seconds) whenever Autoptimize decided it hadn’t cached the page before:
    http://gtmetrix.com/reports/www.arturoemilio.es/V1nIxpWk

    And these are my current speeds (because Autoptimize knows it has already cached the files):
    http://gtmetrix.com/reports/www.arturoemilio.es/jmutAWKG

    I’d like to ask the developer to add these changes to the plugin, so that the cache actually stays cached. This is the modified file:

    private $namejs = '';
    
    	//Reads the page and collects script tags
    	public function read($options)
    	{
    		//Remove everything that's not the header
    		if($options['justhead'] == true)
    		{
    			$content = explode('</head>',$this->content,2);
    			$this->content = $content[0].'</head>';
    			$this->restofcontent = $content[1];
    		}
    
    		$excludeJS = $options['js_exclude'];
    		$excludeJS = apply_filters( 'autoptimize_filter_js_exclude', $excludeJS );
    
    		if ($excludeJS!=="") {
    			$exclJSArr = array_filter(array_map('trim',explode(",",$excludeJS)));
    			$this->dontmove = array_merge($exclJSArr,$this->dontmove);
    		}
    
    		$this->domovelast = apply_filters( 'autoptimize_filter_js_movelast', $this->domovelast );
    
    		//Should we add try-catch?
    		if($options['trycatch'] == true)
    			$this->trycatch = true;
    
    		// force js in head?
    		if($options['forcehead'] == true)
    			$this->forcehead = true;
    
    		// get cdn url
    		$this->cdn_url = $options['cdn_url'];
    
    		// noptimize me
    		$this->content = $this->hide_noptimize($this->content);
    
    		// Save IE hacks
    		$this->content = $this->hide_iehacks($this->content);
    
    		// comments
    		$this->content = $this->hide_comments($this->content);
    
    		$this->namejs = ''; // reset the accumulated name ($namejs was never initialized before being appended to)
    
    		//Get script files
    		if(preg_match_all('#<script.*</script>#Usmi',$this->content,$matches)) {
    			foreach($matches[0] as $tag) {
    				if(preg_match('#src=("|\')(.*)("|\')#Usmi',$tag,$source)) {
    					//External script
    					$url = current(explode('?',$source[2],2));
    
    					$this->namejs .= $url;
    
    					$path = $this->getpath($url);
    					if($path !== false && preg_match('#\.js$#',$path)) {
    						//Inline
    						if($this->ismergeable($tag)) {
    							//We can merge it
    							$this->scripts[] = $path;
    						} else {
    							//No merge, but maybe we can move it
    							if($this->ismovable($tag)) {
    								//Yeah, move it
    								if($this->movetolast($tag)) {
    									$this->move['last'][] = $tag;
    								} else {
    									$this->move['first'][] = $tag;
    								}
    							} else {
    								//We shouldn't touch this
    								$tag = '';
    							}
    						}
    					} else {
    						//External script (example: google analytics)
    						//OR Script is dynamic (.php etc)
    						if($this->ismovable($tag)) {
    							if($this->movetolast($tag))	{
    								$this->move['last'][] = $tag;
    							} else {
    								$this->move['first'][] = $tag;
    							}
    						} else {
    							//We shouldn't touch this
    							$tag = '';
    						}
    					}
    				} else {
    					// Inline script
    					// unhide comments, as javascript may be wrapped in comment-tags for old times' sake
    					$tag = $this->restore_comments($tag);
    					if($this->ismergeable($tag)) {
    						preg_match('#<script.*>(.*)</script>#Usmi',$tag,$code);
    						$code = preg_replace('#.*<!\[CDATA\[(?:\s*\*/)?(.*)(?://|/\*)\s*?\]\]>.*#sm','$1',$code[1]);
    						$code = preg_replace('/(?:^\\s*<!--\\s*|\\s*(?:\\/\\/)?\\s*-->\\s*$)/','',$code);
    						$this->scripts[] = 'INLINE;'.$code;
    						$this->namejs .= $code;
    					} else {
    						//Can we move this?
    						if($this->ismovable($tag)) {
    							if($this->movetolast($tag))	{
    								$this->move['last'][] = $tag;
    							} else {
    								$this->move['first'][] = $tag;
    							}
    						} else {
    							//We shouldn't touch this
    							$tag = '';
    						}
    					}
    					// re-hide comments to be able to do the removal based on tag from $this->content
    					$tag = $this->hide_comments($tag);
    				}
    
    				//Remove the original script tag
    				$this->content = str_replace($tag,'',$this->content);
    			}
    
    			return true;
    		}
    
    		// No script files, great ;-)
    		return false;
    	}
    
    	//Joins and optimizes JS
    	public function minify() {
    		foreach($this->scripts as $script) {
    			if(preg_match('#^INLINE;#',$script)) {
    				//Inline script
    				$script = preg_replace('#^INLINE;#','',$script);
    				//Add try-catch?
    				if($this->trycatch) {
    					$script = 'try{'.$script.'}catch(e){}';
    				}
    				$this->jscode .= "\n".$script;
    			} else {
    				//External script
    				if($script !== false && file_exists($script) && is_readable($script)) {
    					$script = file_get_contents($script);
    					$script = preg_replace('/\x{EF}\x{BB}\x{BF}/','',$script);
    					//Add try-catch?
    					if($this->trycatch) {
    						$script = 'try{'.$script.'}catch(e){}';
    					}
    					$this->jscode .= "\n".$script;
    				}/*else{
    					//Couldn't read JS. Maybe getpath isn't working?
    				}*/
    			}
    		}
    
    		//Check for already-minified code
    		//$this->md5hash = md5($this->jscode);
    		$this->md5hash = sha1($this->namejs);

    Also, in my opinion I would replace preg_match (and the regexes in general) with DOM and XPath; it’s much faster on complex HTML. I’ve tried, but I get blank pages, I guess because of the class structure and me not knowing what is in the variables.

    https://wordpress.org/plugins/autoptimize/

Viewing 8 replies - 1 through 8 (of 8 total)
  • Plugin Author Frank Goossens

    (@futtta)

    Very interesting. Some random remarks (I’ll dive in deeper later):
    * I have not seen evidence of md5 being broken. In general there is always a difference between files whose md5 hashes differ, though a difference of only one character is enough, of course. More info on cache size and how to troubleshoot here.
    * sha1 is a good alternative.
    * If you use the file name for hashing instead of the contents, you risk changes in those files not being taken into account.
    * DOM and XPath is an interesting idea indeed, but how do they handle broken HTML (of which there is a lot around)? I’ve used simplehtmldom as an alternative in another project before, but that’s an extra external dependency I’m not too eager to include.

    but this is a very interesting topic, I’ll get back to you on that!

    frank

    Hi futtta!

    • About md5(): I encountered this behavior with a plugin that builds CSS sprites from the images in your WordPress blog. Although the images were the same and the process was the same every time, from time to time the hash changed, so it had to build a new sprite. I don’t know why, but sha1() seems to solve that problem. And this is in a CMS that does the same thing every time as long as the variables haven’t changed.
      Why should the same static content suddenly produce a new hash if the content of the files remains the same?
    • About using the full URL of the file instead of its content: those files are not supposed to change; they are static. If developers change those files, they should be aware that they need to refresh the cache.
    • And about XPath: in this case we are looking for script/CSS tags, which can’t be broken or they wouldn’t work anyway. It is much faster than regex and puts much less load on the CPU.

    But this is only my point of view; I could be wrong.

    Plugin Author Frank Goossens

    (@futtta)

    * Hmm, I would like to have a look at that sprites plugin; do you have a link for me?
    * It would be great if plugin (or theme) developers took it upon themselves to invalidate caches when their JavaScript or CSS changes, but I can’t really depend on that. That being said, a mix of the two approaches might work (although it complicates the logic). Something I’ll have to chew on some more 😉
    * Based on what I read, I’d rather go for DOM (and specifically http://be2.php.net/manual/en/domdocument.loadhtml.php), as that will probably be more lenient than XPath. This is a pretty invasive change though; maybe you’d like to join in on the fun? 🙂
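
    A possible shape for that mix (just my sketch, not anything in the plugin yet): hash the local path together with the file’s modification time, so the key stays cheap to compute but still changes whenever the file changes on disk:

```php
<?php
// Sketch of the mixed approach (hypothetical helper): key on the path
// plus its mtime, falling back to content hashing when the file can't
// be resolved locally.
function hybrid_cache_key($path) {
	if ($path === false || !is_readable($path)) {
		return false; // caller should fall back to content hashing
	}
	clearstatcache(); // make sure filemtime() isn't served from the stat cache
	return sha1($path . filemtime($path));
}
```

    This way a theme or plugin update that touches the file automatically produces a new key, without reading the whole file on every request.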

    Plugin Author Frank Goossens

    (@futtta)

    I have been going through the documentation for DOMDocument->loadHTML; the comments mention some issues that might make switching from regexp to DOM not that straightforward (e.g. character-encoding problems, the doctype being set or changed, paragraphs being added). Not backing out yet, but if I do this, it won’t be easy.

    Next up: some perf tests to compare DOM (and XPath) to regexes.

    Plugin Author Frank Goossens

    (@futtta)

    So regarding those perf tests: would this code be an optimal way of getting all inline JavaScript code and the URLs of external JavaScript into an array, while removing the scripts from the DOM?

    $html_dom=file_get_contents($htmlfile);
    $starttime_dom=microtime(true);
    for ($x=0; $x<$iterations ;$x++) {
    	$dom_javascript = [];
    	$dom = new DOMDocument();
    	/* suppress any error when loading HTML, as libxml likes to complain a lot */
    	@$dom->loadHTML($html_dom);
    
    	/* put nodes in an array to avoid messing things up
    	(as in removeChild removing scripts before they're processed)
    	cfr. http://stackoverflow.com/a/6801318/237449 */
    	$_scripts=$dom->getElementsByTagName("script");
    	$scripts = [];
    	foreach ($_scripts as $script) {
    		$scripts[]=$script;
    	}
    
    	foreach ($scripts as $script) {
    		foreach ($script->attributes as $attribute) {
    			if ($attribute->name === "src") {
    				$dom_javascript["external"][] = $attribute->value;
    			}
    		}
    		if (!empty($script->nodeValue)) {
    			$dom_javascript["inline"][] = $script->nodeValue;
    		}
    		$script->parentNode->removeChild($script);
    	}
    }
    $endtime_dom=microtime(true);
    $timeused_dom=$endtime_dom-$starttime_dom;

    Or do you see possible optimizations in order to make this an honest comparison?
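
    For reference, the regex side of the measurement could mirror the same loop. This is only a sketch reusing the plugin’s own patterns; the sample HTML and $iterations value are stand-ins for the real test page:

```php
<?php
// Regex counterpart for the same measurement (sketch): extract external
// src values and inline code, then strip the tags, using the plugin's
// existing patterns.
$html_regex = '<html><head><script src="a.js?ver=1"></script>'
            . '<script>var x=1;</script></head><body></body></html>';
$iterations = 1000;

$starttime_regex = microtime(true);
for ($x = 0; $x < $iterations; $x++) {
	$regex_javascript = array();
	$stripped = $html_regex;
	if (preg_match_all('#<script.*</script>#Usmi', $html_regex, $matches)) {
		foreach ($matches[0] as $tag) {
			if (preg_match('#src=("|\')(.*)("|\')#Usmi', $tag, $source)) {
				$regex_javascript['external'][] = $source[2];
			} elseif (preg_match('#<script.*>(.*)</script>#Usmi', $tag, $code)) {
				$regex_javascript['inline'][] = $code[1];
			}
			$stripped = str_replace($tag, '', $stripped);
		}
	}
}
$timeused_regex = microtime(true) - $starttime_regex;
```

    Running both loops over the same $iterations and the same page should make the comparison honest.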

    I’m sorry, I’ve been offline these past days, but now I’m back.
    Yes, there is a problem with the encoding, DOM and libxml complaining, but I found this most useful:
    loading the DOM with the encoding specified makes things easier:

    $dom = new DOMDocument('1.0', 'UTF-8');
    $dom->preserveWhiteSpace = false;
    $dom->formatOutput       = true;
    @$dom->loadHTML('<meta http-equiv="content-type" content="text/html; charset=utf-8">'.$buffer);

    At the end, $buffer = $dom->saveHTML(); is enough.

    This is important, because if it is not done the page gets messed up. For example, my blog is in Spanish, so I’m using UTF-8 encoding in the pages. The translation plugin I’m developing has to encode the page not only in Latin encodings but also in UTF-8, and should even accept non-Latin characters. On my blog you can see how it works just by choosing another language in the combo box; it’s still not perfect and has some issues with non-Latin scripts like Chinese or Thai, but mostly it works fine. The translation is done using DOM as well.
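
    A minimal round trip of that workaround (my sketch, with a made-up Spanish snippet) would look like this:

```php
<?php
// Round trip of the encoding workaround above: prepend a charset meta tag
// before loadHTML() so libxml parses the buffer as UTF-8 instead of its
// Latin-1 default, which would otherwise mangle multi-byte characters.
$buffer = '<p>Canción española</p>';
$dom = new DOMDocument('1.0', 'UTF-8');
$dom->preserveWhiteSpace = false;
@$dom->loadHTML('<meta http-equiv="content-type" content="text/html; charset=utf-8">' . $buffer);
// With the meta prefix the accented text survives intact; without it,
// "ó" and "ñ" come back garbled.
$p = $dom->getElementsByTagName('p')->item(0);
$text = $p->textContent;
$buffer_out = $dom->saveHTML();
```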

    To look for the attributes, you could go straight for them instead of iterating attribute by attribute:

    foreach ($scripts as $script) {
    	if ($script->hasAttribute('src')) {
    		$dom_javascript["external"][] = $script->getAttribute('src');
    	} else {
    		$dom_javascript["inline"][] = $script->nodeValue;
    	}
    }

    I don’t know if at this point you should also check whether the script is from the same domain or not.

    Plugin Author Frank Goossens

    (@futtta)

    OK, getting attributes (& nodeValue if no attribs) indeed seems more efficient that way, thanks!

    Plugin Author Frank Goossens

    (@futtta)

    Took me some time, but I was finally able to finish the performance comparison between preg_match (regex) and DOM-based HTML parsing (using your homepage as one of the examples 😉). The results are astonishing: DOM-based HTML parsing is on average 500% slower. You can check out a small report on the test (including links to the source) in the blog post I just published.

    That being said, I am thinking about how to improve Autoptimize performance for initial loads and hashing on filenames vs content is one of the things I’ll be looking into.

  • The topic ‘right now Autoptimize invalidate cache plugins’ is closed to new replies.