It might be worth noting on the plugin page that the method used removes about 5 transients a second – that’s my average speed so far on an InnoDB table with 250,000 entries.
I’ve got all day so I don’t mind. But other users may want to hack the code for a quicker method.
# while true; do echo "SELECT COUNT(*) FROM wp_options" | mysql examplecom | tail -1; sleep 300; done
226168
224330
222517
221012
Current server 15-minute load is 2.66 – server has 4 CPUs, i.e. only running at 66% of capacity.
It will probably be better to find and remove the transient records in a single SQL statement. However, my SQL is extremely rusty – it may take me some time to work it out!
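For what it's worth, here's a rough sketch of what a single-statement clean-up might look like, assuming the default wp_options table name, MySQL, and that the timeout rows store a Unix timestamp in option_value (change the table prefix if yours differs):

```sql
-- Sketch: delete expired transient timeout rows and their matching data
-- rows in one MySQL multi-table DELETE, instead of looping in PHP.
-- '_transient_timeout_' is 19 characters, so SUBSTRING(..., 20) yields
-- the transient key, which we use to find the matching data row.
DELETE t, d
FROM wp_options t
JOIN wp_options d
  ON d.option_name = CONCAT('_transient_', SUBSTRING(t.option_name, 20))
WHERE t.option_name LIKE '\_transient\_timeout\_%'
  AND t.option_value < UNIX_TIMESTAMP();
```

The underscores are escaped in the LIKE pattern because `_` is a single-character wildcard in MySQL; without the backslashes the pattern would also match unintended option names.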
Probably the easiest solution is to get it to housekeep only so many per night – as long as this figure is more than the number you generate per day, it will slowly clear the table down.
Open up artiss-transient-cleaner.php and find the line…
$sql = "SELECT option_name FROM $wpdb->options WHERE option_name LIKE '_transient_timeout%'";
Change it to…
$sql = "SELECT option_name FROM $wpdb->options WHERE option_name LIKE '_transient_timeout%' LIMIT 10000";
You’d change 10000 to whatever limit you want.
I’ve thought of a more efficient way of performing the housekeeping. Would you be willing to beta test the result for me?
Yes, no problem… I did a mysqldump before this current purge, so I can do it again… presently down to the last 100,000 rows – we reached 6 rows/second earlier!
I just noticed that WooCommerce has code/SQL for doing this fast – it’s in admin/woocommerce-admin-status.php
Ok, looked at it sooner than I intended 😉
A-ha, found the code – it’s actually taken from a blog that explains how to do it entirely within SQL, rather than busting out to PHP to do some of the processing. This is what I was after – the solution I was talking about before still relied on some PHP processing. Having said that, I think my solution wouldn’t have been too much slower.
Nonetheless, I’ll test this new solution and, if I’m 100% happy it works, I’ll add it to my plugin.
Also, the OPTIMIZE TABLE turned out to be equivalent to performing a DoS on myself… I wondered whether, after 1/3 of the transients had gone, running it would speed up the future deletions (i.e. the remaining 160,000). It didn’t… but it did take the table out of action for 11 minutes while it optimized it!
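One way to decide whether an OPTIMIZE is even worth scheduling – a sketch, assuming MySQL and the default table name – is to look at the table status first:

```sql
-- Sketch: check reclaimable space before scheduling an OPTIMIZE TABLE.
-- The Data_free column (bytes) gives a rough idea of the overhead an
-- optimize would reclaim; note it is only approximate for InnoDB.
SHOW TABLE STATUS LIKE 'wp_options';
```

If Data_free is small relative to Data_length, the optimize is unlikely to be worth the lock-out.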
Hmmm, well it obviously needed it 😉
Ok, the next release is going to introduce an options screen – I’ll allow users to turn off overnight optimisation, maybe just having it when a database upgrade occurs.
Perhaps an optimum solution would be to count the number of deleted rows. If it passes a certain threshold, email the user to advise them that an optimize would perhaps help.
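Counting the deleted rows is straightforward in MySQL – a sketch, assuming the same single-statement clean-up, since ROW_COUNT() reports the rows affected by the previous statement (in the plugin itself, $wpdb->query() returns the affected-row count directly):

```sql
-- Sketch: count rows removed by the nightly clean-up, so it could be
-- compared against a threshold before emailing the user about an optimize.
DELETE FROM wp_options
WHERE option_name LIKE '\_transient\_timeout\_%'
  AND option_value < UNIX_TIMESTAMP();
SELECT ROW_COUNT();  -- rows deleted by the statement above
```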
The problem is that the sites that need an optimize are likely to be the really busy ones, and hence the ones that least want it to happen without them scheduling it.
Not sure how easy that would be as WordPress is modifying that table all the time.
I think the switch off option along with the ability to specify a maximum number of deletions per run should help.
I’m not convinced about a ‘maximum number of deletions’ option. It seems likely to end up in a catch-22: those who are not in danger of having cripplingly-large options tables won’t need it, and those who are in danger will find that such a limit stops the clean-up from keeping the table under control! Isn’t one of the benefits of the “single SQL statement” approach that it obviates the need to limit the number of deletions?
- The topic ‘Good, but not fast!’ is closed to new replies.