• Resolved kersavond

    (@kersavond)


    Hi,

    I am running Redis Object Cache on my WordPress site and have discovered a performance issue that appears to be directly caused by the plugin.

    When opening a post with approximately 80 images in the Gutenberg editor in wp-admin, the following occurs:

    • With Redis Object Cache enabled: 1.5 minutes of 100% CPU usage, 5 GB of network traffic
    • With Redis Object Cache disabled: short CPU peak of 70%, only 60 MB of network traffic

This is reproducible and consistent. Have you seen this behavior before, and how can we fix it?

    Best regards,
    Bart


Viewing 5 replies - 1 through 5 (of 5 total)
  • Plugin Author Till Krüss

    (@tillkruess)

    Hi Bart,

You’ll need to debug this. Redis Object Cache just provides an API to communicate with the object cache; it’s your plugins, themes, and WP core that actually make the cache calls.

    Try using Query Monitor to have a closer look.

    Thread Starter kersavond

    (@kersavond)

    Hi Till,

    Thank you for your response. I understand Redis Object Cache is an API layer.
However, Query Monitor does not show the AJAX/REST API calls Gutenberg makes after page load. The 5 GB of network traffic comes from Gutenberg’s /wp/v2/media/ REST calls; with Redis disabled, the same calls generate only 60 MB.

    Is there a way to see what Redis is actually caching during these REST calls? For example via WP-CLI or Redis diagnostics?

    Best regards, Bart

    Plugin Author Till Krüss

    (@tillkruess)

    You can use redis-cli MONITOR to see what keys are going in and out.

You can also run redis-cli --bigkeys — maybe one of your plugins is storing images in the cache, which is bad practice.
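For example, you can capture a few seconds of MONITOR output and rank keys by how often they are read or written. A minimal sketch — the wp_: prefix and key names below are placeholders standing in for a real capture, so substitute your site's actual cache prefix:

```shell
# On a live server, capture traffic for a few seconds (Ctrl-C to stop):
#   redis-cli MONITOR > monitor.log
# Note: MONITOR adds overhead, so don't leave it running in production.
# The sample lines below stand in for a real capture.
cat > monitor.log <<'EOF'
1700000000.1 [0 127.0.0.1:50000] "GET" "wp_:options:alloptions"
1700000000.2 [0 127.0.0.1:50000] "GET" "wp_:options:alloptions"
1700000000.3 [0 127.0.0.1:50000] "SET" "wp_:post:42"
EOF

# MONITOR quotes the command and key; splitting on double quotes,
# field 4 is the key. Rank keys by access count, busiest first.
awk -F'"' '/"GET"|"SET"/ { print $4 }' monitor.log | sort | uniq -c | sort -rn | head -20
```

A key that dominates this ranking (such as options:alloptions being fetched on every request) is usually the place to start digging.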

    Thread Starter kersavond

    (@kersavond)

    Hi Till,

Thanks for the suggestion. We ran both MONITOR and --bigkeys.

    No plugin is storing images in the cache — the biggest keys are legitimate: options:alloptions (775 KB), options:yootheme (144 KB), and a feed transient (585 KB).

    We cleaned up alloptions from ~1.2 MB down to 775 KB by removing stale data from inactive plugins (OMGF, Thrive, AIOSEO, WP Rocket cache entries), but the performance issue persists.
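For reference, this is roughly how the autoloaded options can be audited with WP-CLI. The CSV sample below stands in for real `wp option list` output, and the option names are placeholders, not our actual data:

```shell
# On the server, export autoloaded options with their sizes:
#   wp option list --autoload=on --fields=option_name,size_bytes --format=csv > autoload.csv
# The sample below stands in for real output; option names are placeholders.
cat > autoload.csv <<'EOF'
option_name,size_bytes
yootheme,147456
stale_plugin_settings,98304
siteurl,28
EOF

# Rank autoloaded options by size; stale entries left behind by removed
# plugins are candidates for `wp option delete <name>`.
tail -n +2 autoload.csv | sort -t, -k2 -rn | head -20
```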

    Bart

    -------- summary -------

    Sampled 19443 keys in the keyspace!
    Total key length in bytes is 1572597 (avg len 80.88)

    Biggest string found '"CL-c178f1cf-f183-4f63-8d80-b78713cd6dc1bk1a:options:alloptions"' has 775801 bytes
    Biggest zset found '"CL-c178f1cf-f183-4f63-8d80-b78713cd6dc1bk1a:redis-cache:metrics"' has 619 members

    0 lists with 0 items (00.00% of keys, avg size 0.00)
    0 hashs with 0 fields (00.00% of keys, avg size 0.00)
    19442 strings with 30509054 bytes (99.99% of keys, avg size 1569.23)
    0 streams with 0 entries (00.00% of keys, avg size 0.00)
    0 sets with 0 members (00.00% of keys, avg size 0.00)
    1 zsets with 619 members (00.01% of keys, avg size 619.00)
    Thread Starter kersavond

    (@kersavond)

    Hi Till,

We found the root cause: switching from LSAPI to PHP-FPM resolved the issue almost completely. With LSAPI + Redis, the site generated ~5 GB of network traffic just from opening one post in the editor; with PHP-FPM it dropped to ~10 MB.

It seems LSAPI was massively amplifying the Redis traffic under the combination of WPML and Gutenberg REST calls.

    Thanks for your help!

    Best regards,
    Bart

