• Resolved tawabwp

    (@tawabwp)


    Hi all,

    I’m running into a performance issue with Redis Object Cache on a WooCommerce site and wanted to check whether this is a known behavior or something I should configure differently.

    After ~24 hours of uptime, Redis command latency (measured via slowlog and socket wait) increases dramatically. I’ve observed Redis operations taking up to 37,000 ms. When this happens, wp-admin becomes extremely slow (product edit pages can take tens of seconds). Flushing the Redis cache immediately resolves the issue, but it slowly returns over the next day.

My environment:
• Redis connected via UNIX socket
• NGINX server
• Large webshop with many transients and object cache entries (10k products, 10 attributes each; a WP import runs twice a day from multiple suppliers)
• Redis has ~200k keys after running for some time
• The largest key is wp:options:alloptions (~1.4 MB)
• There is also a growing zset, wp:redis-cache:metrics (thousands of members)
• Redis SLOWLOG consistently shows very long-running EVAL commands (300–340 seconds cumulative runtime across calls)
• Redis is otherwise healthy (no swapping, sufficient memory)

    Total key length in bytes is 40278387 (avg len 115.69)

    Biggest string found "msOrZN^=;#0M<*cwh=N/_=o8uH#~qF%, rdMPe)F5f XS/Y4&A$NBa}?;;5=PT-rwp:options:alloptions" has 1458280 bytes
    Biggest zset found "msOrZN^=;#0M<*cwh=N/_=o8uH#~qF%, rdMPe)F5f XS/Y4&A$NBa}?;;5=PT-rwp:redis-cache:metrics" has 1540 members

    0 lists with 0 items (00.00% of keys, avg size 0.00)
    0 hashs with 0 fields (00.00% of keys, avg size 0.00)
    0 streams with 0 entries (00.00% of keys, avg size 0.00)
    348152 strings with 244424308 bytes (100.00% of keys, avg size 702.06)
    0 sets with 0 members (00.00% of keys, avg size 0.00)
    1 zsets with 1540 members (00.00% of keys, avg size 1540.00)
Status: Connected
    Client: PhpRedis (v6.2.0)
    Drop-in: Valid
    Disabled: No
    Ping: 1
    Errors: []
    PhpRedis: 6.2.0
    Relay: Not loaded
    Predis: 2.4.0
    Credis: Not loaded
    PHP Version: 8.1.33
    Plugin Version: 2.7.0
    Redis Version: 8.2.0
    Multisite: No
    Metrics: Enabled
    Metrics recorded: 8692
    Filesystem: Writable
    Global Prefix: "wp_"
    Blog Prefix: "wp_"
    Timeout: 10
    Read Timeout: 10
    Retry Interval:
    WP_REDIS_SCHEME: "unix"
    WP_REDIS_PATH: "/home/ANONYMIZED/.redis/redis.sock"
    WP_REDIS_TIMEOUT: 10
    WP_REDIS_READ_TIMEOUT: 10
    WP_REDIS_MAXTTL: 3600
    WP_REDIS_PREFIX: "msOrZN^=;#0M<*cwh=N/_=o8uH#~qF%, rdMPe)F5f XS/Y4&A$NBa}?;;5=PT-r"
    WP_CACHE_KEY_SALT: "msOrZN^=;#0M<*cwh=N/_=o8uH#~qF%, rdMPe)F5f XS/Y4&A$NBa}?;;5=PT-r"
    WP_REDIS_PLUGIN_PATH: "/home/ANONYMIZED/domains/ANONYMIZED/public_html/wp-content/plugins/redis-cache"
    WP_REDIS_IGNORED_GROUPS: [
    "wp_all_import",
    "wp-all-import-pro",
    "wp-all-import-pro"
    ]
    Global Groups: [
    "blog-details",
    "blog-id-cache",
    "blog-lookup",
    "global-posts",
    "networks",
    "rss",
    "sites",
    "site-details",
    "site-lookup",
    "site-options",
    "site-transient",
    "users",
    "useremail",
    "userlogins",
    "usermeta",
    "user_meta",
    "userslugs",
    "redis-cache",
    "blog_meta",
    "image_editor",
    "network-queries",
    "site-queries",
    "theme_files",
    "translation_files",
    "user-queries",
    "code_snippets",
    "woo_variation_swatches"
    ]
    Ignored Groups: [
    "wp_all_import",
    "wp-all-import-pro",
    "counts",
    "plugins",
    "theme_json",
    "themes"
    ]
    Unflushable Groups: []
    Groups Types: {
    "blog-details": "global",
    "blog-id-cache": "global",
    "blog-lookup": "global",
    "global-posts": "global",
    "networks": "global",
    "rss": "global",
    "sites": "global",
    "site-details": "global",
    "site-lookup": "global",
    "site-options": "global",
    "site-transient": "global",
    "users": "global",
    "useremail": "global",
    "userlogins": "global",
    "usermeta": "global",
    "user_meta": "global",
    "userslugs": "global",
    "redis-cache": "global",
    "wp_all_import": "ignored",
    "wp-all-import-pro": "ignored",
    "blog_meta": "global",
    "image_editor": "global",
    "network-queries": "global",
    "site-queries": "global",
    "theme_files": "global",
    "translation_files": "global",
    "user-queries": "global",
    "counts": "ignored",
    "plugins": "ignored",
    "theme_json": "ignored",
    "code_snippets": "global",
    "themes": "ignored",
    "woo_variation_swatches": "global"
    }
    Drop-ins: [
    "Redis Object Cache Drop-In v2.7.0 by Till Krüss"
    ]

    I understand that Redis keys can increase drastically when you have many products, attributes, filters, and users in a WooCommerce webshop. That part makes sense to me. However, I find it hard to believe that I am the only one running into this issue. It feels like something is going wrong structurally rather than just normal growth.

    My knowledge of Redis is not deep enough to fully understand the root cause. An LLM suggested that I should look into Lua usage, but this is a new concept for me and I am not sure how to interpret that advice in the context of Redis Object Cache.

    I would like to ask if anyone here has experienced something similar or can provide some guidance. Any insights or shared experiences would be greatly appreciated.

Viewing 8 replies - 1 through 8 (of 8 total)
  • Plugin Author Till Krüss

    (@tillkruess)

Your Redis key prefix is unnecessarily long; I’d make it human-readable.

    How large is your Redis server? redis-cli dbsize

    Thread Starter tawabwp

    (@tawabwp)

    (integer) 320512

    Plugin Author Till Krüss

    (@tillkruess)

    Try setting WP_REDIS_DISABLE_GROUP_FLUSH to true.
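For anyone following along, that constant goes in wp-config.php, above the “stop editing” line. A minimal sketch:

```php
<?php
// wp-config.php — skip the SCAN/DEL-based group flush; stale entries
// then simply age out via the configured TTL (WP_REDIS_MAXTTL is 3600 here).
define( 'WP_REDIS_DISABLE_GROUP_FLUSH', true );
```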

    Thread Starter tawabwp

    (@tawabwp)

Ok, I added the setting to my wp-config.php.

Additionally, the spike is happening right now, so I’ll send some details; maybe they will show something.

redis-cli --latency: min: 0, max: 1177, avg: 61.49 (297 samples)

    Db size: (integer) 534296

redis-cli --bigkeys shows:


-------- summary -------

Total key length in bytes is 69416618 (avg len 127.69)

Biggest string found "msOrZN^=;#0M<*cwh=N/_=o8uH#~qF%, rdMPe)F5f XS/Y4&A$NBa}?;;5=PT-rwp:options:alloptions" has 1458298 bytes
Biggest zset found "msOrZN^=;#0M<*cwh=N/_=o8uH#~qF%, rdMPe)F5f XS/Y4&A$NBa}?;;5=PT-rwp:redis-cache:metrics" has 5064 members

0 lists with 0 items (00.00% of keys, avg size 0.00)
0 hashs with 0 fields (00.00% of keys, avg size 0.00)
0 streams with 0 entries (00.00% of keys, avg size 0.00)
543643 strings with 327757497 bytes (100.00% of keys, avg size 602.89)
0 sets with 0 members (00.00% of keys, avg size 0.00)
1 zsets with 5064 members (00.00% of keys, avg size 5064.00)

# Memory
used_memory:483399128
used_memory_human:461.01M
used_memory_rss:527470592
used_memory_rss_human:503.04M
used_memory_peak:711369600
used_memory_peak_human:678.41M
used_memory_peak_time:1766945326
used_memory_peak_perc:67.95%
used_memory_overhead:44428725
used_memory_startup:651656
used_memory_dataset:438970403
used_memory_dataset_perc:90.93%
allocator_allocated:484030072
allocator_active:504303616
allocator_resident:527314944
allocator_muzzy:0
total_system_memory:66800857088
total_system_memory_human:62.21G
used_memory_lua:41984
used_memory_vm_eval:41984
used_memory_lua_human:41.00K
used_memory_scripts_eval:1552
number_of_cached_scripts:2
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:32768
used_memory_vm_total:74752
used_memory_vm_total_human:73.00K
used_memory_functions:192
used_memory_scripts:1744
used_memory_scripts_human:1.70K
maxmemory:16000000000
maxmemory_human:14.90G
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.04
allocator_frag_bytes:20141192
allocator_rss_ratio:1.05
allocator_rss_bytes:23011328
rss_overhead_ratio:1.00
rss_overhead_bytes:155648
mem_fragmentation_ratio:1.09
mem_fragmentation_bytes:44051168
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_total_replication_buffers:0
mem_replica_full_sync_buffer:0
mem_clients_slaves:0
mem_clients_normal:685149
mem_cluster_links:0
mem_aof_buffer:0
mem_allocator:jemalloc-5.3.0
mem_overhead_db_hashtable_rehashing:0
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:5644140
    Thread Starter tawabwp

    (@tawabwp)

redis-cli -s /home/ANONYMIZED/.redis/redis.sock slowlog get 3:

1) 1) (integer) 449002
   2) (integer) 1767035315
   3) (integer) 521667
   4) 1) "EVAL"
      2) "                local cur = 0\n                local i = 0\n                local tmp\n                repeat\n                    t... (459 more bytes)"
      3) "0"
   5) "/home/anonymized/.redis/redis.sock:0"
   6) ""
2) 1) (integer) 449001
   2) (integer) 1767035314
   3) (integer) 560053
   4) 1) "EVAL"
      2) "                local cur = 0\n                local i = 0\n                local tmp\n                repeat\n                    t... (459 more bytes)"
      3) "0"
   5) "/home/anonymized/.redis/redis.sock:0"
   6) ""
3) 1) (integer) 449000
   2) (integer) 1767035314
   3) (integer) 511976
   4) 1) "EVAL"
      2) "                local cur = 0\n                local i = 0\n                local tmp\n                repeat\n                    t... (459 more bytes)"
      3) "0"
   5) "/home/anonymized/.redis/redis.sock:0"
   6) ""

    Plugin Author Till Krüss

    (@tillkruess)

    In that case WP_REDIS_DISABLE_GROUP_FLUSH will help.

    Try using a much shorter cache key prefix so it’s less work to group flush.
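In wp-config.php that would look something like this (the prefix value here is only a placeholder):

```php
<?php
// wp-config.php — a short, human-readable prefix means far fewer bytes
// per key and less work for every prefix comparison during a flush.
define( 'WP_REDIS_PREFIX', 'myshop' );
```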

    Thread Starter tawabwp

    (@tawabwp)

Alright, I’ve enabled WP_REDIS_DISABLE_GROUP_FLUSH and changed the key prefix to (webshopname_redis). Let’s see how this behaves.

I’m still a bit skeptical, though. It feels like something is going wrong under the hood, possibly a WooCommerce plugin that’s generating an excessive number of keys. That’s just a gut feeling for now.

    Thread Starter tawabwp

    (@tawabwp)

I also shortened my WP_CACHE_KEY_SALT. According to an LLM:

The long salt and prefix caused real overhead. Here is what your Redis was actually doing.

    Your keys looked like this:

    msOrZN^=;#0M<*cwh=N/_=o8uH#~qF%, rdMPe)F5f XS/Y4&A$NBa}?;;5=PT-rwp:options:alloptions

    That means:

    • ~70–80 extra bytes per key
    • ~540,000 keys
    • Tens of MB of pure string overhead
    • Every SCAN MATCH, Lua EVAL, and prefix comparison had to process that junk

    Now combine that with:

    • WooCommerce plugins doing group flush
    • Redis Object Cache using Lua loops
    • SCAN running repeatedly

    Result: Redis CPU spikes and latency explosions.
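The “tens of MB” figure is easy to sanity-check: the random salt is 64 characters, and dbsize reported roughly 540k keys, so:

```shell
# 64-byte prefix carried by each of ~540,000 keys:
echo "$(( 64 * 540000 )) bytes"   # 34560000 bytes, roughly 34.5 MB of pure prefix overhead
```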

I am going to monitor the latency over the next couple of days and will share my findings in this thread; hopefully it will help someone in the future.
