Crawler Server Load Limit on shared CloudLinux
Hi, I’m running the LSCache WordPress plugin on shared hosting with LiteSpeed Enterprise and CloudLinux/CageFS isolation. The crawler works correctly, but I noticed that the Server Load Limit setting evaluates the global server load average, not the per-account load within my LVE.
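To illustrate what I mean, here is a minimal probe of what the setting appears to measure. I am assuming the check boils down to PHP’s sys_getloadavg() (or reading /proc/loadavg); on my host, where CloudLinux does not virtualize /proc/loadavg inside CageFS, both report the host-wide figure:

```php
<?php
// Assumption: the plugin's check boils down to sys_getloadavg() or
// /proc/loadavg. Inside CageFS (with loadavg virtualization off, as on
// my host) these report the host-wide load average, not my LVE's.
$load = sys_getloadavg();
if ($load === false) {
    exit("sys_getloadavg() unavailable\n");
}
$limit = 1.0; // a "conservative" Server Load Limit value

printf("1-min load: %.2f (limit: %.2f)\n", $load[0], $limit);

if ($load[0] > $limit) {
    // On a busy shared box the global baseline sits above almost any
    // conservative limit, so this branch is effectively permanent:
    // the crawler is blocked by other tenants' load, not mine.
    echo "Crawler would be blocked.\n";
}
```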
From my understanding, this makes the setting effectively unusable in shared CloudLinux environments:
- On shared CloudLinux, the global load is dominated by other tenants’ activity, which I have no visibility into or control over.
- A conservative value would block my crawler permanently, since the global baseline often already exceeds it.
- A high value effectively disables the protection, making the setting cosmetic.
- CloudLinux LVE limits already throttle my account independently, so I can’t actually overload the server from my side regardless of crawler activity.
Questions:
- Is there a way to make Server Load Limit evaluate per-account resource usage (CPU, EP, IOPS within the LVE) instead of the global server load average? (A rough sketch of what I mean follows after this list.)
- Is there a server-side variable, filter, or wp-config.php constant that changes how the threshold is interpreted on CloudLinux?
- If no such option exists, is this a known limitation, and is there any roadmap consideration for shared/CloudLinux-aware load detection?
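To make the first question concrete, here is a rough sketch of what a CloudLinux-aware check might look like. To be clear, none of this is an existing LSCache option, filter, or constant; the readability of /proc/lve/list inside CageFS, the column names, and the LVE-ID-equals-UID mapping are all assumptions on my part:

```php
<?php
// Hypothetical sketch of an LVE-aware check, not an existing LSCache
// feature. Assumptions: the account can read /proc/lve/list inside
// CageFS, the LVE ID equals the account UID, and the header names
// "CPU" (current usage) and "lCPU" (limit) columns. The field layout
// varies between CloudLinux versions, so treat this as illustrative.

function my_lve_cpu_ratio(): ?float {
    $rows = @file('/proc/lve/list', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    if ($rows === false || count($rows) < 2) {
        return null; // not CloudLinux, or the file is hidden from us
    }

    // Header looks roughly like "8:LVE EP lCPU ... CPU ..."; strip the
    // version prefix and index the column names.
    $headers = preg_split('/\s+/', trim($rows[0]));
    $headers[0] = preg_replace('/^\d+:/', '', $headers[0]); // "8:LVE" => "LVE"
    $cols = array_flip($headers);
    if (!isset($cols['LVE'], $cols['CPU'], $cols['lCPU'])) {
        return null; // column names differ on this CloudLinux version
    }

    $my_id = (string) posix_getuid(); // assumption: LVE ID equals UID
    foreach (array_slice($rows, 1) as $row) {
        $f = preg_split('/\s+/', trim($row));
        if (count($f) < count($headers) || $f[$cols['LVE']] !== $my_id) {
            continue;
        }
        $lcpu = (float) $f[$cols['lCPU']];
        return $lcpu > 0 ? (float) $f[$cols['CPU']] / $lcpu : null;
    }
    return null;
}

// Example gate: only let the crawler run below 80% of *my own* CPU limit.
$ratio = my_lve_cpu_ratio();
$ok = ($ratio === null) || ($ratio < 0.8);
printf("LVE CPU usage: %s, crawler %s\n",
    $ratio === null ? 'unknown' : sprintf('%.0f%%', $ratio * 100),
    $ok ? 'allowed' : 'deferred');
```

Even just a filter that let site owners swap the value fed into the threshold comparison (for example, a ratio like the one above instead of the raw load average) would make the setting meaningful on shared CloudLinux.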
For context: my hosting plan has dedicated LVE limits (CPU, EP, IOPS, memory), and CloudLinux automatically throttles overuse. The crawler workload is small (~280 URLs, ~70s per cycle) and well within my plan’s capacity, but the load limit setting offers no meaningful control in this environment.
Any guidance is appreciated. Thank you!