• Resolved oanishchik

    (@oanishchik)


    Hi there,

    Firstly, after purging all page cache and UCSS/JS cache, launching the crawlers, and waiting for them to finish and generate a fresh page cache, some number of requests always remains in the UCSS request queue (the number varies), and nothing happens to them even when I press 'Force cron'. After pressing it repeatedly and waiting for hours, the requests stay in the queue, and I see no errors in the log (even though logging appears to be set to the maximum level).

    Secondly, the number of un-generated UCSS files exceeds the number of stuck requests in that queue. When I browse the site's pages at random (in incognito mode), I see that UCSS has been generated for only a few pages; the rest report that their UCSS is queued for generation. Moreover, judging by the number of UCSS files generated after a cache purge (each of which is deducted from my quota), clearly too few are being produced. My site has just under 2,000 posts/pages, and UCSS should be generated for all of them. I previously tested generation in all modes, including guest and admin, and with WebP, but I have since left only one 'regular mode'. Cache generation is enabled for both desktop and mobile, so if I understand correctly (I may not), there should be at least twice as many UCSS files: not two thousand, but four. Yet only about one thousand of the quota is used after a cache purge.

    When I later run the crawlers, the caches of all pages are generated. So, if I understand correctly, the crawlers fail to trigger UCSS generation on those pages: nothing new enters the UCSS request queue (only the old stuck requests remain). It is therefore unclear how the missing UCSS files could ever be generated, except perhaps by visiting the specific pages where they are absent.
That doesn't suit me anyway, since I want UCSS to be generated before a human (or bot) visits a page; but I don't even see UCSS being generated after a page has been visited: I open a page in incognito mode, see that its UCSS is queued, close the browser, revisit the page some time later in the same mode, and see that the UCSS has still not been generated. I have purged the cache (including 'Purge All') many times, and nothing helps. In five days of experiments I have burned through 16K of UCSS quota and still cannot get all the UCSS files generated. Can you tell me what I might be doing wrong? Where should I look for possible causes, what should I try, and what other information should I provide to get to the bottom of this? Thank you! Report Number: BHPDKTYP.

Viewing 7 replies - 16 through 22 (of 22 total)
  • Thread Starter oanishchik

    (@oanishchik)

    Here is the situation at the moment. I have only a few requests stuck in the queue, and for some reason UCSS is not generated for those pages. (Although I've noticed before that for pages in this "stuck" list, UCSS can in fact be generated and shown on the frontend.) The log is full of rest_route=/litespeed/v1/notify_ucss, so I don't doubt that UCSS files do get generated and delivered to litespeed/ucss and the frontend. When the cache is purged for some reason and the crawlers do their job, the queue fills with hundreds of requests. But by the time they are all processed, there are only a few hundred files in litespeed/ucss (with about two thousand pages on the site and guest mode + mobile enabled). I'm not sure whether that in itself indicates a problem; a single UCSS file may serve multiple pages. The real problem is that when I browse the site in incognito mode, I can see that UCSS is present on very few pages; the rest show it as queued. Yet the queue in the Dashboard is almost empty (apart from a few pending requests).

    BTW, I've noticed that for a few days now I've been connected to node19 as the UCSS service, whereas before it was always node12 (even after redetect). Maybe that has something to do with the fact that I didn't see rest_route=/litespeed/v1/notify_ucss in the log. This is just a wild guess, of course.

    • This reply was modified 1 year, 4 months ago by oanishchik.
    Thread Starter oanishchik

    (@oanishchik)

    I'm more and more inclined to think that the problem is with node12, or rather with my connection to node12 for UCSS work. For example, yesterday I had to clear the entire UCSS cache after a major style change that affected every page on the site, and then ran the crawlers. As is often the case, I was connected to node12. After spending several thousand UCSS credits, I didn't have a single UCSS file in litespeed/ucss (I had to sign up for every monthly subscription available, since credits are consumed at an enormous rate, but to no avail, and it is impossible to buy the same subscription more than once per calendar month). I tried to reconnect several times, and each time I was reconnected to node12. Eventually, at some point, I was switched to node123. Within a few minutes there were 60 files in litespeed/ucss, and after that the number kept increasing, not as fast, but still increasing. I also noticed that when connected to a node other than node12, the UCSS file is deleted when I purge a page by ID; this does not happen on node12. (I had been trying to use the log to find the name of the file generated for a page so that I could delete it manually.) Everything seems to work fine when I'm connected to another node! Is there a way to ensure that the connection to node12 is never made?

    Plus this: the numbers decrease very quickly (with no apparent effect on UCSS generation) when I force cron while connected to node12. Now, connected to node123 and running force cron, the numbers decrease very slowly; each UCSS takes around 10-15 seconds to complete.

    • This reply was modified 1 year, 4 months ago by oanishchik.
    Plugin Support qtwrk

    (@qtwrk)

    !class_exists('\LiteSpeed\Cloud') || \LiteSpeed\Cloud::save_summary(['server.ucss' => 'https://nodeXXX.quic.cloud']);

    please try a code snippet like this; it will forcefully connect UCSS to nodeXXX.quic.cloud

    Thread Starter oanishchik

    (@oanishchik)

    Where exactly should I insert this code snippet?

    Plugin Support qtwrk

    (@qtwrk)

    it should go in the theme's functions.php, or you can use any plugin that lets you add code snippets
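    For reference, qtwrk's one-liner could be placed in the active theme's functions.php roughly like this (a sketch only: the `init` hook and the `class_exists` guard are my additions, while the `\LiteSpeed\Cloud::save_summary()` call and the `server.ucss` key come from qtwrk's snippet; `nodeXXX` is a placeholder for the node you actually want, e.g. node123 from this thread):

```php
<?php
// In the active theme's functions.php.
// Pin the UCSS service to a specific QUIC.cloud node, per qtwrk's snippet.
// Replace nodeXXX with the desired node (e.g. node123).
add_action( 'init', function () {
    // Guard so the site doesn't fatal if LiteSpeed Cache is deactivated.
    if ( class_exists( '\LiteSpeed\Cloud' ) ) {
        \LiteSpeed\Cloud::save_summary(
            [ 'server.ucss' => 'https://nodeXXX.quic.cloud' ]
        );
    }
} );
```

    The guard mirrors the `!class_exists(...) ||` short-circuit in the original snippet, just written as an explicit `if` for readability.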

    Thread Starter oanishchik

    (@oanishchik)

    Thanks, it worked!

    Can you please tell me how to get UCSS (or CCSS, if that is what is used per page/post) to appear on all pages/posts before/without a visit to each one? If I understand correctly, when a page has no UCSS/CCSS, I can purge the cache and run the crawlers; they will trigger adding that page to the UCSS/CCSS generation queue, cron will sooner or later pass everything from that queue to the cloud, and as the UCSS/CCSS files are generated they become ready to be embedded in the page. However, to actually embed them, either a real visit is required or the crawlers must do the work again. Leaving the first option aside, running the crawlers at this point seems pointless: they will find that all pages are already cached, meaning that everything except the UCSS/CCSS has been done for them.

    Do I need to purge the page cache so that the crawlers, seeing it is missing, trigger a rebuild and at the same time trigger placement of the already-prepared UCSS/CCSS on the pages? Obviously, I can't achieve this by purging the UCSS/CCSS cache, and I could hardly achieve it by purging the CSS/JS cache either, since that would presumably require regenerating the UCSS/CCSS, while my goal is to place already-created UCSS/CCSS on the pages. Purging other caches doesn't seem to have the desired effect of making the crawlers treat all pages/posts as not yet cached and thereby trigger placement of the UCSS/CCSS.

    Do I understand correctly how UCSS/CCSS building works? Is it possible to place UCSS/CCSS on all pages/posts this way, provided they are all ready, i.e. (1) the crawlers 'think' all pages are cached, (2) there are no UCSS/CCSS requests in the queue (at least none stuck), and (3) enough time has passed since the queue emptied that we can assume the cloud has generated everything requested? If so, is it true that the only way to achieve this is to purge the page cache?

    Thanks!

    Plugin Support qtwrk

    (@qtwrk)

    in my test, the crawler does add pages into the URIs, but it probably won't add them all at once due to the limited queue size. And yes, if the crawler sees a cache hit already, it won't trigger the queue again until the page hits PHP as a cache miss, so you may need to give it some time or re-run the crawler on the cache-miss condition multiple times. The crawler doesn't "think" anything; it just sends a request to the website, and the plugin then does the thinking. Once it receives a request, it checks in the database whether UCSS/CCSS exists for that page. If not, it puts the page into the queue and sends it to the generator, then waits for the notification. Once it receives the generated UCSS/CCSS, it purges the related page, and the next time that page is visited by a human or crawler, the UCSS/CCSS is added into the page.
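    The flow qtwrk describes can be sketched in simplified PHP (a rough illustration only; every function and variable name here is made up and does not match the plugin's real internals):

```php
<?php
// Simplified sketch of the UCSS request flow described above.
// All names are hypothetical, not the LiteSpeed Cache plugin's real API.

// A request arrives (from a human visitor or the crawler).
function handle_request( string $url, array &$ucss_db, array &$queue ): string {
    if ( isset( $ucss_db[ $url ] ) ) {
        // UCSS already generated: embed it into the page.
        return "page with UCSS: {$ucss_db[$url]}";
    }
    if ( ! in_array( $url, $queue, true ) ) {
        $queue[] = $url; // queue it and send it to the generator
    }
    return 'page without UCSS (queued)';
}

// The cloud notifies back (cf. rest_route=/litespeed/v1/notify_ucss):
// store the result, dequeue, and purge the related page cache.
function notify_ucss( string $url, string $css, array &$ucss_db, array &$queue ): void {
    $ucss_db[ $url ] = $css;
    $queue = array_values( array_diff( $queue, [ $url ] ) );
    // ...page-cache purge for $url would happen here, so the next
    // visit rebuilds the page with the UCSS embedded.
}
```

    The point the sketch makes is that each page needs two passes: one visit (human or crawler) to queue the UCSS, and a second visit after the notification to actually embed it.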


The topic ‘Can’t manage to create all ucss-es’ is closed to new replies.