Interesting but not sure what can be done. Will take a closer look for the next update. Thank you for the feedback, @nikonn. Feel free to post again with any further ideas, suggestions, questions, etc. Glad to help anytime.
Thread Starter
Nikonn
(@nikonn)
Thank you for your feedback. For your information, and it may interest you: Yandex has instructions for neutralizing such pages. Today I tried to apply those instructions; the results will only show up within about two weeks. If anything becomes clear, I will definitely report back with the result.
Can you share the resource? It might be useful, provide ideas, etc.
Thread Starter
Nikonn
(@nikonn)
No problem. I just don’t want to litter the site of a WordPress community that I respect with various garbage. Just add the yandex.ru domain in front of this path: /support/webmaster/robot-workings/clean-param.html
It looks like adding this to robots.txt is their solution:
User-agent: Yandex
Disallow:
Clean-param: blackhole /
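For context, Yandex’s Clean-param directive takes the form Clean-param: <param> [path-prefix] and tells the robot that the named GET parameter does not change page content, so URLs that differ only by that parameter get consolidated rather than indexed as duplicates. A minimal annotated sketch, assuming the duplicate URLs look like https://example.com/?blackhole=xxxx (the example domain and parameter value are placeholders, not from your site):
# Sketch only; example.com is a placeholder
User-agent: Yandex
# Empty Disallow: nothing is blocked from crawling
Disallow:
# "blackhole" is the GET parameter to ignore; "/" applies the rule site-wide,
# so e.g. https://example.com/?blackhole=abc123 is treated as https://example.com/
Clean-param: blackhole /
With that in place, Yandex should fold the ?blackhole variants back into the canonical URLs instead of listing them as duplicates.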
Is that what you are trying currently?
Thread Starter
Nikonn
(@nikonn)
I read through your threads in tech support and found a similar one, where a user reports that Google shows 50 to 100 duplicated pages (https://wordpress.org/support/topic/blackhole-plugin-is-creating-pages-that-are-being-indexed/). Perhaps this instruction is useful for Google too, although it has its own algorithms.
Thread Starter
Nikonn
(@nikonn)
To be honest, I have tried several options; I don’t know which one will work.
Clean-param: s /?
Clean-param: https://****.ru/?
Clean-param: /?
I found one more option, but I haven’t applied it yet; I’m waiting to see the results first.
Disallow: /*?*
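For reference, my rough reading of how Yandex would interpret each of those lines (a sketch based on the Clean-param syntax, not something taken from the Yandex docs themselves):
# Clean-param: s /?
#   The first field is a GET parameter name, so this targets the WordPress
#   search parameter "s", not the blackhole parameter.
# Clean-param: https://****.ru/?
# Clean-param: /?
#   Clean-param expects a parameter name in the first field, not a URL or a
#   path, so these are unlikely to be read as intended.
# Disallow: /*?*
#   Matches any URL containing "?", i.e. every query-string URL on the site,
#   which is much broader than just the blackhole duplicates.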
I think that issue was due to page caching. The plugin can’t work properly if any sort of page caching is active on the site. I am working on a solution for this.
Thread Starter
Nikonn
(@nikonn)
The only plugins I have installed are Autoptimize and CSS JS Manager, Async JavaScript, Defer Render Blocking CSS.
Yeah, as long as none of them is doing any page caching specifically, there should be no issues. Page caching of any sort will cause problems.
Thread Starter
Nikonn
(@nikonn)
That’s understandable, but while we are looking for a cure, maybe these pills will help in the meantime. After all, a month ago this problem was not observed.
Regarding the robots rules in your comment, I don’t think any of those will work. For example, Disallow: /*?*
will effectively block Yandex from crawling every URL with a query string across your entire site.
From what I understand this is what should work:
User-agent: Yandex
Disallow:
Clean-param: blackhole /
I would recommend trying that (and only that) as the way to go.
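For clarity, here is a sketch of how the finished robots.txt might look. The generic WordPress lines are assumptions for illustration only; the Yandex group is the part recommended above:
# Generic WordPress rules (assumed for illustration; keep whatever you already have)
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

# Yandex-specific group with the Clean-param rule
User-agent: Yandex
Disallow:
Clean-param: blackhole /
Note that Yandex obeys only the most specific User-agent group it matches, so any Disallow rules you also want Yandex to follow would need to be repeated inside its group.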
Thread Starter
Nikonn
(@nikonn)
Okay, now I’ll try to add this to robots.txt
Yes, and remove any other Yandex-related rules (like the ones in your comment). Best not to confuse the robot in any way.