cross-posted from: https://discuss.online/post/32165111
I realize my options are limited, but what about any robots.txt-style steps? Thanks for any suggestions.
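For example, would publishing something like this help at all? From what I've read, the user-agent names below are the commonly documented AI crawlers (OpenAI, Anthropic, Common Crawl, ByteDance, Google's training crawler), though they're worth re-checking, and it only deters bots that choose to honor robots.txt:

```
# Example robots.txt asking the big AI crawlers to stay away.
# Politely ignored by badly behaved bots.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: Google-Extended
Disallow: /
```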
Another option to reduce (but not eliminate) this traffic is a country limit. In Cloudflare you can set a manual security rule to do this; there are self-hosted options too, but they're harder to set up. It depends on what country you're in and where your users are based. My website is a business one, so I only allow my own country (if I'm on holiday I might open up that country to check the site is working, though usually I just use a paid VPN back to my own country, so there's no need). You can also block specific countries instead. Most of my blocked requests are from the USA, China, Russia, etc.
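If you go the Cloudflare route, the custom rule is a one-line expression with the action set to Block, something like this (swap "GB" for your own country code; the field name is from memory, so check it against Cloudflare's Rules language docs):

```
(not ip.geoip.country in {"GB"})
```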
Please don’t do this. It is incredibly hostile to people in the rest of the world; it recreates borders, but on the internet. Even if your website is for a local business, it is still useful to people around the world for browsing, comparing things, and getting ideas. You never know: maybe someone abroad wants to plan ahead before they travel or move, or import your product, or collaborate. Perhaps people from your own country want to browse your site while travelling.
In an ideal world that would be the case, but practically I can’t afford it, and my business is a service based on UK laws and requirements, available to UK residents only. The website is for information only, and nothing on it is new or interesting to anybody but a few potential clients; if they’re looking at it on holiday, there’s something wrong with them! Nobody abroad is going to reach out based on my website, and if they did, I would not trust them at all. They would reach out through personal contacts or LinkedIn. If the bots stop spamming my site and server, I can stop limiting it.
0. Take it off the public internet.
Currently Anubis seems to be the standard for slowing down scrapers:
https://github.com/TecharoHQ/anubis
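The usual deployment is as a reverse proxy in front of your app. A minimal nginx sketch, assuming Anubis has been configured per its README to listen on 127.0.0.1:8923 and to forward verified traffic on to the real app (both addresses are placeholders, not defaults I'd swear to):

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder

    location / {
        # Everything goes through Anubis first; it serves the proof-of-work
        # challenge to suspicious clients and proxies the rest upstream.
        proxy_pass http://127.0.0.1:8923;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```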
There are also various poison and tarpit systems which will serve scrapers infinite garbage text or data designed to aggressively corrupt the models they’re training. Basically you can be as aggressive as you want. Your site will get scraped and incorporated into someone’s model at the end of the day, but you can slow them down and make it hurt.
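The core trick is tiny. A toy sketch of the idea (not any of the real projects) that drip-feeds endless junk to whoever requests it:

```python
# Toy tarpit: streams endless junk text very slowly to whoever connects,
# tying up the scraper's connection. Real projects (Nepenthes, iocaine)
# also generate fake links and Markov-chain text to poison the crawl.
import random
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "foo", "bar", "baz"]

class Tarpit(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        try:
            while True:  # the page never finishes loading
                chunk = " ".join(random.choices(WORDS, k=20))
                self.wfile.write(chunk.encode() + b" ")
                self.wfile.flush()
                time.sleep(2)  # drip-feed to waste the client's time
        except (BrokenPipeError, ConnectionResetError):
            pass  # client gave up

    def log_message(self, *args):
        pass  # keep the console quiet

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), Tarpit).serve_forever()
```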
@potatopotato @selfhosted Black Ice exists. Software is hand-to-hand combat. The most #cyberpunk sentence I’ve read today:
“There are also various poison and tarpit systems which will serve scrapers infinite garbage text or data designed to aggressively corrupt the models they’re training.”
You can always go the tarpit route as well: https://zadzmo.org/code/nepenthes/
Another one: https://iocaine.madhouse-project.org/
But yeah, OP: you can’t reliably stop web scrapers from stealing your data. You can only make it more difficult and costly to do so, at the expense of your own server and, in the case of Anubis, at the expense of your real users.
I plan on switching to a Raspberry Pi-hosted website at some point, so I can add either iocaine or Nepenthes to it. Might as well make most of the data from my website poison to all the scrapers when I get the chance.
You could put your website behind a Cloudflare anti-bot check. But realistically, your website is public-facing and these bots are scraping the public web; they will eventually get the data from your website.
I’m wondering if you could run CrowdSec on the server and manually block the offenders if they aren’t already in the community blocklists.
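Manual bans with cscli look roughly like this, if I remember the flags right (verify with `cscli decisions add --help`):

```sh
# Ban one offending IP for a day (flags from memory; verify locally)
sudo cscli decisions add --ip 203.0.113.42 --duration 24h --reason "AI scraper"

# Or ban a whole range at once
sudo cscli decisions add --range 203.0.113.0/24 --duration 24h --reason "AI scraper"
```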
Isn’t fail2ban a possibility too? I created a filter for ChatGPT’s bot and some others, and it seems to be working. My Radicale server is my only freely accessible service, but it comes with a small web GUI, so the bots showed up. I have no clue whether a bot gets a fraction of the site each time it shows up, but if I remember correctly the ban happens within about 300 ms, so it can’t be that much information…
Setting maxretry to 1 makes it ban on first sight.
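For reference, the setup is shaped roughly like this (the file names and bot list are just examples, and the regex assumes nginx’s standard combined log format, so adjust for your own logs):

```ini
# /etc/fail2ban/filter.d/ai-scrapers.conf  (example name)
[Definition]
# Match any request whose user agent contains one of these bot names;
# nginx combined format puts the user agent in the last quoted field.
failregex = ^<HOST> .*"[^"]*(GPTBot|ClaudeBot|CCBot|Bytespider)[^"]*"$
ignoreregex =

# /etc/fail2ban/jail.d/ai-scrapers.conf
[ai-scrapers]
enabled  = true
port     = http,https
filter   = ai-scrapers
logpath  = /var/log/nginx/access.log
maxretry = 1          # ban on first sight, as mentioned above
bantime  = 86400      # one day
```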
A big issue is that this works for bots that announce themselves as such, but there are lots that pretend to be regular users, with fake user agents and IPs drawn from a random pool, each IP sending only 1-3 requests per day but many thousands of requests overall. In my experience a lot of them come from Huawei and Tencent cloud ASNs.
Yes, if that is true (and I am not that surprised by it), it is nearly impossible to block them this way.
Anubis is your friend