

From the blog post OP linked in a comment:
We made an unrelated change that caused a similar, longer availability incident two weeks ago on November 18, 2025. In both cases, a deployment to help mitigate a security issue for our customers propagated to our entire network and led to errors for nearly all of our customer base.
It seems that the mechanism they use to propagate new security configurations to their servers is not a gradual or group-based rollout: it pushes certain changes to all servers at once, so uncaught bugs end up hitting everything instead of just an initial test group.
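For contrast, here is a rough sketch of what a group-based rollout gate could look like: servers are hashed into percentage cohorts, so a change only reaches a small canary group before it reaches everyone. The stage names, percentages, and server IDs are all invented for illustration; this is not Cloudflare's actual mechanism.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Rollout stages, from a small canary cohort up to the full fleet.
/// Stage names and percentages are made up for this sketch.
const STAGES: &[(&str, u64)] = &[("canary", 1), ("early", 5), ("half", 50), ("all", 100)];

/// Deterministically decide whether a given server receives the new
/// configuration at the current stage, by hashing its ID into 0..100.
fn server_in_stage(server_id: &str, stage_percent: u64) -> bool {
    let mut h = DefaultHasher::new();
    server_id.hash(&mut h);
    (h.finish() % 100) < stage_percent
}

fn main() {
    // A change would only advance to the next, larger cohort after the
    // previous one passed health validation (not modeled here).
    let servers = ["edge-ams-01", "edge-sfo-02", "edge-nrt-03", "edge-gru-04"];
    for (stage, pct) in STAGES {
        let cohort: Vec<&str> = servers
            .iter()
            .copied()
            .filter(|s| server_in_stage(s, *pct))
            .collect();
        println!("stage {stage} ({pct}%): {cohort:?}");
    }
}
```

Per-cohort health validation between stages is what turns this from a merely slower push into an actual blast-radius limit.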
The post goes on:
In particular, the projects outlined below should help contain the impact of these kinds of changes:
Enhanced Rollouts & Versioning: Similar to how we slowly deploy software with strict health validation, data used for rapid threat response and general configuration needs to have the same safety and blast mitigation features. This includes health validation and quick rollback capabilities among other things.
“Fail-Open” Error Handling: As part of the resilience effort, we are replacing the incorrectly applied hard-fail logic across all critical Cloudflare data-plane components. If a configuration file is corrupt or out-of-range (e.g., exceeding feature caps), the system will log the error and default to a known-good state or pass traffic without scoring, rather than dropping requests. Some services will likely give the customer the option to fail open or closed in certain scenarios. This will include drift-prevention capabilities to ensure this is enforced continuously.
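On the first item, a toy illustration of what versioned configuration with health validation and quick rollback might look like. Everything here (ConfigStore, the size-cap health check, the payload strings) is invented for this sketch and is not Cloudflare's design.

```rust
use std::collections::BTreeMap;

/// Hypothetical versioned configuration store: each deployment becomes a
/// new version, and a version is only promoted after health validation.
struct ConfigStore {
    versions: BTreeMap<u64, String>, // version id -> config payload
    active: u64,                     // currently serving version
}

impl ConfigStore {
    fn new(initial: String) -> Self {
        let mut versions = BTreeMap::new();
        versions.insert(1, initial);
        ConfigStore { versions, active: 1 }
    }

    /// Stage a candidate version and promote it only if the health check
    /// passes; otherwise discard it and keep serving the known-good one,
    /// which is effectively an instant rollback.
    fn deploy(&mut self, payload: String, healthy: impl Fn(&str) -> bool) -> u64 {
        let candidate = self.active + 1;
        self.versions.insert(candidate, payload);
        if healthy(&self.versions[&candidate]) {
            self.active = candidate;
        } else {
            eprintln!("version {candidate} failed health validation; staying on {}", self.active);
            self.versions.remove(&candidate);
        }
        self.active
    }
}

fn main() {
    let mut store = ConfigStore::new("rules=v1".to_string());
    // A made-up health check: reject configs over an arbitrary size cap.
    let healthy = |cfg: &str| cfg.len() < 1024;
    store.deploy("rules=v2".to_string(), &healthy); // promoted
    store.deploy("x".repeat(4096), &healthy);       // rejected, stays on v2
    println!("active version: {}", store.active);
}
```

The same pattern would generalize to the rapid threat-response data the post mentions: treat it like a software release, with staged promotion and an always-available previous version to fall back to.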
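On the second item, a minimal sketch of fail-open handling for a corrupt or over-cap configuration file: validate it, log the problem, and keep serving on the last known-good configuration instead of panicking or dropping requests. FeatureConfig, MAX_FEATURES, and the validation rules are hypothetical, not Cloudflare's code.

```rust
/// Hypothetical cap on the number of features a config may carry.
const MAX_FEATURES: usize = 200;

#[derive(Clone, Debug)]
struct FeatureConfig {
    features: Vec<String>,
}

/// Parse and validate a new config. Any problem is returned as an error
/// rather than panicking on untrusted input (no .unwrap()).
fn parse_config(raw: &str) -> Result<FeatureConfig, String> {
    let features: Vec<String> = raw
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty())
        .map(String::from)
        .collect();
    if features.len() > MAX_FEATURES {
        return Err(format!(
            "feature count {} exceeds cap {}",
            features.len(),
            MAX_FEATURES
        ));
    }
    Ok(FeatureConfig { features })
}

/// Fail open: on a bad config, log the error and keep the last
/// known-good configuration so traffic is never dropped.
fn apply_config(raw: &str, last_known_good: &FeatureConfig) -> FeatureConfig {
    match parse_config(raw) {
        Ok(cfg) => cfg,
        Err(e) => {
            eprintln!("config rejected ({e}); keeping last known-good config");
            last_known_good.clone()
        }
    }
}

fn main() {
    let good = FeatureConfig { features: vec!["bot_score".into()] };
    // An oversized file is rejected, but serving continues on `good`.
    let oversized = "some_feature\n".repeat(MAX_FEATURES + 1);
    let active = apply_config(&oversized, &good);
    println!("active features: {}", active.features.len());
}
```

Whether to fail open or closed is a per-service policy question, which is presumably why the post says some services will let customers choose.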







I’ve tried. SO many times. It’s just so damn clunky. I ended up using Krita (also FOSS) instead.