One of the arguments for free and open source software is that its transparency allows anyone to evaluate the quality and security of its code and contribute to its improvement.

I’m curious: how many people actually can do that, and how many actually do?

I’m a user of software, not a programmer, but I do sometimes scan install scripts for URLs I don’t recognise. I’m not at all equipped to map out and dig into the code of what I run, though, nor do I have the time to.
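For anyone who wants to do the same kind of quick pass, here’s a minimal sketch in Python (the file name install.sh below is just an example, not from any particular project) that pulls the URLs out of a script so you can eyeball the domains before running it:

```python
import re
import sys

# Rough pattern; good enough for spotting domains in an install script
URL_RE = re.compile(r"https?://[^\s'\"()<>]+")

def list_urls(path):
    """Print every unique URL found in the given script for review."""
    with open(path, encoding="utf-8", errors="replace") as f:
        text = f.read()
    for url in sorted(set(URL_RE.findall(text))):
        print(url)

if __name__ == "__main__":
    # Example: python list_urls.py install.sh
    list_urls(sys.argv[1])
```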

Do you?

Trying to gauge how this theoretical benefit translates into the real world.

  • whotookkarl@lemmy.dbzer0.com

    As a developer I have reviewed open source code and reported vulnerabilities to maintainers. I think most CVEs are reported by university-organized and university-funded security research groups, but corporations and independent individuals report some too.

    The alternative is closed-source, proprietary software, where security vulnerabilities can be harder to identify because of obscurity, but can also be exploited for longer, since they take longer to find and report while attackers may still be using them. This is what people mean when they say security by obscurity isn’t secure: keeping security groups from reviewing and examining the code may give the appearance of security, but it also keeps them from discovering vulnerabilities that are already being exploited. Measuring these effects would make for an interesting read, though.