This is perhaps a controversial statement from someone who is fed up with all this age verification stuff, but having the user age be set on account creation (without providing ID or anything dumb like that) doesn’t seem that bad.
It just feels like a way to standardise parental controls. Instead of having to roll their own age verification stuff, software like Discord can rely on the UserAccountStorage value.
If it were possible to plug into a browser in a standard, privacy-conscious way, it would also reduce the need for third-party parental control browser extensions, which I imagine can be a bit sketchy.
OSes collect and expose language and locale information anyway. What harm is age bands in addition to that?
In theory yes.
What’s bad, though, is that it’s meaningless. Sure, the OS can say you are 10 years old or 100 years old and you can’t change it… but then you open a page in your browser which runs a virtual machine, and that VM now says you are, arbitrarily, 50 years old. The VM is just another piece of software, but put it in fullscreen (if you want) and voilà, you are back to declaring whatever age you want to any application or Web page within that VM. If that’s feasible (and I fail to see how it wouldn’t be; see countless examples in https://archive.org/details/software or https://docs.linuxserver.io/images/docker-webtop/, even though the latter runs on another machine, so imagine it as a SaaS), then only people who aren’t aware of this might provide meaningful information about their actual age, and even that is temporary, the same way more and more people now learn to use a VPN.
Standardized parental controls would be great, actually. But it should be proper parental controls, not whatever this is. Because at the end of the day, the parent* should be involved in what their child is up to, and allow (or not) based on what the child needs and/or wants, instead of whatever we are doing now.
Or, to put it another way: if your teen has read Game of Thrones (the physical books), I don’t see much of a point in forbidding them from going to its wiki, and I’d be hard pressed to justify stopping them from talking about it online with other people who have read the books. The tools should allow for this kind of nuance, because actual people are going to use them, and these kinds of situations happen all the time.
* some parents are awful and would abuse this, see LGBT+ related things, but that’s a social issue, not a technological issue.
Agreed, but at this point I think it’s worth taking what we can get.
I thought similarly, that a minimally privacy-invasive setup, like sending an “I’m over/under 18” signal that didn’t require verifying government ID, live face scans, or AI “age approximation”, would be a good idea, but I now think this system would fall over very quickly because the client and server cannot trust each other in this environment.
The client app, be it browser, chat, game, etc., can’t trust that the server it is communicating with isn’t acting nefariously, or just collecting more data to be used for profiling.
An example would be a phishing advert that requires a user to “Verify their Discord account”, grabs the username and age-bracket signal, and dumps them into a list that is made available to groomers. [1]
Conversely, the server can’t trust that the client is sending accurate information. [2]
Even in the proposal linked, it’s a D-Bus service that “can be implemented by arbitrary applications as a distro sees fit”; there would be nothing to stop such a service returning different age brackets based on the user’s preference or intention.
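To make the trust problem concrete, here is a minimal sketch in plain Python (no actual D-Bus plumbing, and every name here is hypothetical, not the proposal’s real interface) of why a server-side consumer can’t rely on such a signal: the service simply returns whatever bracket the user, distro, or a patched build configured, with nothing tying it to a real age.

```python
from dataclasses import dataclass


@dataclass
class AgeBracketService:
    """Hypothetical stand-in for a per-user age-bracket service.

    In the linked proposal this would be a D-Bus service; it is plain
    Python here so the trust problem is visible without bus plumbing.
    """

    # Whatever the user (or their distro, or a patched build) chose to report.
    configured_bracket: str = "18+"

    def get_age_bracket(self) -> str:
        # No verification happens here: the return value is purely
        # user-controlled, so any consumer is taking it on faith.
        return self.configured_bracket


# A consuming app has no way to tell an honest report from a spoofed one:
honest = AgeBracketService(configured_bracket="13-17")
spoofed = AgeBracketService(configured_bracket="18+")

print(honest.get_age_bracket())   # 13-17
print(spoofed.get_age_bracket())  # 18+
```

Both calls look identical on the wire; nothing in the protocol distinguishes a truthful implementation from one that always reports “18+”.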
This lack of trust would land us effectively back at the “I’m over 18, honest” click-throughs that “aren’t enough” for lawmakers currently, and I think there would in short order be a requirement for “effective age verification at account creation for the age bracket signal”, with all the privacy-invasive steps we all hate, plus securing these client apps against tampering.
At best, services wouldn’t trust the age-bracket signal and would still use those privacy-invasive steps, joining the “Do Not Track” header and the chocolate teapot for usefulness; at worst, “non-verified clients/servers” (i.e. not Microsoft/Apple/Google/Meta/Amazon created) would be prevented from connecting.
The allure of the laws’ simplicity and minimal impact is what’s giving this traction, and I think the proposals are just propelling us toward a massive patch of black ice, sloped or otherwise.
Having said that, I can’t blame the devs for making an effort here, as it is a law, regardless of how lacking it is.
[1] I realise “Won’t someone think of the children!” is massively overused by authoritarians; give me some slack with my example :)
[2] Whilst the California/Colorado laws seem to make allowance for “people lie”, this is going to get re-implemented elsewhere without these exemptions.
I can see the slippery slope argument; however, it overlooks the fact that countries/states are already willing to implement the non-privacy systems.
If these systems take off, it will give privacy advocates the ability to point at California’s system and say “look, they have a system that is as effective as the strong assurance stuff but without the people sending you angry emails.”
I see it as almost a “reverse slippery slope”: a way for people to push for less strict verification.
Yeah countries and states are relatively happy with the non-privacy systems as they “work”.
My principal problem is that I cannot see this system “working” to the satisfaction of the seemingly incessant voices who don’t want a child to see something they shouldn’t, where “something” is nebulous and seems to change depending on who you ask, and at regular intervals.
I’m probably very jaded. I’d love to be proven wrong and for this system to work as the least-worst option, but I’m in the UK, and recently we seem hell-bent on choosing the worst option offered.
My condolences - I’m in the UK as well and wouldn’t wish that on anyone.
If I may offer an alternate perspective: Politicians don’t actually care about any of this, they just want votes. California’s system allows them to say “Look, we solved child safety!” without having to deal with people complaining about privacy. If there’s an existing system in place, it’s easier for politicians to say “we already solved this!” and ignore those voices.
It also puts the guilt on parents. If this system is in place and you complain about your child seeing tiddy online, the question is going to be “why didn’t you set the age correctly, then?”.
… Of course this might be me just being optimistic. I really hope we, as a species, grow out of this new age puritanism and government overreach.
Currently it’s self-reported, but if it’s complied with, then when they inevitably say it now needs ID, they can just block all the self-reports until ID is provided. This is the same tactic of marginally moving the line that has been happening for years.
Sure. But at that point distros can just say “no use in California lol” and enjoy the free market share from disgruntled totally-not-Californian Windows users.
“It just feels like a way to standardise parental controls.”
Then focus on that instead of pushing age laws.
And we all know this “Think of the children” is never about the children.
Next will be compliance through Secure Boot and TPM.
Isn’t this an example of pushing for standardisation of parental controls?
Standardization of optional parental controls (and accessibility, while we’re at it) would benefit most Linux distros imho.
Someone else brought up in the past few days that parents either don’t know that parental controls like this exist, or they don’t care.
This law puts that age setting front and center and allows apps, like Discord, to say “no <13 year olds”. I think where this maybe gets tricky is if an app says “only <13 year olds”. As people have said, there is nothing stopping people from lying, and that is a two-way street.
No. All this law does is promote more data collection and impose more restrictions.
They don’t care about the children and, even if they did, it’s the parents’ job to parent them.
Leaving it to parents is the reason why we are in this mess.
What reason is that? What mess? I don’t give a shit what other people’s kids do on the Internet.
If somehow age verification is mandated everywhere, this I could get behind. It would be like saying you’re 18+ on a porn website.
It’d be stronger than that, since kids shouldn’t have admin rights on their PCs and couldn’t claim to be over 18.