

The EU can start by finding a way for full-time open source contributors to make a living off it. Solve that problem, and you’ll have plenty of open source projects, as well as open source devs who want to move there.


It’s used outside of the UK too. I’ve seen it used in the US, for example. Usually it’s just a corporate term that says “you’re fired” without saying it. They use terms like these all the time to avoid taking accountability for fucking up someone’s life.


I’m not surprised at all that people removing distractions like social media could “feel their focus sharpening”.
To answer the title, I could not go without my phone because I need it to authenticate to stuff.


The biggest issue with YouTube is that it has no real competition, at least for traditional long-form content. They have no incentive to improve the user experience.


The author seems more interested in generating outrage than anything else, but I think the point about AI still stands. From a UX standpoint, key points that may be incorrect are a terrible idea. That they originally intended to force AI on the user, at least from how it seems, is problematic.
The author’s privacy and accessibility concerns seem artificial to me.


Looks like mostly SteamOS users, which isn’t too surprising. Hopefully we can see that number go up over time as people get sick of Windows.
Slightly off topic, but it looks like they reopened the forums!


The feature was introduced as a way for users to get relevant information faster, by providing them with an image, the webpage title, and AI-generated key points.
The AI part was made optional. That doesn’t mean they didn’t try.


Zen figured out link previews without using AI and the solution is really as simple as it gets. Maybe stop trying to manufacture problems for AI to solve?


Surely you have an example where it’s appropriate for a service to generate nonconsensual deepfakes of people then? Because last I checked, that’s what the post’s topic is.
And yes, children are people. And yes, it’s been used that way.
Edit: as for guardrails, yes, any service should have them. We all know what Grok’s are though, coming from Elon “anti-censorship” Musk. I mentioned that ChatGPT also generates images, and they have very strict guardrails. They still make mistakes though, and it’s still unacceptable. Also, any amount of local finetuning of these models can accidentally delete their guardrails, so yeah.


When someone clicks the “edit” button, I guess.


Ok yes you’re right. “Grok generate me some CSAM” is the same as opening up a photo editor and drawing a new real looking body onto someone’s child and putting it in a new body position. Same exact thing. No different at all. Twitter has no responsibility for running a service that can do this.


Grok, put this “small adult” into a bikini and have her bend over.
Creating nudes without consent, especially CSAM (even with consent), can be extremely illegal. Doing it in a photo editor makes you the responsible one and leaves it only on your device. ChatGPT will attempt to filter it, and their filters lean on the aggressive side, but that’s also between you and OpenAI. Grok will post it publicly.


Modern* “protect the children” bills would be more accurate.
The playbook these days is to use children, terrorism, etc to justify something that fails to address that problem and pushes some other agenda.
This has been true in the US and UK, anyway. I have no clue how true this is in France and I won’t pretend to know, but from some other comments here, the potential implementation could be better than what we’ve seen so far. I hope so, anyway.


Yeah that particular issue doesn’t bother me much anyway, just delays startup by a second or two.


For the past month or so, I’ve been getting “RDSEED32 is broken” and it seems to be an issue with AMD’s drivers? Either way, there doesn’t appear to be a solution for me outside of getting a new CPU, but it also still boots and works so I’m not too bothered by it either.
But when updates roll around? Yeah, usually a good idea to make a backup before updating. Same is true with Windows, of course, but I already expect Windows to need a reinstall every year or so.


Ever since they bumped the min-spec Mac Mini to 16GB RAM, it has looked like such a great deal. The upgrades are still way too expensive (except RAM now, I guess?) but the base model is great.


Apple stores are the embodiment of wasted space.
Otherwise, there are ways to use negative space to help direct the user to important information. It’s just often abused to direct them away from it to sell something, sadly.


I mean, my car is an older Corolla, so almost entirely physical buttons. My phone, however…


Wow, I immediately installed flickboard, thanks!
Ironically, it felt to me like the post deified algorithms itself, but this is the main takeaway:
An “algorithm” is nothing more than a set of instructions to follow to complete some kind of task. For example (and closely related), a sorting algorithm might attempt to sort a list by randomizing the list, then checking if it’s sorted and repeating if not (bogosort).
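To make the point concrete, here’s a minimal sketch of bogosort in Python — it really is just a dumb set of instructions followed to completion, nothing mystical:

```python
import random

def is_sorted(lst):
    # A list is sorted if every adjacent pair is in order.
    return all(lst[i] <= lst[i + 1] for i in range(len(lst) - 1))

def bogosort(lst):
    # Keep shuffling randomly until the list happens to be sorted.
    # Horribly inefficient, but a valid algorithm nonetheless.
    while not is_sorted(lst):
        random.shuffle(lst)
    return lst
```

Calling `bogosort([3, 1, 2])` eventually returns `[1, 2, 3]` — after an unpredictable number of shuffles, which is the joke.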
Lemmy uses an algorithm to sort posts by “most recent”, for example, and I think that having a “most recent” sorting option is noncontroversial.
Where algorithmic feeds become problematic, in my opinion, is when they start becoming invasive or manipulative. This is also usually when they become personalized. Lemmy, Reddit (within a subreddit), and other kinds of forums usually do not have personalized feeds, and the sorting algorithms for “hot” are usually noncontroversial (maybe there’s debate about effectiveness, but none usually about harm). Platforms like FB, Twitter, TikTok, Instagram, YT, etc all have personalized feeds that they use personal data to generate. They also are the most controversial, and usually what is referred to as “algorithmic” feeds.
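For contrast, a non-personalized “hot” sort can be written in a few lines. This sketch is loosely modeled on the hot-ranking formula Reddit open-sourced years ago, but the decay constant and the details here are illustrative assumptions, not any site’s actual code — the point is just that it uses only public post data, no personal data at all:

```python
import math
from datetime import datetime, timezone

def hot_score(upvotes: int, downvotes: int, posted_at: datetime) -> float:
    # Illustrative "hot" ranking: log-scaled votes plus a recency boost.
    # Uses only public post data -- nothing about the viewer.
    score = upvotes - downvotes
    order = math.log10(max(abs(score), 1))  # first votes matter most
    sign = 1 if score > 0 else -1 if score < 0 else 0
    # Assumed decay constant: ~12.5 hours of age offsets one order of
    # magnitude in votes. Purely a tunable knob, not a real site's value.
    return sign * order + posted_at.timestamp() / 45000
```

Every viewer sees the same ranking from this function, which is why feeds like this are rarely controversial compared to personalized ones.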
These personalized feeds are not magic. They often include ML black boxes in them, but training a model isn’t sorcery, nor are any of the other components to these algorithms. Like the article mentioned, they are written by people, and can be understood (for the most part), updated, and removed by people. There is no reason a personalized feed is required to invade your privacy or manipulate you. The only reason they do is because these companies are incentivized to do so to maximize how much ad revenue they make off you by keeping you engaged for longer.