

I’m pretty sure whatever voice system you’re using is just transcribing things to text and feeding it into an LLM, so it wouldn’t actually have that audio data. I’m not aware of any audio equivalent of LLMs existing.
Sandwiches for sale! Can’t afford it? No problem! 30% discount. I’ll just cut it and toss one of the halves in the trash for you.
It wasn’t that long ago that it was unfathomable for anything other than humans to be able to do this.
As a researcher, a good chunk of my work is literally just sitting on my ass and thinking. Or thinking while taking a walk in the park, or thinking while mindlessly chopping wood in a video game. Now with a kid, it’s kind of switched to thinking about what to do for dinner, how I can get the chores done for the day, or how to organize my time so that I can fit in a few hours of work. It’s work in the sense that it’s something that needs to be done and there’s an energy cost to doing it. It’s also not really something you can turn off even if you wanted to.
I don’t know if it’s the same in Europe, but here in Canada, I’ve only seen the option to trade in old phones when you’re buying one of the fancier phones with a bunch of bells and whistles I don’t need. There’s no way they would give me enough for this phone to make up for the price difference.
Also, 40 months is an unusually long time to be holding on to the same phone? What?
Regarding your last point, you could in theory also penalize marking non-AI-generated images as AI-generated.
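Something like this hypothetical scoring rule, just to illustrate the idea (the function name and point values are made up, not anything the site actually uses):

```python
def score_guess(actually_ai: bool, guessed_ai: bool) -> int:
    """Hypothetical scoring: correct calls earn a point,
    flagging a real image as AI-generated costs one."""
    if guessed_ai == actually_ai:
        return 1    # correct either way
    if guessed_ai and not actually_ai:
        return -1   # penalty for a false "AI" call
    return 0        # missed an AI image: no points, no penalty
```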
If you’re your own audience, then you can keep the joke in your head. Posting it on Lemmy broadcasts it to everyone here, and that makes all of us your audience.
It only works as a joke without the /s when your audience knows you well enough to know it’s meant as a joke. That does not apply to you when you post on Lemmy.
Ah, so a trans pose.
Nice.
The same could be said for when Meta “open sourced” their models. Someone has to do the training, or else these models wouldn’t exist in the first place.
Unlike conventional batteries, supercapacitors have an exceptionally long lifespan, lasting hundreds of thousands of charge-discharge cycles, whereas lithium batteries typically last only five years or less.
So, what’s the conversion rate between charge-discharge cycles and years?
Their privacy policy: https://www.fossify.org/policy/clock/
Quickly filtering them down to a subset to prioritize, so that we get the most value possible out of the time that humans spend on it.
LLMs cannot:
LLMs can
Semantics aside, they’re very different skills that require different setups to accomplish. Just because counting is an easier task than analysing text for humans doesn’t mean it’s the same for an LLM. You can’t use that as evidence for its inability to do the “harder” tasks.
Sounds to me like a 50% improvement over zero human eyes.
It certainly would be. Thankfully, there’s many more than zero human eyes involved in this.
Considering that it’s a language task, that LLMs exist, and the cost, it’s a reasonable assumption. It’d be pretty silly to analyse a bag of words when you have tools you can use with minimal work and much better results. Even sillier to spend over $200 for something that can be run on a decade-old machine in a few hours.
Having come from the world of C++, this was a huge step up.
mathematically “correct” sounding output
It’s hard to say because that’s a rather ambiguous way of describing it (“correct” could mean anything), but it is a valid way of describing its mechanisms.
“Correct” in the context of LLMs would be a token that is likely to follow the preceding sequence of tokens. In fact, the model computes a probability for every possible token, then takes a random sample according to that distribution* to choose the next token, and it repeats that until some termination condition. The training side of this is what we call maximum likelihood estimation (MLE) in machine learning (ML): we learn a distribution that makes the training data as likely as possible. MLE is indeed the basis of a lot of ML, but not all of it.
*Oversimplification.
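For the curious, here’s a minimal sketch of that sampling step in Python. The vocabulary, the scores, and the `<end>` token are made up for illustration; a real LLM scores tens of thousands of tokens, recomputes those scores from the whole preceding sequence after every step, and usually tweaks the distribution (temperature, top-k, etc.) before sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0):
    """Turn per-token scores into probabilities, then draw one token at random.

    Higher-probability tokens come up more often, but not always; this is the
    "random sample according to that distribution" step described above.
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                        # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum() # softmax
    return rng.choice(len(probs), p=probs)

# Toy vocabulary and made-up scores; a real model recomputes the scores
# after every token it emits.
vocab = ["the", "cat", "sat", "on", "mat", "<end>"]
logits = [1.5, 0.8, 0.2, 0.1, 0.4, -0.5]

generated = []
while len(generated) < 10:
    token = vocab[sample_next_token(logits)]
    if token == "<end>":                          # termination condition
        break
    generated.append(token)

print(" ".join(generated))
```

Running it a few times gives different continuations, which is exactly why the same prompt doesn’t always produce the same answer.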
Yep. It’s part of their mating ritual. You can learn more about it at c/fuckcars.
But… why? Just give me the full image.