

You’re not required to use it.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.




It’s serving the will of prudes, religious fruitcakes, inattentive parents, the technologically illiterate, and anyone dumb enough to be taken in by the “think of the children!” rhetoric of the control-freaks.
Unfortunately this is a rather large constituency.
Ah. After poking around in the Gradio UI a bit, I found an “Enable ADG” option, but its tooltip says “Angle Domain Guidance”. Same thing?
I’m a programmer, but sometimes with AI I feel like a primitive tribesperson blindly attempting various rituals in an effort to appease the machine spirits. Eventually something works, and then I just keep on doing that.
Edit: I have angered the gods! My ritual failed! When I enabled ADG the spirits smote me with the following:
RuntimeError: The size of tensor a (11400) must match the size of tensor b (5700) at non-singleton dimension 1
Guess I won’t be trying that for now. :)
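For what it’s worth, that error is the standard broadcasting rule complaining: two tensors can only be combined along a dimension if the sizes match or one of them is 1. A pure-Python sketch of the check (the function name is mine; the numbers are from the traceback above, and 11400 is exactly 2 × 5700, which smells like a doubled sequence length somewhere):

```python
def broadcast_compatible(size_a: int, size_b: int) -> bool:
    # Sizes along a dimension are compatible if they are equal,
    # or if either is 1 (a singleton that can be stretched).
    return size_a == size_b or size_a == 1 or size_b == 1

print(broadcast_compatible(11400, 5700))  # False: hence the RuntimeError
print(broadcast_compatible(5700, 1))      # True: a singleton broadcasts
```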
ADG == Audio-Driven Guidance? I haven’t played around with that part much. I tried it out and couldn’t get it to work, but it turned out that the reason ACE Step wasn’t working was unrelated to ADG, and I only figured out what was wrong after I stopped experimenting with it. So I haven’t gone back to try it again.
I’m not really much of a music connoisseur, I just know what I like when I hear it. So mostly I just put together lyrics and then throw them at the wall to see what sounds good. :)
It’s the one I use, so don’t expect miraculous improvement. :)
I’d love to hear what local model you settle on for lyrics, I’ve been having a lot of fun with ACE-Step 1.5 but the lyric generator it’s bundled with produces semi-nonsense lyrics that have nothing to do with what I prompt it with. Which is actually kind of fun in its own way, I literally never know what the song’s going to be about, but I’d like a little control sometimes too. :)


When the regular controller of the car - be it human, another AI, whatever - isn’t sending control signals, then the onboard controller knows that the car is uncontrolled. Of course it’s a “failure scenario”, I’m suggesting that this chip would be ideal for picking up when that sort of thing happens. The alternative is to just fall over.
I, too, am not sure what you’re arguing. I suggested that a low-power high-speed AI chip like this would be ideal for putting in robots, which have power constraints and aren’t always in reliable contact with outside controllers. That’s a very broad “niche” indeed. I don’t know what all this landmine stuff or probabilities of brake-slamming is all about or how it relates to what I suggested.
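The failover idea here is essentially a heartbeat watchdog: if the external controller’s signals stop arriving for long enough, onboard logic knows it’s on its own and takes over. A minimal sketch of the pattern (class name, method names, and the timeout value are all mine, purely for illustration):

```python
import time

class Watchdog:
    """Detects loss of external control by watching for a heartbeat timeout."""

    def __init__(self, timeout_s: float = 0.5):
        self.timeout_s = timeout_s
        self.last_signal = time.monotonic()

    def signal_received(self):
        # Called whenever a control signal arrives from the external controller.
        self.last_signal = time.monotonic()

    def external_control_alive(self) -> bool:
        # False once no signal has arrived within the timeout window;
        # that's the cue for the onboard controller to take over.
        return time.monotonic() - self.last_signal < self.timeout_s

wd = Watchdog(timeout_s=0.5)
wd.signal_received()
print(wd.external_control_alive())  # True right after a signal
```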


I have no idea what you’re thinking the scenario is here. The alternative is an uncontrolled car, I think I’d rather it had at least some brains behind the decisions it’s making.


Why doesn’t it work in those contexts? It’s better than nothing in those contexts too. I’d rather have a car with onboard intelligence to take over than an uncontrolled one.
I think you’re letting the perfect be the enemy of the good, here. There are plenty of situations where you don’t need a robot to behave perfectly. People don’t behave perfectly.


To what tasks could you set a bot that does stuff with minimal competence let’s say 90% of the time, and the other 10%, doesn’t create even bigger problems?
Sounds like a typical human to me.
A chip like this would be perfect for an autonomous robot. Drone, humanoid, whatever - something that still needs to be able to handle itself when it’s cut off from outside control. Always nice to have an internet connection to draw on a bigger, more capable “brain” somewhere else, but if that connection is lost you want it to be able to carry on with whatever it’s doing and not just flop over limply.


What I’m pointing out is that this target audience of AI haters is actually the whole gaming community.
Where do you get that from?


They titled it with the objective of getting clicks. OP chose to post it here with the objective of getting upvotes. Same basic goal.


According to “The Evolving Ecosystem”, a recent Connected Intelligence® report from Circana, LLC, 86% of U.S. consumers 18+ are aware of AI in smartphones and other technology devices
[…]
Of consumers who are aware of AI, 65% are interested in AI features coming to at least one of the device types studied — most commonly the smartphone. This figure rises to 82% of consumers between ages 18 and 24 and steadily declines among older groups.
So, an alternative headline that would be just as truthful: “A majority of US consumers are interested in AI features coming to their devices.”
That’s not going to get the upvotes here, though.
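The “majority” framing just comes from multiplying the two figures in the quote:

```python
aware = 0.86                # 86% of consumers are aware of AI
interested_if_aware = 0.65  # 65% of those are interested in AI features
print(round(aware * interested_if_aware, 3))  # 0.559 -> about 56%, a majority
```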
Oo. I use Qwen3-30B-A3B-Thinking-2507 as my generic “workhorse” local LLM, so this looks like it might be a nice upgrade with exactly the same basic specs. I’ll try it out.


Ethereum’s got a market cap of $350 billion and it’s where all the new development is going on; according to the Electric Capital Developer Report, it has by far the most developers working on and with it. Approximately 65% of all new code written in the entire crypto industry is written for Ethereum or its Layer 2 scaling solutions (like Arbitrum, Optimism, and Base).
It’s spelled “Dogecoin,” by the way.


I said no major cryptocurrency. Monero’s got a market cap of $8 billion, it’s small fry.


No major cryptocurrency has used GPUs for mining for many years. Bitcoin uses completely custom ASICs and Ethereum switched away from proof of work entirely.


I guess we’ll see what people here find to complain about now.


A technology I’ve been eagerly anticipating for many, many years now. It still sounds like it’s in the “Real Soon Now, honest!” phase, though:
In the next 18 months, the company hopes to have a field-deployable read device that customers can use to read archived data. But SPhotonix isn’t presently targeting the consumer market. Kazansky estimates that the initial cost of the read device will be about $6,000 and the initial cost of the write device will be about $30,000.
[…]
“We need another three or four years of R&D to get it to the production and marketing standpoint,” Kazansky said.
[…]
“We are not aiming to become a manufacturing company,” said Kazansky. “We are a technology licensing company. We love the model of Arm Holdings. And to a certain extent, we love the model of Nvidia. So we are developing the enablement technology, and then we’re going to be forming some form of a consortium, some form of a group of companies that will help us to bring this technology to market.”
Which is where it’s been for all of those many years I’ve been anticipating it. But who knows, perhaps this will be the company to finally start selling them. I’m fine with them being expensive at first, the cost will come down if they take off.
Where are you getting these limitations from? They’re not in that article, and I went to the project’s page to double check and they’re not there either.
At this point that’s basically anything, including all the popular open frameworks for running local AIs.
What? This is like setting a cron job. Does cron remove your ability to make decisions or understand what is happening?
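For comparison, here’s the whole of what “setting a cron job” amounts to — one scheduled line in a crontab (the script path is hypothetical):

```crontab
# m h dom mon dow  command
0 3 *  *   *       /path/to/nightly-task.sh   # hypothetical: run daily at 03:00
```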
It’s open source, like the other projects Mozilla maintains. Do you apply this “they could take it away from us at any time!” concern to Firefox as well?
Any source for this? Seriously, I know there’s a lot of anti-AI sentiment around here but you’re hallucinating worse than Gemini.