• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: August 4th, 2023


  • I don’t know that Microsoft has any business trying to make Windows support these devices better…

    Windows is entirely built around two pillars:

    1. Enterprise support for corporations, and team machine management
    2. Broad, open compatibility, so it can run almost any hardware you put into it or plug into it, with backwards compatibility for all of that for as long as possible.

    Portable game machines are not an enterprise product. Nor do you care about broad hardware support or upgradability. Nor do you care about plugging in your parallel port printer from 1985. Nor do you care about running your ancient VB6 code that drives your production machines over some random FireWire card.

    Windows’ goal is entirely oppositional to portable gaming devices. It makes almost no sense for them to try to support it, as it’d go against their entire model. For things like these, you want a thin, optimized-over-flexible, purpose-built OS that does one thing: play games. Linux is already built to solve this problem way better than Windows.

    But, Microsoft will probably be stupid enough to try anyway.





  • Google Assistant is definitely getting worse and worse all the time. When the Google Homes were first released, they were actually pretty useful and handy. I was willing to pick a few up, and they served a good purpose. They ran CIRCLES around Alexa and all those.

    Now, many years later, the devices don’t hear questions correctly, I have to ask them four different times, they can’t even pick up my wife’s prompt words anymore, and they don’t give reasonable answers even when they do get the question right… It’s made hundreds of dollars worth of devices infuriating and useless.

    I bought a product that worked. It no longer works because it’s been “updated”.


  • I’m not up on EU politics all that much, so I hope someone more informed comes along and posts a better answer, but…

    My distant view + guess as to why it’s different is that they have more than one party. Partisanship is at its worst when there are only 2 of you, as demonstrated by the US system - it’s all finger pointing and “us vs them”, which just polarizes everything.

    In the EU there are (at least?) 7ish “major” political parties, and while some are bigger than others, many actually hold seats and power, unlike the US Green and Libertarian “parties” that are essentially meaningless.

    As such, any “partisanship” seems at least less extreme. It’s a lot harder to crucify one bad guy when your time and attention is split between 6 “bad guys”. And different parties back different things, so even if 3 were anti-abortion, you’d have to split your slander and hate among three different groups with different OTHER ideas. So it gets a bit lost in the sauce.

    And on the other side, if you take a strong stance on one issue (such as this one), there are likely multiple parties on your side for that issue, since there are unlikely to be 7 distinct opinions - and even if there are, the similar ones can “tag team” a little bit since they’re more in line with each other than with the opposing sides.

    If you’ve ever played video games, games with more than 2 teams play very differently than ones that are just one team versus the other. The dynamics are much more complicated and constantly evolving compared to a simple “team A vs team B”.

    As such, my understanding is that all of these extreme takes are severely diluted since there are more shades of gray and more nuance to the conversation and not just a constant “red vs blue”.



  • AI Fleets don’t solve the massive space problems that roads take up and the forced sprawl that is required to accommodate bigger and bigger vehicles.

    They most certainly do. If everyone can just freely hail an autonomous vehicle from a stack sitting outside the place they just left, they don’t all need to bring their own cars into said area. This saves substantially on parking, which is far and away the biggest contributor to said “sprawl”.

    And there’s no reason those vehicles need to be big either. So that solves your other problem too.

    Anything a car can do can be accomplished more efficiently and faster by non-car transportation.

    This is almost entirely false. Cars end up “losing” because of problems like the parking issue above, and many of those problems are simply removed by autonomous vehicles.


  • I do not think any system can be trained for these situations and the environment on small hardware. I see that as a direct conflict with capitalism that is motivated to minimize cost at the expense of hundreds of milliseconds. I don’t trust any auto manufacturer to use adequate hardware for the task.

    Don’t get me wrong, I do think this is a valid concern, but I also don’t think we need to achieve “perfection” before this should be allowed on the road. I don’t trust every random human to pay attention, be able to see well enough, notice the weird stuff, or know how to react to it adequately. I think the sheer number of bicycle accidents alone shows that our current systems and infrastructure don’t work for this.

    If cars were fully autonomous, we could give them all strict rules. It would be easier to lay down rules for how cyclists should be treated by moving vehicles, and riders could count on that being the case. We try to do this with road rules now, but many drivers just straight up ignore them. And that makes cycling hard, because you never have any idea what any single driver is going to do.

    A bit more soapboxy, but self-driving cars should immediately abort any time they see shit they don’t recognize. Sure, in some ways that’s easier said than done, but having good mechanisms for “how to safely stop when something weird happens” is critical here. And it makes a lot more of the “what do we do about weird shit in the street” questions a lot easier.
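    To make that concrete, here’s a minimal sketch of that “abort when something looks weird” rule. Purely illustrative - the labels, threshold, and function names are all made up, not any real vendor’s stack:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # what the perception stack thinks it sees
        confidence: float   # how sure it is, 0.0 to 1.0

    # Hypothetical set of object classes the planner knows how to handle.
    KNOWN_LABELS = {"car", "pedestrian", "cyclist", "traffic_cone"}

    def plan_next_action(detections: list[Detection], min_confidence: float = 0.7) -> str:
        """Fall back to a minimal-risk stop whenever perception is unsure."""
        for d in detections:
            if d.label not in KNOWN_LABELS or d.confidence < min_confidence:
                return "SAFE_STOP"  # unrecognized or low-confidence object: abort
        return "CONTINUE"

    # A confident detection of something the planner doesn't know forces a stop.
    print(plan_next_action([Detection("mattress_in_road", 0.95)]))  # SAFE_STOP
    ```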

    And to another point, maybe cars just need hard dedicated lanes that cyclists aren’t allowed in, more like a tram or city subway. And if people know to only cross at designated areas, it makes a lot of this a lot easier too.

    And yes, capitalism makes a lot of this harder. I totally agree with you there. But this is something that should drastically save lives in the long run, so we might need to hold the fucking capitalist machine at bay while we do it.

    I think we still need real dedicated tensor math hardware before anything like self driving cars will be ethically possible. With the ~10 year cycle of any new silicon we are probably 5-8 years out before a real solution to the problem… As far as I can tell.

    This gets into the “too hard to articulate through text” zone, but I’ll just say that I think this is less far off than you think. For one, dedicated tensor hardware does exist and has existed for almost ten years at this point. It’s been commercially available for at least 5, IIRC.

    And for another, while lots of LLM-type work is extremely intensive, lots of this object recognition stuff is actually much easier. The “training” side is the real expense, but it only needs to be done “once” by the manufacturer and exported to each car. The amount of compute needed by each car is much lower, and that kind of inference has run pretty fast on consumer hardware for many years.

    It’s definitely still hard, but it’s not like we’re orders of magnitude out of reach and waiting on significant hardware breakthroughs - we’re essentially at the “this is mostly manageable with current hardware, but hardware evolves quickly anyway” stage.
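    As a rough illustration of that train-offline / infer-on-device split - just a sketch, with a stock torchvision detector standing in for whatever a manufacturer actually ships:

    ```python
    import torch
    import torchvision

    # The expensive part (training) happened once, offline; we just load the result.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # Stand-in for a single camera frame; a car would process a stream of these.
    frame = torch.rand(3, 480, 640)

    with torch.no_grad():  # pure inference: no gradients, a fraction of training cost
        detections = model([frame])[0]

    # Keep only confident detections; anything below threshold warrants caution.
    keep = detections["scores"] > 0.8
    print(detections["labels"][keep], detections["scores"][keep])
    ```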


  • As somebody fairly well-versed in the tech, and have done more than just play around with ChatGPT

    Lol. See above. And below. Being “fairly well versed” in ChatGPT gives you basically zero expertise in this field. LLMs are a tiny little sliver in the ocean of AI. No one uses LLMs to drive cars. They’re LANGUAGE models. This doesn’t translate. Like, at all.

    Experts in the AI field know much more than some random person who has experimented with a “fad” of an online tool that gained massive popularity in the past year. This field is way bigger than that and you can’t extrapolate LLMs to driving cars.

    I can tell you that self-driving AI is not going to be here for at least another 40-50 years. The challenges are too great, and the act of driving a car takes a lot of effort for even a human to achieve.

    This is a fucking ludicrous statement. Some of these systems are already outperforming human drivers. You have your head in the sand. Tesla and Cruise are notoriously poorly performing, but they’re the ones in the public eye.

    When we have these cars that are sabotaged by a simple traffic cone on the windshield, or mistakes a tractor-trailer for the sky,

    If you don’t understand how minor these problems are in the scheme of the system, you have no idea how any of this works. If you do some weird shit to a car, like planting an object on it that normally wouldn’t be there, then I fucking hope to God the thing stops. It has no idea what that means, so it fucking better stop. What do you want from it? To keep driving around doing its thing when it doesn’t understand what’s happening? What if the cone then falls off as it drives down the highway? Is that a better solution? What if that thing on its windshield it doesn’t recognize is a fucking human? Stopping is literally exactly what the fucking car should do. What would you do if I put a traffic cone on your windshield? I hope you wouldn’t keep driving.

    When we have these cars that are sabotaged by a simple traffic cone on the windshield, or mistakes a tractor-trailer for the sky, then we know that this tech is far worse than human drivers

    This is just a fucking insane leap. The fact that they are still statistically outperforming humans while still having these problems says a lot about just how much better they are.

    Level 5 autonomous driving is simply not a thing. It will never be a thing for a long time.

    Level 5 is just a harder problem. We’ve already reached level 4. If you think 5 is going to take more than another ten to fifteen years, you’re fucking insane.

    Billions of dollars poured into the tech has gotten us a bunch of aggressive up-starts, who think they can just ignore the fatalities as the money comes pouring, and lie to our face about the capabilities of the technology. These companies need to be driven off of cliff and buried in criminal cases. They should not be protected by the shield of corporate personhood and put up for trial, but here we fucking are now…

    This paragraph actually makes sense. It’s the one redeeming chunk of your entire post. Everything else is just bullshit. But yes, this is a serious problem. Unfortunately, people can’t see the nuance in stuff like this, and when they see it they go straight to “AI BAD! AUTONOMOUS VEHICLES ARE A HUGE PROBLEM! THIS IS NEVER HAPPENING!”.

    Yes, there are fucking crazy companies doing absolutely crazy shit. That’s the same in every industry. The only reason many of these companies exist and are allowed to operate is because companies like Google/Waymo slowly pushed this stuff forward for many years and proved that cars could safely drive autonomously on public roads without causing massive safety concerns. They won the trust of legislators and got AI on the road.

    And then came the fucking billions in tech investment, in companies that have no idea what they’re doing, putting shit on the road under the same legislation without the same levels of internal responsibility and safety. They have essentially abused the good faith won by their predecessors, and the governing bodies need to fix this shit yesterday to get this dangerous shit off the streets. Thankfully that’s getting attention NOW and not after things get worse.

    But overwhelmingly slandering the whole fucking industry and claiming all AI or autonomous vehicles are bad is just too far off the deep end.



  • While I agree with your assessment, I just don’t think capitalism, at least in its current form, is equipped to handle this at all. You could say this is due to our government’s ineptitude, but we are not addressing these problems appropriately.

    Our regulatory bodies are being constantly undermined by out-of-control presidents and Congress. And the people making the laws about these things do not even begin to understand what they’re legislating (see: “does TikTok use wifi?”, etc).

    Regulatory bodies were made to fill this gap and fix this problem, but they are actively being meddled with, strong-armed, and twisted into political entities.


  • Oh, don’t get me wrong, I totally agree with you there. I was not trying to argue the ethical dilemma at all here - I was just stating that the original comment was objectively wrong in its analysis of “we don’t have anywhere near the tech to be able to even begin to get near a workable solution here”.

    But the ethics and morality questions are still very much unanswered right now.

    IMO, the answer to all your questions is that companies are jumping on this way too fast (some more than others) and not doing it safely, and the collateral damage is becoming way too high. Our government and regulators are nowhere near equipped to solve this problem either. And our entire financial system, which pushes for constantly increasing profits, is not equipped to make sure this plays out safely, since that would require accepting losses and slow evolution now in order to safely reach a long-term goal.

    An argument could be made that the “collateral damage” is warranted since autonomous vehicles will save so many lives in the long term, but that’s a hard question to answer. I generally think there’s too much “firing from the hip” going on at the moment. Tesla and Cruise are currently demonstrating just how much we shouldn’t be trusting these companies. I think Waymo has generally been “acceptable” in terms of risk and safety, but not everyone is operating the way they are.




  • In ways yes, in ways no. LLMs are a tiny sliver of AI. Taking the current state of LLMs being oversold as AGI and trying to extrapolate that to other applications or other strategies requires a whole series of oversimplifications. AI is not one single thing.

    It’s like seeing someone trying to hammer in a screw and saying “anyone playing with tools right now knows they’re never going to get a screw in.” But you’ve never seen a screwdriver.

    If you were saying “visual machine learning for a general purpose and unassisted driverless car is not happening tomorrow”, then sure.

    But things like the Waymo model are doing exceedingly well right now. Instead of taking a top-down approach of training cars to understand any intersection or road they could ever run into, they’re going bottom-up by “manually” training them to understand small portions of cities really well. Waymo’s problem set is greatly reduced, its space is much narrower, it’s much more capable of receiving extremely accurate training data, and it performs way better because of it. It can then apply all the same techniques for object and obstacle detection other companies are using, but large swaths of the problem space are entirely eliminated.

    The hardware to solve those problems is much more available now. Doing the computationally intensive stuff offline at a “supercomputer center” and leaving only small, trivial work to the cars themselves is very much a possibility. The “situational adaptability” required can be greatly reduced by limiting where the cars go, etc.

    The problems these cars are trying to solve have some overlap with your LLM experience, but it’s not even close to the same problem or the same context. The way you’re painting this is a massive oversimplification. It’s also not a problem anyone thinks is going to be solved overnight (except Elmo, but he’s pretty much alone on that one) - they just know we’re sitting just over the cusp, and the first company to solve it is going to have a huge advantage.

    Not to be rude, but there’s a reason field experts are pushing this space while LLM hobbyists are doubting it. LLMs are just a tiny subset of AI, and as a hobbyist you’re likely looking at one tiny slice of the pie and missing the huge swaths of other information in the nearby spaces.


  • Ottomateeverything@lemmy.world to Linux@lemmy.ml · Gamedev and linux · 1 year ago

    If you’re an engine developer, it’s a reasonably common problem.

    If you’re a game developer using a cross platform engine, it’s pretty uncommon, as the engine developer has already accounted for most of it.

    If you’re somewhere in the middle, it’s probably somewhere in the middle.
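    Most of what the engine “accounts for” is mundane stuff. As a rough sketch (Python for brevity - the paths and names here are just illustrative, not any particular engine’s API), this is the kind of platform shim an engine buries so game code never sees it:

    ```python
    import sys
    from pathlib import Path

    def save_dir(game: str) -> Path:
        """Return the platform-appropriate save directory - the sort of
        detail a cross-platform engine handles on the game's behalf."""
        home = Path.home()
        if sys.platform.startswith("linux"):
            return home / ".local" / "share" / game            # XDG default
        if sys.platform == "darwin":
            return home / "Library" / "Application Support" / game
        return home / "AppData" / "Roaming" / game             # assume Windows otherwise

    print(save_dir("MyIndieGame"))
    ```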

    It surprises me how many indie devs avoid some of the higher-level / more popular engines for this reason alone. But I assume they must just enjoy that sort of stuff much more than I do.