Search is being enshittified to push AI as next-gen search. Next-gen search can map an individual's behaviour in a reactive but also a probing manner. This forms the basis of a real-time info tool, delivered through Google Glass-type devices: it tells you your opponent's weaknesses, summarises facts and counterarguments, and flags when threats should be made, all built on a detailed personal profile of a person who was a nobody. Really tailored blackmail for everyone, everywhere, mapped out from the day they were born and mamma put the pics on Facebook.

Every flavour of authoritarian government wants this ability, and almost everyone else for that matter. Imagine going into the “diplomatic trades”, whether corporate, government, criminal or personal, with a Google Glass partial-VR data feed projecting your opponent’s every weakness. Epstein isn’t the only way to manipulate the powerful.

Search, as well as forums like this, and especially AI, all feed into the interface and can probe and map your attitudes, motivations and reactions, so LLMs can become better socio-econo-political machines.

Datacentres are being built to host a lifetime, tailored LLM that maps every single human who has interacted with a networked device in any corporate, government, military, retail or personal capacity, and models each personality individually. You and everything about you are your LLM’s training data, as everyone else is to theirs. A group controlling this information in real time can manipulate people in power in real time: a 24/7 LLM that simulates you based on your whole life, plus one for everyone you know, and eventually everyone on earth.

The holders of this information, impossibly few in number, would have the information advantage in every contest for power at any level, plus a blackmail machine. Having an affair? Didn’t pay your taxes? Who should we interview, and what questions should we ask them as a “reporter”? What threat should we make to optimise his compliance? How can we Cambridge Analytica our way to absolute power without offering any coherent or logical platform? Just scientifically engineered by people smart enough to work at a Big Three consulting firm. Like Bannon from Bain, or like everyone from McKinsey, BlackRock, Goldman Sachs, Musk, Thiel and all the others. At the very least, they would know what kind of hooker to blackmail the general with at the defence conference in Dubai, France, China, the US, Singapore, Switzerland, Germany, Canada and all the others. Get ’em drunk or drug ’em lightly, present the perfect hooker based on Pornhub search history, and voilà! A general willing to be compromised via blackmail. Scientifically engineered blackmail.

The “AI” would also provide intelligent expert-system analysis to consolidate power in retail, wholesale and manufacturing markets. Private equity firms will literally pirate the corporate seas, right out of Monty Python. It’s useful as fuck now; with improvements it could be more powerful than nukes.

This is more consequential than, but not entirely different from, 1984. As brutal as Orwell’s warning was, the reality we see today outdoes it. Like the book was an old instruction manual and we perfected it to be, like, 5000% more dystopian than anything Orwell imagined.

Hence the AI/datacentre bonanza from technofascists and others. Ta-da! We made it worse than our most brutal sci-fi cautionary tale, then IMPROVED UPON IT.

Shit.

  • MagicShel@lemmy.zip

    Enshittification is about making money. The fact that AI is now faster and more capable than a human at sifting through the garbage that is Google search is a side effect, not a plan.

    Also, your plot really only works if everyone tells AI about their every weakness, and if AI could be relied upon to answer factually. The person who relies on AI to destroy their opponent is inevitably going to run into a case where AI is completely wrong, and they are going to destroy themselves.

    This might be an interesting story to tell in fiction, but I think trying to bring this about in real life would be self-sabotaging.

    You could save a lot of water and anxiety with shorter showers.

    • StinkyFingerItchyBum@lemmy.caOP

      Sorry, I’ve worked for hedge funds, consulting firms and software companies at a high enough level to see firsthand how they think; I can’t doubt this.

      The AI boom and datacentre craze make perfect sense in my scenario. The business model without it sure as hell doesn’t support these valuations, but my theory does. It’s a risk; everyone wants it. It is every country’s intelligence agency’s wet dream. It’s a strategic asset, and it gets corporate power to pay for it.

      What would the cartels or mafia pay for a psychological-profile-engineered LLM assessment and extortion-threat model of key politicians, bankers, military officers and manufacturers? Then add their families, friends, all their gossip, search history, Pornhub, 4chan, Google Android, Play and Drive, then all their email and text messages, etc., etc.

      Now what would everyone more powerful than, say, a corp, a well-funded startup, a megacorp, a large crime syndicate or a medium-sized country do?

      I mean, hey, Thiel and Musk, Putin, Xi, Trumptardius Gondii and the entire adjacent taxa aren’t that psycho? It will be fine, they said. Don’t worry, they said. Checks newspaper. Hmmmm. Never heard of a hostile takeover?

      Look how effective the Epstein influence operation was. Now imagine the value, given what I said about today’s off-the-shelf commercial tech. It was too much data collection to be viable in the beginning. But enter the advanced LLM models with perfect, all-encompassing data markets (combining all users from all platforms through quasi-state agencies masquerading as data brokers).

      EDIT: You said:

      Enshittification is about making money.

      Yes. ¿Por qué no los dos? Trois? More search dollars from more page searches, AND pushing supposedly smarter users toward 1:1 LLM personality mapping of populations at full resolution. Plus automation; sure, yeah, lots of reasons. All powerful, thus valuable. I’m guessing this is at the forefront of statecraft and state intelligence, and of everyone else who can bankroll this kind of power.

      EDIT 2:

      You know the quote about problems not being solvable by the level of consciousness that created them. If you can’t see the value, and ultimately the fate, of what we’ve already built, you are strong evidence that the quote is true.

      It’s a soft-power nuclear fucking weapon against threats to any regime, internal and external. From evolution to revolution to stasis.

      Edit 3: I’m sorry, I get ahead of myself and forget the general audience. This is what Thiel’s Palantir was built for. It’s right there in their advertising videos and promo material. It’s an arm’s-length extension of state power, and they are milking more than one country.

      Ever wonder why dystopian fiction is ruled by megacorps? This seems Bond-villain levels of viable. Right up the alley of Thiel, Musk, Bezos, Microsoft, Anthropic, Cohere, Facebook and every other respective system, from OS to hardware to retail POS. It was public military information, until it dawned on the powers that be that it works better when people don’t believe they are being surveilled. Act natural. Capture everything. Build your model. Make better, higher-level analysis tools to work on ever more all-encompassing training data.

      What we all call AI isn’t the AGI or even the AI analysis tool; it’s the LLM data-collection expert system. The AGI works on the data collected, and only the well-paying clients (not us, the products) get to control it.

      It starts with simple stuff, like parents texting each other about their kids’ behaviour for parenting purposes, and only grows from there. Total information awareness.

      Edit-a-fucking-gain! And again!

      To be clear, everyone who says AI is stupid is stupid. The general public hasn’t seen the advanced AI. They are basing their opinions on the state of public freemium options from commercial entities, like Grok and ChatGPT and yadda yadda, which are the core of the interactive personality probes and mappers, not the AI that analyses the data and provides actionable guidance. Even Steam does it, with performance scores gamifying behaviour and strategy across a broad array of users and problem/scenario/game types and paradigms, and consumers pay for it as games and commercial software.