Fair enough, but the short-term track we’re on is still a hellish dystopia. The societal damage I’m worried about doesn’t really require AGI as you’re probably defining it. If we use the OpenAI definition of AGI, “systems that surpass human capabilities in a majority of economically valuable tasks”, I’d argue that the technology we have today is practically there already. The only thing holding back the dystopia is that corporate America hasn’t fully adapted to the new paradigm.
Imagine a future where most fast food jobs have been replaced by AI-powered kiosks and drive-thrus.
Imagine a future where most customer service jobs have been replaced by AI-powered video chat kiosks.
Imagine a future where most artistic commission work is completed by algorithms.
Imagine a future where all the news and advertising you read or watch is generated specifically to appeal to you by algorithms.
In this future, are the benefits of this technology shared equitably, so that the people who used to do these jobs can enjoy their newfound leisure time? Or will those folks live in poverty while the majority of their incomes are siphoned off to the small fraction of the populace who are MS investors?
I think we all know the answer to that one.
To help you out with the monopolistic/capitalist concern: https://simonwillison.net/2023/May/4/no-moat/
tl;dr: OpenAI’s edge with ChatGPT is essentially minor (according to a leaked internal Google memo), and the approach of building ever-larger, inflexible models is being challenged by smaller, more agile models that are technologically more accessible and widely available.
Funny you bring up the fast-food scenario :)
https://marshallbrain.com/manna1
To a large extent, we have been there for a long time:
https://www.youtube.com/watch?v=7Pq-S557XQU
This, and the theory of bullshit jobs:
https://strikemag.org/bullshit-jobs/
were formative reads for me.
The end-game is pretty clear: we have reached the limits of the model on which our current society is built (work a job to earn money, spend money to live). We now have an excess supply of the essential goods needed to sustain life and, at the same time, a scarcity of jobs. We will soon have to either accept that work no longer has to be the means to that end (i.e. accept universal basic income and state interventionism), or enter a neofeudalism era where resources are so consolidated that the illusion of scarcity can be maintained to justify the current system (which is essentially what bullshit jobs are all about).
It’s perhaps the most important societal reform our species will know, and nobody’s preparing for it :)
This is already the case today:
https://en.wikipedia.org/wiki/Filter_bubble
And this is already weaponized (e.g. TikTok’s algorithm trying to steer the youth towards education and science in China and towards … something completely different in the rest of the world).
@u_tamtam
It doesn’t really matter if Microsoft/OpenAI are the only ones with the underlying technology as long as the only economically feasible way to deploy the tech at scale is to rely on one of the big 3 cloud providers (Amazon, Google, Microsoft). The profits still accrue to them, whether we use a larger/inflexible or smaller/flexible model to power the AI - the most effective/common/economical way for businesses to leverage it will be as an AWS service or something similar.
Are you saying you’re cool with neofeudalism? Or just agreeing that this is yet another inevitable (albeit lamentable) step towards it?
Yup, fair point about the cloud providers, but as the “no moat” link I posted implied, at least for LLMs it might not require spending very much on hardware to get almost as good as ChatGPT, so that’s some good news.
Oh, crap, no, sorry if I wasn’t clear. I believe we are at a crossroads, with not much middle ground between our society evolving into extensive interventionism, taxation, and wealth redistribution (to support UBI and other schemes for the increasingly large unemployable population) and neofeudalism. I don’t want billionaires and warlords to run the place, obviously. And I’m wary about how the redistribution would go with our current politicians and the dominant mindset that equates wealth with individual merit and individualistic entrepreneurship.