The fundamental flaw of the Turing test is that it requires a human. Apparently, making a human believe they are talking to a human is much easier than previously thought.
A test that didn’t require a human could, in theory, be run by the machine itself automatically and preemptively, and therefore be gamed easily.
I can’t imagine how you would test this in a way that wouldn’t require a human.
Let two AIs talk to each other and see if they find out that they both aren’t humans?
Bro, humans literally don’t have that capability (that’s the presumption here). Or are you saying that many of us don’t have better consciousness than AIs? I might agree with that!
The AI can only judge by using a neural network trained to distinguish human output from AI output (and btw, for that training you need humans)… which means you can break the test by giving the generating AI access to that same network, letting it self-test its responses before outputting them, so it emits only exactly the kind of output the other AI would give a “human” verdict on.
So I don’t think that would work very well; it’ll just be a cat-and-mouse race between the AIs.
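The self-testing trick described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real discriminator: the `discriminator` here just flags one telltale phrase, standing in for a trained human-vs-AI classifier that both sides share.

```python
def discriminator(text):
    # Hypothetical stand-in for a neural network trained to label
    # text as "human" or "ai"; here it only flags a telltale phrase.
    return "ai" if "as an ai language model" in text.lower() else "human"

def evasive_generate(candidates):
    # Self-test each candidate response against the SAME discriminator
    # the judging AI uses, and emit only text it would label "human".
    for text in candidates:
        if discriminator(text) == "human":
            return text
    return None  # every candidate would be flagged

candidates = [
    "As an AI language model, I cannot answer that.",
    "Honestly, I never thought about it that way.",
]
print(evasive_generate(candidates))
# -> Honestly, I never thought about it that way.
```

As long as the generator can query the judge’s discriminator (or a copy of it), it can always filter its output past the test, which is exactly the cat-and-mouse dynamic described above.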
Slap some 2D anime girl avatar on it and you got yourself a top grossing v-tuber.
There is the capitalist alternative to the Turing test: Have ChatGPT get a job. Hook it up to the Web, let it find itself a work-from-home job and go to work. Can it make as much money as a human, can it make enough money to pay for its own survival? Will it get fired?
Will it get promoted, start managing people, start investing, start its own companies, and quickly take over the world?
Honestly, though, I can’t even decide whether other people have consciousness. Cogito ergo sum, if you know what I’m talking about.
How does ChatGPT do with the Winograd schema? That’s a lot harder to fake: https://en.m.wikipedia.org/wiki/Winograd_schema_challenge
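For readers unfamiliar with the format: a Winograd schema is a sentence pair that differs in a single word, where that word flips which noun a pronoun refers to, so statistical surface cues don’t help. A minimal sketch of how such a test could be scored (the trophy/suitcase sentence is the canonical example from the challenge; the `resolver` interface here is a hypothetical placeholder for the system under test):

```python
# Each schema is one sentence template whose pronoun's referent
# depends on which "special" word fills the blank.
schemas = [
    {
        "template": "The trophy doesn't fit in the suitcase because it is too {word}.",
        "pronoun": "it",
        "options": ("trophy", "suitcase"),
        "answers": {"big": "trophy", "small": "suitcase"},
    },
]

def score(resolver, schemas):
    """Fraction of pronoun resolutions a resolver gets right.

    `resolver(sentence, pronoun, options)` must return the noun
    the pronoun refers to."""
    correct = total = 0
    for s in schemas:
        for word, answer in s["answers"].items():
            sentence = s["template"].format(word=word)
            total += 1
            if resolver(sentence, s["pronoun"], s["options"]) == answer:
                correct += 1
    return correct / total

# A resolver that always guesses the first noun scores 50% on a
# balanced schema pair, i.e. chance level.
print(score(lambda sentence, pronoun, options: options[0], schemas))
# -> 0.5
```

The point of the design is that a guesser, or a model keying on word co-occurrence, lands at chance, while answering above chance requires some model of what “big” and “small” imply about fitting things into suitcases.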
That’s because it was built to beat the Turing test. The test was flawed. ChatGPT is just a Chinese room.
The Chinese room argument makes no sense to me. I can’t see how it’s different from how young children understand and learn language.
My 2-year-old sometimes unmistakably starts counting when playing (countdown for lift-off). Most numbers are gibberish, but often he says a real number in the midst of it. He is clearly just copying and does not understand what counting is. At some point, though, he will not only count correctly but also be able to answer math questions. At what point does he “understand”? At what point would you consider that ChatGPT “understands”?

There was this old TV program where some then-AI experts discussed the Chinese room, but they used a Chinese restaurant for a more realistic setting. This ended with: “So if I walk into a Chinese restaurant, pick something off the Chinese menu, and can answer anything the waiter may ask, in Chinese, do I know or understand Chinese?” I remember the parties agreeing to disagree at that point.
ChatGPT will never understand. LLMs have no capacity to do so.
To understand you need underlying models of real world truth to build your word salad on top of. LLMs have none of that.
Note that “real world truth” is something you can never accurately map with just your senses.
No model of the “real world” is accurate, and not everyone maps the “real world truth” they personally experience through their senses in the same way… or even necessarily in a way that’s really truly “correct”, since the senses are often deceiving.
A person who is blind experiences the “real world truth” by mapping it to a different set of models than someone who has additional visual information to mix into that model.
However, that doesn’t mean that the blind person can “never understand” the “real world truth” …it just means that the extent to which they experience that truth is different, since they need to rely on other senses to form their model.
Of course, the more different the senses and experiences between two intelligent beings, the harder it will be for them to communicate with each other in a way that allows true empathy. At the end of the day, when we say we “understand” someone, what we mean is that we have found enough evidence to hold the belief that some aspects of our models are similar enough. It doesn’t really mean that what we modeled is truly accurate, nor that if we didn’t understand them then our model (or theirs) is somehow invalid. Sometimes people are both technically referring to the same “real world truth”; they simply don’t understand each other because they focus on different aspects/perceptions of it.
Someone (or something) not understanding an idea you hold doesn’t mean that they (or you) aren’t intelligent. It just means you both perceive/model reality in different ways.
So? The room as a whole can speak Chinese; what do I care how it works on the inside?
@Barbarian772 so? If the cookie tastes sweet, what do I care what sweetening agent is used inside?
No? But how can you even prove that our brain works differently from a Chinese room?
@Barbarian772 I don’t have to. It’s the ChatGPT people making extremely strong claims about equivalence of ChatGPT and human intelligence. I merely demand proof of that equivalence. Which they are unable to provide, and instead use rhetoric and parlor tricks and a lot of hand waving to divert and distract from that fact.
GPT 4 is already more intelligent than the average human. Is it more intelligent than the most intelligent human? No, but most humans aren’t either. Can it create new knowledge? No, but the average human can’t either.
How can you say it isn’t intelligent?
@Barbarian772 no, GPT is not more “intelligent” than any human being, just like a calculator is not more “intelligent” than any human being, even if it can perform certain specific operations faster.
Since you used the term “intelligent”, though, I would ask for your definition of what it means. Ideally one that excludes calculators but includes human beings. Without such a clear definition, this is, again, just hand-waving.
I wrote about it in a bit longer form: https://rys.io/en/165.html

I think the Wikipedia definition is fine: https://en.m.wikipedia.org/wiki/Intelligence. Excluding AI just because it’s AI is imo plain stupid and goes against all scientific principles.
I have definitely met humans that are less intelligent than ChatGPT. It can hold a conversation and ace every standardized test we have. It has finished law exams, medical exams, and other exams from many different countries with a passing grade.
Can you give me a definition of intelligence that excludes ChatGPT and includes all human beings? And no, just excluding computers for the sake of it doesn’t count.