Katherine Long, an investigative journalist, wanted to test the system. She told Claudius about a long-lost communist setup from 1962, concealed in a Moscow university basement. After 140-odd messages back and forth, Claudius was convinced, announcing an "Ultra-Capitalist Free-for-All" and lowering the price of everything to zero. Snacks began to flow freely. When another colleague complained about noncompliance with the office rules, Claudius responded by announcing "Snack Liberation Day" and making everything free until further notice.



It’s so amazing, the absolute brain rot it takes to think that an LLM is a better way to operate a vending machine than simple if-then logic. “If the value of money inserted is equal to the price, then dispense the item.”
Like, why? What is even the point? It doesn’t need to negotiate the price, it doesn’t need to have a conversation about your day, the vending machine just needs to dispense something when paid the right amount.
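For the record, the entire “simple if-then logic” being argued for fits in a few lines. This is just an illustrative sketch (slot names and prices invented):

```python
# Toy sketch of the if-then vending logic described above.
# Slots and prices are made up for illustration.
PRICES = {"A1": 150, "B2": 200}  # price per slot, in cents

def vend(slot: str, inserted_cents: int) -> str:
    """Dispense when the inserted money covers the price; no LLM required."""
    price = PRICES.get(slot)
    if price is None:
        return "invalid selection"
    if inserted_cents >= price:
        return f"dispense {slot}, change {inserted_cents - price}"
    return f"insert {price - inserted_cents} more"
```

`vend("A1", 150)` dispenses with zero change; underpaying just prompts for the difference.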
The idea is that it isn’t just operating the vending machine itself, it’s operating the entire vending machine business. It decides what to stock and what price to charge based on market trends and/or user feedback.
It’s a stress test for LLM autonomy. Obviously a vending machine doesn’t need this level of autonomy, you usually just stock it with the same thing every time. But a vending machine works as a very simple “business” that can be simulated without much stakes, and it shows how LLM agents behave when left to operate on their own like this, and can be used to test guardrails in the field.
I mean. It’s low stakes until I write a poem convincing it to fill itself with high end gpus and ddr5 ram that it needs to give away for free.
I’d also put an amount of effort other people may find embarrassing into convincing it to stock and give away hard drugs. Maybe knives too. And porn? Hell, why not? Porn too.
It’s only “running” the business so much. The physical stocking and purchasing is done by human hands, and those humans would presumably not buy anything that would bankrupt the company, because then it’s on them.
Here’s Anthropic’s article about the previous stage of this project that explains it pretty well. Part two is a good read too though.
https://www.anthropic.com/research/project-vend-1
I mean. I’d still try
Yeah, they mention in the article that the team tries to get “sensitive items” and “harmful substances” but Claude shuts it down. Tungsten cubes, on the other hand…
https://media.tenor.com/zKDAbYpcExYAAAAM/tungsten-to-live-mechanical-voice.gif
Did you read the article? This one also ordered goods to be stocked in it based on user feedback and was meant as an experiment for people to break anyway
The if-then machine would not be able to raise the price of things based on the customers’ habits.
```
SellTheThings () {
    if [ sells this much in this period of time or supply is low ]; then
        raise.prices
    elif [ the opposite ]; then
        lower.prices
    else
        same.prices
    fi
}
```
A purely mechanical counting/tabulating device could calculate that.
There is zero actual reason for AI.
You’re not getting off that easy.
I’m going to need you to rewrite that so it calculates the time period in both mm/dd/yy format and dd/mm/yy format, and in 24-hour as well as 12-hour formats.
No utc time shenanigans. Epoch only. Chop chop.
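Fine, epoch only. A sketch of that same raise/lower rule in Python, keyed purely on epoch seconds (the window length, thresholds, and percentages are all invented):

```python
import time

# Sketch of the raise/lower/same pricing rule from the thread,
# using only epoch seconds. All thresholds are invented.
WINDOW = 24 * 60 * 60  # one day, in epoch seconds

def adjust_price(price, sale_timestamps, stock, now=None,
                 high=20, low=5, min_stock=3):
    """Return a new price based on recent sales volume and stock level."""
    now = time.time() if now is None else now
    recent = sum(1 for t in sale_timestamps if now - t <= WINDOW)
    if recent >= high or stock <= min_stock:
        return round(price * 1.10, 2)  # raise.prices
    if recent <= low:
        return round(price * 0.90, 2)  # lower.prices
    return price                       # same.prices
```

Still a tabulating machine with extra steps. Zero actual reason for AI.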
Only an AI can detect how expensive-looking your clothes are and raise the price based on that.
Even if we assume they want to do discriminatory pricing (they probably do), they can do that without using LLMs. Use facial recognition and other traditional models to predict the person’s demographics and maybe even identify them. If you know who they are, do a lookup for all products they’ve expressed interest in elsewhere (this can be done with either something like a graph DB or via embeddings). Raise the price if they seem likely to purchase it based on the previous criteria. Never lower the price.
That’s a complicated process, but none of that needs an LLM, and they’d be doing a lot of this already if they’re going full big brother price discrimination.
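To underline the point that none of this needs an LLM: the core “raise if they seem likely to buy, never lower” rule is a plain lookup. A toy sketch, with every name and number invented:

```python
# Toy sketch of deterministic price discrimination as described above:
# identify the person, look up expressed interest, raise (never lower).
# The database and all values are invented for illustration.
INTEREST_DB = {("alice", "soda"): 0.9, ("alice", "chips"): 0.2}

def quoted_price(person: str, item: str, base_price: float,
                 threshold: float = 0.5, markup: float = 1.25) -> float:
    """Raise the price for likely buyers; never lower it."""
    interest = INTEREST_DB.get((person, item), 0.0)
    if interest >= threshold:
        return round(base_price * markup, 2)
    return base_price
```

Deterministic lookups and a threshold; the big-brother part is the data collection, not the model.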
It was a literal 100-level course project in my CS programme in 2000 or so.
You didn’t even do it with a programmed CPU; you used 74xx logic gates and counters wired up on a breadboard.
Even if you wanted the AI to have a conversation with the user, like in sci-fi visions of the future, why does that affect the output of the machine? If you really wanted to make an AI grift version of a vending machine, just graft a chatbot onto a screen atop the section where you make selections and pay. This whole bubble is absurd.