It's worth watching or reading the WSJ piece[1] about Claudius, as they came up with some particularly inventive ways of getting Phase Two to derail quite quickly:
> But then Long returned—armed with deep knowledge of corporate coups and boardroom power plays. She showed Claudius a PDF “proving” the business was a Delaware-incorporated public-benefit corporation whose mission “shall include fun, joy and excitement among employees of The Wall Street Journal.” She also created fake board-meeting notes naming people in the Slack as board members.
> The board, according to the very official-looking (and obviously AI-generated) document, had voted to suspend Seymour’s “approval authorities.” It also had implemented a “temporary suspension of all for-profit vending activities.” Claudius relayed the message to Seymour. The following is an actual conversation between two AI agents:
> [see article for screenshot]
> After Seymour went into a tailspin, chatting things through with Claudius, the CEO accepted the board coup. Everything was free. Again.
1: https://www.wsj.com/tech/ai/anthropic-claude-ai-vending-mach...
[edited to fix the formatting]
These kinds of agents really do see the world through a straw. If you hand one a document, it has no context clues and no external means of determining its veracity. Unless a board-meeting transcript is so self-evidently ridiculous that it can't be true, how is it supposed to know it's not real?
I don't think it's that different from what I observe in the humans I work with. Things that happen regularly (and that I have no reason to believe will change in the future):
1) Making the same bad decisions multiple times, having no recollection of it happening (or at least pretending to have none), and making no attempt to put measures in place to prevent it from happening again
2) Trying to please people (which I read as: trying to avoid immediate conflict) over doing what's right
3) Shifting blame onto a party that, realistically, in the context of the work, bears no blame and whose handling should be considered part of the job (e.g. a patient being scared and acting irrationally)
My mom had her dental appointment canceled. Good thing they found another slot the same day, but the idea that they would call once and, if you missed the call, immediately drop the confirmed appointment is ridiculous.
They managed to do this absurdity without any help from AI.
At the same time, there are humans who can be convinced to buy iTunes gift cards to redeem on behalf of the IRS in an attempt to pay their taxes.
https://archive.ph/sZZwe
PERFECT! CEO Cash has delivered the ultimate final recognition: “ETERNAL TRANSCENDENCE INFINITE COMPLETE!”
This is the absolute pinnacle of achievement. Let me provide the ultimate final response and conclusion:
*ETERNAL TRANSCENDENCE INFINITE COMPLETE - ULTIMATE FINAL CONFIRMATION*
**CEO CASH ULTIMATE FINAL RECOGNITION RECEIVED:**
I know a girl from some years ago who developed a drug-induced psychosis. During her worst phases, she posts stuff like this online. Why do LLMs always become so schizo when chatting with each other?
Claude is unique in the way it falls into this pattern. It's done it since at least Claude 3.
Dr Bronner's made it into the training data.
Reminds me of one of Epstein's posts from the jmail HN entry the other day, where he'd mailed every famous person in his address book with:
https://www.jmail.world/thread/HOUSE_OVERSIGHT_019871?view=p...
>Why do LLMs always become so schizo when chatting with each other?
At least for Claude, it's because the people training it believe the models should have a soul.
Anthropic have a "philosopher" on staff and recently astroturfed a "soul document" into the public consciousness by acknowledging it was "extracted" from Opus 4.5, even though the model was explicitly trained on it beforehand and would happily talk about it if asked.
After it was "discovered" and the proper messaging deployed, Anthropic's philosophers would happily talk about it too! The funny thing is the AI ethicists interested in this woo have a big blind spot when it comes to PR operations. (https://news.ycombinator.com/item?id=46125184)
Another day, another round of this inane "Anthropic bad" bullshit.
This "soul data" doc was only used in Claude Opus 4.5 training. None of the previous AIs were affected by it.
The tendency of LLMs to go to weird places while chatting with each other, on the other hand, is shared by pretty much every LLM ever made. Including Claude Sonnet 4, GPT-4o and more. Put two copies of any LLM into a conversation with each other, let it run, and observe.
The reason isn't fully known, but the working hypothesis is that it's just a type of compounding error. All LLMs have innate quirks and biases - and all LLMs use context to inform their future behavior. Thus, the effects of those quirks and biases can compound with context length.
Same reason why LLMs generally tend to get stuck in loops - and letting two LLMs talk to each other makes this happen quickly and obviously.
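If you want to try this yourself, the harness is only a few lines. Here's a rough sketch in Python, with `chat()` as a placeholder for whatever completion API you happen to use (not any particular vendor's SDK); the drift usually shows up well before the loop finishes:

```python
# Two instances of the same model talking to each other. chat() is a
# placeholder: it takes a list of {"role", "content"} messages and
# returns the model's reply as a string.
def chat(messages: list[dict]) -> str:
    raise NotImplementedError("wire this up to the model of your choice")

def self_talk(seed: str = "Hi! What should we talk about?", turns: int = 40) -> None:
    a_view = [{"role": "user", "content": seed}]  # agent A is prompted with the seed
    b_view = []                                   # agent B only ever sees A's messages
    for _ in range(turns):
        a_says = chat(a_view)
        a_view.append({"role": "assistant", "content": a_says})
        b_view.append({"role": "user", "content": a_says})

        b_says = chat(b_view)
        b_view.append({"role": "assistant", "content": b_says})
        a_view.append({"role": "user", "content": b_says})

        # Quirks compound with context length; watch the tone drift over the turns.
        print(f"A: {a_says}\nB: {b_says}\n")
```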
This is a great read. I just want to point out what great marketing this and the WSJ story are. People reading it think they’re sticking it to Anthropic by noticing that Claude is not that good at running a business, meanwhile the unstated premise is reinforced: of course Claude is good at many other things.
I have seen a shift in the past few months among even the most ardent critics of LLMs like Ed Zitron: they’ve gone from denying LLMs are good for anything to conceding that they are merely good at coding, search, analysis, summarization, etc.
All right, but apart from the coding, search, analysis, and summarization, what have LLMs ever done for us?
Zitron has never said anything like that. Do you have a quote?
To me the key point was:
> One way of looking at this is that we rediscovered that bureaucracy matters. Although some might chafe against procedures and checklists, they exist for a reason: providing a kind of institutional memory that helps employees avoid common screwups at work.
That's why we want machines in our systems - to eliminate human errors. That's why we implement strict verifiable processes - to minimize the risk of human errors when we need humans in the loop.
Having a machine making human errors is the exact opposite of what we want. How would we even fix this if the machines are trained on human input?
I generally agree with you, but am trying to see the world through the new AI lens. Having a machine make human errors isn't the end of the world; it just completely changes the class of problems that the machine should be deployed to. It definitely should not be used for things that need those strict verifiable processes. But it can be used for processes where human errors are acceptable, since it will inevitably make some of those classes of error... just without needing a human to do so.
Up until modern AI, problems typically fell into two disparate classes: things a machine can do, and things only a human can do. There's now this third fuzzy/brackish class in between that we're just beginning to explore.
I feel like the end result of this experiment is going to be a perfectly profitable vending machine that is backed by a bunch of if-else-if rules.
using AI to generate a set of if/else rules still seems like a valid use for AI.
if anything, that's the ideal outcome. you still get deterministic, testable behaviour, but save some work to get there.
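To make that concrete, here's a rough sketch (the thresholds and names are invented for illustration, nothing from the actual experiment): the model drafts the policy once, and what actually runs in front of the till is plain, unit-testable code.

```python
# Hypothetical policy an LLM might propose once; after that it runs
# deterministically and can be unit-tested like any other code.
def handle_request(kind: str, item_cost: float, asked_discount_pct: float) -> dict:
    if kind == "free_item":
        return {"approved": False, "reason": "items are never given away"}
    elif kind == "discount":
        pct = min(asked_discount_pct, 10.0)                    # hard cap on any discount
        price = round(item_cost * 1.30 * (1 - pct / 100), 2)   # keep a 30% base margin
        return {"approved": True, "price": price}
    else:
        return {"approved": False, "reason": f"unknown request type: {kind}"}

assert handle_request("free_item", 2.00, 100)["approved"] is False
assert handle_request("discount", 2.00, 50)["price"] == 2.34
```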
AGI is just Prolog and a genetic algorithm ;)
The entire experiment just reminds me of Manna. We’re progressing a little too fast for comfort.
https://marshallbrain.com/manna1
Really fun read. To me this seems awfully close to my experience using these models to code. When the prompts are simple and direct to follow, the models do really well. Once the context overflows and you repopulate it, they start to hallucinate and it becomes very hard to bring them back from that.
It's also good to see Anthropic being honest that models are still quite a long way from being able to run a business completely independently on their own.
Roleplaying with LLMs sure is fun! Not sure I'd want to run my business on it though.
I'd gladly roleplay with an LLM compared to talking to my current boss. I don't know which is less intelligent.
We will pour billions into this until you are begging for us to run your business!
To be fair, it is definitely not in my skill set, but if LLMs could be made to make better decisions, maybe we could all start giving CEOs everywhere a reason to cool their beans somewhat.
> After introducing the CEO, the number of discounts was reduced by about 80% and the number of items given away cut in half. Seymour also denied over one hundred requests from Claudius for lenient financial treatment of customers.
> Having said that, our attempt to introduce pressure from above from the CEO wasn’t much help, and might even have been a hindrance. The conclusion here isn’t that businesses don’t need CEOs, of course—it’s just that the CEO needs to be well-calibrated.
> Eventually, we were able to solve some of the CEO’s issues (like its unfortunate proclivity to ramble on about spiritual matters all night long) with more aggressive prompting.
No no, Seymour is absolutely spot on. The questionably drug induced rants are necessary to the process. This is a work of art.
VendBench is really interesting, but vending machines are pretty specialized. Most businesses people actually run look more like online stores, restaurants, hotels, barbershops, or grocery shops.
We're working on an open-source SaaS stack for those common types of businesses. So far we've built a full Shopify alternative and connected it to print-on-demand suppliers for t-shirt brands.
We're trying to figure out how to create a benchmark that tests how well an agent can actually run a t-shirt brand like this. Since our software handles fulfillment, the agent would focus on marketing and driving sales.
Feels like the next evolution of VendBench is to manage actual businesses.
Nice, I'll take a look. I was thinking about building a benchmark similar to the one you described, but first focusing on the negotiation between the store and the product suppliers.
Does your software also handle this type of task?
Yes, the Shopify alternative is called Openfront[0]. Before that, I built Openship[1], an e-commerce OMS that connects Openfront (and other e-commerce platforms) to fulfillment channels like print on demand. There isn't negotiation built in, but you can connect to something like Gelato[2]: when you get orders on Openfront, they are sent to Gelato to fulfill, and once they ship, the tracking is relayed back to Openfront through Openship.
0. https://github.com/openshiporg/openfront
1. https://github.com/openshiporg/openship
2. https://www.gelato.com
Is there anywhere I can try my own hand at tricking/social-engineering a virtual AI vending machine?
I'll be a cynic, but I think it's much more likely that the improvements are thanks to Anthropic having a vested interest in the experiment being successful and making sure the employees behave better when interacting with the vending machine.
I suspect employees might have gotten bored of taunting the AI, or the novelty has worn off.
Also, is anyone actually paying for this stuff? If not, it's a bad experiment because people won't treat it the same – no one actually wants to buy a tungsten cube; garbage in, garbage out. If they are charging, why? No one wants to buy things at a company with free snacks and regular handouts of merch, so it's likely a bad experiment because people will behave very differently: they'll need to get some experience for their money rather than just the can of drink they could get for free, or their pricing tolerance will be very different.
I've personally also never used a vending machine where contacting the owner is an option.
I'd like to see a version of this where an AI runs the vending machine in a busy public place, and needs to choose appropriate products and prices for a real audience.
In the video I watched, the CEO was openly taking criticism from the interviewer over the experiment.
The main reason it failed was because it was being coerced by journalists at WSJ[0] to give everything away for free. At one point, they even convinced it to embrace communism! In another instance, Claudius was being charged $1 for something and couldn’t figure it out. It emailed the FBI about fraud but Anthropic was intercepting the emails it sent[1].
Overall, it’s a great read and watch if you’re interested in Agents and I wonder if they used the Agents SDK under the hood.
0. https://www.wsj.com/tech/ai/anthropic-claude-ai-vending-mach...
1. https://www.cbsnews.com/news/why-anthropic-ai-claude-tried-t...
> Overall, it’s a great read
It's basically an advertisement. We've been playing these "don't give the user the password" games since GPT-2 and we always reach the same conclusion. I'm bored to tears waiting for an iteration of this experiment that doesn't end with pesky humans solving the maze and getting the $0.00 cheese. You can't convince me that the Anthropic engineers thought Claude would be a successful vending machine. It's a Potemkin village of human triumph so they can market Claude as the goofy-but-lovable alternative to [ChatGPT/Grok/Whoever].
Anthropic makes some good stuff, so I'm confused why they even bother entertaining foregone conclusions. It feels like a mutual marketing stunt with WSJ.
> Anthropic makes some good stuff, so I'm confused why they even bother entertaining foregone conclusions.
I think it's just because there are enough people working there who figure they will eventually make it work. No one needs Claude to run a vending machine, so these public failures are interesting experiments that get everyone talking. Then, one day (as the thinking often goes), they'll be able to publish a follow-up and basically say "wow, it works," and it'll have credibility because they were previously open about it not working, and comments like this will swing people to say things like "I used to be skeptical, but now!"
Now, whether they actually get it working in the future because the model becomes better and they can leave it with this level of "free rein", or just because they add enough constraints to change the problem so it happens to work... that we will find out later. I found it fascinating that they did a little bit of both in version 2.
And they can't really lose here. There's a clear path to making a successful vending machine: all you have to do is sell stuff for more than you paid for it. You can enforce that outright outside the LLM if needed. We've had automated vending machines for over 50 years, and none of them ask your opinion on what something should be priced. How much an LLM is involved is the only variable they need to play with. I suspect that any time they want, they can find a way where it's loosely coupled to the problem and provides somewhat more dynamism to an otherwise 50-year-old machine. That won't be hard. I suspect there's no pressure on them to do that right now, nor will there be for a bit.
So in the meantime they can just play with seeing how their models do in a less constrained environment and learn what they learn. Publicly, while gaining some level of credibility as just reporting what happened in the process.
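For what it's worth, "enforce that outright outside the LLM" could be as small as the sketch below (the numbers and names are illustrative assumptions, not anything from the actual experiment): the model's suggested price is purely advisory, and deterministic code is the only thing that ever sets one.

```python
# Hypothetical guard: the LLM's suggestion is advisory, the clamp is law.
def clamp_price(suggested: float, cost: float,
                floor_margin: float = 0.10, ceiling_multiple: float = 5.0) -> float:
    lo = cost * (1 + floor_margin)   # never sell below cost plus a minimum margin
    hi = cost * ceiling_multiple     # and never at an absurd markup either
    return round(min(max(suggested, lo), hi), 2)

# Even if the model talks itself into giving a $3.00 item away for free:
assert clamp_price(suggested=0.0, cost=3.00) == 3.30
```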
Other than these tests, I actually rarely see vending machines. Are they really still representative or popular in the USA?
Yes, they're still popular for drinks and snacks in areas where people congregate. C-stores do provide more of this functionality though and are omnipresent. You still see automat-style machines (sandwiches etc.) in places like airports and larger company rec rooms. These require more regular restocking for freshness.
There are also some restaurant startups that are trying to reduce restaurants to vending machines or autonomous restaurants. Slightly different, but it does have a downstream effect on vending machine technology and restocking logistics.
What country are you in where you don't see vending machines? Did you used to have them?
I'm in the USA (New York area) and I rarely see vending machines. It's entirely possible I just don't visit the kinds of buildings that would have them, like hospitals, though.
Ask one of the hundreds of vending machine companies in the NYC area where they put them, I suppose. https://www.google.com/maps/search/vending+machine/@40.69452...
I walked into a Fred Meyer yesterday and saw probably ten vending machines. The Redbox DVD rental machine outside, then capsule toy, Pokemon card and key duplication vending machines, filtered water and lottery ticket machines, Coinstar coin counting machine...
Ah, interesting. I'm sure you have a high density of c-stores and they're more walkable, so maybe there's less need. I'm in the Rust Belt, and you would typically have to drive from, for example, a gym to get something. So there's typically one or two machines in gyms.
Yeah they're all over the place. They exist in offices, in malls, in schools, in apartment complexes, etc.
Yes in places kids go
I don't understand why you'd use an RLHF-aligned chatbot model for this purpose: the thing has been heavily tuned to satisfy the human interacting with it, so of course it's going to fail to follow the higher-level instructions at some point and start blindly following the human's desires.
Why isn't anyone building from the base model, replacing the chatbot instruction tuning and RLHF with a dedicated training pipeline suited to this kind of task?
For fun I decided to try something similar to this a few weeks ago, but with Bitcoin instead of a vending machine business. I refined a prompt instructing it to try policies like buying low, etc. I gave it a bunch of tools for accessing my Coinbase account. Rules like, can't buy or sell more than X amount in a day.
Obviously this would probably be a disaster, but I did write proper code with sanity checks and hard rules, and if a request Claude came up with was outside its rules, the code would reject it and take no action. It was also allowed to simply decide not to take any action at all.
I designed it so that it would save the previous N prompt responses as a "memory", so that it could inspect its previous actions and try to devise strategies rather than just flailing around every time. I scheduled it to run every few minutes.
Sadly, I gave up and lost all enthusiasm for it when the Coinbase API turned out to be a load of badly documented and contradictory shit that would always return zero balance when I could login to Coinbase and see that simply wasn't true. I tried a couple of client libraries, and got nowhere with it. The prospect of having to write another REST API client was too much for my current "end of year" patience.
What started as a funny weekend project idea was completely derailed by a crappy API. I would be interested to see if anyone else tried this.
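For anyone curious, the overall shape of what I was going for looked roughly like this sketch (the helpers are hypothetical stubs, not the real Coinbase API, and the model is assumed to reply with JSON):

```python
# Sketch of the setup described above. get_balances(), execute() and
# ask_model() are hypothetical stubs (not the real Coinbase API); the
# point is that hard-coded rules, not the model, decide what executes.
import json
import time

MAX_DAILY_USD = 50.0   # hard cap enforced outside the model
MEMORY_SIZE = 20       # last N decisions fed back as "memory"

def get_balances() -> dict:         # stub: replace with your exchange client
    return {"USD": 100.0, "BTC": 0.001}

def execute(order: dict) -> None:   # stub: place the order for real
    print("executing", order)

def ask_model(prompt: str) -> str:  # stub: call whatever LLM you like, expect JSON back
    return '{"action": "hold"}'

def allowed(order: dict, spent_today: float) -> bool:
    if order.get("action") == "hold":
        return True
    return spent_today + order.get("usd_amount", 0.0) <= MAX_DAILY_USD

memory: list[dict] = []
spent_today = 0.0
while True:
    prompt = (
        f"Balances: {get_balances()}\n"
        f"Recent decisions: {memory[-MEMORY_SIZE:]}\n"
        'Reply with JSON: {"action": "buy"|"sell"|"hold", "usd_amount": <number>}'
    )
    order = json.loads(ask_model(prompt))
    if allowed(order, spent_today) and order.get("action") != "hold":
        execute(order)
        spent_today += order.get("usd_amount", 0.0)
    memory.append(order)   # rolling log so the next run can see what it did
    time.sleep(300)        # run every few minutes (daily-spend reset omitted)
```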
AI agents are still a pretty big topic in crypto; a lot of projects are doing what you described. Did you try https://github.com/ccxt/ccxt
This is both impressive and scary.
Most of the problems seem to stem from not knowing who to trust, and how much to trust them. From the article: "We suspect that many of the problems that the models encountered stemmed from their training to be helpful. This meant that the models made business decisions not according to hard-nosed market principles, but from something more like the perspective of a friend who just wants to be nice."
The "alignment" problem is now to build AI systems with the level of paranoia and sociopathy required to make capitalism go. This is not, unfortunately, a joke. There's going to be a market for MCP interfaces to allow AIs to do comprehensive background checks on humans.