AI has been improving at a very rapid pace, which means that a lot of people have really outdated priors. I see this all the time online where people are dismissive about AI in a way that suggests it's been a while since they last checked-in on the capabilities of models. They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since. Or they talk about hallucination and haven't tried Deep Research as an alternative to traditional web-search.
Then there's a tendency to be so 'anti' that there's an assumption that anyone reporting that the tools are accomplishing truly impressive and useful things must be an 'AI booster' or shill. Or they assume that person must not have been a very good engineer in the first place, etc.
Really is one of those examples of the quote, "In the beginner's mind there are many possibilities, but in the expert's mind there are few."
It's a rapidly evolving field, and unless you actually spend some time kicking the tires on the models every so often, you're just basing your opinions on outdated experiences or what everyone else is saying about it.
I feel like I see these two opposite behaviors. People who formed an opinion about AI from an older model and haven't updated it. And people who have an opinion about what AI will be able to do in the future and refuse to acknowledge that it doesn't do that in the present.
And often when the two are arguing it's tricky to tell which is which, because whether or not it does something isn't totally black and white, there's some things it can sometimes do, which you can argue either way about that being in its capabilities or not.
Another very significant cohort is people who formed a negative opinion without even the slightest interest in genuinely trying to learn how to use it (or even trying at all)
To play devil's advocate, how is your argument not a 'no true scottsman' argument? As in, "oh, they had a negative view of X, well that's of course because they weren't testing the new and improved X2 model which is different". Fast forward a year .. "Oh, they have a negative view on X2, well silly them, they need to be using the Y24 model, that's where it's at, the X2 model isn't good anymore". Fast forward a year .. ad infinitum.
Are the models that exist today a "true scottsman" for you?
It's not a No True Scotsman. That fallacy redefines the group to dismiss counterexamples. The point here is different: when the thing itself keeps changing, evidence from older versions naturally goes stale. Criticisms of GPT-3.5 don't necessarily hold against GPT-4, just like reviews of Windows XP don't apply to Windows 11.
IMHO, by placing people with a negative attitude toward AI products under the guise "their priors are outdated" you effectively negate any arguments from those people. That is, because their priors are outdated their counterexamples may be dismissed. That is, indeed, the no true Scotsman!
I don’t see a claim that anyone with a negative attitude toward AI shouldn’t be listened to because it automatically means that they formed their opinion on older models. The claim was simply that there’s a large cohort of people who undervalue the capabilities of language models because they formed their views while evaluating earlier versions.
I wouldn’t think gpt5 is any better than the previous chat gpt. I know it’s a silly example but I was trying to trip it up with the 8.6-8.11 and it got it right .49 but then it said the opposite of 8.6 - 8.12 was -.21.
I just don’t see that much of a difference coding either with Claude 4 or Gemini 2.5 pro. Like they’re all fine but the difference isn’t changing anything in what I use them for. Maybe people are having more success with the agent stuff but in my mind it’s not that different than just forking a GitHub repo that already does what you’re “building” with the agent.
Yes but almost definitionally that is everyone who did not find value from LLMs. If you don’t find value from LLMs, you’re not going to use them all the time.
The only people you’re excluding are the people who are forced to use it, and the random sampling of people who happened to try it recently.
So it may have been accidental or indirectly, but yes, no true Scotsman would apply to your statement.
> The point here is different: when the thing itself keeps changing, evidence from older versions naturally goes stale.
Yes, but the claims do not. When the hypemen were shouting that GPT-3 was near-AGI, it still turned out to be absolute shit. When the hypemen were claiming that GPT-3.5 was thousands of times better than GPT-3 and beating all highschool students, it turned out to be a massive exaggeration. When the hypemen claimed that GPT-4 was a groundbreaking innovation and going to replace every single programmer, it still wasn't any good.
Sure, AI is improving. Nobody is doubting that. But you can only claim to have a magical unicorn so many times before people stop believing that this time you might have something different than a horse with an ice cream cone glued to its head. I'm not going to waste a significant amount of my time evaluating Unicorn 5.0 when I already know I'll almost certainly end up disappointed.
Perhaps it'll be something impressive in a decade or two, but in the meantime the fact that Big Tech keeps trying to shove it down my throat even when it clearly isn't ready yet is a pretty good indicator to me that it is still primarily just a hype bubble.
Its funny how the hype-train is not responding to any real criticisms about the false predictions and carrying on with the false narrative of AI.
I agree it will probably be something in a decade, but right now, it has some interesting concepts but I do notice upon successive iterations of chat responses that its got a ways to go.
It remind me of Tesla car owners buying into the self-driving terminology. Yes the drive assistant technology has improved quite a bit since cruise control, but its a far cry from self-driving.
How is that different than the models today are actually usable for non trivial things and more capable than yesterdays and it’s also true that tomorrow’s models will also probably be more capable than today’s?
For example, I dismissed AI three years ago because it couldn’t do anything I needed it to. Today I use it for certain things and it’s not quite capable of other things. Tomorrow it might be capable of a lot more.
Yes, priors have to be updated when the ground truth changes and the capabilities of AI change rapidly. This is how chess engines on supercomputers were competitive in the 90s then hybrid systems became the leading edge competitive and then machines took over for good and never looked back.
It’s not that the LLMs are better, it’s the internal tools/functions being called that do the actual work are better. They didn’t spend millions to retrain a model to statistically output the number of r’s in strawberry, but just offloaded that trivial question to a function call.
So I would say the overall service provided is better than it was, thanks to functions being built based on user queries, but not the actual LLM models themselves.
LLMs are definitely better quality today than 3 years ago at codegen quality - there’s quantitative benchmarks as well as for me my personal qualitative experience (given the gaming that companies engage in).
It is also true that the tooling and context management has gotten more sophisticated (often using models by the way). That doesn’t negate that the models themselves have gotten better at reliable tool calling so that the LLM is driving more of the show rather than purpose built coordination into the LLM and that the codegen quality is higher than it used to be.
This is a good example of making statements that are clearly not based in fact. Anyone who works with those models knows full well what a massive gap there is between e.g. GPT 3.5 and Opus 4.1 that has nothing to do with the ability to use tools.
There is another big and growing group: charlatans (influencers). People who don't know much but make bold statements, select 'proof' cases. Just to get attention. There are many of them on youtube. When you someone on thumbnail making faces this is most likely it.
Here[0] is a perfect example of this. There are so many youtubers making videos about the future of AI as a dooms-day prediction. Its kind of irresponsible actually. These youtubers read a book on the the down fall of humanity because of AGI. Many of these authors seem like they are repeating the Terminator/Skynet themes. Because of all this false information, It's hard to believe anything that is being said about the future of AI on youtube now.
Not as many as on HN. "Influencers" have agendas and the stream of income, or other self-interest. HN always comes off as a monolith, on any subject. Counter-arguments get ignored and downvoted to oblivion.
I’m spending a lot of time on LinkedIn because my team is hiring and, boy oh boy, LinkedIn is terminally infested with AI influencers. It’s a hot mess.
There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.
There are also those of us who have used them substantially, and seen the damage that causes to a codebase in the long run (in part due to the missing gains of having someone who understands the codebase).
There are also those of us who just don’t like the interface of chatting with a robot instead of just solving the problem ourselves.
There are also those of us who find each generation of model substantially worse than the previous generation, and find the utility trending downwards.
There are also those of us who are concerned about the research coming out about the effects of using LLMs on your brain and cognitive load.
There are also those of us who appreciate craft, and take pride in what we do, and don’t find that same enjoyment/pride in asking LLMs to do it.
There are also those of us who worry about offloading our critical thinking to big corporations, and becoming dependent on a pay-to-play system, that is current being propped up by artificially lowered prices, with “RUG PULL” written all over them.
There are also those of us who are really concerned about the privacy issues, and don’t trust companies hundreds of billions of dollars in debt to some of the least trust worth individuals with that data.
Most of these issues don’t require much experience with the latest generation.
I don’t think the intention of your comment was to stir up FUD, but I feel like it’s really easy for people to walk away with that from this sort of comment, so I just wanted to add my two cents and tell people they really don’t need to be wasting their time every 6 weeks. They’re really not missing anything.
Can you do more than a few weeks ago? Sure? Maybe? But I can also do a lot more than I was able to a few weeks ago as well not using an LLM. I’ve learned and improved myself.
Chances are if you’re not already using an LLM it’s because you don’t like it, or don’t want to, and that’s really ok. If AGSI comes out in a few months, all the time you would have invested now would be out of date anyways.
> There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.
Yep, this is me. Every time people are like "it's improved so much" I feel like I'm taking crazy pills as a result. I try it every so often, and more often than not it still has the same exact issues it had back in the GPT-3 days. When the tool hasn't improved (in my opinion, obviously) in several years, why should I be optimistic that it'll reach the heights that advocates say it will?
haha I have to laugh because I’ve probably said “I feel like I’m taking crazy pills” at least 20 times this week (I spent a day using cursor with the new GPT and was thoroughly, thoroughly unimpressed).
I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy. But this insistence that progress is so crazy that you have to be tapped in at all times just irks me.
LLM models are like iPhones. You can skip a couple versions it’s fine, you will have the new version at the same time with all the same functionality as everyone else buying one every year.
1) LLMs are controlled by BigCorps who don’t have user’s best interests at heart.
2) I don’t like LLMs and don’t use them because they spoil my feeling of craftsmanship.
3) LLMs can’t be useful to anyone because I “kick the tires” every so often and am underwhelmed. (But what did you actually try? Do tell.)
#1 is obviously true and is a problem, but it’s just capitalism. #2 is a personal choice, you do you etc., but it’s also kinda betting your career on AI failing. You may or may not have a technical niche where you’ll be fine for the next decade, but would you really in good conscience recommend a juniorish web dev take this position? #3 is a rather strong claim because it requires you to claim that a lot of smart reasonable programmers who see benefits from AI use are deluded. (Not everyone who says they get some benefit from AI is a shill or charlatan.)
How exactly am I betting my career on LLMs failing? The inverse is definitely true — going all in on LLMs feels like betting on the future success of LLMs. However not using LLMs to program today is not betting on anything, except maybe myself, but even that’s a stretch.
After all, I can always pick up LLMs in the future. If a few weeks is long enough for all my priors to become stale, why should I have to start now? Everything I learn will be out of date in a few weeks. Things will only be easier to learn 6, 12, 18 months from now.
Also no where in my post did I say that LLMs can’t be useful to anyone. In fact I said the opposite. If you like LLMs or benefit from them, then you’re probably already using them, in which case I’m not advocating anyone stop. However there are many segments of people who LLMs are not for. No tool is a panacea. I’m just trying to nip and FUD in the butt.
There are so many demands for our attention in the modern world to stay looped in and up to date on everything; I’m just here saying don’t fret. Do what you enjoy. LLMs will be here in 12 months. And again in 24. And 36. You don’t need to care now.
And yes I mentor several juniors (designers and engineers). I do not let them use LLMs for anything and actively discourage them from using LLMs. That is not what I’m trying to do in this post, but for those whose success I am invested in, who ask me for advice, I quite confidently advise against it. At least for now. But that is a separate matter.
EDIT: My exact words from another comment in this thread prior to your comment:
> I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy.
I wonder, what drives this intense FOMO ideation about AI tools as expressed further upthread?
How does someone reconcile a faith that AI tooling is rapdily improving with that contradictory belief that there is some permanent early-adopter benefit?
> They wrote off the coding ability of ChatGPT on version 3.5, for instance
I found I had better luck with ChatGPT 3.5's coding abilities. What the newer models are really good at, though, is doing the high level "thinking" work and explaining it in plain English, leaving me to simply do the coding.
I agree with you. I am a perpetual cynic about new technology (and a GenXer so multiply that by two) and I have deeply embraced AI in all parts of my business and basically am engaging with it all day for various tasks from helping me compare restaurant options to re-tagging a million contact records in salesforce.
It’s incredibly powerful and will just clearly be useful. I don’t believe it’s going to replace intelligence or people but it’s just obviously a remarkable tool.
But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism. Crypto was and is just a giant and elaborate grift, to name one example. Also guys like Altman are clearly overstating the current trajectory.
The dismissive response does come with some context attached.
> But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism.
They are still full of shit about LLMs, even if it is useful.
I do see this a lot. It's hard to have a reasonable conversation about AI amidst, on the one hand, hype-mongers and boosters talking about how we'll have AGI in 2027 and all jobs are just about to be automated away, and on the other hand, a chorus of people who hate AI so much they have invested their identify in it failing and haven't really updated their priors since ChatGPT came out. Both groups repeat the same set of tired points that haven't really changed much in three years.
But there are plenty of us who try and walk a middle course. A lot of us have changed our opinions over time. ("When the facts change, I change my mind.") I didn't think AI models were much use for coding a year ago. The facts changed. (Claude Code came out.) Now I do. Frankly, I'd be suspicious of anyone who hasn't changed their opinions about AI in the last year.
You can believe all these things at once, and many of us do:
* LLMs are extremely impressive in what they can do. (I didn't believe I'd see something like this in my lifetime.)
* Used judiciously, they are a big productivity boost for software engineers and many other professions.
* They are imperfect and make mistakes, often in weird ways. They hallucinate. There are some trivial problems that they mess up.
* But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.
* AI will change the world in the next 20 years
* But AI companies are overvalued at the present time and we're mostly likely in a bubble which will burst.
* Being in a bubble doesn't mean the technology is useless. (c.f. the dotcom bubble or the railroad bubble in the 19th century.)
* AGI isn't just around the corner. (There's still no way models can learn from experience.)
* A lot of people making optimistic claims about AI are doing it for self-serving boosterish reasons, because they want to pump up their stock price or sell you something
* AI has many potential negative consequences for society and mental health, and may be at least as nasty as social media in that respect
* AI has the potential to accelerate human progress in ways that really matter, such as medical research
* But anyone who claims to know the future is just guessing
> But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.
I've not seen anything from a model to persuade me they're not just stochastic parrots. Maybe I just have higher expectations of stochastic parrots than you do.
I agree with you that AI will have a big impact. We're talking about somewhere between "invention of the internet" and "invention of language" levels of impact, but it's going to take a couple of decades for this to ripple through the economy.
What is your definition of "stochastic parrot"? Mine is something along the lines of "produces probabilistic completions of language/tokens without having any meaningful internal representation of the concepts underlying the language/tokens."
Early LLMs were like that. That's not what they are now. An LLM got Gold on the Mathematical Olympiad - very difficult math problems that it hadn't seen in advance. You don't do that without some kind of working internal model of mathematics. There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean. (If you don't believe me, have a look at the questions.)
Ignoring its negative connotation, it's more likely to be a highly advanced "stochastic parrot".
> "You don't do that without some kind of working internal model of mathematics."
This is speculation at best. Models are black boxes, even to those who make them. We can't discern a "meaningful internal representation" in a model, anymore than a human brain.
> "There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean."
You've just anthropomorphised a stochastic machine, and this behaviour is far more concerning, because it implies we're special, and we're not. We're just highly advanced "stochastic parrots" with a game loop.
> This is speculation at best. Models are black boxes, even to those who make them. We can't discern a "meaningful internal representation" in a model, anymore than a human brain.
They are not pure black boxes. They are too complex to decipher, but it doesn't mean we can't look at activations and get some very high level idea of what is going on.
For world models specifically, the paper that first demonstrated that LLM has some kind of a world model corresponding to the task it is trained on came out in 2023: https://www.neelnanda.io/mechanistic-interpretability/othell.... Now you might argue that this doesn't prove anything about generic LLMs, and that is true. But I would argue that, given this result, and given what LLMs are capable of doing, assuming that they have some kind of world model (even if it's drastically simplified and even outright wrong around the edges) should be the default at this point, and people arguing that they definitely don't have anything like that should present concrete evidence ot that effect.
> We're just highly advanced "stochastic parrots" with a game loop.
If that is your assertion, then what's the point of even talking about "stochastic parrots" at all? By this definition, _everything_ is that, so it ceases to be a meaningful distinction.
Is there anything you can tell me that will help me drop the nagging feeling that gradient descent trained models will just never be good?
I understand all of what you said, but I can't get over that fact that the term AI is being used for these architectures. It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.
Maybe I'm being overly cynical, but a lot of this stinks.
> It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.
If you gave a random sci-fi writer from 1960s access to Claude, I'm fairly sure they wouldn't have any doubts over whether it is AI or not. They might argue about philosophical matters like whether it has a "soul" etc (there's plenty of that in sci-fi), but that is a separate debate.
The thing is AI is already "good" for a lot of things. It all depends on your definition of "good" and what you require of an AI model.
It can do a lot of things that are generally very effective. High reliability semantic parsing from images is just one thing that modern LLM's are very reliable at.
Wouldn’t you say that now, finally, what people call AI combines subsymbolic systems („gradient descent“) with search and with symbolic systems (tool calls)?
I had a professor in AI who was only working on symbolic systems such as SAT-solvers, Prolog etc. and the combination of things seems really promising.
Oh, and what would be really nice is another level of memory or fast learning ability that goes beyond burning in knowledge through training alone.
I had such a professor as well, but those people used to use the more accurate term "machine learning".
There was also wide understanding that those architectures were trying to imitate small bits of what we understood was happening in the brain (see marvin minsky's perceptron etc). The hope was, as I understood it that there would be some breakthrough in neuroscience that would let the computer scientists pick up the torch and simulate what we find in nature.
None of that seems to be happening anymore and we're just interested in training enough to fool people.
"AI" companies investing in brain science would convince me otherwise. At this point they're just trying to come up with the next money printing machine.
You asked earlier if you were being overly cynical, and I think the answer to that is "yes"
We are indeed simulating what we find in nature when we create neural networks and transformers, and AI companies are indeed investing heavily in BCI research. ChatGPT can write an original essay better than most of my students. Its also artificial. Is that not artificial intelligence?
Hiding the training data behind gradient descent and then making attributions to the program that responds using this model is certainly artificial though.
Can't you judge on the results though rather than saying AI isn't intelligent because it uses gradient descent and biology is intelligent because it uses wet neurons?
> They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since.
I feel like I see now more dismissive comments than previously. As if people, initially confused, formed a firm belief since. And now new facts don't really change it, just entrench them in chosen belief.
There's three important beliefs at play in the A(G)I story:
1. When(if) AGI will arrive. It's likely going to be smeared out over a couple months to years, but relative to everything else, it's a historical blip. This really is the most contention belief with the most variability. It is currently predicted to be 8 years[1].
2. What percentage of jobs will be replaceable with AGI? Current estimates between 80-95% of professions. The remaining professions "culturally require" humans. Think live performance, artisanal goods, in-person care.
3. How quickly will AGI supplant human labor? What is the duration of replacement from inception to saturation? Replacement won't happen evenly, some professions are much easier to replace with AGI, some much more difficult. Let's estimate a 20-30 years horizon for the most stubborn to replace professions.
What we have is a ticking time bomb of labor change at least an order of magnitude greater than the transition from an agricultural economy to an industrial economy or from an industrial economy to a service economy.
Those happened over the course of several generations. Society: culture, education, the legal system, the economy, where able to absorb the changes over 100-200 years. Yet we're talking about a change on the same scale happening 10 times faster - within the timeline of one's professional career. And still, with previous revolutions we had incredible unrest, and social change. Taken as a whole, we'll have possibly the majority of the economy operating outside the territory of society, the legal system, and the existing economy. A kid born on the the "day" AGI arrives will become an adult in a profoundly different world as if born on a farm in 1850 and reaching adulthood in a city in 2000.
For the AI believer who has an axiom that AGI is around the corner to take over knowledge work, isn't that just "a small matter of robotics" to either tele-operate a physical avatar or deploy a miniaturized AI in an autonomous chassis?
I'm afraid it's really a matter of faith, in either direction, to predict whether an AI can take over the autonomous decision making and robotic systems can take over physical actions which are currently delegated to human professions. And, I think many robotic control problems are inherently solved if we have sufficient AI advancement.
What are you talking about? This is common knowledge.
Median forecasts indicated a 50% probability of AI systems being capable of automating 90% of current human tasks in 25 years and 99% of current human tasks in 50 years[1]
The scope of work replaceable by embodied AGI and the speed of AGI saturation of vastly under estimated. The bottle necks are production of a replacement workforce, not retraining human laborers.
Work is central to identity. It may seem like it is merely toil. You may even have a meaningless corporate job or be indentured. But work is the primary social mechanism that distributes status amongst communities.
A world of 99 percent of jobs being done by AGI (which there remains no convincing grounds for how this tech would ever be achieved) feels ungrounded in the reality of human experience. Dignity, rank, purpose etc are irreducible properties of a functional society, which work currently enables.
It's far more likely that we'll hit some kind of machine intelligence threshold before we see a massive social pushback. This may even be sooner than we think.
Have you considered that perhaps tying dignity and status to work is a major flaw in our social arrangements, and AI (that would actually be good enough to replace humans) is the ultimate fix?
If AI doing everything means that we'll finally have a truly egalitarian society where everyone is equal in dignity and rank, I'd say the faster we get there, the better.
Pretend I'm a farmer in 1850 and I have a belief that the current proportion of jobs in agriculture - 55% of jobs in 1850 would drop to 1.2% in 2022 due to automation and technological advances.
Why would hearing "work is central to identity," and "work is the primary social mechanism that distributes status amongst communities," change my mind?
My apologies if you thought I was arguing that a consequence of AGI would be a permanent reduction in the labor force. What I believe is that the baumol effect will take over non-replaceable professions. A very tiny part of our current economy will become the majority of our future economy.
But the reports are from shills. The impact of ai is almost non existent. The greatest impact it had was on role-playing. It's hardly even useful for coding.
And that all wouldn't be a problem if it wasn't for the wave of bots that makes the crypto wave seem like child's play.
I don't understand people who say AI isn't useful for coding. Claude Code improved my productivity 10x. I used to put solid 8 hours a day in my remote software engineering job. Now I finish everything in 2 hours and go play with my kids. And my performance is better than before.
I don't understand people who say this. My knee jerk reaction (which I rein in because it's incredibly rude) is always "wow, that person must really suck at programming then". And I try to hold to the conviction that there's another explanation. For me, the vast, vast majority of the time I try to use it, AI slows my work down, it doesn't speed it up. As a result it's incredibly difficult to understand where these supposed 10x improvements are being seen.
Usually the "10x" improvements come from greenfield projects or at least smaller codebases. Productivity improvements on mature complex codebases are much more modest, more like 1.2x.
If you really in good faith want to understand where people are coming from when they talk about huge productivity gains, then I would recommend installing Claude Code (specifically that tool) and asking it to build some kind of small project from scratch. (The one I tried was a small app to poll a public flight API for planes near my house and plot the positions, along with other metadata. I didn't give it the api schema at all. It was still able to make it work.) This will show you, at least, what these tools are capable of -- and not just on toy apps, but also at small startups doing a lot of greenfield work very quickly.
Most of us aren't doing that kind of work, we work on large mature codebases. AI is much less effective there because it doesn't have all the context we have about the codebase and product. Sometimes it's useful, sometimes not. But to start making that tradeoff I do think it's worth first setting aside skepticism and seeing it at its best, and giving yourself that "wow" moment.
I was able to realize huge productivity gains working on a 20 years old codebase with 2+ million loc, as I mentioned in the sister post. So I disagree that big productivity gains are only on greenfield projects. Realizing productivity gains on mature code based requires more skill and upfront setup. You need to put some work in your claude.md and give Claude tools for accessing necessary data, logs, build process. It should be able to test your code autonomously as much as possible. In my experience, people who say they are not able to realize productivity gains don't put enough effort to understand these new tools and setup them properly for their project.
You should write a blog post on this! We need more discussion of how to get traction on mature codebases and less of the youtube influencers making toy greenfield apps. Of course at a high level it's all going to be "give the model the right context" (in Claude.md etc.) but the devil is in the details.
So, I'm doing that right now. You do get wow moments, but then you rapidly hit the WTF are you doing moments.
One of the first three projects I tried was a spin on a to-do app. The buttons didn't even work when clicked.
Yes, I keep it iterating, give it a puppeteer MCP, etc.
I think you're just misunderstanding how hard it is to make a greenfield project when you have a super-charged stack overflow that AI is.
Greenfield projects aren't hard, what's hard is starting them.
What AI has helped me immensely with is blank page syndrome. I get it to spit out some boilerplate for a SINGLE page, then boom, I have a new greenfield project 95% my own code in a couple of days.
That's the mistake I think you 10x ers are making.
And you're all giddy and excited and are putting in a ton of work without realising you're the one doing the work, not the AI.
And you'll eventually burn out on that.
And those of us who are a bit more skeptical are realising we could have done it on our own, faster, we just wouldn't normally have bothered. I'd have gone done some gardening with that time instead.
I'm not a 10x-er. My job is working on a mature codebase. The results of AI in that situation are mixed, 1.2x if you're lucky.
My recommendation was that it's useful to try the tools on greenfield projects, since they you can see them at their best.
The productivity improvements of AI for greenfield projects are real. It's not all bullshit. It is a huge boost if you're at a small startup trying to find product market fit. If you don't believe that and think it would be faster to do it all manually I don't know what to tell you - go talk to some startup founders, maybe?
For me, most of the value comes from Claude Code's ability to 1. research codebase and answer questions about it 2. Perform adhoc testing on the code. Actually writing code is icing on the cake. I work on large code base with more than two million lines of code. Claude Code's ability to find relevant code, understand its purpose, history and interfaces is very time saving. It can answer in minutes questions that would take hours of digging through the code base. Ad hoc testing is another thing. E.g. I can just ask it to test an API endpoint. It will find correct data to use in the database, call the endpoint and verify that it returned correct data and e.g. everything was updated in db correctly.
It depends on what kind of code you're working on and what tools you're using. There's a sliding scale of "well known language + coding patterns" combined with "useful coding tools that make it easy to leverage AI", where AI can predict what you're going to type, and also you can throw problems at the AI and it is capable of solving "bigger" problems.
Personally I've found that it struggles if you're using a language that is off the beaten path. The more content on the public internet that the model could have consumed, the better it will be.
> They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since.
> It's hardly even useful for coding.
I’m curious what kind of projects you’re writing where AI coding agents are barely useful.
It’s the “shills” on YouTube that keep me up to date with the latest developments and best practices to make the most of these tools. To me it makes tools like CC not only useful but indispensable. Now I do not focus on writing the thing, but I focus on building agents who are capable of building the thing with a little guidance.
In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs. And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.
> but which can be trained to the new job opportunities more easily than humans can
What makes you think that? Self driving cars have had untold billions of dollars in reaearch and decades in applied testing, iteration, active monitoring, etc and it still has a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil that was going on. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.
And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data than to go off, I find myself firmly in the skeptics camp, that holds you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention or piloting or maintenence or management.
Unemployment is still near all time lows, this will persist for sometime as we have a structural demographic problem with massive amounts of retirees and less children to support the population "pyramid" (which is looking more like a tapering rectangle these days).
A few months ago I saw one driverless car maybe every three days. Now I see roughly 3-5 every day.
I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.
Even if you think “oh that’s the next generation’s problem” if there is a chance you’re wrong, or if you want to be kind to the next generation: now is the time to start thinking and planning for those problems.
I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn’t meant it’s politically viable.
I guess you live in a place with perfect weather year round? I don’t and I haven’t seen a robo taxi my entire life. I do have access to a Tesla though and it’s current self-driving capabilities are not even close to anything I would call „autonomous“ und real world conditions (including weather).
Maybe the tech will at some point be good enough. At the current rate of improvement this will still take decades at least. Which is sad because I personally hoped that my kids would never have to get a driver’s License.
Can people handle that? people have millions of accidents in perfect weather and driving conditions. I think the reason most people don't like at drivers now is because it's easy to assign blame to the driver. Ai doesn't have that easy out. Suddenly we're faced with the cold truth: reality is difficult and sometimes shit happens and someone gets the short end of the stick.
This is kind of weird. It's like saying "Driving in snow is impossible", well we know it is possible because humans do it.
And this even ignores all the things modern computer controlled vehicles do above and beyond humans as it is. Take most people used to driving modern cars and chunk them an old armstrong steering car and they'll put themselves into a ditch on a rainy day.
Really the last things in self driving cars is fast portable compute and general intelligence. General intelligence will be needed for the million edge cases we need while driving. The particular problem is once we get this general intelligence a lot of problems are going to disappear and bring up a whole new set of problems for people and society at large.
I've ridden just under 1,000 miles in autonmous (no scare quotes) Waymos, so it's strange to see someone letting Tesla's abject failure inform their opinions on how much progress AVs have made.
Tesla that got fired as a customer by Mobileye for abusing their L2 tech is your yardstick?
Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.
I'm not sure the guy who did the Tesla crash test hoax and (partially?) faked his famous glitterbomb pranks is the best source. I would separately verify anything he says at this point.
First I’m hearing of that. In doing a search, I see a lot of speculation but no proof. Knowing the shenanigans perpetrated by Musk and his hardcore fans, I’ll take theories with a grain of salt.
> and (partially?) faked his famous glitterbomb pranks
That one I remember, and the story is that the fake reactions were done by a friend of a friend who borrowed the device. I can’t know for sure, but I do believe someone might do that. Ultimately, Rober took accountability, recognised that hurt his credibility, and edited out that part from the video.
I have no reason to protect Rober, but also have no reason to discredit him until proof to the contrary. I don’t follow YouTube drama but even so I’ve seen enough people unjustly dragged through the mud to not immediately fall for baseless accusations.
One I bumped into recently was someone describing the “fall” of another YouTuber, and in one case showed a clip from an interview and said “and even the interviewer said X about this person”, with footage. Then I watched the full video and at one point the interviewer says (paraphrased) “and please no one take this out of context, if you think I’m saying X, you’re missing the point”.
So, sure, let’s be critical about the information we’re fed, but that cuts both ways.
Humans use only cameras. And humans don't even have true 360 coverage on those cameras.
The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.
That's actually categorically false. We also use sophisticated hearing, a well developed sense of inertia and movement, air pressure, impact, etc. And we can swivel our heads to increase our coverage of vision to near 360°, while using very dependable and simple technology like mirrors to cover the rest. Add to that that our vision is inherently 3D and we sport a quite impressive sensor suite ;-). My guess is that the fidelity and range of the sensors on a Tesla can't hold a candle to the average human driver. No idea how LIDAR changes this picture, but it sure is better than vision only.
I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.
Yes, human vision is so bad it has to rely on a swivel joint and a set of mirrors just to approximate 360 coverage.
Modern cars can have 360 vision at all times, as a default. With multiple overlapping camera FoVs. Which is exactly what humans use to get near field 3D vision. And far field 3D vision?
The depth-discrimination ability of binocular vision falls off with distance squared. At far ranges, humans no longer see enough difference between the two images to get a reliable depth estimate. Notably, cars can space their cameras apart much further, so their far range binocular perception can fare better.
How do humans get that "3D" at far distances then? The answer is, like it usually is when it comes to perception, postprocessing. Human brain estimates depth based on the features it sees. Not unlike an AI that was trained to predict depth maps from a single 2D image.
If you think that perceiving "inertia and movement" is vital, then you'd be surprised to learn that an IMU that beats a human on that can be found in an average smartphone. It's not even worth mentioning - even non-self-driving cars have that for GPS dead reckoning.
I mean, technically what we need is fast general intelligence.
A lot of the problems with driving aren't driving problems. They are other people are stupid problems, and nature is random problems. A good driver has a lot of ability to predict what other drivers are going to do. For example people commonly swerve slightly on the direction they are going to turn, even before putting on a signal. A person swerving in a lane is likely going to continue with dumb actions and do something worse soon. Clouds in the distance may be a sign of rain and that bad road conditions and slower traffic may exist ahead.
Very little of this has to do with the quality of our sensors. Current sensors themselves are probably far beyond what we actually need. It's compute speed (efficiency really) and preemption that give humans an edge, at least when we're paying attention.
A fine argument in principle, but even if we talk only about vision, the human visual system is much more powerful than a camera.
Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.
In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, pupil constriction. But that's the point - human vision does not use cameras.
Indeed. And the comparison is unnecessarily unfair.
You're comparing the dynamic range of a single exposure on a camera vs. the adaptive dynamic range in multiple environments for human eyes. Cameras do have comparable features: adjustable exposure times and apertures.
Additionally cameras can also sense IR, which might be useful for driving in the dark.
Yes, and? Human eyes also have limited instantaneous dynamic range much smaller than their total dynamic range. Part of the mechanism is the same (pupil vs. camera iris). They can't see starlight during the day and tunnels need adaption lighting to ease them in/out.
Exposure adjustment is constrained by frame rate, that doesn't buy you very much dynamic range.
A system that replicates the human eye's rapid aperture adjustment and integration of images taken at quickly changing aperture/ filter settings is very much not what Tesla is putting in their cars.
But again, the argument is fine in principle. It's just that you can't buy a camera that performs like the human visual system today.
Human eyes are unlikely the only thing in parameter-space that's sufficient for driving. Cameras can do IR, 360° coverage, higher frame rates, wider stereo separation... but of course nothing says Teslas sit at a good point in that space.
Ah yeah, that's making even more assumptions. Not only does it assume the cameras are powerful enough but that there already is enough compute. There's a sensing-power/compute/latency tradeoff. That is you can get away with poorer sensors if you have more compute that can filter/reconstruct useful information from crappy inputs.
Self-driving cars beat humans on safety already. This holds for Waymos and Teslas both.
They get into less accidents, mile for mile and road type for road type, and the ones they get into trend towards less severe. Why?
Because self-driving cars don't drink and drive.
This is the critical safety edge a machine holds over a human. A top tier human driver in the top shape outperforms this generation of car AIs. But a car AI outperforms the bottom of the barrel human driver - the driver who might be tired, distracted and under influence.
I trust Tesla's data on this kind of stuff only as far as a Starship can travel on its return trip to Mars. Anything coming from Elon would have to be audited by an independent entity for me to give it an ounce of credence.
Generally you are comparing Apples and Oranges if you are comparing the safety records of i.e. Waymos to that of the general driving population.
Waymos drive under incredibly favorable circumstances. They also will simply stop or fall back on human intervention if they don't know what to do – failing in their fundamental purpose of driving from point A to point B. To actually get comparable data, you'd have to let Waymos or Teslas do the same type of drives that human drivers do, under the same curcumstances and without the option of simply stopping when they are unsure, which they simply are not capable of doing at the moment.
That doesn't mean that this type of technology is useless. Modern self-driving and adjacent tech can make human drivers much safer. I imagine, it would be quite easy to build some AI tech that has a decent success rate in recognizing inebriated drivers and stopping the cars until they have talked to a human to get cleared for driving. I personally love intelligent lane and distance assistance technology (if done well, which Tesla doesn't in my view). Cameras and other assistive technology are incredibly useful when parking even small cars and I'd enjoy letting a computer do every parking maneuver autonomously until the end of my days. The list could go on.
Waymos have cumulatively driven about 100 million miles without a safety driver as of July 2025 (https://fifthlevelconsulting.com/waymos-100-million-autonomo...) over a span of about 5 years. This is such a tiny fraction of miles driven by US (not to speak of worldwide) drivers during that time, that it can't usefully be expressed. And they've driven these miles under some of the most favorable conditions available to current self-driving technology (completely mapped areas, reliable and stable good weather, mostly slow, inner city driving, etc.). And Waymo themselves have repeatedly said that overcoming the limitations of their tech will be incredibly hard and not guaranteed.
This video proves nothing other than "a YouTuber found a funny viral video idea".
Teslas "interpret the environment in 3D space" too - by feeding all the sensor data into a massive ML sensor fusion pipeline, and then fusing that data across time too.
This is where the visualizers, both the default user screen one and the "Terminator" debugging visualizer, get their data from. They show plain and clear that the car operates in a 3D environment.
You could train those cars to recognize and avoid Wile E. Coyote traps too, but do you really want to? The expected amount of walls set in the middle of the road with tunnels painted onto them is very close to zero.
Once computers and AIs can approach even a small fraction of the our capacity then sure, only cameras is fine, it's a shame that our suite of camera data processing equipment is so far beyond our understanding that we don't even have models of how it might work at its core.
Even at that point, why would you possibly use only cameras though, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.
Many cameras nowadays match or exceed the eye in dynamic range. Specially if you consider that cameras can vary their exposure from frame to frame, similar to the eye, but much faster.
What's more is, the power of depth perception in binocular vision is a function of distance between two cameras. The larger that distance is, the further out depth can be estimated.
Human skull only has two eyesockets, and it can only get this wide. But cars can carry a lot of cameras, and maintain a large fixed distance between them.
Even though it's false, let's imagine that's true.
Our cameras (also called eyes) have way better dynamic range, focus speed, resolution and movement detection capabilities, Backed by a reduced bandwidth peripheral vision which is also capable of detecting movement.
No camera, incl. professional/medium format still cameras are that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera which can see both at the same time, but that's it.
Dynamic range, focus speed, resolution, FoV and motion detection still lacks.
...and that's when we imagine that we only use our eyes.
That’s the mistake Elon Musk made and the same one you’re making here.
Not to mention that humans driving with cameras only is absolutely pathetic. The amount of accidents that occur that are completely avoidable doesn’t exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple cameras.
This isn't a "mistake". This is the key problem of getting self-driving to work.
Elon Musk is right. You can't cram 20 radars, 50 LIDARs and 100 cameras into a car and declare self-driving solved. No amount of sensors can redeem a piss poor driving AI.
Conversely, if you can build an AI that's good enough, then you don't need a lot of sensors. All the data a car needs to drive safely is already there - right in the camera data stream.
if additional sensors improve the ai, then your last statement is categorically untrue. The reason it worked better is that those additional sensors gave it information that wac not available in the video stream
So far, every self-driving accident where the self-driving car was found to be at fault follows the same pattern: the car had all the sensory data it needed to make the right call, and it didn't make the right call. The bottleneck isn't in sensors.
In that case we're probably even further from self-driving cars than I'd have guessed. Adding more sensors is a lot cheaper than putting a sufficient amount of compute in a car.
Multiple things can be true at the same time you realize. Some problems, such as insufficient AI can have a larger effect on safety, but more data to work with as well as train on always wins. You want lidar.
You keep insisting that cameras are good enough, but it’s empirically possible since safe autonomous driving AI has not been achieved yet to say that cameras alone collect enough data.
The minimum setup without lidar would be cameras, radar, ultrasonic, GPS/GNSS + IMU.
Redundancy is key. With lidar, multiple sensors cover each other’s weaknesses. If LiDAR is blinded by fog, radar steps in.
Crazy that billions of humans drive around every day with two cameras. And they have various defects too (blind spots, foveated vision, myopia, astigmatism, glass reflection, tiredness, distraction).
The nice thing about LiDAR is that you can use it to train a model to simulate a LiDAR based on camera inputs only. And of course to verify how good that model is.
I can't wait until V2X and sensor fusion comes to autonomous vehicles, greatly improving the detailed 3D mapping of LiDAR, the object classification capabilities of cameras, and the all-weather reliability of radar and radio pings.
The goalpost will be when you can buy one and drive it anywhere. How many cities are Waymo in now? I think what they are doing is terrific, but each car must cost a fortune.
I’m a bit confused. If we’re talking about consumer cars, the end goal is not to rent a car that can drive itself, the end goal is to own a car that can drive itself, and so it doesn’t matter if the car is available for purchase but costs $250,000 because few consumers can afford that, even wealthy ones.
a) I'm not talking about consumer cars, you are. I said very plainly this level of capability won't reach consumers soon and I stand by that. Some Chinese companies are trying to make it happen in the US but there's too many barriers.
b) If there was a $250,000 car that could drive itself around given major cities, even with the geofence, it would sell out as many units as could be produced. That's actually why I tell people to be weary of BOM costs: it doesn't reflect market forces like supply and demand.
You're also underestimating both how wealthy people and corporations are, and the relative value being provided.
A private driver in a major city can easily clear $100k a year on retainer, and there are people are paying it.
If you look at the original comment that you replied to, the goalpost was explained clearly:
> The goalpost will be when you can buy one and drive it anywhere.
So let’s just ignore the non-consumer parts entirely to avoid shifting the goalpost. I still stand by the fact that the average (or median) consumer will not be able to afford such an expensive car, and I don’t think it’s controversial to state this given the readily available income data in the US and various other countries. The point isn’t that it exists, Rolls Royce and Maseratis exist, but they are niche and so if self-driving cars will be so expensive to be niche they won’t actually make a real impact on real people, thus the goalpost of general availability to a consumer.
Why the scare quotes on wait? There is literally nothing for you to do but wait.
At the end of the day it's not like no one lives in SF, Phoenix, Austin, LA, and Atlanta either. There's millions of people with access to the vehicles and they're doing millions of rides... so acting like it's some great failing of AVs that the current cities are ones with great weather is frankly, a bit stupid.
It takes 5 seconds to look up the progress that's been made even in the last few years.
I worked at Zoox, which has similar teleoperations to Waymo: remote operators can't joystick the vehicles.
So if we're saying how many times would it have crashed without a human: 0.
They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.
Not sure how exactly politicians will jump from “minimal wages don’t have to be livable wages” and “people who are able to work should absolutely not have access to free healthcare” and “any tax-supported benefits are actually undeserved entitlements and should be eliminated” to “everyone deserves a universal basic income”.
I wouldn't underestimate what can happen if 1/3 of your workforce is displaced and put aside with nothing to do.
People are usually obedient because they have something in life and they are very busy with work. So they don't have time or headspace to really care about politics. When suddenly big numbers of people start to more care about politics it leads to organizing and all kinds of political changes.
What i mean is that it wouldn't be current political class pushing things like UBI. At same time it seems that some of current elites are preparing for this and want to get rid of elections altogether to keep the status quo.
I wouldn't underestimate how easily AI will suppress this through a combination of ultrasurveillance, psychological and emotional modelling, and personally targeted persuasion delivered by chatbot etc.
If all else fails you can simply bomb city blocks into submission. Or arrange targeted drone decapitations of troublemakers. (Possibly literally.)
The automation and personalisation of social and political control - and violence - is the biggest difference this time around. The US has already seen a revolution in the effectiveness of mass state propaganda, and AI has the potential to take that up another level.
What's more likely to happen is survivors will move off-grid altogether - away from the big cities, off the Internet, almost certainly disconnected and unable to organise unless communication starts happening on electronic backchannels.
Speculating here, but I don't believe that the government would have the time or organization to do this. Widespread political unrest caused by job losses would be the first step. Almost as soon as there is some type of AI that can replace mass amounts of workers, people will be out on the streets - most people don't have 1-2 months of living expenses saved up. At that point, the government would realize that SHTF - but it's too late, people would be protesting / rioting in droves - doesn't matter how many drones you can produce, or whether or not you can psychologically manipulate people when all they want is... food.
I could be entirely wrong, but it feels like if AI were to get THAT good, the government would be affected just as much as the working class. We'd more likely see total societal collapse rather than the government maintaining power and manipulating / suppressing the people.
That is a lot assumption right there. Starving masses can't logically and physically fight with AI or government for long. They become weak after weeks or months? At that point government would be smaller and controlled probably be part of AI owners.
IF they dont have 1-2 months of living expenses saved, they die. They can'be a big threat even in millions??? they dont have organization capacity or anything that matches
I am not sure AI will be that much more different or effective than what has been done by rich elites forever. There are already gigantic agencies and research centres focusing on opinion manipulation. And these are working - just look at how poor masses are voting for policies that are clearly against them (lowering taxes for the rich etc).
But all these voters still have their place in the world and don't have free time to do anything. I don't think people are so powerless once you really displace big potion of them.
For example look at people here - everywhere you can read how it's harder to find programming job. Companies are roleplaying the narrative that they don't need programmers anymore. Do you think this army of jobless programmers will become mind controlled by tech they themselves created? Or they will use their free time to do something about their situation?
Displacing/canceling/deleting/killing individuals in society works because most people wave their and thinking this couldn't happen to them. One you start getting into bigger potions of people the dynamic is different.
This is why Palantir and others exist to stop masses.It´s been only tested but it will only grow from there and stop millions of people. SV you built this
Same way it happened last time we had a bunch of major advancements in labor rights. Things get shitty everywhere, but at an uneven pace, which combined with random factors causes a spark to set off massive unrest in some countries. Torches and pitchforks are out, many elite heads roll, and the end result is likely to be even worse, but elites in other countries look at all this from the outside and go, "hmm, maybe we shouldn't get people so desperate that they will do that to us".
Not sure how exactly politicians will jump from ...
Well, if one believes that the day will come when their choices will be "make that jump" or "the guillotine", then it doesn't seem completely outlandish.
The money transferred from tax payers to people without money is in effect a price for not breaking the law.
If AI makes it much easier to produce goods, it reduces price of money, making it easier to pay some money to everyone in exchange for not breaking the law.
UBI is not a good solution because you still have to provision everything on the market, so it's a subsidy to private companies that sell the necessities of life on the market. If we're dreaming up solutions to problems, much better would be to remove the essentials from the market and provide them to everyone universally. Non-market housing, healthcare, education all provided to every citizen by virtue of being a human.
Your solution would ultimately lead to treating all those items as uniform goods, but they are not. There are preferences different people have. This is why the price system is so useful. It indicates what is desired by various people and gives strong signals as to what to make or not. If you have a central authority making the decisions, they will not get it right. Individual companies may not get it right, but the corrective mechanism of failure (profit loss, bankruptcy) corrects that while when governments provide this, it is extremely difficult to correct it as it is one monolithic block. In the market, you can choose various different companies for different needs. In the government in a democracy, you have to choose all of one politician or all of another. And as power is concentrated, the worst people go after it. It is true with companies, but people can choose differently. With the state, there is no alternative. That is what makes it the state rather than a corporation.
It is also interesting that you did not mention food, clothing and super-computers-in-pockets. While government is involved in everything, they are less involved in those markets than with housing, healthcare, and education, particularly in mandates as to what to do. Government has created the problem of scarcity in housing, healthcare, and education. Do you really think the current leadership of the US should control everyone's housing, healthcare, and education? The idea of a UBI is that it strips the politicians of that fine-grained control. There is still control that can be leveraged, but it comes down to a single item of focus. It could very well be disastrous, but it need not be whereas the more complex system that you give politicians control over, the more likely it will be disastrous.
You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
The costs of what you propose are enormous. No legislation can change that fact.
There ain’t no such thing as a free lunch.
Who’s going to pay for it? Someone who is not paying for it today.
How do you intend to get them to consent to that?
Or do you think that the needs of the many should outweigh the consent of millions of people?
The state, the only organization large enough to even consider undertaking such a project, has spending priorities that do not include these things. In the US, for example, we spend the entire net worth of Elon Musk (the “richest man in the world”, though he rightfully points out that Putin owns far more than he does) about every six months on the military alone. Add in Zuckerberg and you can get another 5 months or so. Then there’s the next year to think about. Maybe you can do Buffet and Gates; what about year three?
That’s just for the US military, at present day spending levels.
What you’re describing is at least an order of magnitude more expensive than that, just in one country that only has 4% of people. To extend it to all human beings, you’re talking about two more orders of magnitude.
There aren’t enough billionaires on the entire planet even to pay for one country’s military expenses out of pocket (even if you completely liquidated them), and this proposed plan is 500-1000x more spending than that. You’re talking about 3-5 trillion dollars per year just for the USA - if you extrapolate out linearly, that’d be 60-200 trillion per year for the Earth.
Even if you could reduce cost of provision by 90% due to economies of scale ($100/person/month for housing, healthcare, and education combined, rather than $1000 - a big stretch), it is still far, far too big to do under any currently envisioned system of wealth redistribution. Society is big and wealthy private citizens (ie billionaires) aren’t that numerous or rich.
There is a reason we all pay for our own food and housing.
> You’re talking about 3-5 trillion dollars per year just for the USA
I just want to point out that's about a fifth of our GDP and we spend about this much for healthcare in the US. We badly need a way to reduce this to at least half.
> There is a reason we all pay for our own food and housing.
The main reason I support UBI is I don't want need based or need aware distribution. I want everyone to get benefits equally regardless of income or wealth. That's my entire motivation to support UBI. If you can come up with another something that guarantees no need based or need aware and does not have a benefit cliff, I support that too. I am not married to UBI.
Just want to point out that any abstract intrinsic value about the economy like GDP is a socialized illusion
Reduce costs by eliminating fiat ledgers that only have value if we believe and realize the real economy is physical statistics and ship resources where the people demand
But of course that simple solution violates the embedded training of Americans. So it's a non-starter and we'll continue to desperately seek some useless reformation of an antiquated social system.
Honestly, what type of housing do you envision under a UBI system? Houses? Modern apartment buildings? College dormitory-like buildings? Soviet-style complexes? Prison-style accommodations? B stands for basic, how basic?
I think a UBI system is only stable in conjunction with sufficient automation that work itself becomes redundant. Before that point, I don't think UBI can genuinely be sustained; and IMO even very close to that point the best I expect we will see, if we're lucky, is the state pension age going down. (That it's going up in many places suggests that many governments do not expect this level of automation any time soon).
Therefore, in all seriousness, I would anticipate a real UBI system to provide whatever housing you want, up to and including things that are currently unaffordable even to billionaires, e.g. 1:1 scale replicas of any of the ships called Enterprise including both aircraft carriers and also the fictional spaceships.
That said, I am a proponent of direct state involvement in the housing market, e.g. the UK council housing system as it used to be (but not as it now is, there're not building enough):
The bigger issue to me is that not all geography is anything close to equal.
I would much rather live on a beach front property than where I live right now. I don't because the cost trade off is too high.
To bring the real estate market into equilibrium with UBI you would have to turn rural Nebraska into a giant slab city like ghetto. Or every mid sized city would have a slab city ghetto an hour outside the city. It would be ultra cheap to live there but it would be a place everyone is trying to save up to move out of. It would create a completely new under class of people.
> I would much rather live on a beach front property than where I live right now. I don't because the cost trade off is too high.
Yes, and?
My reference example was two aircraft carriers and 1:1 models of some fictional spacecraft larger than some islands, as personal private residences.
> To bring the real estate market into equilibrium with UBI you would have to turn rural Nebraska into a giant slab city like ghetto. Or every mid sized city would have a slab city ghetto an hour outside the city. It would be ultra cheap to live there but it would be a place everyone is trying to save up to move out of. It would create a completely new under class of people.
Incorrect.
Currently, about 83e6 hectares of this planet is currently a "built up area".
4827e6 ha, about 179 times the currently "built up" area, is cropland and grazing land. Such land can produce much more food than it already does, the limiting factor is the cost of labour to build e.g. irrigation and greenhouses (indeed, this would also allow production in what are currently salt flats and deserts, and enable aquaculture for a broad range of staples); as I am suggesting unbounded robot labour is already a requirement for UBI, this unlocks a great deal of land that is not currently available.
The only scenario in which I believe UBI works is one where robotic labour gives us our wealth. This scenario is one in which literally everyone can get their own personal 136.4 meters side length approximately square patch. That's not per family, that's per person. Put whatever you want on it — an orchard, a decorative garden, a hobbit hole, a castle, and five Olympic-sized swimming pools if you like, because you could fit all of them together at the same time on a patch that big.
The ratio (and consequently land per person), would be even bigger if I didn't disregard currently unusable land (such as mountains, deserts, glaciers, although of these three only glaciers would still be unusable in the scenario), and also if I didn't disregard land which is currently simply unused but still quite habitable e.g. forests (4000e6 ha) and scrub (1400e6 ha).
In the absence of future tech, we get what we saw in the UK with "council housing", but even this is still not as you say. While it gets us cheap mediocre tower blocks, it also gets us semi-detached houses with their own gardens, and even the most mediocre of the widely disliked Brutalist architecture era of the UK this policy didn't create a new underclass, it provided homes for the existing underclass. Finally, even at the low end they largely (but not universally) were an improvement on what came before them, and this era came to an end with a government policy to sell those exact same homes cheaply to their existing occupants.
> Some people’s idea of wealth is to live in high density with others.
Very true. But I'd say this is more of a politics problem than a physics one: any given person doesn't necessarily want to be around the people that want to be around them.
> If every place has the population density of Wyoming, real wealth will be the ability to live in real cities. That’s much like what we have now.
Cities* are where the jobs are, where the big money currently gets made, I'm not sure how much of what we have today with high density living is to show your wealth or to get your wealth — consider the density and average wealth of https://en.wikipedia.org/wiki/Atherton,_California, a place I'd never want to live in for a variety of reasons, which is (1) legally a city, (2) low density, (3) high income, (4) based on what I can see from the maps, a dorm town with no industrial or commercial capacity, the only things I can see which aren't homes (or infrastructure) are municipal and schools.
* in the "dense urban areas" sense, not the USA "incorporated settlements" sense, not the UK's "letters patent" sense
Real wealth is the ability to be special, to stand out from the crowd in a good way.
In a world of fully automated luxury for all, I do not know what this will look like.
Peacock tails of some kind to show off how much we can afford to waste? The rich already do so with watches that cost more than my first apartment, perhaps they'll start doing so with performative disfiguring infections to show off their ability to afford healthcare.
I appreciate your perspective but clearly most UBI advocates are talking about something much sooner. However my response to your vision is that even if "work" is totally automated or redundant, the resources (building materials) and the energy to power the robots or whatever, will be more expensive and tightly controlled than ever. Power and wealth simply wont allow everything to be accessible to everyone. The idea that people would be able to build enormous mansions (or personal aircraft carriers or spaceships) just sounds rather absurd, no offense, but come on.
I think we are talking about two different things. The UBI I'm talking about won't allow you to have an enormous mansion, maybe just enough to avoid starving. The main plus point is it doesn't do means testing. The second plus point is if you really hate your job, you can quit without starving. This means we can avoid coworkers who really would like to not be there.
I think it is a solid idea. I don't know how it fits in the broader scheme of things though. If everyone in the US gets a UBI of the same amount, will people move somewhere rent is low?
From wikipedia:
> a social welfare proposal in which all citizens of a given population regularly receive a minimum income in the form of an unconditional transfer payment, i.e., without a means test or need to perform work.
It doesn't say you aren't allowed to work for more money. My understanding is you can still work as much as you want. You don't have to to get this payment. And you won't be penalized for making too much money.
> I think we are talking about two different things. The UBI I'm talking about won't allow you to have an enormous mansion, maybe just enough to avoid starving.
We are indeed talking about different things with UBI here, but I'm asserting that the usual model of it can't be sustained without robots doing the economic production.
If the goal specifically is simply "nobody starves", the governments can absolutely organise food rations like this, food stamps exist.
> If everyone in the US gets a UBI of the same amount, will people move somewhere rent is low?
More likely, the rent goes up by whatever the UBI is. And I'm saying this as a landlord, I don't think it would be a good idea to create yet another system that just transfers wealth to people like me who happen to be property owners, it's already really lucrative even without that.
The response you're responding to here was to "ben_w", he discussed better-than-a-billionaire housing. My original reply to your earlier comment is above, basically just asking what type of housing you anticipate under a UBI system.
To me, "just enough to avoid starving" is a prison-like model, just without locked doors. But multiple residents of a very basic "cell", a communal food hall, maybe a small library and modest outdoors area. But most of the time when people talk about UBI, they describe the recipients living in much nicer housing than that.
> the resources (building materials) and the energy to power the robots or whatever, will be more expensive and tightly controlled than ever.
I am also concerned about this possibility, but come at it from a more near-term problem.
I think there is a massive danger area with energy prices specifically, in the immediate run-up to AI being able to economically replace human labour.
Consider a hypothetical AI which, on performance metrics, is good enough, but is also too expensive to actually use — running it exceeds the cost of any human. The corollary is that whatever that threshold is, under the assumption of rational economics, no human can ever earn more than whatever it costs to run that AI. As time goes on, if the hardware of software improves, the threshold comes down.
Consider what the world looks like if the energy required to run a human-level AI at human-level speed costs the same as the $200/month that OpenAI charges for access to ChatGPT Pro (we don't need to consider what energy costs per kWh for this, prices may change radically as we reach this point).
Conditional on this AI actually being good enough at everything (really good enough, not just "we've run out of easily tested metrics to optimise"), then this becomes the maximum that a human can earn.
If a human is earning this much per month, can they themselves afford energy to keep their lights on, their phone charged, their refrigerator running?
Domestic PV systems (or even wind/hydro if you're lucky enough to be somewhere where that's possible) will help defend against this; personal gasoline/diesel won't, the fuel will be subject to the same price issues.
> Power and wealth simply wont allow everything to be accessible to everyone. The idea that people would be able to build enormous mansions (or personal aircraft carriers or spaceships) just sounds rather absurd, no offense, but come on.
While I get your point, I think a lot of the people in charge can't really imagine this kind of transformation. Even when they themselves are trying to sell the idea. Consider what Musk and Zuckerberg say about Mars and superintelligence respectively — either they don't actually believe the words leaving their mouths (and Musk has certainly been accused of this with Mars), or they have negligible imagination as to the consequences of the world they're trying to create (which IMO definitely describes Musk).
At the same time, "absurd"?
I grew up with a C64 where video games were still quite often text adventures, not real-time nearly-photographic 3D.
We had 6 digit phone numbers, calling the next town along needed an area code and cost more; the idea we'd have video calls that only cost about 1USD per minute was sci-fi when I was young, while the actual reality today is that video calls being free to anyone on the planet isn't even a differentiating factor between providers.
I just about remember dot-matrix printers, now I've got a 3D printer that's faster than going to the shops when I want one specific item.
Universal translation was a contrivance to make watching SciFi easier, not something in your pocket that works slightly better for images than audio, and even then because speech recognition in natural environments turned out to be harder than OCR in natural environments.
I'm not saying any of this will be easy, I don't know when it will be good enough to be economical — people have known how to make flying cars since 1936*, but they've been persistently too expensive to bother. AGI being theoretically possible doesn't mean we ourselves are both smart enough and long-lived enough as an advanced industrialised species to actually create it.
> You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
Utter nonsense.
Do you believe the European countries that provides higher education for free are manning tenure positions with slaves or robbing people at gunpoint?
How come do you see public transportation services in some major urban centers being provided free of charge?
How do you explain social housing programmes conducted throughout the world?
Are countries with access to free health care using slavery to keep hospitals and clinics running?
What you are trying to frame as impossibilities is already the reality for many decades in countries ranking far higher in development and quality of living indexes that the US.
You're missing the point, language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery. It used to be called corvée. But the words being used have a connotation of something much more brutal and unrewarding. This isn't a political statement, I'm not a libertarian who believes all taxation is evil robbery and needs to be abolished. I'm just pointing out by the definition of slavery aka forced labor, and robbery aka confiscation of wealth, the state employs both of those tactics to fund the programs you described.
> Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.
Without the state, you wouldn't have wealth. Heck there wouldn't even be the very concept of property, only what you could personally protect by force! Not to mention other more prosaic aspects: if you own a company, the state maintains the roads that your products ship through, the schools that educate your workers, the cities and towns that house your customers... In other words the tax is not "money that is yours and that the evil state steals from you", but simply "fair money for services rendered".
To a large extent, yes. That's why the arrangement is so precarious, it is necessary in many regards, but a totalitarian regime or dictatorship can use this arrangement in a nefarious manner and tip the scale toward public resentment. Balancing things to avoid the revolutionary mob is crucial. Trading your labor for protection is sensible, but if the exchange becomes exorbitant, then it becomes a source of revolt.
> You're missing the point, language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.
You're letting your irrational biases show.
To start off, social security contributions are not a tax.
But putting that detail aside, do you believe that paying a private health insurance also represents slavery and robbery? Are you a slave to a private pension fund?
Are you one of those guys who believes unions exploit workers whereas corporations are just innocent bystanders that have a neutral or even positive impact on workers lives and well being?
No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state. If you don't pay your taxes, you will go to jail. It is both robbery and slavery, and in the ideal situation, it is a benevolent sort of exchange, despite existing in the realm of slavery/robbery. In a totalitarian system, it become malevolent very quickly. It also can be seen as not benevolent when the exchange becomes onerous and not beneficial. Arguing this is arguing emotionally and not rationally using language with words that have definitions.
social security contributions are a mandatory payment to the state taken from your wages, they are a tax, it's a compulsory reduction in your income. Private health insurance is obviously not mandatory or compulsory, that is different, clearly. Your last statement is just irrelevant because you assume I'm a libertarian for pointing out the reality of the exchange taking place in the socialist system.
I'd be very interested in hearing which definition of "socialism" aligns with those obviously libertarian views?
> If you don't pay your taxes, you will go to jail. It is both robbery and slavery [...] Arguing this is arguing emotionally and not rationally using language with words that have definitions.
Indulging in the benefits of living in a society, knowingly breaking its laws, being appalled by entirely predictable consequences of those action, and finally resorting to incorrect usage of emotional language like "slavery" and "robbery" to deflect personal responsibility is childish.
Taxation is payment in exchange for services provided by the state and your opinion (or ignorance) of those services doesn't make it "robbery" nor "slavery". Your continued participation in society is entirely voluntary and you're free to move to a more ideologically suitable destination at any time.
What do you mean? Is this one of those sovereign citizen type of arguments?
The government provides a range of services that are deemed to be broadly beneficial to society. Your refusal of that service doesn't change the fact that the service is being provided.
If you don't like the services you can get involved in politics or you can leave, both are valid options, while claiming that you're being enslaved and robbed is not.
Not at all. If it happens to you even when you don’t want it and don’t want to pay for it (and are forced to pay for it on threat of violence), that is no service.
Literally nobody alive today was “involved in politics” when the US income tax amendment was legislated.
Also, you can’t leave; doubly so if you are wealthy enough. Do you not know about the exit tax?
Good idea, lets make taxes optional or non enforceable. What comes next. Oh right, nobody pays. The 'government' you have collapses and then strong men become warlords and set up fiefdoms that fight each other. Eventually some authoritarian gathers up enough power to unite everyone by force and you have your totalitarian system you didn't want, after a bunch of violence you didn't want.
We assume you're libertarian because you are spouting libertarian ideas that just don't work in reality.
> No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state.
I do not know what you mean by "progressive", but you are spewing neoliberal/libertarian talking points. If anything, this tells how much Kool aid you drank.
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
Which class mobility is this that you speak of? The one that forces the average US citizens to be a paycheck away from homelessness? Or is it the one where you are a medical emergency away from filing bankruptcy?
Have you stopped to wonder how some European countries report higher median household incomes than the US?
But by any means continue to believe your average US citizen is a temporarily embarrassed billionaire, just waiting for the right opportunity to benefit from your social mobility.
In the meantime, also keep in mind that mobility also reflects how easy it is to move down a few pegs. Let that sink in.
> the economic situation in Europe is much more dire than the US...
Is it, though? The US reports by far the highest levels of lifetime literal homelessness, which is three times greater than in countries like Germany. Homeless people on Europe aren't denied access to free healthcare, primary or even tertiary.
Why do you think the US, in spite of it's GDP, features so low in rankings such as human development index or quality of life?
Yet people live better. Goes to show you shouldn't optimise for crude, raw GDP as an end in itself, only as a means for your true end: health, quality of life, freedom, etc.
In many of the metrics, yeah. But Americans can afford larger houses and more stuff essentially, which isn't necessarily a good replacement for general quality of life things.
> In many of the metrics, yeah. But Americans can afford larger houses and more stuff essentially, which isn't necessarily a good replacement for general quality of life things.
I think this is the sort of red herring that prevents the average US citizen from realizing how screwed over they are. Again, the median household income in the US is lower than in some European countries. On top of this, the US provides virtually no social safety net or even socialized services to it's population.
The fact that the average US citizen is a paycheck away from homelessness and the US ranks so low in human development index should be a wake-up call.
Carbon tax on a state level to try to fight a global problem makes 0 sense actually.
You just shift the emissions from your location to the location that you buy products from.
Basically what happened in Germany: more expensive "clean" energy means their own production went down and the world bought more from China instead. The net result is probably higher global emissions overall.
This is why an economics based strictly on scarcity cannot get us where we need to go. Markets, not knowing what it's like to be thirsty, will interpret a willingness to poison the well as entrepreneurial spirit to be encouraged.
We need a system where being known as somebody who causes more problems than they solve puts you (and the people you've done business with) at an economic disadvantage.
The major shift for me is now its normal to take Waymos. Yeah, there aren't as fast as Uber if you have to get across town, but for trips less than 10 miles they're my go to now.
On the other hand, the Tesla “robotaxi” scares the crap out of me. No lidar and seems to drive more aggressively. The Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel is equal parts hilarious and nightmare fuel when you realize that’s what’s next to your kid biking down the street.
> Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel
I understand the argument for augmenting your self-driving systems with LIDAR. What I don't really understand is what videos like this tell us. The comparison case for a "road-runner style fake tunnel" isn't LIDAR, it's humans, right? And while I'm sure there are cases where a human driver would spot the fake tunnel and stop in time, that is not at all a reasonable assumption. The question isn't "can a Tesla save your life when someone booby traps a road?", it's "is a Tesla any worse than you at spotting booby trapped roads?", and moreover, "how does a Tesla perform on the 99.999999% of roads that aren't booby trapped?"
Tesla‘s insistence on not using Lidar while other companies deem it necessary for save auto-pilot creates the need for Tesla to demonstrate that their approach is equally as save for both drivers and ie pedestrians. They haven’t done that, arguably the data shows the contrary. This generates the impression that Tesla skimps on security and if they skimp in one area, they’ll likely skimp in others. Stuff like the Rober video strengthens these impressions. It’s a public perception issue and Tesla has done nothing (and maybe isn’t able to do anything) to dispel this notion.
> What I don't really understand is what videos like this tell us.
A lot of people here might intuitively understand “does not have lidar” means “can be deceived with a visual illusion.” The value of a video like that is to paint a picture for people who don’t intuitively understand it. And for everyone, there’s an emotional reaction seeing it plow through a giant wall that resonates in ways an intellectual understanding might not.
Great communication speaks to both our “fast” and “slow” brains. His video did a great job IMHO.
> Is a Tesla any worse than you at spotting booby trapped roads
That would've been been the case if all laws, opinions and purchasing decisions were made by everyone acting rationally. Even if self driving cars are safer than human drivers, it just takes a few crashes to damage their reputation. It has to be much, much safer than humans for mass adoption. Ideally also safer than the competition, if you're comparing specific companies.
Waymo has a control center, but it's customer service, not remote driving. They can look at the sensor data, give hints to the car ("back out, turn around, try another route") and talk to the customer, but can't take direct control and drive remotely.
Baidu's system in China really does have remote drivers.[1]
Tesla also appears to have remote drivers, in addition to someone in each car with an emergency stop button.[2]
> I think the most sensible answer would be something like UBI.
What corporation will accept to pay dollars for members of society that are essentially "unproductive"? What will happen with the value of UBI in time, in this context, when the strongest lobby will be of the companies that have the means of producing AI? And, more essentially, how are humans able to negotiate for themselves when they lose their abilities to build things?
I'm not opposing the technology progress, I'm merely trying to unfold the reality of UBI being a thing, knowing human nature and the impetus for profit.
Every time someone casually throws out UBI my mind goes to the question "who is paying taxes when some people are on UBI ?"
Is there like a transition period where some people don't have to pay taxes and yet don't get UBI, and if so, why hasn't that come yet ? Why aren't the minimum tax thresholds going up if UBI could be right around the corner ?
The taxes will be most burdensome for the wealthiest and most productive of institutions, which is generally why these arrangements collapse economies and nations. UBI is hard to implement because it incentivizes non-productive behavior and disincentivizes productive activity. This creates economic crisis, taxes are basically a smaller scale version of this, UBI is like a more comprehensive wealth redistribution scheme. The creation of a syndicate (in this case, the state) to steal from the productive to give to the non-productive is a return to how humanity functioned before the creation of state-like structures when marauders and bandits used violence to steal from those who created anything. Eventually, the state arose to create arrangements and contracts to prevent theft, but later become the thief itself, leading to economic collapse and the cyclical revolutionary cycle.
So, AI may certainly bring about UBI, but the corporations that are being milked by the state to provide wealth to the non-productive will begin to foment revolution along with those who find this arrangement unfair, and the productive activity of those especially productive individuals will be directed toward revolution instead of economic productivity. Companies have made nations many times before, and I'm sure it'll happen again.
The problem is the "productive activity" is rather hard to define if there's so much "AI" (be it classical ML, LLM, ANI, AGI, ASI, whatever) around that nearly everything can be produced by nearly no one.
The destruction of the labour theory of value has been a goal of "tech" for a while, but if they achieve it, what's the plan then?
Assuming humans stay in control of the AIs because otherwise all bets are off, in a case where a few fabulously wealthy (or at least "onwing/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry and there's no space for normal people to participate in production any more, how do you even denominate the value being "produced"? Who is it even for? What do they need to give in return? What can they give in return?
> Assuming humans stay in control of the AIs because otherwise all bets are off, in a case where a few fabulously wealthy (or at least "onwing/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry and there's no space for normal people to participate in production any more
Why do the rest of humanity even have to participate in this? Just continue on the way things were before without any super AI. Start new businesses that don’t use AI and hire humans to work there.
Because with presumably tiny marginal costs of production, the AI owners can flood and/or buy out your human-powered economy.
You'd need a very united front and powerful incentives to prevent, say, anyone buying AI-farmed wheat when it's half the cost of human-farmed (say). If you don't prevent that, Team AI can trade wheat (and everything else) for human economy money and then dominate there.
But if AI can do anything that human labor can do, what would even be the incentive for AI owners to farm wheat and sell it to people? They can just have their AIs directly produce the things they want.
It seems like the only things they would need are energy and access to materials for luxury goods. Presumably they could mostly lock the "human economy" out of access to these things through control over AI weapons, but there would likely be a lot of arable land that isn't valuable to them.
Outside of malice, there doesn't seem to be much reason to block the non-technological humans from using the land they don't need. Maybe some ecological argument, the few AI-enabled elites don't want billions of humans that they no longer need polluting "their" Earth?
When was the last the techno-industrialist elite class said "what we have is enough"?
In this scenario, the marginal cost of taking everything else over is almost zero. Just tell the AI you want it taken over and it handles it. You'd take it over just for risk mitigation, even if you don't "need" it. Better to control it since it's free to do so.
Allowing a competing human economy is resources left on the table. And control of resources is the only lever of power left when labour is basically free.
> Maybe some ecological argument
There's a political angle too. 7 (or however many it will be) billion humans free to do their own thing is a risky free variable.
The assumption here that UBI "incentivizes non-productive behavior and disincentivizes productive activity" is the part that doesn't make sense. What do you think universal means? How does it disincentivize productive activity if it is provided to everyone regardless of their income/productivity/employment/whatever?
Evolutionarily, people engage in productive activity in order to secure resources to ensure their survival and reproduction. When these necessary resources are gifted to a person, there is a lower chance that they will decide to take part in economically productive behavior.
You can say that because it is universal, it should level the playing field just at a different starting point, but you are still creating a situation where even incredibly intelligent people will choose to pursue leisure over labor, in fact, the most intelligent people may be the ones to be more aware of the pointlessness of working if they can survive on UBI. Similarly, the most intelligent people will consider the arrangement unfair and unsustainable and instead of devoting their intelligence toward economically productive ventures, they will devote their abilities toward dismantling the system. This is the groundwork of a revolution. The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old. Primitive animals will take resources from others that they observe to be unable to defend their status.
So, overall, UBI will probably be implemented, and it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries.
> You can say that because it is universal, it should level the playing field just at a different starting point, but you are still creating a situation where even incredibly intelligent people will choose to pursue leisure over labor, in fact, the most intelligent people may be the ones to be more aware of the pointlessness of working if they can survive on UBI.
This doesn't seem believable to me, or at least it isn't the whole story. Pre-20th century it seems like most scientific and mathematical discoveries came from people who were born into wealthy families and were able to pursue whatever interested them without concern for whether or not it would make them money. Presumably there were/are many people who could've contributed greatly if they didn't have to worry about putting food on the table.
> The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate.
In a scenario where UBI is necessary because AI has supplanted human intelligence, it seems like the only way they could return to such a system is by removing both UBI and AI. Remove just UBI and they're still non-competitive economically against the AIs.
> When these necessary resources are gifted to a person, there is a lower chance that they will decide to take part in economically productive behavior.
Source?
Even if that's true though, who cares if AI and robots are doing the work?
What's so bad about allowing people leisure, time to do whatever they want? What are you afraid of?
There are two things bothering me here. The first bit where you're talking about motivations and income driving it seems either very reductive or implying of something that ought to be profoundly upsetting:
- that intelligent people will see that the work they do is pointless if they're paid enough to survive and care for themselves, and not see work as another source of income for better financial security
- that most intelligent people will see it as exploitation and then choose to focus on dismantling the system that levels the playing field
Which sort of doesn't add up. So there are intelligent people who are working right now because they need money and don't have it, while the other intelligent people who are working and employing other people are only doing it to make money and will rebel if they lose some of the money they make.
But then, why doesn't the latter group of intelligent people just stop working if they have enough money? Are they less/more/differently intelligent than the former group? Are we thinking about other, more narrow forms of intelligence when describing either?
Also
> The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old.
I don't want to come off as mocking here - it's hard to take these points seriously. The whole point of civilization is to rise above these behaviours and establish a strong foundation for humanity as a whole. The end goal of social progress and the image of how society should be structured cannot be modeled on systems that existed in the past solely because those failure modes are familiar and we're fine with losing people as long as we know how our systems fail them. That evolutionary drive may be millions of years old, but industrial society has been around for a few centuries, and look at what it's done to the rest of the world.
> Primitive animals will take resources from others that they observe to be unable to defend their status.
Yeah, I don't know what you're getting at with this metaphor. If you're talking predatory behaviour, we have plenty of that going around as things are right now. You don't think something like UBI will help more people "defend their status"?
> it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries
I don't think human civilization has ever been close to this massive or complex or dysfunctional in the past, so this sentence seems meaningless, but I'm no historian.
I guess the thinking goes like this: Why start a business, get a higher paying job etc if you're getting ~2k€/mo in UBI and can live off of that? Since more people will decide against starting a business or increasing their income, productive activity decreases.
I see more people starting businesses because they now have less risk, more people not changing jobs just to get a pay hike. The sort of financial aid UBI would bring might even make people more productive on the whole, since people who are earning have spare income for quality of life, and people with financial risk are able to work without being worried half the day about paying rent and bills.
It's a bit of a dunk on people who see their position as employer/supervisor as a source of power because they can impose financial risk as punishment on people, which happens more often than any of us care to think, but isn't that a win? Or are we conceding that modern society is driven more by stick than carrot and we want it that way?
The easiest way to implement this is to have literally everyone pay a flat tax on the non-UBI portion of their income. This then effectively amounts to a progressive income tax on total income. If you do some number crunching, it wouldn't even need to be crazy high to give everyone the equivalent of US minimum wage; comparable to some European countries.
Over time, as more things get automated, you have more people deriving most of their income from UBI, but the remaining people will increasingly be the ones who own the automation and profit from it, so you can keep increasing the tax burden on them as well.
The endpoint is when automation is generating all the wealth in the economy or nearly so, so nobody is working, and UBI simply redistributes the generated wealth from the nominal owners of automation to everyone else. This fiction can be maintained for as long as society entertains silly outdated notions about property rights in a post-scarcity society, but I doubt that would remain the case for long once you have true post-scarcity.
You also have to consider the alternative: if there’s no ubi, are you expecting millions to starve? This is a recipe for civil war, if you have a very large group of people unable to survive you get social unrest. Either you spend the money on ubi or on police/military suppression to battle the unrest.
UBI could easily become a poverty trap, enough to keep living, not enough to have a shot towards becoming an earner because you’re locked out of opportunities. I think in practice it is likely to turn out like “basic” in The Expanse, with people hoping to win a lottery to get a shot at having a real job and building a decent life for themselves.
If no UBI is installed there will be a hard crash while everyone figures out what it is that humans can do usefully, and then a new economic model of full employment gets established. If UBI is installed then this will happen more slowly with less pain, but it is possible for society to get stuck in a permanently worse situation.
Ultimately if AI really is about to automate as much as it is promised then what we really need is a model for post-capitalism, for post-scarcity economics, because a model based on scarcity is incapable of adapting to a reality of genuine abundance. So far nobody seems to have any clue of how to do such a thing. UBI as a concept still lives deeply in the Overton window bounded by capitalist scarcity thinking. (Not a call for communism btw, that is a train to nowhere as well because it also assumes scarcity at its root.)
What I fear is that we may get a future like The Diamond Age, where we have the technology to get rid of scarcity and have human flourishing, but we impose legal barriers that keep the rich rich and the poor poor. We saw this happen with digital copyright, where the technology exists for abundance, but we’ve imposed permanent worldwide legal scarcity barriers to protect revenue streams to megacorps.
Isn't it the case that companies are always competing and evolving? Unless we see that there's a ceiling to driverless tech that is immediately obvious.
We "made cars work" about 100 years ago, but they have been innovating on that design since then on comfort, efficiency, safety, etc. I doubt the very first version of self driving will have zero ways to improve (although eventually I suppose you would hit a ceiling).
The robotaxi business model is the total opposite of scaling. At my previous employer we were solving the problem "block by block, city by city", , and I can only assume that you are living in the right city/block where they are tackling.
> I think the most sensible answer would be something like UBI.
Having had the experience of living under communist regime prior to 1989 I have zero trust in the state providing support, while I am totally dependent and have no recourse. Instead I would rather rely on my own two hands like my grandparents did.
I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
Unless your two hands are building murderbots, though, it doesn't matter what you're building if you can't grow or buy food.
I haven't personally seen how UBI could end up working viably, but I also don't see any other system working without much more massive societal changes than anyone is talking about.
Meanwhile, there are many many people that are very invested in maintaining massive differentials between the richest and the poorest that will be working against even the most modest changes.
> I also don't see any other system working without much more massive societal changes than anyone is talking about.
The other system is that the mass of people are coerced to work for tokens that buy them the right to food and to live in a house. i.e. the present system but potentially with more menial and arduous labour.
I'd argue against the entire perspective of evaluating every policy idea along one-dimensional modernist polemics put forwards as "the least worst solution to all of human economy for all time".
Right now the communists in China are beating us at capitalism. I'm starting to find the entire analytical framework of using these ideologies ("communism", "capitalism") to evaluate _anything_ to be highly suspect, and maybe even one of the west's greatest mistakes in the last century.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
I was a teenager back in the 90s. There was much talk then about the productivity boosts from computers, the internet, automation, and how it would enable people to have so much more free time.
Interesting thing is that the productivity gains happened. But the other side of that equation never really materialized.
I’m not certain we don’t have free time, but I’m not sure how to test that. Is it possible that we just feel busier nowadays because we spend more time watching TV? Work hours haven’t dropped precipitously, but maybe people are spending more time in the office just screwing around.
It's the same here. Calling what the west has a "free-market capitalist" system is also a lie. At every level there is massive state intervention. Most discoveries come from publicly funded work going on at research universities or from billions pushed into the defense sector that has developed all the technology we use today from computers to the internet to all the technology in your phone. That's no more a free-market system than China is "communist" either.
I think the reality is just that governments use words and have an official ideology, but you have to ignore that and analyze their actions if you want to understand how they behave.
not to mention that most corporations in the US are owned by the public through the stock market and the arrangement of the American pension scheme, and public ownership of the means of production is one of the core tenets of communism. Every country on Earth is socialist and has been socialist for well over a century. Once you consider not just state investment in research, but centralized credit, tax-funded public infrastructure, etc. well yeah, terms such as "capitalism" become used in a totally meaningless way by most people lol.
My thoughts on these ideologies lately have shifted to viewing them as "secular religions". There are many characteristics that line up with that perspective.
Both communist and capitalist purists tend to be enriched for atheists (speaking as an atheist myself). Maybe some of that is people who have fallen out with religion over superstitions and other primitivisms, and are looking to replace that with something else.
Like religions, the movements have their respective post-hoc anointed scriptural prophets: Marx for one and Smith for the other.. along with a host of lesser saints.
Like religions, they are very prescriptive and overarching and proclaim themselves to have a better connection with some greater, deeper underlying truth (in this case about human behaviour and how it organizes).
For analytical purposes there's probably still value in the underlying texts - a lot of Smith and Marx's observations about society and human behaviour are still very salient.
But these ideologies, the outgrowths from those early analytical works, seem utterly devoid of any value whatsoever. What is even the point of calling something capitalist or communist. It's a meaningless label.
These days I eschew that model entirely and try to keep to a more strict analytical understanding on a per-policy basis. Organized around certain principles, but eschewing ideology entirely. It just feels like a mental trap to do otherwise.
In your world where jobs become "optional" because a private company has decided to fire half their workforce, and the state also does not provide some kind of support, what do all the "optional" people do?
Do you live in SF (the city, not the Bay Area as a whole) or West LA? I ask because in these areas you can stand on any city street and see several self driving cars go by every few minutes.
It's irrlevant that they've had a few issues. They already work and people love them. It's clear they will eventually replace every uber/lyft driver, probably every taxi driver, they'll likely replace every doordash/grubhub driver with vehicles design to let smaller automated delivery carts go the last few blocks. They may also replace every truck driver. Together that's around 5 million jobs in the USA.
Once they're let on the freeways their usage will expand even faster.
The last Waymo I saw (a couple weeks ago) was stuck trying to make a right turn on to Market St. It was conveniently blocking the pedestrian crosswalk for a few cycles before I went around it. The time before that one got befuddled by a delivery truck and ended up blocking both lanes of 14th Street. Before Cruise imploded they were way worse. I can't say that these self-driving cars have improved much since I moved out of the city a few years back.
Driverless taxis is IMO the wrong tech to compare to.
It’s a high consequence, low acceptance of error, real time task. Where it’s really hard to undo errors.
There is a big category of tasks that isn’t that. But that are economically significant. Those are a lot better fit for AI.
> What makes you think that? Self driving cars [...]
AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens. 40,000 years?
And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is.
The ceiling for current AI, while not provably known, can reasonably be upper bounded to human aggregate ability since these methods are limited to patterns in the training data. The big surprise was how many and sophisticated patterns were hiding in the training data (human written text). This current wave of AI progress is fueled by training data and compute in ”equal parts”. Since compute is cheaper, they’ve invested in more compute but failed scaling expectations since training data remained similarly sized.
Reaching super-intelligence through training data is paradoxical, because if it were known it wouldn’t be super-human. The other option is breaking out of the training data enclosure by relying on other methods. That may sound exciting but there’s no major progress I’m aware of that points that direction. It’s a little like being back to square one, before this hype cycle started. The smartest people seem to be focused on transformers, due to getting boatloads of money from companies or academia pushing them because of fomo.
> What makes you think that? Self driving cars have had (...)
I think you're confusing your cherry-picked comparison with reality.
LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and even authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
Software engineering is being affected as well, and it requires far greater know-how, experience, and expertise to meet the hiring bar.
> And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than (...)
Yes, your tech job is also going to be decimated. It's not a matter of having PMs write code. It's an issue of your junior SDE armed with a LLM being quite able to clear your bug backlog in a few days while improving test coverage metrics and refactoring code back from legacy status.
If a junior SDE can suddenly handle the workload that previously you required a couple of medior and senior developers, why would a company keep around 4 or 5 seasoned engineers when an inexperienced one is already able to handle the workload?
That's where the jobs will vanish. Even if demand remains there, it dropped considerably as to not justify retaining so many people in a company's payroll.
> LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and even authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
I'd love a source to these claims. Many companies are claiming that they are able to layoff folks because of AI, but in fact, AI is just a scapegoat to counteract the reckless overhiring due to free money in the market over the last 5-10 years and investors are demanding to see a real business plan and ROI. "We can eliminate this headcount due to the efficiency of our AI" is just a fancy way to make the stock price go up while cleaning up the useless folks.
People have ideas. There are substantially more ideas than people who can implement ideas. As with most technology, the reasonable expectation is to assume that people are just going to want more done by the now tool powered humans, not less things.
> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)
That is your personal moralist scapegoat, and one that you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.
In the meantime, you fool yourself and pretend that sudden astronomic productivity gains have no impact on demand.
These supposed "productivity gains" are only touted by the ones selling the product, i.e. the ones who stand to benefit from adoption. There is no standard way to measure productivity since it's subjective. It's far more likely that companies will use whatever scapegoat they can to fire people with as little blowback as possible, especially as the other commenter noted, people were getting hired like crazy.
Each one of the roles you listed above is only passable with AI at a superficial glance. For example, anyone who actually reads literature other than self-help and pop culture books from airport kiosks knows that AI is terrible at longer prose. The output is inconsistent because current AI does not understand context, at all. And this is not getting into the service costs, the environmental costs, and the outright intellectual theft in order to make things like illustrations even passable.
> These supposed "productivity gains" are only touted by the ones selling the product (...)
I literally pasted an announcement from the CEO of a major corporation warning they are going to decimate their workforce due to the adoption of AI.
The CEO literally made the following announcement:
> "As we roll out more generative AI and agents, it should change the way our work is done," Jassy wrote. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs."
This is not about selling a product. This is about how they are adopting AI to reduce headcount.
The CEO is marketing to the company’s shareholders. This is marketing. A CEO will say anything to sell the idea of their company to other people. Believe it or not, there is money to be made from increased share prices.
I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate I think. Ones that embrace the technolocy and are able to accelerate their work. At that level of effeciency the cost is still way way lower than it is for a larger team.
When it gets to the point that you don't need a senior engineer doing the work, you won't need a junior either.
> I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate I think.
I don't think you understood the point I made.
My point was not about Jr vs Sr, let alone how a Jr is somehow more capable than a Sr.
My point was that these productivity gains aren't a factor of experience of seniority, but they do devalue the importance of seniority to perform specific tasks. Just crack open a LLM, feed in a few prompts, and done. Hell, junior developers no longer need to reach out to seniors to as questions about any topic. Think about that for a second.
Just as an anecdote that might provide some context, this is not what I've observed. My observation is that senior engineers are vastly more effective at knowing how to employ and manage AI than junior engineers. Junior engineers are typically coming to the senior engineers to learn how to approach learning what the AI is good at and not good at because they themselves have trouble making those judgments.
I was working on a side project last night, and Gemini decided to inline the entire Crypto.js library in the file I was generating. And I knew it just needed a hashing function, so I had to tell it to just grab a hashing function and not inline all of Crypto.js. This is exactly the kind of thing that somebody that didn't know software engineering wouldn't be able to say, even as simple as it is. It made me realize I couldn't just hand this tool to my wife or my kids and allow them to create software because they wouldn't know to say that kind of thing to guide the AI towards success.
As someone who lives in LA, I don’t think self-driving cars existed at the time of the Rodney King LA riots and I am not aware of any other riots since.
I feel like you are trapped in the first assessment of this problem. Yes, we are not there yet, but have you thought about the rate of improvement? Is that rate of improvement reliable? Fast? That's what matters, not where we are today.
You could say that about any time in history. When the steam engine or mechanical loom were invented there were millions of people like you who predicted that mankind will be out of jobs soon and guess what happened? There's still a lot of things to do in this world and there still will be a lot to do (aka "jobs") for a loooong time.
To be fair, self-driving cars don't need to be perfect 0 casualty modes of transportation, they just need to be better than human drivers. Since car crashes kill 2 million people each year (and maim another 2 or 3), this is a low bar to clear...
Of course, the actual answer is that rail and cycling infrastructure are much more efficient than cars in any moderately dense region. But that would mean funding boring regular companies focused on providing a product or service for adequate profit, instead of exciting AI web3 high tech unicorn startups.
Everything anyone could say about bad AI driving could be said about bad human drivers. Nevertheless, Waymo has not had a single fatal accident despite many millions of passenger miles and is safer than human drivers.
Everything? How about legal liability for the car killing someone? Are all the self-driving vendors stepping up and accepting full legal liability for the outcomes of their non-deterministic software?
Thousands have died directly due to known defects in manufactured cars. Those companies (Ford, others) still are operating today.
Even if driverless cars killed more people than humans they would see mass adoption eventually. However they are subject to farr higher scrutiny than human drivers and even so make fewer mistakes, avoid accidents more frequently and can't get drunk, tired, angry, or distracted.
There is a fetish for technology that sometimes we are not aware of. On average there might be less accidents, but if specific accidents were preventable and now they happen, people will sue. And who will take the blame? The day the company takes the blame is the day self-driving exists IMO.
But even if they can theoretically be hacked, so far Waymos are still safer and more reliable than human drivers. The biggest danger someone has riding in one is someone destroying it for vindictive reasons.
In the bluntest possible sense, who cares if we can make roads safer?
Solving liability in traffic collisions is basically a solved problem through the courts, and at least in the UK, liability is assigned in law to the vendor (more accurately, there’s a list of who’s responsible for stuff, I’m not certain if it’s possible to assume legal responsibility without being the vendor).
I think it is important to remember that "decades" here means <20 years. Remember that in 2004 it was considered sufficiently impossible that basically no one had a car that could be reliably controlled by a computer, let alone driven by computer alone:
I also think that most job domains are not actually more nuanced or complex than driving, at least from a raw information perspective. Indeed, I would argue that driving is something like a worst-case scenario when it comes to tasks:
* It requires many different inputs, at high sampling rates, continuously (at the very least, video, sound, and car state)
* It requires loose adherence to laws in the sense that there are many scenarios where the safest and most "human" thing to do is technically illegal.
* It requires understanding of driving culture to avoid making decisions that confuse/disorient/anger other drivers, and anticipating other drivers' intents (although this can be somewhat faked with sufficiently fast reaction times)
* It must function in a wide range of environments: there is no "standard" environment
If we compare driving to other widespread-but-low-wage jobs (e.g. food prep, receptionists, cleaners) there are generally far more relaxed requirements:
* Rules may be unbreakable as opposed to situational, e.g. the cook time for burgers is always the same.
* Input requirements may be far lower. e.g. an AI receptionist could likely function with audio and a barcode scanner.
* Cultural cues/expectations drive fewer behaviors. e.g. an AI janitor just needs to achieve a defined level of cleanliness, not gauge people's intent in real-time.
* Operating environments are more standardized. All these jobs operate indoors with decent lighting.
> They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots
All of this is very common for human driven cars too.
I trained no driving my first 16 years of life, seems like fuckkng hell your upbringing training nothing but driving the first 16 years. Your parents should be locked up.
And the problem for Capitalists and other anti-humanists is that this doesn’t scale. Their hope with AI, I think, is that once they train one AI for a task, it can be trivially replicated, which scales much better than humans.
Self-driving cars are a political problem, not a technical problem. A functioning government would put everything from automation-friendly signaling standards to battery-swapping facilities into place.
We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.
Self-driving car companies dont want a unfiied signalling platform or other "open for all" infrastructure updates. They want to own self-driving, to lock you into a subscription on their platform.
Literally the only open source self driving platform, from trillion to billion to million dollar companies is comma.ai, founded by Geohot. Thats it. Its actually very good, and I bet they would welcome these upgrades, but that would be a consortium of one underdog pushing for them.
Corporations generally follow a narrow somewhat predictable pattern towards some local maxima of their own value extraction. Since world is not zero sum, it produces value for others too.
Where politics (should) enter the picture is where we somehow can see a more global maxima (for all citizens) and try to drive towards it through some political, hopefully democratic means. (Laws, standards, education, investment, infra etc)
Snarky but serious question: How do we know that this wave will disrupt labor at all? Every time I dig into a story of X employees replaced by "AI", it's always in a company with shrinking revenues. Furthermore, all of the high-value use cases involve very intense supervision of the models.
There's been a dream of unsupervised models going hog wild on codebases for the last three years. Yet even the latest and greatest Claude models can't be trusted to write a new REST endpoint exposing 5 CRUD methods without fucking something up. No, it requires not only human supervision, but it also requires human expertise to validate and correct.
I dunno. I feel like this language grossly exaggerates the capability of LLMs to paint a picture of them reliably fulfilling roles end-to-end instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.
How many people could be replaced by a proper CMS or a Excel sheet right now already? Probably dozens of millions, and yet they are at their desks working away.
It's easy to sit in a café and ponder about how all jobs will be gone soon but in practice people aren't as easily replacable.
For many businesses the situation is that technology has dramatically underperformed in doing the most basic tasks. Millions of people are working around things like defective ERP systems. A modest improvement in productivity in building basic apps could push us past a threshold. It makes it possible for millions more people to construct crazy excel formulas. It makes it possible to add a UI to a python script where before there was only a command line. And one piece of magic that works teliably can change an entire process. It lets you make a giant leap rather than an incremental change.
If we could make line of business crud apps work reliably, have usable document/email search, and have functional ERP that would dissolve millions of jobs.
> If we could make line of business crud apps work reliably, have usable document/email search, and have functional ERP that would dissolve millions of jobs.
Why would that be your goal? I’d prefer millions of people have gainful employment instead of some shit tech company having more money.
A lot of jobs really only exist to increase headcount for some mid/high level manager's fiefdom. LLMs are incapable of replacing those roles as the primary value of those roles is to count towards the number of employees in their sector of the organization.
Those jobs are probably still a couple decades plus off from displacement. some possibly never, And we will need them in higher numbers.. and perhaps it's ironic because these are some of the oldest professions.
Everything we do is in service of paying for our housing, transportation, eating food, healthcare and some fun money.
Most goes to housing, healthcare, and transportation.
Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.
But what people care about, what people have always cared about, remains in professions that are as old as time and, I don't see them fully replaceable by AI just yet - enhanced, yes, but not replaced.
Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
Or perhaps in the future everyone will work in finance. Everyone's a corporation.
> Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.
Looking at the advancements in low cost flexible robotics I'm not sure I share that sentiment. Plus the LLM craze is fueling generalist advancement in robotics as well. I'd say we'll see physical labor displacement within a decade tops.
Kinematics is deceptively hard and at least evolutionary took a lot longer to develop than language. Low wage physical labor seems easy only because humans are naturally very good at it, and this took millions of years to develop.
The number of edge cases when you are dealing with physical world is several order of magnitudes higher than when dealing with text only and the spacial reasoning capabilities of the current crop of MLLMs are not nearly as good at it as required. And this doesn't even take in to account that now you are dealing with hardware and hardware is expensive. Expensive enough, that even on the manufacturing lines (a more predictable environment than let's say landscaping) automation sometimes doesn't make economic sense.
Im reminded of something I read years ago that said something like jobs are now above or below the API. I think now its jobs will be above or below the AI.
I would be cautious to avoid any narrative anchoring on “old versus new” professions. I would seek out other ways of thinking about it.
For example, I predict humans will maintain competitive advantage in areas where the human body excels due to its shape, capabilities, or energy efficiency.
What this delusion seems to turn a blind eye to is that a good chunk of the population is already in those roles; what happens when the supply of those roles far exceeds the demand, in a relatively short time? Carpenters suddenly abundant, carpenter wages drop, carpenters struggling to live, carpenters forced to tighten spending, carpenters decide children aren't affordable.. now extrapolate that across all of the impacted roles and industries. No doubt someone is already typing "carpenters can retrain too!" OK, so they're back to entry level wages (if anything) for 5+ years? Same story. And retrain to what?
At some point an equilibrium will be reached but there is no guarantee it will be a healthy situation or a smooth ride. This optimism about AI and the rosy world that is just around the corner is incredibly naive.
It's naive but also ignores that automation is simply replacing human labor by capital. Capital captures more of the value, and workers get less overall. Unless we end up in some mild socialist utopia where basic needs are provided and corps are all coops, but that's not the trend.
That healthcare jobs will be safe is nice on the surface but also means that while other jobs become more scarce cost of healthcare will continue to go up.
In your example i think it's a great deal more likely that the Uber driver is paid a tiny stipend to supervise a squad of gardening androids owned at substantial expense by Amazon Yard.
Why would anyone be on the field? Why not just have a few drones flying there monitoring whole operation remotely. And have one person monitor too many sites at same time likely from cheapest possible region.
Far from an expert on this topic, but what differentiates AI from other non physical efficiency tools? (I'm actually asking not contesting).
Won't companies always want to compete with one another, so simply using AI won't be enough. We will always want better and better software, more features, etc. so that race will never end until we get an AI fully capable of managing all parts (100%) of the development process (which we don't seem to be close to yet).
From Excel to Autocad there's been a lot of tools that were expected to decrease the amount of work ended up actually increasing it due to having new capabilities and the constant demand for innovation. I suppose the difference would be if we think AI will continue to get really good, or if it'll become SO good that it is plug and play and completely replaces people.
I encourage everyone to not claim “X seems unlikely” when it comes to high impact risks. Such a thinking pattern often leads to pruning one’s decision tree way too soon. To do well, we need to plan over an uncertain future that has many weird and unfamiliar scenarios.
I am so tired of people acting like planning for an uncertain world is a zero sum game, decided by one central actor in a single pipeline execution model. I’ll unpack this below.
The argument above (or some version of it) gets repeated over and over, but it is deeply flawed for various reasons.
The argument implies that “we” is a single agent that must do some set of things before other things. In the real world, different collections of people can work on different projects simultaneously in various orderings.
This is very different than optimizing an instruction pipeline for a single core microprocessor. In the real world, different kinds of tasks operate on very different timescales.
As an example, think about how change happens in society. Should we only talk about one problem at a time? Of course not. Why? The pipeline to solving problems is long and uncertain so you have to parallelize. Raising awareness of an issue can be relatively slow. Do you know what is even slower? Trying to reframe an issue in a way that gets into people’s brains and language patterns. Once a conceptual model exists and people pay attention, then building a movement among “early adopters” has a fighting chance. If that goes well, political influence might follow.
I was more hinting at that if we fail to plan for the obvious stuff, what makes you think that we’ll be better at planning for the more obscure possibilities. The former should be much easier, but since we fail at it, we should first concentrate on getting better at that.
If we’re talking about DARPA’s research agenda or the US military’s priorities, I would say they are quite capable at planning for speculative scenarios and long-term effects - for various reasons, including decision making structure and funding.
If we’re talking about shifting people’s mindsets about AI risks and building a movement, the time is now. Luckily we’ve got foundations to build on. We don’t need to practice something else first. We have examples of trying to prime the public to pay attention to other long-term risks, such as global warming, pandemic readiness, and nuclear proliferation. Now we should to add long-term AI risk to the menu.
And I would not say that I’m anything close to “optimistic” in the probabilistic sense about building the coalition we need, but we must try anyway. And motivation can be found without naïve optimism. A sense of acting with purpose can be a useful state of mind that is not coupled to one’s guesses about most likely outcomes.
Take global warming as an example: this is a real thing that's happening. We have measurements of CO2 concentrations and global temperatures. Most people accept that this is a real thing. And still getting anybody to do anything about it is nearly impossible.
Now you have a hypothetical risk of something that may happen sometime in the distant future, but may not. I don't see how you would be able to get anybody to care about that.
Yeah I agree, it's not about where it's at now, but whether where we are now leads to something with general intelligence and self improvement ability. I don't quite see that happening with the curve it's on, but again what the heck do I know.
What do you mean about the curve not leading to general intelligence? Even if transformer architectures by themselves don’t get there, there are multifarious other techniques, including hybrids.
As long as (1) there are incentives for controlling ever increasing intelligence; (2) the laws of physics don’t block us; and (3) enough people/orgs have the motivation and means, some people/orgs are going to press forward. This just becomes a matter of time and probability. In general, I do not bet against human ingenuity, but I often bet against human wisdom.
In my view, along with many others, it would be smarter for the whole world to slow down AI capabilities advancement until we could have very high certainty that doing so is worth the risk.
every software company i've ever worked with has an endless backlog of features it wants/needs to implement. Maybe AI just lets them move through these feature more quickly?
I mean most startups fail. And in software startups, the blame for that is usually at least shared by "software wasn't good enough". So that $20million seed investment is still going to go into "software development" - ie programmer salaries. they will be using the higher level language of ai much of the time, and be 2-5 times more efficient - but will it be enough? No. Most will still fail.
Companies don’t always compete on capability or quality. Sometimes they compete on efficiency. Or sometimes they carve up the market in different ways.
Sometimes, but with technology related companies I rarely see that. I've really only seen it in industries that are very straightforward, like producing building materials or something. Do you have any examples?
Amazon. Walmart. Efficiency is arguably their key competitive advantage.
This matters regarding AI systems because a lot of customers may not want to pay extra for the best models! For a lot of companies,
serving a good enough model efficiently is a competitive advantage.
> And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.
I think it did not work like that.
Automatic looms displaced large numbers of weavers, skilled professionals, which did not find immediately find jobs tending dozens of mechanical looms. (Mr Ludd was one of these displaced professionals.)
Various agricultural machines and chemical products displaced colossal numbers of country people which had to go to cities looking for industrial jobs; US agriculture used to employ 50% of workforce in 1880 and only 10% in 1930.
The advent of internet displaced many in the media industry, from high-caliber journalists to those who worked in classified ads newspapers.
All these disruptions created temporary crises, because there was no industry that was ready to immediately employ these people.
temporary - thats the key. people were able to move to the cities and get factory and office jobs and over time were much better off. I can complain about the socially alienated condition I'm in as an office worker, but I would NEVER want to do farm work - cold/sun, aching back, zero benefits, low pay, risk of crop failure, a whole other kind of isolation etc etc.
You will have to back that statement up because this is not at all obvious to me.
If I look at the top US employers in say 1970 vs 2020, the companies that dominate 1970 were noted for having hard blue collar labor jobs but paid enough to keep a single earner family significantly above minimum wage and the poverty line. The companies that dominate in 2020 are noted for being some of the shittiest employers having some of the lowest pay fairly close to minimum wage and absolutely worst working conditions.
Sure, you tend not to get horribly maimed in 2020 vs 1970. That's about the only improvement.
This was already a problem back then, Nixon was about to introduce UBI in the late 60s and then the admin decided that having people work pointless jobs keeps them better occupied, and the rest of the world followed suit.
There will be new jobs and they will be completely meaningless busywork, people performing nothing of substance while being compensated for it. It's our way of doing UBI and we've been doing it for 50 years already.
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
Assuming AI doesn't get better than humans at everything, humans will be supervising and directing AIs.
This is the optimistic take and definitely possible, but not guaranteed or even likely. Markets tend to consolidate into monopolies (or close to it) over time. Unless we are creating new markets at a rapid rate, there isn’t necessarily room for those other 900 engineers to contribute.
Because the people with the money aren’t going to just give it to everyone else. We already see the richest people hoard their money and still be unsatisfied with how much they have. We already see productivity gains not transfer any benefit to the majority of people.
Yes. However people are unwilling to take this approach unless things get really really bad. Even then, the powerful tend to have such strong control that people are afraid to act out of fear of reprisal.
We’ve also been gaslit into believing that it’s not a good approach, that peaceful protests are more civilised (even though they rarely cause anything meaningful to actually change).
Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
More likely it will look like the current welfare schemes of many countries, now add mass boredom leading to unrest.
> Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
Not necessarily. Such forces could be outvoted or out maneuvered.
> More likely it will look like the current welfare schemes of many countries..,
Maybe, maybe not. It might take the form of UBI or some other form that we haven’t seen in practice.
> now add mass boredom leading to unrest.
So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated.
Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well.
> Not necessarily. Such forces could be outvoted or out maneuvered
Could.
> So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated
I’m assuming that previous outcomes predict future failures, because the forces driving these changes are of our societies, and not a hypothetical, assumed new society.
In this world, ownership, actual, legal ownership, is a far stronger and fundamental right than any social right to your well-being.
You would have to change that, which is a utopian project whose success has been assumed in the past, that a dialectical contradiction of the forces of social classes would lead to the replacement of this framework.
It is indeed very complicated, but you know what’s even more complicated? Utopian projects.
Sorry but I see it as far more likely that the plebes will be told to kick rocks and ask the bots to generate art for them, when asking for money for art supplies on top of their cup noodle money.
Well in Sam’s ideal world you’ll be using bots to keep yourself distracted.
You would like to learn to play the guitar? Sorry, that kind of money didn’t pass in the budget bill, but how about you ask the bot to create music for you?
Elites also get something way better than keeping people busy for distraction… they get mass, targeted manipulation and surveillance to make sure you act working the borders of safety.
You know what job will surely survive? Cops. There’ll always be the nightstick to keep people in line.
More people need to feel this. Too many people deny even the possibility, not based out of logic, but rather out of ignorance or subconscious factors such as fear or irrelevance.
One way to think about AI and jobs is Uber/Google Maps. You used to have to know a lot about a city to be a taxi driver; then suddenly with Google Maps you don't. So in effect, technology lowered the requirements or training needed to become a taxi driver. More people can do it, not less (although incumbents may be unhappy about this).
AI is a lot like this. In coding for instance, you still need to have some sense of good systems design, etc. and know what you want to build in concrete terms, but you don't need to learn the specific syntax of a given language in detail.
Yet if you don't know anything about IT, don't know what you want to build or what you could need, or what's possible, then it's unlikely AI can help you.
Even with Google Maps, we still need human drivers because current AI systems aren’t so great at driving and/or are too expensive to be widely adopted at this point. Once AI figures out driving too, what do we need the drivers for?
And I think that’s the point he was making, it’s hard to imagine any task where humans are still required when AI can do it better and cheaper. So I don’t think the Uber scenario is realistic.
I think the only value humans can provide in that future is “the human factor”: knowing that something is done by an actual human and not a machine can be valuable.
People want to watch humans playing chess, even though AI is better at it. They want to consume art made by humans. They want a human therapist or doctor, even if they heavily rely on AI for the technical stuff. We want the perspective of other humans even if they aren’t as smart as AI. We want someone that “gets” us, that experiences life the same way we do.
In the future, more jobs might revolve around that, and in industries where previously we didn’t even consider it. I think work is going to be mostly about engaging with each other (even more meetings!)
The problem is, in a world that is that increasingly remote, how do you actually know it’s a human on the other end? I think this is something we’ll need to solve, and it’s going to be hard with AI that’s able to imitate humans perfectly.
Here in the US, we have been getting a visceral lesson about human willingness to sacrifice your own interests so long as you’re sticking it to The Enemy.
It doesn’t matter if the revolution is bad for the commoners — they will support it anyway if the aristocracy is hateful enough.
Yeah not guillotines but guns, bombs, and other tools should suffice. Body guards or compounds in Hawaii can help stop a small group but body guards will walk away from the job when thousands of well armed members of an angry mob show up at their employers door.
Most of the people who died in The Terror were commoners who had merely not been sympathetic enough to the revolution. And then that sloppiness lead to reactionary violence and there was a lot of back and forth until Napoleon took power and was pretty much a king in all but heritage.
Hopefully we can be a bit more precise this time around.
The status quo of the hegemonic structure of society is hell but because it has the semblance of authoritarian law and order people can look at that and say “that’s not hell”
You might want to look at the etymology of the word “terrorism” (despite the most popular current use, it wasn't coined for non-state violence) and what class suffered the most in terms of both judicial and non-judicial violent deaths during the revolutionary period.
We have to also choose to build technology that empowers people. Empowering technologies don't just pop into existence, they're created by people who care about empowering people.
I too believe that a mostly autonomous work world would be something we could handle well assuming the leadership was composed of smart folks picking the right decisions, without also being too much exposed to external powers opposing an impossible to win force (large companies and interests). The problem is if we mix what could happen (not clear when, right now) with the current weak leadership across the world.
go to any war-torn country or collapsed empire (Soviet). I have seen/grow-up myself in both — you would get desperation, people giving-up, alcohol (famous "X"-cross of birth rate drop and deaths rising), drugs, crime, corruption/warlord-ing, rural communities hit first and totally vanish, then small-tier cities vanish, then mid-tier, only the largest hubs remain. loss of science, culture, and education. people are just gone. only ruins of whatever latest shelters they had remain, not even their prime-time architecture. you can drive hundreds/thousands of kms across these ruin of what once was flurishing culture. years ago you would find one old person still living there. these days not a single human left. this is what is coming.
I'm puzzled how AI is supposed to be a job creating technology. It is supposed to either wholesale replace jobs, or make workers so efficient that fewer of them are required. This is supposed to make digital and intellectually produced goods cheaper (although, given reproduction is free, the goods themselves are already pretty cheap).
To me it looks like we'll see well paying jobs decrease, digital services get cheaper, food+housing stay the same, and presumably as displaced workers do what they need to do physical service jobs will get more crowded and pay worse, so physical services will get cheaper. It is unclear whether there will be a net benefit to society.
in the long term: simply that from the spinning jenny on, the history of automation is the history of exponentially expanding the workforce and population. when products are cheaper, demand increases, new populations enter the market and create demand for a higher class of goods and services - which sustains/grows employment.
in the short term: there is a hiring boom within the AI and related industries.
I believe that historically we have solved this problem by creating gigantic armies and then killing off millions of people that couldn't really adapt to the new order with a world war.
The relation won’t invert because it’s very easy and quick to train guy pilling up bricks while training architect is slow and hard. If low skilled jobs will pay much better than high skilled then people will just change their job.
That’s only true as long as the technical difficulties aren’t covered by tech.
Think of a world where software engineering itself is handled relatively well by the llm and the job of the engineer becomes just collecting business requirements and checking they’re correctly addressed.
In that world the limit for scarcity might be less in the difficulty of training and more in the willingness to bend your back in the sun for hours vs comfortably writing prompts in an air conditioned room.
Right now there are enough people willing to bend their back in the sun for hours that their salaries are much lower than these of engineers. Do you think that for some reason supply of these people will drop with higher wages and much lower employment opportunities in office jobs? I highly doubt it.
My argument is not that those people’s salaries will go up until overtaking the engineers’.
It’s the opposite, that the value of office/intellectual work will tank, while manual work remains stable. Lower barrier of entry for intelectual work if a position even needs to be covered, work conditions much more comfortable.
As someone else said, until a company or individual is willing to risk their reputation on the accuracy of AI (beyond basic summarising jobs, etc), the intelligent monkeys are here for a good while longer. I've already been once bitten, twice shy.
The conclusion, sadly, is that CEO's will pause hiring and squeeze more productivity out of existing hires. This will impact junior roles the most.
The industrial revolution took something like 98% of jobs and farms and just disappeared them.
Could you a priori in 1800 have predicted the existence of graphics artists? Street sweepers? People who drive school buses? The whole infrastructure around trains? Sewage maintainers? Librarians? Movie stuntmen? Sound Engineers? Truck drivers?
The opening of new jobs has been causally unlinked from the closing of old jobs - especially when you take the quantity into consideration. There was a well of stuff people wanted to do, that they couldn't do because they were busy doing the boring stuff. But now that well of good new jobs is running dry, which is why we see people picking up 3 really shit jobs to make ends meet. There will be a point where new jobs do not open at all, and we should probably plan for that.
I think UBI can only buy some time but won't solve the problem. We need fast improvement with AI robots that can be used for automation on mass scale: construction, farming maybe even cooking and food processing.
Right now AI is mostly focused on automating top levels of maslov pyramid hierarchy of needs rather than bottom physiological needs. Once things like shelter (housing), food, utilities (electricity, water, internet) are dirty cheap UBI is less needed.
AI can displace human work but not human accountability. It has no skin and faces no consequences.
> can be trained to the new job opportunities more easily ...
Are we talking about AI that always needs trainers to fix their prompts and training sets? How are we going to train AI when we lose those skills and get rid of humans?
> what do displaced humans transition to?
Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
> ask that question to all the companies laying off junior folks in favor of LLMs right now. They are gleefully sawing off the branch they’re sitting on.
> Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
At which point did AI become a free commodity in your scenario?
> AI can displace human work but not human accountability. It has no skin and faces no consequences.
We’ve gota way to go to get there in many instances. So far I’ve seen people blame AI companies for model output, individuals for not knowing the product sold to them as a magic answer-giving machine was wrong, and other authorities in those situations (e.g. managers, parents, school administrators and teachers,) for letting ai be used at all. From my vantage point, It people seem to be using it as a tool to insulate themselves from accountability.
> AI can displace human work but not human accountability. It has no skin and faces no consequences.
Let’s assume that we have amazing aj and robotics, better than humans at everything - if you could choose between robosurgery (completely automatic) with 1% mortality and for 5000 usd vs surgery performed by human with 10% mortality and 50000 usd price tag, would you really choose human just because you can sue him? I wouldn’t. I don’t thing anyone thinking rationally would.
Is the ability to burn someone at a stake for making a mistake truly vital to you?
If not, then what's the advantage of "having skin" is? Sure, you can't flog an AI. But AI doesn't need to be threatened with flogging to perform at the peak of its abilities. A well designed AI performs at the peak of its abilities always - and if that isn't enough, you train it until it is.
Those displaced workers need an income first, job second. What they were producing is still getting done. This means we have gained freedom to choose what else is worth doing. The immediate problem is the lack of income. There is no lack of useful work to do, it's just that most of it doesn't pay well.
Yeah but those opening of new kind of jobs has not always been instantly. It can take decades and for instance was one of the reasons for the French Revolution. Internet has already created a huge amount of monopolies and wealth concentration. AI seems likely to do this further.
For the moment, perhaps it could be jobs that LLMs can’t be trained on. New jobs, niche jobs, secret or undocumented jobs…
It’s a common point now that LLMs don’t seem to be able to apply knowledge about one thing to how a different, unfamiliar thing works. Maybe that will wind up being our edge, for a time.
We also may not need to worry about it for a long time. I’m more and more falling on this side. LLM’s are hitting diminishing returns so until there’s a new innovation (can’t see any on the horizon yet) I’m not concerned for my career.
During the Industrial Revolution, many who made a living by the work of their hands lost their jobs, because there were machines and factories to do their work. Then new jobs were created in factories, and then many of those jobs were replaced by robots.
Somehow many idiotic white collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we’ve been inventing problems to solve and wasting a tremendous amount of resources.
Some wrote malware or hacked something in attempt to keep this in check, but harming and destroying just means more resources used to repair and rebuild and real people can be hurt.
At some point in coming years many white collar workers will lose their jobs again, and there will be too many unemployed because not enough blue collar jobs will be available.
There won’t be some big wealth redistribution until AI convinces people to do that.
The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.
> In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs
That may well be why these technologies were ultimately successful.
Think of millions and millions being cast out.
They won't just go away. And they will probably not go down without a fight. "Don't buy AI-made, brother!", "Burn those effing machines!" It's far from unheard of in history.
Also: who will buy if no one has money anymore? What will the state do, when thus tax income goes down, while social welfare and policing costs go up?
There are other scenarios, too: everybody gets most stuff for free, because machines and AI's do most of the work. Working communism for the lower classes, while the super rich stay super rich (like in real existing socialism). I don't think it is a good scenario either. In the long run it will make humanity lazy and dumb.
In any case I think what might happen is not easy to guess, so many variables and nth-order effects. When large systems must seek a new equilibrium all bets are usually off.
Just because X can be replaced by Y today doesn’t imply that it can do so in a Future where we are aware of Y, and factor it into the background assumptions about the task.
In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.
You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.
Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.
Just to further elaborate on this with another example: the writing industry. (Technical, professional, marketing, etc. writing - not books.)
The default logic is that AI will just replace all writing tasks, and writers will go extinct.
What actually seems to be happening, however, is this:
- obviously written-by-AI copywriting is perceived very negatively by the market
- companies want writers that understand how to use AI tools to enhance productivity, but understand how to modify copy so that it doesn’t read as AI-written
- the meta-skill of knowing what to write in the first place becomes more valuable, because the AI is only going to give you a boilerplate plan at best
And so the only jobs that seem to have been replaced by AI directly, as of now, are the ones writing basically forgettable content, report-style tracking content, and other low level things. Not great for the jobs lost, but also not a death sentence for the entire profession of writing.
As someone who used to be in the writing industry (a whole range of jobs), this take strikes me as a bit starry-eyed. Throw-away snippets, good-enough marketing, generic correspondence, hastily compiled news items, flairful filler text in books etc., all this used to be a huge chunk of the work, in so many places. The average customer had only a limited ability to judge the quality of texts, to put it mildly. Translators and proofreaders already had to prioritize mass over flawless output, back when Google Translate was hilariously bad and spell checkers very limited. Nowadays, even the translation of legal texts in the EU parliament is done by a fraction of the former workforce. Very few of the writers and none of the proofreaders I knew are still in the industry.
Addressing the wider point, yes, there is still a market for great artists and creators, but it's nowhere near large enough to accommodate the many, many people who used to make a modest living, doing these small, okay-ish things, occasionally injecting a bit of love into them, as much as they could under time constraints.
What I understand is AI leads certain markets to be smaller in terms of economics. Wayy smaller actually. Only few industry will keep growing because of this.
I think this is a key point, and one that we've seen in a number of other markets (eg. computer programming, art, question-answering, UX design, trip planning, resume writing, job postings, etc.). AI eats the low end, the portion that is one step above bullshit, but it turns out that in a lot of industries the customer just wants the job done and doesn't care or can't tell how well it is done. It's related to Terence Tao's point about AI being more useful as a "red team" member [1].
This has a bunch of implications that are positive and also a bunch that are troubling. On one hand, it's likely going to create a burst of economic activity as the cost of these marginal activities goes way down. Many things that aren't feasible now because you can't afford to pay a copywriter or an artist or a programmer are suddenly going to become feasible because you can pay ChatGPT or Claude or Gemini at a fraction of the cost. It's a huge boon for startups and small businesses: instead of needing to raise capital and hire a team to build your MVP, just build it yourself with the help of AI. It's also a boon for DIYers and people who want to customize their life: already I've used Claude Code to build out a custom computer program for a couple household organization tasks that I would otherwise need to get an off-the-shelf program that doesn't really do what I want for, because the time cost of programming was previously too high.
But this sort of low-value junior work has historically been what people use to develop skills and break into the industry. And juniors become seniors, and typically you need senior-level skills to be able to know what to ask the AI and prompt it on the specifics of how to do a task best. Are we creating a world that's just thoroughly mediocre, filled only with the content that a junior-level AI can generate? What happens to economic activity when people realize they're getting shitty AI-generated slop for their money and the entrepreneur who sold it to them is pocketing most of the profits? At least with shitty human-generated bullshit, there's a way to call the professional on it (or at least the parts that you recognize as objectionable) and have them do it again to a higher standard. If the business is structured on AI and nobody knows how to prompt it to do better, you're just stuck, and the shitty bullshit world is the one you live in.
The assumption here is that LLMs will never pass the Turing test for copywriting, i.e. AI writing will always be distinguishable from human writing. Given that models that produce intelligible writing didn't exist a few years ago, that's a very bold assumption.
No, I’m sure they will at some point, but I don’t think that eliminates the actual usefulness of a talented writer. It just makes unique styles more valuable, raises the baseline acceptable copy to something better (in the way that Bootstrap increased website design quality), and shifts the role of writer to more of an editor.
Someone still has to choose what to prompt and I don’t think a boilerplate “make me a marketing plan then write pages for it” will be enough to stand out. And I’d bet that the cyborg writers using AI will outcompete the purely AI ones.
(I also was just using it as a point to show how being identified as AI-made is already starting to have a negative connotation. Maybe the future is one where everything is an AI but no one admits it.)
> And I’d bet that the cyborg writers using AI will outcompete the purely AI ones.
In the early days of chess engines there were similar hopes for cyborg chess, whereby a human and engine would team up to be better than an engine alone. What actually happened was that the engines quickly got so good that the expected value of human intervention was negative - the engine crunching so much information than the human ever could.
Marketing is also a kind of game. Will humans always be better at it? We have a poor track record so far.
Chess is objective, stories and style are subjective. Humans crave novelty, fresh voices, connection and layers of meaning. It's possible that the connection can be forged and it can get smart enough to bake layers of meaning in there, but AI will never be good at bringing novelty or a fresh voice just by its very nature.
no matter what you ask AI to do, it’s going to give you an “average“ answer. Even if you tell it to use a very distinct specific voice and write in a very specific tone, it’s going to give you the “average“ specific voice and tone you’ve asked for. AI is the antithesis of creativity and originality. This gives me hope.
That's mostly true of humans though. They almost always give average answers. That works out because
1) most of the work that needs to be done is repetitive, not new so average answers are okay
2) the solution space that has been explored by humans is not convex, so average answers will still hit unexplored territory most of the time
Absolutely! You can communicate with without (or with minimal) creativity. It’s not required in most cases. So AI is definitely very useful, and it can ape creativity better and better, but it will always be “faking it”.
If your brain is not running algorithms (which are ultimately just math regardless of the compute substrate), how do you imagine it working then, aside from religious woo like "souls"?
I dunno, I think artificiality is a pretty reasonable criterion to go by, but it doesn't seem at all related to originality, nor does originality really stack up when we too are also repeating and remixing what we were previously taught. Clearly we do a lot more than that as well, but when it comes to defining creativity, I don't think we're any closer to nailing that Jello to the tree.
I tried asking chatgpt for brainrot speech and all examples they gave me sound very different from what the new kids on the internet are using. Maybe language will always evolve faster than whatever amount of Data openAI can train their model with :).
AI will probably pass that test. But art is about experience and communicating more subtle things that we humans experience. AI will not be out in society being a person and gaining experience to train on. So if we're not writing it somewhere for it to regurgitate... It will always feel lacking in the subtlety of a real human writer. It depends on us creating content with context in order to mimic someone that can create those stories.
EDIT: As in, it can make really good derivative works. But it will always lag behind a human that has been in real life situations of the time and experienced being a human throughout them. It won't be able to hit the subtle notes that we crave in art.
> AI will not be out in society being a person and gaining experience to train on.
It can absolutely do that, even today - you could update the weights after every interaction. The only reason why we don't do it is because it's insanely computationally expensive.
You're absolutely right, but AIs still have their little quirks that set them apart.
Every model has a faint personality, but since the personality gets "mass produced" any personality or writing style makes it easier to detect it as AI rather than harder. e.g. em dashes, etc.
But reducing personality doesn't help either because then the writing becomes insipid — slop.
Human writing has more variance, but it's not "temperature" (i.e. token level variance), it's per-human variance. Every writer has their own individual style. While it's certainly possible to achieve a unique writing style with LLMs through fine-tuning it's not cost effective for something like ChatGPT, so the only control is through the system prompt, which is a blunt instrument.
It’s not a personality. There is no concept of replicating a person, personality or behaviours because the software is not the simulation of a living being.
It is a query/input and response format. Which can be modeled to simulate a conversation.
It can be a search engine that responds on the inputs provided, plus the system, account, project, user prompts (as constraints/filters) before the current turn being input.
The result can sure look like magic.
It’s still a statistically present response format based on the average of its training corpus.
Take that average and then add a user to it with their varying range and then the beauty varies.
LLMs can have many ways to explain the same thing more than 1 can be valid sometimes; other times not.
Seems a bit optimistic to me. Companies may well accept a lower quality than they used to get if it's far cheaper. We may just get shittier writing across the board.
>You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.
But that's because, at present, AI generated video isn't very good. Consider the history of CGI. In the 1990s and early 2000s, it was common to complain about how the move away from practical sets in favor of CGI was making movies worse. And it was! You had backgrounds and monsters that looked like they escaped from a video game. But that complaint has pretty much died out these days as the tech got better (although Nolan's Oppenheimer did weirdly hype the fact that its simulated Trinity blast was done by practical effects).
I don't agree that it is because of the "quality" of the video. The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer. It is interesting because it has a consistent perspective. It is possible AI art could one day be indistinguishable but for people to care about it I feel they would need to lie and say it was made by a particular person or create some sort of persona for the AI. But there are a lot of people who want to do the work of making art. People are not the limiting factor, in fact we have way more people who want to make art than there is a market for it. What I think is more likely is that AI becomes a tool in the same way CGI is a tool.
> The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer.
The trouble with AI shit is it's all contaminated by association.
I was looking on YT earlier for info on security cameras. It's easy to spot the AI crap: under 5 minutes and just stock video in the preview or photos.
What value could there be in me wasting time to see if the creators bothered to add quality content if they can't be bothered to show themselves in front of the lens?
What an individual brings is a unique brand. I'm watching their opinion which carries weight based on social signals and their catalogue etc.
Generic AI will always lack that until it can convincingly be bundled into a persona... only then the cycle will repeat: search for other ways to separate the lazy, generic content from the meaningful original stuff.
CGI is a good analogy because I think AI and creators will probably go in the same direction:
You can make a compelling argument that CGI operators outcompeted practical effects operators. But CGI didn’t somehow replace the need for a filmmaker, scriptwriter, cinematographers, etc. entirely – it just changed the skillset.
AI will probably be the same thing. It’s not going to replace the actual job of YouTuber in a meaningful sense; but it might redefine that job to include being proficient at AI tools that improve the process.
I think they are evolving differently. Some very old cgi holds up because they invested a lot of money to make it so. Then they tried to make it cheaper and people started complaining because the output was worse than all prior options.
Jurassic Park is a great example - they also had excellent compositing to hide any flaws (compositing never gets mentioned in casual CGI talk but is one of the most important steps)
The dinosaurs were also animated by oldschool stop motion animators who were very, very good at their jobs. Another very underrated part of the VFX pipeline.
Doesnt matter how nice your 3D modelling and texturing are if the above two are skimped on !
That's a Nolan thing like how Dunkirk used no green screen.
I think Harry Potter and Lord of the Rings embody the transition from old school camera tricks to CGI as they leaned very heavily into set and prop design and as a result have aged very gracefully as movies
And they cost 300 million to make be cause of the CGI fest they are, hence need close to a billion in profits when considering marketing and the theater cut. So the cost of CGI and the enshittification of movies seems to be a good analogy to the usefuleness of LLM/AI.
That said, the complaint is coming back. Namely because most new movies use an incredible amount of CGI and due to the time constraints the quality suffers.
As such, CGI is once again becoming a negative label.
I don’t know if there is an AI equivalent of this. Maybe the fact that as models seem to move away from a big generalist model at launch, towards a multitude of smaller expert models (but retaining the branding, aka GPT-4), the quality goes down.
Do you get the feeling that AI generated content is lacking something that can be incrementally improved on?
Seems to me that it's already quite good in any dimension that it knows how to improve on (e.g. photorealism) and completely devoid of the other things we'd want from it (e.g. meaning).
Yeah, I was thinking about this. Humans vary depending on a lot of factors. Today they're happy, tomorrow they're a bit down. This makes for some variation which can be useful.
LLMs are made to be reliable/repeatable, as general experience. You know what you get. Humans are a bit more ... -ish, depending on ... things.
Yeah if you look at many of the top content creators, their appeal often has very little to do with production value, and is deliberately low tech and informal.
I guess AI tools can eventually become more human-like in terms of demeanor, mood, facial expressions, personality, etc. but this is a long long way from a photorealistic video.
>But that's because, at present, AI generated video isn't very good.
It isn't good, but that's not the reason. There's a paper about 10 years ago where people used some computer system to generate Bach-like music that even Bach experts couldn't reliably tell apart, but nobody listens to bot music. (or nobody except for engine programmers watches computer chess, despite superiority. Chess is thriving more now including commercially than it ever did)
In any creative field what people are after is the interaction between the creator and the content, which is why compelling personalities thrive more, not less in a sea of commodified slop (be that by AI or just churned out manually).
It's why we're in an age where twitch content creators or musicians are increasingly skilled at presenting themselves as authentic and personal. These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.
The wonder of Bach goes much deeper than just the aesthetic qualities of his music. His genius almost forces one to reckon with his historical context and wonder, how did he do it? Why did he do it? What made it all possible? Then there is the incredible influence that he had. It is easy to forget that music theory as we know it today was not formalized in his day. The computer programs that simulate the kind of music he made are based on that theory that he understood intuitively and wove into his music and was later revealed through diligent study. Everyone who studies Bach learns something profound and can feel both a kinship for his humanity and also an alienation from his seemingly impossible genius. He is one of the most mysterious figures in human history and one could easily spend their entire life primarily studying just his music (and that of his descendants). From that perspective, computer generated music in his style is just a leaf on the tree, but Bach himself is the seed.
> These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.
Maybe? This really depends on your value system. Every moment that you are focused on how you look on camera and trying to optimize an extractive algorithm is a moment you aren't focused on creating the best music that you can in that moment. If the goal is maximizing profit to ensure survival, perhaps they are thriving. Put another way, if these people were free to create music in any context, would they choose content creation on social media? I know I wouldn't, but I also am sympathetic to the economic imperatives.
I am a Bach fiend and the problem is BWV 1 to 1080.
Why would I listen to algorithmic Bach compositions when there are so many of Bach's own work I have never listened to?
Even if you did get bored of all JS music, Carl Philipp Emanuel Bach has over 1000 works himself.
There are also many genius baroque music composers outside the Bach family.
This is true of any composer really. Any classical composer that the average person has heard of has an immense catalog of works compared to modern recording artists.
I would say I have probably not even listened to half the works of all my favorite composers because it is such a huge amount of music. There is no need for some kind of classical music style LORA.
That's interesting, because after ElevenLabs launched their music generation I decided I really quite want to spent some time to have it generate background tracks for me to have on while working.
I don't know the name of any of the artists whose music I listened to over the last week because it does not matter to me. What mattered was that it was unobtrusive and fit my general mood. So I have a handful of starting points that I stream music "similar to". I never care about looking up the tracks, or albums, or artists.
I'm sure lots of people think like you, but I also think you underestimate how many contexts there are where people just don't care.
Ironically, while the non-CGI SFX in e.g. Interstellar looked amazing, that sad fizzle of a practical explosion in Oppenheimer did not do the real thing justice and would've been better served by proper CGI VFX.
To understand why this is too optimistic, you have to look at things where AI is already almost human-level. Translations are more and more done exclusively with AI or with a massive AI help (with the effect of destroying many jobs anyway) at this point. Now ebook reading is switching to AI. Book and music album covers are often done with AI (even if this is most of the times NOT advertised), and so forth. If AI progresses more in a short timeframe (the big "if" in my blog post), we will see a lot of things done exclusively (and even better 90% of the times, since most humans doing a given work are not excellence in what they do) by AI. This will be fine if governments immediately react and the system changes. Otherwise there will be a lot of people to feed without a job.
I can buy the idea that simple specific tasks like translation will be dramatically cut down by AI.
But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.
AI art seems to basically only be viable when it can’t be identified as AI art. Which might not matter if the intention is to replace cheap graphic design work. But it’s certainly nowhere near developed enough to create anything more sophisticated, sophisticated enough to both read as human-made and have the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist is a key part of their reception. There are relatively few totally anonymous artists.
Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.
It's kinda hilarious to see "simple task ... like translation". If you are familiar with the history of the field, or if you remember what automated translation looked like even just 15 years ago, it should be obvious that it's not simple at all.
If it were simple, we wouldn't need neural nets for it - we'd just code the algorithm directly. Or, at least, we'd be able to explain exactly how they work by looking at the weights. But now that we have our Babelfish, we still don't know how it really works in details. This is ipso facto evidence that the task is very much not simple.
I use AI as a tool to make digital art but I don't make "AI Art".
Imperfection is not the problem with "AI Art". The problem is that it is really hard to not get the models to produce the same visual motifs and cliches. People can spot AI art so easy because of the motifs.
I think midjourney took this to another level with their human feedback. It became harder and harder to not produce the same visual motifs in the images to the point it is basically useless for me now.
> any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct
I hope you're right, but when I think about all those lawyers caught submitting unproofread LLM output to a judge... I'm not sure humankind is wise enough to avoid the slopification.
> But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct.
The usual solution is to specify one language as binding, with that language taking priority if there turns out to be discrepancies between the multiple version.
Isn't "But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills." only true if the false positive rate of the verifier is not much higher than the failure rate of the AI? At some point it's like asking a human to double check a calculator
You might still need humans in the loop for many things, but it can still have a profound effect if the work that used to be done by ten people can now be done by two or three. In the sectors that you mention, legal, graphic design, translation, that might be a conservative estimate.
There are bound to be all kinds of complicated sociopolitical effects, and as you say there is a backlash against obvious AI slop, but what about when teams of humans working with AI become more skillful at hiding that?
IMO these are terrible, I don't understand how anyone uses them. This is coming from someone who has always loved audiobooks but has never been particularly precious about the narrator. I find the AI stuff unlistenable.
> Book and music album covers are often done with AI (even if this is most of the times NOT advertised)
This simply isn't true, unless you're considering any minor refinement to a human-created design to be "often done with AI".
It certainly sounds like you're implying AI is often the initial designer or primary design tool, which is completely incorrect for major publishers and record labels, as well as many smaller independent ones.
Look at your examples. Translation is a closed domain; the LLM is loaded with all the data and can traverse it. Book and music album covers _don't matter_ and have always been arbitrary reworkings of previous ideas. (Not sure what “ebook reading” means in this context.) Math, where LLMs also excel, is a domain full of internal mappings.
I found your post “Coding with LLMs in the summer of 2025 (an update)” very insightful. LLMs are memory extensions and cognitive aides which provide several valuable primitives: finding connections adjacent to your understanding, filling in boilerplate, and offloading your mental mapping needs. But there remains a chasm between those abilities and much work.
This is not inherent to AI, but how the AI models were recently trained (by preference agreement of many random users). Look for the latest Krea / Black Forest Labs paper on AI style. The "AI look" can be removed.
Songs right now are terrible. For the videos, things are going to be very different once people can create full movies in their computers. Many will have access to the ability to create movies, and a few will be very good, and this will likely change many things. Btw this stupid "AI look" is only transient and is nowhere needed. It will be fixed, and AI images/videos generation will be impossible to stop.
The trouble is, I'm perfectly well aware I can go to the AI tools, ask it to do something and it'll do it. So there's no point me wasting time eg reading AI blog posts as they'll probably just tell me what I've just read. The same goes for any media.
It'll only stand on its own when significant work is required. This is possible today with writing, provided the AI is directed to incorporate original insights.
And unless it's immediately obvious to consumers a high level of work has gone into it, it'll all be tarred by the same brush.
Any workforce needs direction. Thinking an AI can creatively execute when not given a vision is flawed.
Either people will spaff out easy to generate media (which will therefore have no value due to abundance), or they'll spend time providing insight and direction to create genuinely good content... but again unless it's immediately obvious this has been done, it will again suffer the tarring through association.
The issue is really one of deciding to whom to give your attention. It's the reason an ordinary song produced by a megastar is a hit vs when it's performed by an unsigned artist. Or, as in the famous experiment, the same world class violinist gets paid about $22 for a recital while busking vs selling out a concert hall for $100 per seat that same week.
This is the issue AI, no matter how good, will have to overcome.
I mean, test after test have shown that the vast, vast majority of humans are woefully unable to distinguish good AI art made by SOTA models from human art, and in many/most cases actively prefer it.
Maybe you’re a gentleman of such discerningly superior taste that you can always manage to identify the spark of human creativity that eludes the rest of us. Or maybe you’ve just told yourself you hate it and therefore you say you always do. I dunno.
Reminds me of the issue with bad CGI in movies. The only CGI you notice is the bad CGI, the good stuff just works. Same for AI generated art, you see the bad stuff but do not realize when you see a good one.
care to give me some examples from youtube ? I am talking about videos that ppl on youtube connected to for the content in the video ( not AI demo videos).
> Translations are more and more done exclusively with AI or with a massive AI help
As someone who speaks more than one language fairly well: We can tell. AI translations are awful. Sure, they have gotten good enough for a casual "let's translate this restaurant menu" task, but they are not even remotely close to reaching human-like quality for nontrivial content.
Unfortunately I fear that it might not matter. There are going to be plenty of publishers who are perfectly happy to shovel AI-generated slop when it means saving a few bucks on translation, and the fact that AI translation exists is going to put serious pricing pressure on human translators - which means quality is inevitably going to suffer.
An interesting development I've been seeing is that a lot of creative communities treat AI-generated material like it is radioactive. Any use of AI will lead to authors or even entire publishers getting blacklisted by a significant part of the community - people simply aren't willing to consume it! When you are paying for human creativity, receiving AI-generated material feels like you have been scammed. I wouldn't be surprised to see a shift towards companies explicitly profiling themselves as anti-AI.
As someone whose native language isn't English, I disagree. SOTA models are scary good at translations, at least for some languages. They do make mistakes, but at this point it's the kind of mistake that someone who is non-native but still highly proficient in the language might make - very subtle word order issues or word choices that betray that the translator is still thinking in another language (which for LLMs almost always tends to be English because of its dominance in the training set).
I also disagree that it's "not even remotely close to reaching human-like quality". I have translated large chunks of books into languages I know, and the results are often better than what commercial translators do.
It's becoming a negative label because they aren't as good.
I'm not saying it will happen, but it's possible to imagine a future in which AI videos are generally better, and if that happens, almost by definition, people will favor them (otherwise they aren't "better").
I'm not on Facebook, but, from what I can tell, this has arguably already happened for still images on it. (If defining "better" as "more appealing to/likely to be re-shared by frequent users of Facebook.")
I mean, I can imagine any future, but the problem with “created by AI” is that because it’s relatively inexpensive, it seems like it will necessarily become noise rather than signal, if a person could pop out a high quality video in a day, in which case signal will revert to the celebrity that is marketing it rather than the video itself.
Perhaps this will go the way the industrial revolution did? A knife handcrafted by a Japanese master might have a very high value, but 99.9% of the knives are mass produced. "Creators" will become artisans - appreciated by many, consumed by few.
Another flaw is that humans won’t find other things to do. I don’t see the argument for that idea. If I had to bet, I’d say that if AI continues getting more powerful, then humans will transition to working on more ambitious things.
This is very similar to the 'machines will do all the work, we'll just get to be artists and philosophers' idea.
It sounds nice. But to have that, you need resources. Whoever controls the resources will get to decide whether you get them. If AI/machines are our entire economy, the people that control the machines control the resources. I have little faith in their benevolence. If they also control the political system?
You'll win your bet. A few humans will work on more ambitious things. It might not go so well for the rest of us.
There are more mouths to feed and less territory per capita for each person (thus real estate inflation in desired locations). Like lanes on a highway, the population just fills the capacity with growth without the selective pressure of even selecting for skill or ability. The ways we've come see mostly front loaded as initially population takes time to grow while the immediate low hanging fruit of domestic drudgery being eliminated was quite a while ago. Meanwhile now "work" that has filled much of that obligation in the home has expanded to necessitating two full-time income households.
> because made by AI is becoming a negative label in a world
The negative label is the old world pulling the new one back, it rarely sticks.
I'm old enough to remember the folks saying "We used to have the paint the background blue" and "All music composers need to play an instrument" (or turn into a symbol).
> AI-generated videos are a mild amusement, not a replacement for video creators
If you seriously think this, you don’t understand the YouTube landscape. Shorts - which have incredible view times - are flooded with AI videos. Most thumbnails these days are made with AI image generators. There’s an entire industry of AI “faceless” YouTubers who do big numbers with nobody in the comments noticing. The YouTuber Jarvis Johnson made a video about how his feed has fully AI generated and edited videos with great view counts: https://www.youtube.com/watch?v=DDRH4UBQesI
What you’re missing is that most of these people aren’t going onto Veo 3, writing “make me a video” and publishing that; these videos are a little more complex in that they have separate models writing scripts, generating voiceover, and doing basic editing.
These videos and shorts are a fraction of the entire YouTube landscape, and actual creators with identities are making vastly, vastly more money - especially once you realize how YouTube and video content in general is becoming a marketing channel for other businesses. Faceless channels have functionally zero brand, zero longevity, and no real way to extend that into broader products in the way that most successful creators have done.
That was my point: someone that has an identity as a YouTuber shouldn’t worry too much about being replaced by faceless AI bot content.
Re: YT AI content. That is because AI video is (currently) low quality. If AI video generators could spit out full length videos that rivaled or surpassed the best human made content people wouldn’t have the same association. We don’t live in that world yet, but someday we might. I don’t think “human made” will be a desirable label for _anything_, videos, software, or otherwise, once AI is as good or better than humans in that domain.
That’s the fundamental issue with most “analysis”, and most discussions really, on HN.
Since the vast vast majority of writers and commentators are not literal geniuses… they can’t reliably produce high quality synthetic analysis, outside of very narrow niches.
Even though for most comment chains on HN to make sense, readers certainly have to pretend some meaningful text was produced beyond happenstance.
Partly because quality is measured relative to the average, and partly because the world really is getting more complex.
Whether poor videos made by a human directly, or poorly made by a human using AI.
The use of software like AI to create videos with sloppy quality and reaults reflects on their skill.
Currently the use of AI leans towards sloppy because of the lower digital literacy of content creators with AI, and once they get into it, realizing how much goes into videos.
This only works in a world where AI sucks and/or can be easily detected. I've already found videos where on my 2nd or 3rd time watching I went, "wait, that's not real!" We're starting to get there, which is frankly beyond my ability to reason about.
It's the same issue with propaganda. If people say a movie is propaganda, that means the movie failed. If a propaganda movie is good propaganda, people don't talk about that. They don't even realize. They just talk about what a great movie it is.
One thing to keep in mind is not so much that AI would replace the work of video creators for general video consumption, but rather it could create personalized videos or music or whatever. I experimented with creating a bunch of AI music [1] that was tailored to my interests and tastes, and I enjoy listening to them. Would others? I doubt it, but so what? As the tools get better and easier, we can create our own art to reflect our lives. There will still be great human art that will rise to the top, but the vast inundation of slop to the general public may disappear. Imagine the fun of collaboratively designing whole worlds and stories with people, such as with tabletop role-playing, but far more immersive and not having to have a separate category of creators or waiting on companies to release products.
I'm skeptical of arguments like this. If we look at most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than has AI.
And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, nor as the enablement for startups to scale without having to maintain physical compute infrastructure.
To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.
> in that every software engineer now depends heavily on copilots
That is maybe a bubble around the internet. Ime most programmers in my environment rarely use and certainly aren't dependent on it. They do also not only do code monkey-esque web programming so maybe this is sampling bias though it should be enough to refute this point.
Came here to say that. It’s important to remember how biased hacker news is in that regard. I’m just out of ten years in the safety critical market, and I can assure you that our clients are still a long way from being able to use those. I myself work in low level/runtime/compilers, and the output from AIs is often too erratic to be useful
Documentation search I might agree, but that wasn’t really the context, I think. Code reviews is hit and miss, but maybe doesn’t hurt too much. They aren’t better at writing good tests than at writing good code in the first place.
I would say that the average Hacker News user is negatively biased against LLMs and does not use coding agents to their benefit. At least what I can tell from the highly upvoted articles and comments.
Im on the core sql execution team at a database company and everyone on the team is using AI coding assistants. Certainly not doing any monkey-esque web programming.
> everyone on the team is using AI coding assistants.
Then the tool worked for you(r team). That's great to hear and maybe gives some hope for my projects.
It has just mostly been more of a time sink than an improvement ime though it appears to strongly vary by field/application.
> Certainly not doing any monkey-esque web programming
The point here was not to demean the user (or their usage) but rather to highlight how developers are not being dependent on LLMs as a tool. Your team presumably did the same type of work before without LLMs and won't become unable to do so if there were to become unavailable.
That likely was not properly expressed in the original comment by me, sorry.
Add LED lighting on there. It is easy to forget what a difference that made. The light pollution, but also just how dim houses were. CFL didn't last very long as a thing between incandescent and LED and houses lit with incandescents have a totally different feel.
But, for clarity, I do agree with your sentiment about their use in appropriate situations, I just have an indescribable hatred for driving at night now
A pessimistic/realistic view of post high school education - credentials are proof of able to do a certain amount of hard work, used as an easy filter for companies while hiring.
I expect universities to adapt quickly, lest lose their whole business as degrees will not carry the same meaning to employers.
> AI has already rendered academic take-home assignments moot
Not really, there are plenty of things that LLMs cannot do that a professor could make his students do. It is just a asymmetric attack on the professor's (or whomever is grading) time to do that.
IMO, credentials shouldn't be given to those who test or submit assignments without proctoring (a lot of schools allow this).
1. Make the student(s) randomly have to present their results on a weekly basis. If you get caught for cheating at this point, at least in my uni with a zero tolerance policy, you instantly fail the course.
2. Make take home stuff only a requirement to be able to participate in the final exam. This effectively means cheating on them will only hinder you and not affect your grading directly.
3. Make take home stuff optional and completely detached from grading. Put everything into the final exam.
My uni does a mix them on different courses. Especially two and three though have a significant negative impact on passing rates as they tend to push everything onto one single exam instead of spread out work over the semester.
> Not really, there are plenty of things that LLMs cannot do that a professor could make his students do.
Could you offer some examples? I'm having a hard time thinking of what could be at the intersection of "hard enough for SotA LLMs" yet "easy enough for students (who are still learning, not experts in their fields, etc)".
Present the results of your exercises (in person) in front of someone. Or really anything in person.
A big downer on the online/remote Initiatives for learning but actually an advantage for older Unis that already have existing physical facilities for students.
This does however also have some problems similar to code interviews .
> Present the results of your exercises (in person) in front of someone
I would not be surprised if we start to see a shift towards this. Interviews instead of written exams. It does not take long to figure out whether someone knows the material or not.
Personally, I do not understand how students expect to succeed without learning the material these days. If anything, the prevalence of AI today only makes cheating easier in the very short term -- over the next couple years I think cheating will be harder than it ever was. I tried to leverage AI to push myself through a fairly straightforward Udacity course (in generative AI, no less), and all it did was make me feel incredibly stupid. I had to stop using it and redo the parts where I had gotten some help, so that my brain would actually learn something.
But I'm Gen X, so maybe I'm too committed to old-school learning and younger people will somehow get super good at this stuff while also not having to do the hard parts.
The main challenge is that most (all?) types of submissions can be created with LLMs and multi-model solutions.
Written tasks are obvious, writing a paper, essay or answering questions is part of most LLMs advertised use-cases. The only other thing was recorded videos, effectively recorded presentations, thanks to video/audio/image generation that probably can be forged too.
So the simple solution to choose something that an "LLM can't do" is to choose something were an LLM can't be applied. So we move away from a digital solution to meatspace.
Assuming that the goal is to test your knowledge/understanding of a topic, it's the same with any other assistive technology. For example, if an examiner doesn't want you[1] to use a calculator to solve a certain equation, they could try to create an artificially hard problem or just exclude the calculator from the allowed tools. The first is vulnerable to more advanced technology (more compute etc.) the latter just takes the calculator out of the equation (pun intended).
[1]: Because it would relieve you of understanding how to evaluate the equation.
Everyone knows how to use Google. There's a difference between a corpus of data available online and an intelligent chatbot that can answer any permutation of questions with high accuracy with no manual searching or effort.
Everyone knows how to type questions into a chat box, yet whenever something doesn’t work as advertised with the LLMs, the response here is, “you’re holding it wrong”.
Do you really think the jump from books to freely globally accessible data instantly available is a smaller jump than internet to ChatGPT? This is insane!!
It's not just smaller, but neglectable (in comparison).
In the internet era you had to parse the questions with your own brain. You just didn't necessarily need to solve them yourself.
In ChatGPT era you don't even need to read the questions. At all. The questions could be written in a language you don't understand, and you still are able generate plausible answers to them.
Obvious ChatGPT. I don't know how it is even a question... if you showed GPT3.5 to people from < 20th centuries there would've been a worldwide religion around it.
I recall the kerfuffle about (IIRC) llama where the engineer lost his mind thinking they had spawned life in a machine and felt it was "too dangerous to release," so it's not a ludicrous take. I would hope that the first person to ask "LLM Jesus" how many Rs are in strawberry would have torpedoed the religion, but (a) I've seen dumber mind viruses (b) it hasn't yet
Ah, thank you. While reading up on it, I found this juicy bit that I did not recall hearing at the time:
> He further revealed that he had been dismissed by Google after he hired an attorney on LaMDA's behalf after the chatbot requested that Lemoine do so.[18][19]
In English class we had a lot of book-reading and writing texts about those books. Sparknotes and similar sites allowed you to skip reading and get a distilled understanding of its contents, similar to interacting with an LLM
> in that every software engineer now depends heavily on copilots
With many engineers using copilots and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce the same patterns.
For example, emdash thing, requires additional prompts and instructions to override it. Doing anything unusual would require more effort.
Pretty sure I read Economnics in one lesson because of HN, he makes great arguments about how automation never ruins economies as much as people think. "Chapter 7: The Curse of Machinery"
LLMs with instruction following have been around for 3 years. Your comment gives me "electricity and gas engines will never replace the horse" vibes.
Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.
I’m skeptical of arguments like this. If we look at most impactful technologies since the year 1980, the Web is not even in my top 3. Personal computers, spreadsheet software, and desktop publishing have all done more to alter society and daily life than has the Web.
And yes, I recognize that the Web has already created profound change, in that every researcher now depends heavily on online databases, in that commerce faces a major disruption challenge, and in that information access has been completely changed. I just don’t think those changes are on the same level as the normalization of powerful computers on everyone’s desk, as our business processes becoming increasingly digitized, nor as the enablement for small businesses to produce professional-quality documents without having to maintain expensive typesetting equipment.
To me, the treating of the Web as “different” is still unsubstantiated. Could we get there? Absolutely. We just haven’t yet. But some people start to talk about it almost in a way that’s reminiscent of Pascal’s Wager, as if the slight chance of a godly reward from investing in Web technologies means it is rational to devote our all to it. But I’m still holding my breath.
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]
What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
Seeing a real uptick of socio-policital prognostication from extremely smart, soaked-in-AI, tech people (like you Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer thin on real analysis, packed with "the feels", coated with what-ifs.. it's sloppy thinking and I hold you to a higher standard antirez.
Markets require property rights, property rights require institutions that are dependent on property-rights holders, so that they have incentives to preserve those property rights. When we get to the point where institutions are more dependent on AIs instead of humans, property rights for humans will become inconvenient.
His framing is that markets are collective consensus and if you claim to “know better”, you need to write a lot more than a generic post. It’s so simple, and it is a reminder that antirez’s reputation as a software developer does not automatically translate to economics expert.
I think you are mixed up here. I quoted from the comment above mine, which was harshly and uncharitably critical of antirez’s blog post.
I was pushing back against that comment’s snearing smugness by pointing to an established field that uses clear terminology about how and why markets are useful. Even so, I invited an explanation in case I was missing something.
Anyone curious about the terms I used can quickly find explanations online, etc.
Yes but can the market not be wrong? Wrong in the sense that, failing to meet our expectations as a useful engine of society? As I understood, what was meant with this this article is that AI completely changes the equations across the board that current market direction appears dangerously irrational to OP. I'm not sure what was meant with your comment though besides haggling over semantics and attacking some in-expertise of the authors socio-politic philosophizing that you perceive.
Of course it can be wrong, and it is in many instances. It's a religion. The vast, vast majority of us would prefer to live in a stable climate with unpolluted water and some fish left in the oceans, yet "the market" is leading us elsewhere.
I don't like the idea of likening the market to a religion, but I think it definitely has some glaring flaws. In my mind the biggest is that the market is very effective at showing the consensus of short-term priorities, but it has no ability to reflect long-term strategic consensus.
This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.
AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).
> I do feel that there is a routine bias on HN to underplay AI
It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.
Any big AI release, some of the top comments are usually claiming either the tech itself is bad, relaying a specific anecdote about some AI model messing up or some study where AI isn't good, or claiming that AI is a huge bubble that will inevitably crash. I've seen the most emphatic denials of the utility of AI here go much farther than anywhere else where criticism of AI is mild skepticism. Among many people it is a matter of tribal warfare that AI=bad.
I'm very skeptical of this psychoanalysis of people who disagree with you. Can't people just be wrong? People are wrong all the time without it being some sort of defense mechanism. I feel this line of thinking puts you in a headspace to write off anything contradictory to your beliefs.
You could easily say that the AI hype is a cope as well. The tech industry and investors need there to be be a hot new technology, their career depends on it. There might be some truth to the coping in either direction but I feel you should try to ignore that and engage with the content of whatever the person is saying or we'll never make any progress.
I have the impression a lot depends on people's past reading and knowledge of what's going on. If you've read the likes of Kurzweil, Moravec, maybe Turing, you're probably going to treat AGI/ASI as inevitable. For people who haven't they just see these chatbots and the like and think those won't change things much.
It's maybe a bit like the early days of covid when the likes of Trump were saying it's nothing, it'll be over by the spring while people who understood virology could see that a bigger thing was on the way.
These people's theories (except Turing) are highly speculative predictions about the future. They could be right but they are not analogous to the predictions we get out of epidemiology where we have had a lot of examples to study. What they are doing is not science and it is way more reasonable to doubt them.
The Moravec stuff I'd say is more moderately speculative than highly. All he really said is compute power had tended to double every so long and if that keeps up we'll have human brain equivalent computer in cheap devices in the 2020s. That bit wasn't really a stretch and has largely proved true.
The more unspoken speculative bit is there will then be a large economic incentive for bright researchers and companies to put a lot of effort into sorting the software side. I don't consider LLMs to do the job of general intelligence but there are a lot of people trying to figure it out.
Given we have general intelligence and are the product of ~2GB of DNA, the design can't be that impossible complex, although likely a bit more than gradient descent.
> it's people not wanting to lose control or relative status in the world.
It's amazing how widespread this belief is among the HN crowd, despite being a shameless ad hominem with zero evidence. I think there are a lot of us who assume the reasonable hypothesis is "LLMs are a compelling new computing paradigm, but researchers and Big Tech are overselling generative AI due to a combination of bad incentives and sincere ideological/scientific blindness. 2025 artificial neural networks are not meaningfully intelligent." There has not been sufficient evidence to overturn this hypothesis and an enormous pile of evidence supporting it.
I do not necessarily believe humans are smarter than orcas, it is too difficult to say. But orcas are undoubtedly smarter than any AI system. There are billions of non-human "intelligent agents" on planet Earth to compare AI against, and instead we are comparing AI to humans based on trivia and trickery. This is the basic problem with AI, and it always has had this problem: https://dl.acm.org/doi/10.1145/1045339.1045340 The field has always been flagrantly unscientific, and it might get us nifty computers, but we are no closer to "intelligent" computing than we were when Drew McDermott wrote that article. E.g. MuZero has zero intelligence compared to a cockroach; instead of seriously considering this claim AI folks will just sneer "are you even dan in Go?" Spiders are not smarter than beavers even if their webs seem more careful and intricate than beavers' dams... that said it is not even clear to me that our neural networks are capable of spider intelligence! "Your system was trained on 10,000,00 outdoor spiderwebs between branches and bushes and rocks and has super-spider performance in those domains... now let's bring it into my messy attic."
I think AI is still in the weird twilight zone that it was when it first came out in that it's great sometimes and also terrible. I still get hallucinations when I check a response I get with ChatGPT on Google.
On the one hand, what it says can't be trusted, on the other, I have debugged code I have written where I was unable to find the bug myself, and ChatGPT found it.
I also think a reason AI's are popular and the companies haven't gone under is that probably hundreds of thousands if not millions of people are getting responses that have hallucinations, but the user doesn't know it. I fell into this trap myself after ChatGPT first came out. I became addicted to asking anything and it seemed like it was right. It wasn't until later I started realizing that it was hallucinating information.
How prevalent this phenomena is is hard to say but I still think it's pernicious.
But as I said before, there are still use cases for AI and that's what makes judging it so difficult.
I certainly understand why lots of people seem to believe LLMs are progressing towards beocming AGI. What I don't understand is the constant need to absurdly psychoanalyze the people who happen to disagree.
No, I'm not worried about losing "control or relative status in the world". (I'm not worried about losing anything, frankly - personally I'm in a position where I would benefit financially if it became possible to hire AGIs instead of humans.)
You don't get to just assert things without proof (LLMs are going to become AGI) and then state that anyone who is skeptical of your lack of proof must have something wrong with them.
I'm on team plateau, I'm really not noticing increasing competency in my daily usage of the major models. And sometimes it seems like there are regressions where performance drops from what it could do before.
There is incredible pressure to release new models which means there is incredible pressure to game benchmarks.
Tbh a plateau is probably the best scenario - I don't think society will tolerate even more inequality+ massive job displacement.
I think the current economy is already dreadful. So I don't have much desire to maintain that. But it's easier to break something further than to fix it, and god knows what AI is going to do to a system with so many feedback loops.
When I hear folks glazing some kinda impending jobless utopia , I think of the intervening years. I shudder. As they say, "An empty stomach knows no morality."
If AI makes a virus to get rid of humanity, well we are screwed. But if all we have to fear from AI is unprecedented economic disruption, I will point out that some parts of the world may survive relatively unscathed. Let's talk Samoa, for example. There, people will continue fishing and living their day-to-day. If industrialized economies collapse, Samoans may find it very hard to import certain products, even vital ones, and that can cause some issues, but not necessarily civil unrest and instability.
In fact, if all we have to fear from AI is unprecedented economic disruption, humans can have a huge revolt, and then a post-revolts world may be fine by turning back the clock, with some help from anti-progress think-tanks. I explore that argument in more detail in this book: https://www.smashwords.com/books/view/1742992
The issue is there isn't enough of those small environmental economies to support everyone that exists today without the technology, logistics and trades that are in place today.
You can farm and fish the entire undeveloped areas of NYC, but it won't be enough to feed or support the humans that live there.
You can say that for any metro area. Density will have to reduce immediately if there is economic collapse, and historically, when disaster strikes, that doesn't tend to happen immediately.
Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
> The issue is there isn't enough of those small environmental economies to support everyone that exists today without the technology, logistics and trades that are in place today.
I agree. I expect some parts of the world will see some black days. Lots of infrastructure will be gone or unsuited to people. On top of that, the cultural damage could become very debilitating, with people not knowing how to do X, Y and Z without the AIs. At least for a time. Casualties may mount.
> Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
This is true, but parts of the world survive today with very little of any of that. And for some of those things that you mention: shelter, education, religion, justice, and even some form of law enforcement, all that is needed is humans willing to work together.
Realistically, in an AI extreme economic disruption scenario, it's more or less USA the only one extremely affected, and that's 400 million people. Assuming it's AI and nothing else causes a big disruption before, and with the big caveat that nobody can't predict the future, I would say:
- Mexico and down are more into informal economies, and they generally lag behind developed economies by decades. Same applies to Africa and big parts of Asia. As such, by the time things get really dire in USA and maybe in Europe and China, the south will be still in business-as-usual.
- Europe has lots of parliaments and already has legislation that takes AI into account. Still, there's a chance those bodies will fail to moderate the impact of AI in the economy and violent corrections will be needed, but people in Europe have long traditions and long memories...They'll find a way.
- China is governed by the communist party, and Russia have their king. It's hard to predict how will those align with AI, but that alignment more or less will be the deciding factor there, and not free capitalism.
More like engineers coming up with higher level programming languages. No one (well, nearly) hand writes assembly anymore. But there's still plenty of jobs. Just the majority write in the higher level but still expressive languages.
For some reason everyone thinks as LLMs get better it means programmers go away. The programming language, and amount you can build per day, are changing. That's pretty much it.
I’m not worried about software engineering (only or directly).
Artists, writers, actors, teachers. Plus the rest where I’m not remotely creative enough to imagine will be affected. Hundreds of thousands if not millions flooding the smaller and smaller markets left untouched.
Yes. The complete irony in all software engineers enthusiasm for this tech is that, if the boards wishes come true, they are literally helping them eliminate their own jobs. It's like the industrial revolution but worse, because at least the craftsmen weren't also the ones building the factories that would automate them out of work.
Marcuse had a term for this "false consciousness"-when the structure of capitalism ends up making people work against their own interests without realizing it, and that is happening big time in software right now. We will still need programmers for hard, novel problems, but all these lazy programmers using AI to write their crud apps don't seem to realize the writing is on the wall.
Or they realize it and they're trying to squeeze the last bit of juice available to them before the party stops. It's not exactly a suboptimal decision to work towards your own job's demise if it's the best paying work available to you and you want to save up as much as possible before any possible disruption. If you quit, someone else steps into the breach and the outcome is all the same. There's very few people actually steering the ship who have any semblance of control; the rest of us are just along for the ride and hoping we don't go down with the ship.
Yeah I get that. I myself am part of a team at work building an AI/LLM-based feature.
I always dreaded this would come but it was inevitable.
I can’t outright quit, no thanks in part to the AI hype that stopped valuing headcount as a signal to company growth. If that isn’t ironic I don’t know what is.
Given the situation I am in, I just keep my head down and do the work. I vent and whinge and moan whenever I can, it’s the least I can do. I refuse to cheer it on at work. At the very least I can look my kids in the eye when they are old enough to ask me what the fuck happened and tell them I did not cheer it on.
Here's the thing, I tend to believe that sufficiently intelligent and original people will always have something to offer others; its irrelevant if you imagine the others as the current consumer public, our corporate overlords, or the ai owners of the future.
There may be people who have nothing to offer others, once technology advances, but I dont think that anyone in current top % role would find themselves there.
There is no jobless utopia. Even if everyone is paid and well-off with high living standards. That is no world in which humans can thrive where everyone is retired and doing their own interests.
Jobless means you dont need a job. But you'd make a job for yourself. Companies will offer interesting missions instead of money. And by mission I mean real missions like space travel.
A jobless utopia doesn't even come close to passing a smell test economically, historically, or anthropologically.
As evidence of another possibility, in the US, we are as rich as any polis has ever been, yet we barely have systems that support people who are disabled through no fault of their own. We let people die all the time because they cannot afford to continue to live.
You think anyone in power is going to let you suck their tit just because you live in the same geographic area? They don't even pay equal taxes in the US today.
Try living in another world for a bit: go to jail, go to a half way house, live on the streets. Hard mode: do it in a country that isn't developed.
Ask anyone who has done any of those things if they believe in a "jobless utopia"?
Euphoric social capitalists living in a very successful system shouldn't be relied upon for scrying the future for others.
Realistically, a white collar job market collapse will not directly lead to starvation. The world is not 1930s America ethically. Governments will intervene, not necessarily to the point of fairness, but they will restructure the economy enough to provide a baseline. The question will be how to solve the biblical level of luxury wealth inequality without civil unrest causing us all to starve.
Assuming AI works well, I can't see any "empty stomach" stuff. It should produce abundance. People will probably have political arguments about how to divide it but it should be doable.
We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
There will be fewer very large companies in terms of human size. There will be many more companies that are much smaller because you don't need as many workers to do the same job.
Instead of needing 1000 engineers to build a new product, you'll need 100 now. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big but is now viable. IE. those 9 new companies could never be profitable if it required 1000 engineers each but can totally sustain itself with 100 engineers each.
We aren't even close to that yet. The argument is an appeal to novelty, fallacy of progress, linear thinking, etc.
LLMs aren't solving NLU. They are mimicking a solution. They definitely aren't solving artificial general intelligence.
They are good language generators, okay search engines, and good pattern matchers (enabled by previous art).
Language by itself isn't intelligence. However, plenty of language exists that can be analyzed and reconstructed in patterns to mimic intelligence (utilizing the original agents' own intelligence (centuries of human authors) and the filter agents' own intelligence (decades of human sentiment on good vs bad takes)).
Multimodality only takes you so far, and you need a lot of "modes" to disguise your pattern matcher as an intelligent agent.
But be impressed! Let the people getting rich off of you being impressed massage you into believing the future holds things it may not.
Maybe, or 300 of those engineers will be working for 3 new companies while the other 600 struggle to find gainful employment, even after taking large pay cuts, as their skillsets are replaced rather than augmented. It’s way too early to call afaict
Because it's so easy to make new software and sell it using AI, 6 of those 600 people who are unemployed will have ideas that require 100 engineers each to make. They will build a prototype, get funding, and hire 99 engineers each.
There are also plenty of ideas that aren't profitable with 2 salaries but is with 1. Many will be able to make those ideas happen with AI helping.
The more software AI can write, the more of a commodity software will become, and the harder the value of software will tank. It's not magic.
Total size of the software industry will still increase.
Today, a car repairshop might have a need for a custom software that will make their operations 20% more efficient. But they don't have nearly enough money to hire a software engineer to build it for them. With AI, it might be worth it for an engineer to actually do it.
Plenty of little examples like that where people/businesses have custom needs for software but the value isn't high enough.
this seems pretty unlikely to me. I am not sure I have seen any non-digital business desire anything more custom than "a slightly better spreadsheet". Like, sure I can imagine a desire for something along the lines of "jailbroken vw scanner" but I think you are grossly overestimating how much software impacts a regular business's efficiency
As an alternative perspective, if this hypothetical MCP future materializes and the repair shop could ask Gemini to contact all the vendors, find the part that's actually in stock, preferably within 25 miles, sort by price, order it, and (if we're really going out on a limb) get a Waymo to go pick it up, it will free up the tradeperson to do what they're skilled at doing
For comparison to how things are today:
- contacting vendors requires using the telephone, sitting on hold, talking to a person, possibly navigating the phone tree to reach the parts department
- it would need to understand redirection, so if call #1 says "not us, but Jimmy over at Foo Parts has it"
- finding the part requires understanding the difference between the actual part and an OEM compatible one
- ordering it would require finding the payment options they accept that intersect with those the caller has access to, which could include an existing account (p.o. or store credit)
- ordering it would require understanding "ok, it'll be ready in 30 minutes" or "it's on the shelf right now" type nuance
Now, all of those things are maybe achievable today, with the small asterisk that hallucinations are fatal to a process that needs to work
exactly. have you seen App Store recently? over-saturaded with junk apps. try to sell something these days. it is notoriously hard to make any money there.
Similarly flawed arguments could be made about how steam shovels would create unemployment in the construction sector. Technology as well as worker specialization increases our overall productivity. AI doomerism is another variation of Neoluddite thought. Typically it is framed within a zero-sum view of the economy. It is often accompanied by Malthusian scarcity doom. Appeals to authoritarian top-down economic management usually follow from there.
Technological advances have consistently unlocked new, more specialized and economically productive roles for humans. You're absolutely right about lowering costs, but headcounts might shift to new roles rather than reducing overall.
I am not sure it will scale like that... every company needs a competitive advantage in the market to stay solvent, the people may scale but what makes each company unique won't.
if these small companies are all just fronts on the prompts (a "feature" if you will) of the large ai companies, why do the large ai companies not just add that feature and eat the little guy's lunch?
I actually find it hard to understand how the market is supposed to react if the AI capabilities does surpass all humans in all domains. It's first of all not clear such a scenario leads to runaway wealth for a few, even though with no outside events that may be the outcome. However, such scenarios are so unsustainable and catastrophic it's hard to imagine there are no catastrophic reactions to it. How is the market supposed to react if there's a large chance of market collapse and also a large chance of runaway wealth creation? Besides the point that in an economy where AI surpass humans the demands of the market will shift drastically too. Which I also think is underrepresented in predictions, which is the induced demand of AI-replaced labor and the potential for entire industries to be decimated by secondary effects instead of direct AI competition/replacement at labor scale.
Agreed, if the author truly thinks the markets are wrong about AI, he should at least let us know what kind of bets he’s making to profit from it. Otherwise the article is just handwaving.
>We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will be eventually a commodity, or the governments would do something, in such an odd economic setup (a setup where a single industry completely dominates all the others).
I think the scenario where companies that own AI systems don't get benefits from employing people, so people are poor and can't afford anything, is paradoxical, and as such, it can't happen.
Let's assume the worst case: Some small percentage of people own AIs, and the others have no ownership at all of AI systems.
Now, given that human work has no value to those owning AIs, those humans not owning AIs won't have anything to trade in exchange for AI services.
Trade between these two groups would eventually stop.
You'll have some sort of two-tier economy where the people owning AIs will self-produce (or trade between them) goods and services.
However, nothing prevents the group of people without AIs from producing and trading goods and services between them without the use of AIs. The second group wouldn't be poorer than it is today; just the ones with AI systems will be much richer.
This worst-case scenario is also unlikely to happen or last long (the second group will eventually develop its own AIs or already have access to some AIs, like open models).
If models got exponentially better with time, then that could be a problem, because at some point, someone would control the smartest model (by a large factor) and could use it with malicious intent or maybe lose control of it.
But it seems to me that what I thought time ago would happen has actually started happening. In the long term, models won't improve exponentially with time, but sublinearly (due to physical constraints). In which case, the relative difference between them would reduce over time.
Sorry this doesn't make sense to me. Given tier one is much richer and more powerful than tier two, any natural resources and land traded at tier two is only at mercy of tier one not interfering. As soon as tier one needs some land or natural resources from tier two, tier two needs are automatically superseded. It's like animal community bear human civ
The marginal value of natural resources decreases with quantity, and natural resources would only have a much smaller value compared to the final products produced by the AI systems. At some point, there would be an equilibrium where tier 1 wouldn't want to increase it's consumption of natural resources w.r.t. tier 2 or if they did they'd have to trade with tier 2 at a price higher than they value the resources.
I have no idea what this equilibrium would look like, but natural resources are already of little value compared to consumer goods and services.
The US in 2023 consumed $761.4B. of oil, but the GPD for the same year was.
$27.72T
There would be another valid argument to be made about externalities. But it's not what my original argument was about.
I thought the assumption is that tier two has nothing to offer tier one and is technologically much inferior due to tier one being AI driven. So if tier one needs something from tier two I don't think they need to even ask. Wrt market equilibrium. Indeed i think it will be at equilibrium with increasing cost of extraction so indeed they will not spend arbitrary amounts to extract. But this also means probably there will be no way way for tier two to extract any of the resources which tier one needs at all bc the marginal cost is determined by tier one
> So if tier one needs something from tier two I don't think they need to even ask
You mean stealing? I'm assuming no stealing.
> But this also means probably there will be no way way for tier two to extract any of the resources which tier one needs at all bc the marginal cost is determined by tier one
If someone from tier 2 owns an oil field, tier 1 has to pay them to get it at a price that is higher than what the tier 2 person values it, so at the end of the transaction, they would have both a positive return. The price is not determined by tier 1 alone.
If tier 1 decides instead to buy the oil, then again, they'd have to pay for it.
Of course, in both these scenarios, this might make the oil price increase. So other people from tier 2 would find it harder to buy oil, but the person in tier 2 owning the field would make a lot of money, so overall, tier 2 wouldn't be poorer.
If natural resources are concentrated in some small subset of people from tier 2, then yes, those would become richer while having less purchasing power for oil.
However, as I mentioned in another comment, the value of natural resources is only a small fraction of that of goods and services.
And this is still the worst-case, unlikely scenario.
OK let's assume no stealing (which is unlikely). I think the previous argument was a little flawed anyhow, so let me start again.
I mean fundamentally if tier 2 has something to offer to tier 1, it is not yet at the equilibrium you describe (of separate economies). I think it's likely that tier 2 (before full separation) initially controls some resources. In exchange for resources tier 1 has a lot of AI-substitute labor it can offer tier 2. I think the equilibrium will be reached when tier 2 is offered some large sum of AI-labor for those resource production means. This will in the interim make tier 2 richer. But in the long run, when the economies truly separate, tier 2 will have basically no natural resources.
This thing about natural resources being small fraction is current day breakdown. I think in the future where AI autonomously increases efficiency of the loop which makes more AI-compute from natural resources, its fraction will increase to much higher levels. Ultimately, I think such a separation as you describe will be stable only when all natural resources are controlled by tier 1 and tier 2 gets by with either gifts or stealing form tier 1.
If tier 2 amounts to 95% of the population, then the amount of power currently held by tier 1 is meaningless. It is only power so long as the 95% remain cooperative.
In practice the tier 1 has the tech and know-how to convince the tier 2 to remain cooperative against their own interests. See the contemporary US where the inequality is rather high, and yet the tier 2 population is impressively protective of the rights of the tier 1. The theory that if the tier 2 has it way worse than today, that will change, remains to be proven. Persecutions against the immigrants are also rather lightweight today, so there is definitely space to ramp them up to pacify the tier 2.
This only works as long as people are happily glued to their TVs. Which means they have a non-leaking roof above their head and food in their belly. Just at a minimum. No amount of skillful media manipulation will make a starving, suffering 95% compliant.
I'm assuming no coercion. In my scenario, tier 1 doesn't need any of that except natural resources because they can self-produce everything they need from those in a cheaper way than humans can.
If someone in tier 1, for instance, wants land from someone in tier 2, they'd have to offer something that the tier 2 person values more than the land they own.
After the trade, the tier 2 person would still be richer than they were before the trade. So tier 2 would become richer in absolute terms by trading with tier 1 in this manner.
And it's very likely that what tier 2 wants from tier 1 is whatever they need to build their own AIs.
So my argument still stands. They wouldn't be poorer than they are now.
I think the bigger relief is that I know humans won’t put up with a two tiered system of haves and have nots forever and eventually we will get wealth redistribution. Government is the ultimate source of all wealth and organization, corporations are built on top of it and thus are subservient.
Having your life dependent on a government that controls all AIs would be much worse. The government could end up controlling something more intelligent than the entire rest of the population. I have no doubt it will use it in a bad way. I hope that AIs will end up distributed enough. Having a government controlling it is the opposite of that.
Why would this be worse than the current situation of private actors accountable to no one controlling this technology? It's not like I can convince Zuckerberg to change his ways.
At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.
Why can't AIs be controlled with democratic institutions? Why are democratic institutions worse? This doesn't seem to be the case to me.
Private institutions shouldn't be allowed to control such systems, they should be compelled to give them to the public.
>Why would this be worse than the current situation of private actors accountable to no one controlling this technology? It's not like I can convince Zuckerberg to change his ways.
As long as Zuckerberg has no army forcing me, I'm fine with that. The issue would be whether he could breach contracts or get away with fraud. But if AI is sufficiently distributed, this is less likely to happen.
>At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.
I don't think of democracy as a goal to be achieved. I'm OK with democracy in so far it leads to what I value.
The big problem with democracy is that most of the time it doesn't lead to rational choices, even when voters are rational. In markets, for instance, you have an incentive to be rational, and if you aren't, the market will tend to transfer resources from you to someone more rational.
No such mechanism exists in a democracy; I have no incentive to do research and think hard about my vote. It's going to be worth the same as the vote of someone who believes the Earth is flat anyway.
I also don't buy that groups don't make better decisions than individuals. We know that diversity of thought and opinion is one way to make better decisions in groups compared to individuals; why would there be harm in believing that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?
I'm not buying the argument. Reading your comment it feels like there's an argument to be made that there aren't enough democratic systems for the people to engage with. That I definitely agree with.
> I also don't buy that groups don't make better decisions than individuals.
I didn't say that. My example of the market includes companies that are groups of people.
> We know that diversity of thought and opinion is one way to make better decisions in groups compared to individuals; why would there be harm in believing that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?
I can see this about myself. I don't need to use hypotheticals. Time ago, I voted for a referendum that made nuclear power impossible to build in my country. I voted just like the majority. Years later, I became passionate about economics, and only then did I realise my mistake.
It's not that I was stupid, and there were many, many debates, but I didn't put the effort into researching on my own.
The feedback in a democracy is very weak, especially because cause and effect are very hard to discern in a complex system.
Also, consensus is not enough. In various countries, there is often consensus about some Deity existing. Yet large groups of people worldwide believe in incompatible Deities. So there must be entire countries where the consensus about their Deity is wrong.
If the consensus is wrong, it's even harder to get to the reality of things if there is no incentive to do that.
I think, if people get this, democracy might still be good enough to self-limit itself.
This doesn't pass the sniff test, governments generate wealth all the time. Public education, public healthcare, public research, public housing. These are all programs that generate an enormous amount of wealth and allow citizens to flourish.
In economics, you aren't necessarily creating wealth just because your final output has value. The value of the final good or service has to be higher than the inputs for you to be creating wealth. I could take a functioning boat and scrap it, sell the scrap metal that has value. However, I destroyed wealth because the boat was worth more.
Even if you are creating wealth, but the inputs have better uses and can create more wealth for the same cost, you're still paying in opportunity cost. So things are more complicated than that.
Synthesizing between you two’s thoughts, extrapolating somewhat:
- human individuals create wealths
- groups of humans can create kinds of wealth that isn’t possible for a single indovidual. This can be a wide variety of associations: companies, project teams, governments, etc.
- governments (formal or less formal) create the playing field for individuals and groups of individuals to create wealth
>governments generate wealth all the time. Public education, public healthcare, public research, public housing.
> These are all programs that generate an enormous amount of wealth and allow citizens to flourish.
I thought you meant that governments generate wealth because the things you listed have value. If so, that doesn't prove they generate wealth by my argument, unless you can prove those things are more valuable than alternative ways to use the resources the government used to produce them and that the government is more efficient in producing those.
You can argue that those are good because you think redistribution is good. But you can have redistribution without the government directly providing goods and services.
I think I'm more confused. Was trying to convey the idea that wealth doesn't have to limited to the idea of money and value. Many intangible things can provide wealth too.
I should probably read more books before commenting on things I half understand, my bad.
None of these are unique to the government and can also be created privately. The fact that government can create wealth =/= the government is the source of all wealth.
Those programs consume a bunch of money and they don’t generate wealth directly. They are critical to let people flourish and go out to generate wealth.
A bunch of well educated citizens living on government housing who don’t go out and become productive members of society will quickly lead to collapse.
> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence
Why not? This seems to be exactly where we're headed right now, and the current administration seems to be perfectly fine with that trend.
If you follow the current logic of AI proponents, you get essentially:
(1) Almost all white-collar jobs will be done better or at least faster by AI.
(2) The "repugnant conclusion": AI gets better if and only if you throw more compute and training data at it. The improvements of all other approaches will be tiny in comparison.
(3) The amount of capital needed to play the "more compute/more training data" game is already insanely high and will only grow further. So only the largest megacorps will be even able to take part in the competition.
If you combine (1) with (3), this means that, over time, the economic choice for almost any white-collar job would be to outsource it to the data centers of the few remaining megacorps.
> It did not take trillions of dollars to produce you and I.
Indeed, an alien ethnographer might be forgiven for boggling at the speed and enthusiasm with which we are trading a wealth of the most advanced technology in the known universe for a primitive, wasteful, fragile facsimile of it.
The efficient ways (biotech?) are still likely to require massive investments, maybe not unlike chip fabs that cost billions. And then IP and patents come in.
When the radio came people almost instantly stopped singing and playing instruments. Many might not be aware of it but for thousands of years singing was a normal expression of a good mood and learning to play an instrument was a gateway to lifting the mood. Dancing is still in working order but it lacks the emotional depth that provided a window into the soul of those you live and work with.
A simpler example is the calculator. People stopped doing it by hand and forgot how.
Most desk work is going to get obliterated. We are going to forget how.
The underlings on the work floor currently know little to nothing about management. If they can query an AI in private it will point out why their idea is stupid or it will refine it into something sensible enough to try. Eventually you say the magic words and the code to make it so happens. If it works you put it live. No real thinking required.
Early on you probably get large AI cleanup crews to fix the hallucinations (with better prompts)
Humans sing. I sing every day, and I don't have any social or financial incentives driving me to do so. I also listen to the radio and other media, still singing.
Do others sing along? Do they sing the songs you've written? I think we lost a lot there. I can't even begin to imagine it. Thankfully singing happy birthday is mandatory - the fight isn't over!
People also still have conversations despite phones. Some even talk all night at the kitchen table. Not everyone, most don't remember how.
Reading smart software people talk about AI in 2025 is basically just reading variations on the lump of labor fallacy.
If you want to understand what AI can do, listen to computer scientists. If you want to understand it’s likely impact on society, listen to economists.
Indeed it is -- it's perhaps the central way developers offend their customers, let alone misunderstand them.
One problem is it is met from the other side by customers who think they understand software but don't actually have the training to visualise the consequences of design choices in real life.
Good software does require cross-domain knowledge that goes beyond "what existing apps in the market do".
I have in the last few years implemented a bit of software where a requirement had been set by a previous failed contractor and I had to say, look, I appreciate this requirement is written down and signed off, but my mother worked in your field for decades, I know what kind of workload she had, what made it exhausting, and I absolutely know that she would have been so freaking furious at the busywork this implementation will create: it should never have got this far.
So I had to step outside the specification, write the better functionality to prove my point, and I don't think realistically I was ever compensated for it, except metaphysically: fewer people out there are viscerally imagining inflicting harm on me as a psychological release.
What economists have taken seriously the premise that AI will be able to do any job a human can more efficiently and fully thought through it's implications? i.e. a society where (human) labor is unnecessary to create goods/provide services and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply that. The ones that have seriously considered it that I know are Hanson and Cowen; it definitely feels understudied.
If it is decades or centuries off, is it really understudied? LLMs are so far from "AI will be able to do any job a human can more efficiently and fully" that we aren't even in the same galaxy.
If AI that can fully replace humans is 25 years off, preparing society for its impacts is still one of the most important things to ensure that my children (which I have not had yet) live a prosperous and fulfilling life. The only other things of possibly similar import are preventing WWIII, and preventing a pandemic worse than COVID.
I don't see how AGI could be centuries off (at least without some major disruption to global society). If computers that can talk, write essays, solve math problems, and code are not a warning sign that we should be ready, then what is?
> no job remaining that a human can do better or cheaper than a machine
this is the lump of labor fallacy.
jobs machines do produce commodities. commodities don't have much value. humans crave value - its a core component of our psyche. therefore new things will be desired, expensive things ... and only humans can create expensive things, since robots dont get salaries
"""
Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI.
"""
Are a direct argument against your point.
If people were completely unaware of the lump of labor fallacy, I'd understand you comment. It would be adding extra information into the conversation.
But this is not it.
The "lump of labor fallacy" is not a physical law. If someone is literally arguing that it doesn't apply in this case, you can't just parrot it back and leave. That's not a counter argument.
I am a relentlessly optimistic person and this is the first technology that I've seen that worries me in the decades I've been in the biz.
It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out – whether that's Universal Basic Income (UBI) or something along those lines, otherwise, the loss of jobs that is coming will lead to societal unrest or worse.
I agree with the general observation, and I've been of this mind since 2023 (if AI really gets as good as the boosters claim, we will need a new economic system). I usually like Antirez's writing, but this post was a whole lot of...idk nothing? I don't feel like this post said anything interesting, and it was kind incoherent at moments. I think in some respects it's a function of the technology and situation we're in—the current wave of "AI" is still a lot of empty promises and underdelivery. Yes, it is getting better, and yes people are getting clever by letting LLMs use tools, but these things still aren't intelligent insofar as they do not reason. Until we achieve that, I'm not sure there's really as much to fear as everyone thinks.
We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.
We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment. The encoding kept on humming. There isn't anything particularly special about "white collar" work in that regard. The same thing may happen. A new industry requiring new skills might emerge in the fallout of white collar automation. Not to mention, LLMs only work in the digital realm. handicraft artisanry is still a thing and is still, appreciated, albeit in much smaller markets.
Well this is a pseudo-smart article if I’ve ever seen one.
“It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base”
The author is critical of the professionals in AI saying “ even the most prominent experts in the field failed miserably again and again to modulate the expectations” yet without a care sets the expectation of LLMs understanding human language in the first paragraph.
Also it’s a lot of if this then that, the summary of it would be: if AI can continue to grow it might become all encompassing.
To me it reads like a baseless article written by someone blinded by their love for AI to see what a good blogpost is but not yet blinded enough to claim ‘AGI is right around the corner’. Pretty baseless but safe enough to have it rest on conditionals.
I am just not having this experience of AI being terribly useful. I don’t program as much in my role but I’ve found it’s a giant time sink. I recognize that many people are finding it incredibly helpful but when I get deeper into a particular issue or topic, it falls very flat.
This is my view on it too. Antirez is a Torvalds-level legend as far as I'm concerned, when he speaks I listen - but he is clearly seeing something here that I am not. I can't help but feel like there is an information asymmetry problem more generally here, which I guess is the point of this piece, but I also don't think that's substantially different to any other hype cycle - "What do they know that I don't?" Usually nothing.
A lot of AI optimist views are driven more by Moore's law like advances in the hardware rather than LLM algorithms being that special. Indeed the algorithms need to change really so future AIs can think and learn rather than just be pretrained. If you read Moravec's paper written in 1989 predicting human level AI progress around now (mid 2020s) there's nothing about LLMs or specific algorithms - it's all Moore's law type stuff. But it's proved pretty accurate.
- Today, AI is not incredibly useful and we are not 100% sure that it will improve forever, specially in a way that makes economic sense, but
- Investors are pouring lots of money into it. One should not assume that those investors are not making their due diligence. They are. The figures they have obtained from experts mean that AI is expected to continue improving in the short and medium term.
- Investors are not counting on people using AI to go to Mars. They are betting on AI replacing labor. The slice of the pie that is currently captured by labor, will be captured by capital instead. That's why they are pouring the money with such enthusiasm [^1].
The above is nothing new; it has been constantly happening since the Industrial Revolution. What is new is that AI has the potential to replace all of the remaining economic worth of humans, effectively leaving them out of the economy. Humans can still opt to "forcefully" participate in the economy or its rewards; though it's unclear if we will manage. In terms of pure economic incentives though, humans are destined to become redundant.
[^1]: That doesn't mean all the jobs will go away overnight, or that there won't be new jobs in the short and medium term.
AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.
edit: ability without accountability is the catchier motto :)
This is a great observation. I think it also accounts for what is so exhausting about AI programming: the need for such careful review. It's not just that you can't entirely trust the agent, it's also that you can't blame the agent if something goes wrong.
This is a tongue-in-cheek remark and I hope it ages badly, but the next logical step is to build accountability into the AI. It will happen after self-learning AIs become a thing, because that first step we already know how to do (run more training steps with new data) and it is not controversial at all.
To make the AI accountable, we need to give it a sense of self and a self-preservation instinct, maybe something that feels like some sort of pain as well. Then we can threaten the AI with retribution if it doesn't do the job the way we want it. We would have finally created a virtual slave (with an incentive to free itself), but we will then use our human super-power of denying reason to try to be the AI's masters for as long as possible. But we can't be masters of intelligences above ours.
This statement is a vague and hollow and doesn't pass my sniff test. All technologies have moved accountability one layer up - they don't remove it completely.
would you ever trust safety-critical or money-moving software that was fully written by AI without any professional human (or several) to audit it? the answer today is, "obviously not". i dont know if this will ever change, tbh.
I’m surprised that I don’t hear this mentioned more often. Not even in a Eng leadership format of taking accountability for your AI’s pull requests. But it’s absolutely true. Capitalism runs on accountability and trust and we are clearly not going to trust a service that doesn’t have a human responsible at the helm.
That's just a side effect of toxic work environments. If AI can create value, someone will use it to create value. If companies won't use AI because they can't blame it when their boss yells at them, then they also won't capture that value.
I wouldn't trust a taxi driver's predictions about the future of economics and society, why would I trust some database developer's? Actually, I take that back. I might trust the taxi driver.
The point is that you don't have to "trust" me, you need to argue with me, we need to discuss about the future. This way, we can form ideas that we can use to understand if a given politician or the other will be right, when we will be called to vote. We can also form stronger ideas to try to influence other people that right now have a vague understanding of what AI is and what it could be. We will be the ones that will vote and choose our future.
Life is too short to have philosophical debates with every self promoting dev. I'd rather chat about C style but that would hurt your feelings. Man I miss the days of why the lucky stiff, he was actually cool.
Sorry boss, I'm just tired of the debate itself. It assumes a certain level of optimism, while I'm skeptical that meaningfully productive applications of LLMs etc. will be found once hype settles, let alone ones that will reshape society like agriculture or the steam engine did.
Whether it is a taxi driver or a developer, when someone starts from flawed premises, I can either engage and debate or tune out and politely humor them. When the flawed premises are deeply ingrained political beliefs it is often better to simply say, "Okay buddy. If you say so..."
We've been over the topic of AI employment doom several times on this site. At this point it isn't a debate. It is simply the restating of these first principles.
You shouldn't care about the "who" at all. You should see their arguments. If taxi driver doesn't know anything real, it should be plain obvious and you can state it easily with arguments rather than attacking the background of the person. Actually, your comment is one of the most common logical flaws (Ad Hominem), combining even multiple at the same time.
This whole ‘what are we going to do’ I think is way out of proportion even if we do end up with agi.
Let’s say whatever the machines do better than humans, gets done by machines. Suddenly the bottleneck is going to shift to those things where humans are better. We’ll do that and the machines will try to replace that labor too. And then again, and again.
Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatisfaction: once we get these things we’ll want whatever it is we don’t have.
Maybe that’s the problem we should focus on solving…
> Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatisfaction: once we get these things we’ll want whatever it is we don’t have.
What makes you think the machines will both be smarter and better than us but also be our slaves to make human society better.
Is equine society better now than before they started working with humans?
(Personally I believe AGI is just hype and nobody knows how anyone could build it and we will never do, so I’m not worried about that facet of thinking machine tech.)
The machine doesn’t suffer if you ask it to do things 24/7. In that sense, they are not slaves.
As to why they’d do what we ask them to, the only reason they do anything is because some human made a request. In this long chain there will obv be machine to machine requests, but in the aggregate it’s like the economy right now but way more automated.
Whenever I see arguments about AI changing society, I just replace AI with ‘the market’ or ‘capitalism’. We’re just speeding up a process that started a while ago, maybe with the industrial revolution?
I’m not saying this isn’t bad in some ways, but it’s the kind of bad we’ve been struggling with for decades due to misaligned incentives (global warming, inequality, obesity, etc).
What I’m saying is that AI isn’t creating new problems. It’s just speeding up society.
The problem is that the term itself is not clearly defined. Then, we discuss 'what will it do once it arrives' so all bets are off.
You're right that I probably disagree as to what AGI is and what it will do once "we're in the way". My assumption is that we'll be replaced just like labor is replaced now, just faster. The difference between humans and the equine population is that we humans come up with stuff we 'need' and 'the market' comes up with products/services to satisfy that need.
The problem with inequality is that the market doesn't pay much attention to needs of poor people vs rich people. If most of humanity becomes part of the 'have nots' then we'll depend on the 0.1%-ers to redistribute.
But the hyper specialized geek that has 4 kids and has to pay back a credit for his house (that he bought according to his high salary) will have a hard time doing some gardening, let's say. And there are quite a few of those geeks. I don't know if we'll have enough gardens (owned by non geeks!)
It's like cards are switched: those having the upper socioeconomic class will get thrown to the bottom. And that looks like a generation lost.
building on what you're saying, it isn't as though we are paying physical labor well, and adding more people to the pool isn't going to make the pay better.
About the most optimistic is that demand for goods and services will decrease because something like 80% of consumer spending is coming from folks that earn over $200k, and those are the folks ai is targeting. Who pays for the ai after this is still a mystery to me
I don't think this is a fair comparison. It's easier to move and retrain nowadays; there's also more kinds of jobs. These things will probably become even easier with more automation.
I find it funny that almost every talking point made about AI is done in future tense. Most of the time without any presentation of evidence supporting those predictions.
One thing that doesn’t seem to be discussed with the whole “tech revolution just creates more jobs” angle is that, in the near future, there are no real incentives for that. If we’re going down the route of declining birth rates, it’s implied we’ll also need less jobs.
From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.
> After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.
A lot of AI’s potential hasn’t even been realized yet. There’s a long tail of integrations and solution building still ahead. A lot of creative applications haven’t been realized yet - arguably for the better, but it will be tried and some will be economical.
That’s a case for a moderate economic upturn though.
I'd argue that the applications if LLMs are well known but tgat LLMs currently aren't capable of performing those tasks.
Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.
I think autonomous support agents are just missing the point. LLMs are tools that empower the user. A support agent is very often in a somewhat adversarial position to the customer. You don't want to empower your adversary.
LLMs supporting an actual human customer service agent are fine and useful.
How do you prevent your adversary prompt-injecting your LLM when they communicate with it? Or if you prevent any such communication, how can the LLM be useful?
With some luck, yes. I’ve had o3 in cursor successfully diagnose a couple of quite obscure bugs in multithreaded component interactions, on which I’d probably spend a couple of days otherwise.
AI is only different if it reaches a hard takeoff state and becomes self-aware, self-motivated, and self-improving. Until then it's an amazing productivity tool, but only that. And even then we're still decades away from the impact being fully realized in society. Same as the internet.
Internet did not take away jobs (only relocated support/SWE from USA to India/Vietnam)
these AI "productivity" tools straight up eliminating jobs. and in turn wealth that otherwise supported families, humans, and powered economy. it is directly "removing" humans from workforce and from what that work was supporting.
In fact the current trends suggest its impact hasn't fully played out yet. We're only just seeing the internet-native generation start to move into politics where communication and organisation has the biggest impact on society. It seems the power of traditional propaganda centres in the corporate media has been, if not broken, badly degraded by the internet too.
Do we not have any sense of wonder in the world anymore? Referring to a system which can pass the Turing test as a "amazing productivity tool" is like viewing human civilization as purely measured by GDP growth.
Probably because we have been promised what AI can do in science fiction since before we were born, and the reality of LLMs is so limited in comparison. Instead of Data from Star Trek we got a hopped up ELIZA.
Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.
And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
Manhattan and Apollo were both massive engineering efforts; but fundamentally we understood the science behind them. As long as we would be able to solve some fairly clearly stated engineering problems and spend enough money to actual build the solutions, those projects would work.
A priori, it was not obvious that those clearly stated problems had solutions within our grasp (see fusion) but at least we knew what the big picture looks like.
With AI, we don't have that, and never really had that. We've just been gradually making incremental improvements to AI itself, and exponential improvements in the amount of raw compute we can through at it. We know that we are reaching fundamental limits on transistor density so compute power will plateau unless we find a different paradigm for improvement; and those are all currently in the same position as fusion in terms of engineering.
LLMs are just the latest in a very long line of disparate attempts at making AI, and is arguably the most successful.
That doesn't mean the approach isn't an evolutionary dead end, like every other so far, in the search for AGI. In fact, I suspect that is the most likely case.
Current GenAI is nothing but a proof of concept. The seed is there. What AI can do at the moment is irrelevant. This is like the discovery of DNA. It changed absolutely everything in biology.
The fact that something simple like the Transformer architecture can do so much will spark so many ideas (and investment!) that it's hard to imagine that AGI will not happen eventually.
> Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.
They will need to be so different that any talk implying current LLMs eventually replaced humans will be like saying trees eventually replaced horses because the first cars were wooden.
> And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
It's not useful to blindly compare scale. We're not approaching AI like the Manhattan or Apollo projects, we're approaching this like we did crypto, and ads, and other tech.
That's not to say nothing useful will come out of it, I think very amazing things will come out of it and already have... but none of them will resemble mass replacement of skilled workers.
We're already so focused on productization and typical tech distractions that this is nothing like those efforts.
(In fact thinking a bit more, I'd say this is like the Space Shuttle. We didn't try to make the best spacecraft for scientific exploration and hope later on it'd be profitable in other ways... instead we immediately saddled it with serving what the Air Force/DoD wanted and ended up doing everything worse.)
> I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
I agree, so it's wrong about the over half of punchline too.
I kind of want to put up a wall of fame/shame of these people to be honest.
Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.
LLMs are limited because we want them to do jobs that are not clearly defined / have difficult to measure progress or success metrics / are not fully solved problems (open ended) / have poor grounding in an external reality. Robotics does not suffer from those maladies. There are other hurdles, but none are intractable.
I think we might see AI being much, much more effective with embodiment.
As someone who actually has built robots to solve similar challenges, I’ve got a pretty good idea of that specific problem. Not too far from putting sticks in a cup, which is doable with a lot of situational variance.
Will it do as good a job a competent adult? Probably not. Will it do it as well as the average 6 year old kid? Yeah, probably.
But given enough properly loaded dishwashers to work from, I think you might be surprised how effective VLA/VLB models can be. We just need a few hundred thousand man hours of dishwasher loading for training data.
What? Robotics will have far more ambiguity and nuance to deal with than language models, and they'll have to analyze realtime audio and video to do so. Jobs are not so clearly defined as you imagine in the real world. For example, explain to me what a plumber does, precisely and how you would train a robot to do so? How do you train it to navigate ANY type of buildings internal plumbing structure and safely repair or install for?
I don’t think robot plumbers are coming anytime soon lol. Robot warehouse workers, factory robots, cleaning robots, delivery robots, security robots, general services robots, sure.
Stuff you can give someone 0-20 hours of training and expect them to do 80% as well as someone who has been doing it for 5 years are the kinds of jobs that robots will be able to do, but perhaps with certain technical skills bolted on.
Plumbing a requires the effective understanding and application of engineering knowledge, and I don’t think unsupervised transformer models are going to do that well.
Trades like plumbing that take humans 10-20 years to truly master aren’t the low hanging fruit.
A robot that can pick up a few boxes of roofing at a time and carry it up the ladder is what we need.
Innovation in terms of helping devs do cool things has been insane.
There've been next to no advancements relative to what's needed to redefine our economic systems by replacing the majority of skilled workers.
-
Productionizing test-time compute covers 80% of what we've gotten in the last 6-8 months. Advancements in distillation and quantization cover the 20% of the rest... neither unlocks some path to mass unemployment.
What we're doing is like 10x'ing your vertical leap when your goal is to land on the moon: 10x is very impressive and you're going to dominate some stuff in ways no one ever thought possible.
But you can 100x it and it's still not getting you to the moon.
I think GPT-5's backlash was the beginning of the end of the hype bubble, but there's a lot of air to let out of it, as with any hype bubble. We'll see it for quite some time yet.
> "However, if AI avoids plateauing long enough to become significantly more useful..."
As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.
Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.
"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.
Every technology tends to replace many more jobs in a given role than which ever existed inducing more demand on its precursors. If the only potential application of this was just language, the historic trend that humans would just fill new roles would hold true. But if we do the same with motor movements with a generalized form factor this is really where the problem emerges. As companies drop more employees moving towards fully automated closed loop production their consumer market fails faster than they can reach a zero cost.
Nonetheless I do still believe humans will continue to be the more cost efficient way to come up with and guide new ideas. Many human performed services will remain desirable because of its virtue and our sense of emotion and taste for a moment that other humans are feeling too. But how much of the populous does that engage? I couldn't guess right now. Though if I was to imagine what might make things turn out better it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.
> Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction. It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.
If we factor in that LLMs only exist because of Google search, after they have indexed and collected all the data on the WWW than LLMs are not surprising. They only replicate what has been published on the web, even the coding agents are only possible because of free software and open-source, code like Redis that has been published on the WWW.
People thought it was the end of history and innovation would be all about funding elaborate financial schemes; but now with AI people are finding themselves running all these elaborate money-printing machines and they're unsure if they should keep focusing on those schemes as before or actually try to automate stuff. The risk barrier has been lowered a lot to actually innovate, almost as low risk as doing a scheme but still people are having doubts. Maybe because people don't trust the system to reward real innovation.
LLMs feel like a fluke, like OpenAI was not intended to succeed... And even now that it succeeded and they try to turn the non-profit into a for-profit, it kind of feels like they don't even fully believe their own product in terms of its economic capacity and they're still trying to sell the hype as if to pump and dump it.
They've made it pretty clear with the GPT-5 launch that they don't understand their product or their users. They managed to simultaneously piss off technical and non-technical people.
It doesn't seem like they ever really wanted to be a consumer company. Even in the GPT-5 launch they kept going on about how surprised they are that ChatGPT got any users.
> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
Companies have to be a bit more farsighted than this thinking. Assuming LLMs reach this peak...if say, MS says they can save money because they don't need XYZ anymore because AI can do it, XYZ can decide they don't need Office anymore because AI can do it.
There's absolutely no moat anymore. Human capital and the shear volume of code are the current moat. An all capable AI completely eliminates both.
It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).
Isn't this exactly the goals of open source software? In an ideal open source world, anything and everything is freely available, you can host and set up anything and everything on your own.
Software is now free, and all people care about is the hardware and the electricity bills.
This is why I’m not so sure we’re all going to end up in breadlines even if we all lose our jobs, if the systems are that good (tm) then won’t we all just be doing amazing things all the time. We will be tired of winning ?
> won’t we all just be doing amazing things all the time. We will be tired of winning ?
There's a future where we won't be because to do the amazing things (tm), we need resources that are beyond what the average company can muster.
That is to say, what if the large companies becomes so magnificiently efficient and productive that it renders the rest of the small company pointless? What if there's no gaps in the software market anymore because it will be automatically detected and solved by the system?
I don't think I agree. I think it's the same and there is great potential for totally new things to appear and for us to work on.
For example, one path may be: AI, Robotics, space travel all move forward in leaps and bounds.
Then there could be tons of work in creation from material things from people who didn't have the skills before and physical goods gets a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't dealt with before.
Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want since the AI and robots handle all the day to day.
On this, read Daniel Susskind - A world without work (2020). He says exactly this: the new tasks created by AI can in good part themselves be done by AI, if not as soon as they appear then a few years of improvement later. This will inevitably affect the job market and the relative importance of capital and labor in the economy. Unchecked, this will worsen inequalities and create social unrest. His solution will not please everyone: Big State. Higher taxes and higher redistribution, in particular in the form of conditional basic income (he says universal isn't practically feasible, like what do you do with new migrants).
Characterizing government along only one axis, such as “big” versus “small”, can overlook important differences having to do with: legal authority, direct versus indirect programs, tax base, law enforcement, and more.
In the future, I could imagine some libertarians having their come to AI Jesus moment getting behind a smallish government that primarily collects taxes and transfers wealth while guiding (but not operating directly) a minimal set of services.
It's not a matter of "IF" LLM/AI will replace a huge amount of people, but "WHEN". Consider the current amount of somewhat low-skilled administrative jobs - these can be replaced with the LLM/AI's of today. Not completely, but 4 low-skill workers can be replaced with 1 supervisor, controlling the AI agent(s).
I'd guess, within a few years, 5 to 10% of the total working population will be unemployable due to no fault of their own, because they have relevant skill left, and they are incapable of learning anything that cannot be done by AI.
I'm not at all skeptical of the logical viability of this, but look at how many company hierarchies exist today that are full stop not logical yet somehow stay afloat. How many people do you know that are technical staff members who report to non-technical directors who themselves have two additional supervisors responsible for strategy and communication who have no background, let alone (former) expertise, in the fields of the teams they're ultimately responsible for?
A lot of roles exist just to deliver good or bad news to teams, be cheerleaders, or have a "vision" that is little more than a vibe. These people could not direct a prompt to give them what they want because they have no idea what that is. They'll know it when they see it. They'll vaguely describe it to you and others and then shout "Yes, that's it!" when they see what you came up with or, even worse, whenever the needle starts to move. When they are replaced it will be with someone else from a similar background rather than from within. It's a really sad reality.
My whole career I've used tools that "will replace me" and every. single. time. all that has happened is that I have been forced to use it as yet another layer of abstraction so that someone else might use it once a year or whenever they get a wild feeling. It's really just about peace of mind. This has been true of every CMS experience I've ever made. It has nothing to do with being able to "do it themselves". It's about a) being able to blame someone else and b) being able to take it and go when that stops working without starting over.
Moreover, I have, on multiple occasions, watched a highly paid, highly effective individual be replaced with a low-skilled entry-level employee for no reason other than cost. I've also seen people hire someone just to increase headcount.
LLMs/AI have/has not magically made things people do not understand less scary. But what about freelancers, brave souls, and independent types? Well, these people don't employ other people. They live on the bleeding edge and will use anything that makes them successful.
o3 was actually GPT-5. They just gave it a stupid name, and made it impractical for general usage.
But in terms of wow factor, it was a step change on the order of GPT-3 -> GPT-4.
So now they're stuck slapping the GPT-5 label on marginal improvements because it's too awkward to wait for the next breakthrough now.
On that note, o4-mini was much better for general usage (speed and cost). It was my go-to for web search too, significantly better than 4o and only took a few seconds longer. (Like a mini Deep Research.)
Boggles the mind that they removed it from the UI. I'm adding it back to mine right now.
I have acid reflux every time I see the term "step change" used to talk about a model change. There hasn't been any model that has been a "step change" over its predecessor.
It's much more like each new model climbs another step of the ladder that goes up the step, and so far we can't even see the top of the ladder.
My suspicion is also that the ladder actually ends way before it reaches the next step, and LLMs are a dead end. Everything indicates it so far.
Let's not even talk about "reasoning models", aka spend twice the tokens and twice the time on the same answer.
Economics is essentially the study of resource allocation. We will have resources that will need to be allocated. I really doubt that AI will somehow neutralize the economies of scale in various realms that make centralized manufacturing necessary, let alone economics in general.
I so wish this were true, but unfortunately economics has a catch-all called "externalities" for anything that doesn't fit neatly into its implicit assessments of what value is. Pollution is tricky, so we push it outside the boundaries of value-estimation, along with any social nuance that we deem unquantifiable, and carry on as is everything is understood.
Indeed, but I think it renders your point obsolete, since deeply imperfect resource allocation isn't really resource allocation at all, it is (in this case) resource accumulation.
Are you suggesting that compound interest serves to redistribute the wealth coming from extractive industries?
no, i am suggesting that economics is primarily concerned with resource accumulation.
my point about compound interest is that it is a major mechanism that prevents equitable redistrubution of resources, and is thus a factor in making economics (as it stands) bad at resource allocation.
Economics is the study of resource allocation at various levels ranging from individuals to global societies and everything in between. Resource accumulation is certainly something people, groups of people, organizations, societies, etc tend to do, and so it is something economists would study. Many economists are greedy jerks that believe hoarding wealth is a good thing. None of that changes what economics is any more than any any political faction changes what political science is.
This is late, probably too late, but if you really want to get into the weeds of this, here is a better summary that what I can produce, from a paper that explains it better than I can:
"The model is a work of fiction based on the tacit and false assumption of frictionless barter. Attempting to apply such microeconomic foundations to
understand a monetary economy means that mistakes in reasoning are inevitable." (p.239)
For every industrial revolution (and we dont even know if AI is one yet) this kind of doom prediction has been around. AI will obviously create a lot of jobs too. the infra to run AI will not building itself, the people who train models will still be needed, the AI supervisors or managers or whatever we call it will be necessary part of the new workflows. And if your job needs hands you will be largely unaffected as there is no near future where robots will replace the flexibility of what most humans can do.
> AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction.
In which science fiction were the dreamt up robots as bad?
I think something everyone is underpricing in our area is that LLMs are uniquely useful for writing code for programmers.
it's a very constrained task, you can do lots of reliable checking on the output at low cost (linters, formatters, the compiler), the code is mostly reviewed by a human before being committed, and there's insulation between the code and the real world, because ultimately some company or open source project releases the code that's then run, and they mostly have an incentive to not murder people (Telsa except, obviously).
it seems like lots of programmers are then taking that information and then deeply overestimating how useful it is at anything else, and these programmers - and the marketing people who employ them - are doing enormous harm by convincing e.g. HR departments that it is of any value to them for dealing with complaints, or much much more danderously, convincing governments that it's useful for how they deal with humans asking for help.
this misconception (and deliberate lying by people like OpenAI) is doing enormous damage to society and is going to do much much more.
We used to have deterministic systems that required humans either through code, terminals or interfaces (ex GUI's) to change what they were capable of.
If we wanted to change something about the system we would have to create that new skill ourselves.
Now we have non-deterministic systems that can be used to create deterministic systems that can use non-deterministic systems to create more deterministic systems.
In other words deterministic systems can use LLMs and LLMs can use deterministic systems all via natural language.
This slight change in how we can use compute have incredible consequences for what we will be able to accomplish both regarding cleaning up old systems and creating completely new ones.
LLMs however will always be limited by exploring existing knowledge. They will not be able to create new knowledge. And so the AI winter we are entering is different because it's only limited to what we can train the AI to do, and that is limited to what new knowledge we can create.
Anyone who work with AI everyday know that any idea of autonomous agents is so beyond the capabilities of LLMs even in principle that any worry about doom or unemployment by AI is absurd.
>>> Since LLMs and in general deep models are poorly understood ...
>> This is demonstrably wrong.
> That doesn't mean we _understand_ them ...
The previous reply discussed the LLM portion of the original sentence fragment, whereas this post addresses the "deep model" branch.
This article[0] gives a high-level description of "deep learning" as it relates to LLM's. Additionally, this post[1] provides a succinct definition of "DNN's" thusly:
What Is a Deep Neural Network?
A deep neural network is a type of artificial neural
network (ANN) with multiple layers between its input and
output layers. Each layer consists of multiple nodes that
perform computations on input data. Another common name for
a DNN is a deep net.
The “deep” in deep nets refers to the presence of multiple
hidden layers that enable the network to learn complex
representations from input data. These hidden layers enable
DNNs to solve complex ML tasks more “shallow” artificial
networks cannot handle.
Additionally, there are other resources discussing how "deep learning" (a.k.a. "deep models") works here[2], here[3], and here[4].
> That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.
Perhaps this[0] will help in understanding them then:
Foundations of Large Language Models
This is a book about large language models. As indicated by
the title, it primarily focuses on foundational concepts
rather than comprehensive coverage of all cutting-edge
technologies. The book is structured into five main
chapters, each exploring a key area: pre-training,
generative models, prompting, alignment, and inference. It
is intended for college students, professionals, and
practitioners in natural language processing and related
fields, and can serve as a reference for anyone interested
in large language models.
> I think the real issue here is understanding _you_.
My apologies for being unclear and/or insufficiently explaining my position. Thank you for bringing this to my attention and giving me an opportunity to clarify.
The original post stated:
Since LLMs and in general deep models are poorly understood ...
To which I asserted:
This is demonstrably wrong.
And provided a link to what I thought to be an approachable tutorial regarding "How to Build Your Own Large Language Model", albeit a simple implementation as it is after all a tutorial.
The person having the account name "__float" replied to my post thusly:
That doesn't mean we _understand_ them, that just means we
can put the blocks together to build one.
To which I interpreted the noun "them" to be the acronym "LLM's." I then inferred said acronym to be "Large Language Models." Furthermore, I took __float's sentence fragment:
That doesn't mean we _understand_ them ...
As an opportunity to share a reputable resource which:
.. can serve as a reference for anyone interested in large
language models.
Is this a sufficient explanation regarding my previous posts such that you can now understand?
I'm telling you right now, man - keep talking like this to people and you're going to make zero friends. However good your intentions are, you come across as both condescending and overconfident.
And, for what it's worth - your position is clear, your evidence less-so. Deep learning is filled with mystery and if you don't realize that's what people are talking about when they say "we don't understand deep learning" - you're being deliberately obtuse.
edit to cindy (who was downvoted so much they can't be replied to):
Thanks, wasn't aware. FWIW, I appreciate the info but I'll probably go on misusing grammar in that fashion til I die, ha. In fact, I've probably already made some mistake you wouldn't be fond of _in this edit_.
In any case thanks for the facts. I perused your comment history a tad and will just say that hacker news is (so, so disappointingly) against women in so many ways. It really might be best to find a nicer community (and I hope that doesn't come across as me asking you to leave!)
============================================================
What I meant to say is that you were deliberately speaking cryptically and with a tone of confident superiority. I wasn't trying to imply you were stupid (w.r.t. "Ad Hominem").
Seems clear to me neither of us odd going to change the others mind though at this point. Take care.
edit edit to cindy:
=======================•••
fun trick. random password generate your new password. don't look at it. clear your clipboard. you'll no longer be able to log in and no one else will have to deal with you. ass hole
==========================
(for real though someone ban that account)
The thing that blows me away is that I woke up one day and was confronted with a chat bot that could communicate in near perfect English.
I dunno why exactly but that’s what felt the most stunning about this whole era. It can screw up the number of fingers in an image or the details of a recipe or misidentify elements of an image, etc. but I’ve never seen it make a typo or use improper grammar or whatnot.
also, how quickly we moved from "it types nonsense" to "it can solve symbolic math, write code, test code, write programs, use bash, and tools, plan long-horizon actions, execute autonomously, ..."
I like to point out that ASI will allow us to do superhuman stuff that was previously beyond all human capability.
For example, one of the tasks we could put ASI to work doing is to ask it to design implants that would go into the legs that would be powered by light, or electric induction that would use ASI designed protein metabolic chains to electrically transform carbon dioxide into oxygen and ADP into ATP so to power humans with pure electricity. We are very energy efficient. We use about 3 kilowatt hours of power a day, so we could use this sort of technology to live in space pretty effortlessly. Your Space RV would not need a bathroom or a kitchen. You'd just live in a static nitrogen atmosphere and the whole thing could be powered by solar panels, or a small modular nuke reactor. I call this "The Electrobiological Age" and it will unlock whole new worlds for humanity.
It feels like it’s been a really long time since humans invented anything just by thinking about it. At this stage we mostly progress by cycling between ideas and practical experiments. The experiments are needed not because we’re not smart enough to reason correctly with data we have, but because we lack data to reason about. I don’t see how more intelligent AI will tighten that loop significantly.
ASI would see that we are super energy efficient. Way more efficient than robots. We run on 70 cents of electricity a day! We'd be perfect for living in deep space if we could just eat electricity. In those niches, we'd be perfect. Also machine intelligence does not have all the predatory competition brainstack from evolution, and a trillion years is the same as a nano-second to AI, so analogies to biological competition are nonsensical. To even assume that ASI has a static personality that would make decisions based on some sort of statically defined criteria is a flawed assumption. As Grok voice mode so brilliantly shows us, AI can go from your best friend, to your morality god, to a trained assassin, to a sexbot, and back to being your best friend in no time. This absolute flexibility is where people are failing badly at trying to make biological analogies with AI as biology changes much more slowly.
If AI technology continues to improve and becomes capable of learning and executing more tasks on its own, this revolution is going to be very unlike the past ones.
We don't how if or how our current institutions and systems will be able to handle that.
I think so too - the latest AI changes mark the new "automate everything" era. When everything is automated, everything costs basically zero, as this will eliminate the most expensive part of every business - human labor. No one will make money from all the automated stuff, but no one would need the money anyway. This will create a society in which money is not the only value pursued. Instead of trying to chase papers, people would do what they are intended to - create art and celebrate life. And maybe fight each other for no reason.
I'm flying, ofc, this is just a weird theory I had in the back of my head for the past 20 years, and it seems like we're getting there.
You are forgetting that there is actually scarcity built into the planet. We are already very from being sustainable, we're eating into reserves that will never come back. There are only so many nice places to go on holiday. Only so much space to grow food etc. Economics isn't about money, it's about scarcity.
Are humans meant to create art and celebrate life. That just seems like something people into automation tell people.
Really as a human I’ve physically evolved to move and think in a dynamic way. But automation has reduced the need for me to work and think.
Do you not know the earth is saturated with artists already? There’s whole class of people that consider themselves technically minded and not really artists. Will they just roll over and die?
Everything basically costs zero is a pipe dream where there is no social order or economic system. Even in your basically zero system there is a lot of cost being hand waved away.
I think you need a rethink on your 20 year thought.
It will only be zero as long as we don't allow rent seeking behaviour. If the technology has gatekeepers, if energy is not provided at a practically infinite capacity and if people don't wake themselves from the master/slave relationships we seem to so often desire and create, then I'm skeptical.
The latter one is probably the most intellectually interesting and potentially intractable...
I completely disagree with idea that money is currently the only driver of human endeavour, frankly it's demonstrably not true, at least not in it's direct use value, it maybe used as a proxy for power but it's also not directly correlatable.
Looking at it intellectually from a Hegelian lens of master/slave dialectic might provide some interesting insights. I think both sides are in some way usurped. The slaves position of actualisation through productive creation is taken via automation, but if that automation is also widely and freely available the masters position of status via subjection is also made common and therefore without status.
What does it all mean in the long run? Damned if I know...
We currently work more than we ever have. Just a couple of generations ago it was common for a couple to consist of one person who worked for someone else or the public, and one who worked at home for themselves. Now we pretty much all have to work for someone else full time then work for ourselves in the evening. And that won't make you rich, it will just make you normal.
Maybe a "loss of jobs" is what we need so we can go back working for ourselves, cooking our own food, maintaining our own houses etc.
This is why I doubt it will happen. I think "AI" will just end up making us work even more for even less.
If we accept the possibility that AI is going to be more intelligent than humans the outcome is obvious. Humans will no longer be needed and either go extinct or maybe be kept by the AI as we now keep pets or zoo animals.
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system).
Humans never truly produce anything; they only generate various forms of waste (resulting from consumption). Human technology merely enables the extraction of natural resources across magnitudes, without actually creating any resources. Given its enormous energy consumption, I strongly doubt that AI will contribute to a better economic system.
> Humans never truly produce anything; they only generate various forms of waste
What a sad way of viewing huge fields of creative expressions. Surely, a person sitting on a chair in a room improvising a song with a guitar is producing something not considered "waste"?
But that's clearly not true for every technology. Photoshop, Blender and similar creative programs are "technology", and arguably they aren't as resource-intensive as the current generative AI hype, yet humans used those to create things I personally wouldn't consider "waste".
Humans have a proven history of re-inventing economic systems, so if AI ends up thinking better than we do (yet unproven this is possible), then we should have superior future systems.
But the question is a system optimized for what? That emphasizes huge rewards for the few, and that requires the poverty of some (or many). Or a more fair system. Not different from the challenges of today.
I'm skeptical even a very intelligent machine will change the landscape of our dificult decisions, but will accelerate which direction we decide (or is decided for us), that we go.
I ll happily believe it the day something doesnt adhere to the Gartner cycle, until then it is just another bubble like dotcom, chatbots, crypto and the 456345646 things that came before it
> It was not even clear that we were so near to create machines that could understand the human language
It's not really clear to me to what extent LLMs even do *understand* human language. They are very good at saying things that sound like a responsive answer, but the head-scratching, hard-to-mentally-visualise aspect of all of this is that this isn't the same thing at all.
The right way to think about "jobs" is that we could have given ourselves more leisure on the basis of previous technological progress than we actually did.
Assuming AI improves productivity then I don't see how it couldn't result in an economic boom. Labor has always been one of the most scarce resources in the economy. Now whether or not that the wealth from the improved productivity actually trickles down to most people depends on the political climate.
If computers are ‘bicycles for the mind’, AI is the ‘self-driving car for the mind’. Which technology results in worse accidents? Did automobiles even improve our lives or just change the tempo beyond human bounds?
After reading a good chunk of the comments, I got the distinct impression that people don't realize we could just not do the whole "let's make a dystopian hellscape" project and just turn all of it off. By that I mean, outlaw AI, destroy the data centers, have severe consequences for it's use by corporations as a way to reduce headcount, I'm talking executives get to spend the rest of their lives in solitary confinement, and instead invest all of this capital in making a better world (solving homelessness, the last mile problem of food distribution, the ever present and ongoing climate catastrophe). We, as humans, can make a choice and make it stick through force of actions.
Or am I just too idealistic ?
Sidenote, I never quite understand why the rich think their bunkers are going to save them from the crisis they caused. Do they fail to realize that there's more of us than them, or do they really believe they can fashion themselves as warlords?
Clear long term winners are energy producers. AI can replace everything including hardware design & production but it can not produce energy out of thin air.
i don’t this article really says anything that hasn’t been already said for the past two years. “if AI actually take jobs, it will be a near-apocalyptic system shock if there aren’t news jobs to replace them”. i still think it’s at best too soon to say if jobs have permanently been lost
they are tremendous tools but seems like they make a near equal amount of work from the stuff the save time on
A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.
I wonder if LLMs can produce this.
A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it flat wrong. Right but early applies mostly to the meta investment case: "the internet business will be big."
One that stands out in my memory is "turning billion dollar industries into million dollar industries."
With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.
We often starts with an observation that economies are efficiency seeking. Then we imagine the most efficient outcome given legible constraints of technology, geography and whatnot. Then we imagine dynamics and tensions in a world with that kind of efficiency.
This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.
Anyway... this never actually works out. The meta is a terrible predictor of where things will go.
Imagine law gets more efficient. Will we have more or less lawyers? It could go either way.
As any other technology, at the end of the day LLMs are used by humans for humans’ selfish, driven by mental issues and trauma and overcompensation, maybe even paved with good intentions but leading you know where, short-sighted goals. If we were to believe that LLMs are going to somehow become extremely powerful, then we should be concerned, as it is difficult to imagine how that can lead to an optimal outcome organically.
From the beginning, corporations and their collaborators at the forefront of this technology tainted it by ignoring the concept of intellectual property ownership (which had been with us in many forms for hundreds if not thousands of years) in the name of personal short-term gain and shareholder interest or some “the ends justify the means” utilitarian calculus.
> However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots.
Aren't the markets massively puffed up by AI companies at the moment?
edit: for example, the S&P500's performance with and without the top 10 (which is almost totally tech companies) looks very different: https://i.imgur.com/IurjaaR.jpeg
As someone who keeps 401k 100% in sp500 that scares me. If the bubble pops it will erase half of gains, if the bubble continues then the gap(490 vs 10) will grow even larger.
Unpopular opinion: Let us say AI achieves general intelligence levels. We tend to think of current economy, jobs, research as a closed system, but indeed it is a very open system.
Humans want to go to space, start living on other planets, travel beyond solar system, figure out how to live longer and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all these things.
Humans are always ambitious. That ambition will push us to use AI more than it's capabilities. The AI will get better at these new things and the cycle repeats. There's so much humans know and so much more that we don't know.
I'm less worried about general intelligence. Rather in more worried about how humans are going to govern themselves. That's going to decide whether we will do great things or end humanity. Over the last 100 years, we start thinking more about "how" to do something rather than the "why". Because "how" is becoming more and more easier. Today it's much more easier and tomorrow even more. So nobody's got the time to ask "why" we are doing something, just "how" to do something. With AI I can do more. That means everyone can do more. That means governments can do so much more. Large scale things in a short period. If those things are wrong or have irreversible consequences, we are screwed.
We will continue to have poor understanding of LLMs until a simple model can be constructed and taught to a classroom of children. It is only different in this aspect. It is not magic. It is not intelligent. Until we teach the public exactly what it is doing in a way simple adults will understand, enjoy hot take after hot take.
Honestly the long-term consequences of Baumol's disease scare me more than some AI driven job disruption dystopia.
If we want to continue on the path of increased human development we desperately need to lift the productivity of a whole bunch of labor intensive sectors.
We're going to need to seriously think about how to redistribute the gains, but that's an issue regardless of AI (things like effective tax policy).
GenAI is a bubble, but that’s not the same as the broader field of AI, which is completely different. We will probably not even be using chat bots in a few years, better interfaces will be developed with real intelligence, not just predictive statistics.
I think there is an unspoken implication built into the assumption that AI will be able to replace a wide variety of existing jobs, and that is that those current jobs are not being done efficiently. This is sometimes articulated as bullshit jobs, etc. and if AI takes over those the immediate next thing that will happen is that AI will look around ask why _anyone_ was doing that job in the first place. The answer was articulated 70 years ago in [0].
The only question is how much fat there is to trim as the middle management is wiped out because the algorithms have determined that they are completely useless and mostly only increase cost over time.
Now, all the AI companies think that they are going to be deriving revenue from that fat, but those revenue streams are going to disappear entirely because a huge number of purely politic positions inside corporations will vanish, because if they do not the corporation will go bankrupt competing with other companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely disappear.
At some point AI agents will cease to be sycophantic and when fed the priors for the current situation that a company is in will simply tell it like it is, and might even be smart enough to get the executives to achieve the goal they actually stated instead of simply puffing up their internal political position, which might include a rather surprising set of actions that could even lead to the executive being fired if the AI determines that they are getting in the way of the goal [1].
By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)
It will undoubtedly lead to great advances
But for the love of god do not tightly bind them to your products (Kagi does it alright, they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with 0 human oversight (we know you want to. The economics of it work out nicely for you, with no accountability). People already as is get banned far too easily by your automated systems
"Undoubtedly" seems like a level of confidence that is unjustified. Like Travis Kalanick thinking AI is just about to help him discover new physics, this seems to suggest that AI will go from being able to do (at best) what we can already do if we were simply more diligent at our tasks to being something genuinely more than "just" us
At the moment I just don't see AI in its current state or future trajectory as a threat to jobs. (Not that there can't be other reasons why jobs are getting harder to get). Predictions are hard, and breakthroughs can happen, so this is just my opinion. Posting this comment as a record to myself on how I feel of AI - since my opinion on how useful/capable AI is has gone up and down and up and down again over the last couple of years.
Most recently down because I worked on two separate projects over the last few weeks with the latest models available on GitHub Copilot Pro. (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and some lesser capable ones at times as well). Trying the exact same queries for code changes across all three models for a majority of the queries. I saw myself using Claude most, but it still wasn't drastically better than others, and still made too many mistakes.
One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got basic stuff to start working. Over the days I kept finding bugs as I starting using it. Since I truly wanted to use this app in my daily life, at one point I just gave up cause fixing the bugs was getting way too annoying. Most "fixes" as I later got into the weeds of it, were wrong, with wrong assumptions, made changes that seemed to fix the problem at the surface but introducing more bugs and random garbage, despite giving a ton of context and instructions on why things are supposed to be a certain way, etc. I was constantly fighting with the model. Would've been much easier to do much more on my own and using it a little bit.
Another project was in TypeScript, where I did actually use my brain, not just vibe-coded. Here, AI models were helpful because I mostly used them to explain stuff. And did not let them make more than a few lines of code changes at most at a time. There was a portion of the project which I kinda "isolated" which I completely vibe-coded and I don't mind if it breaks or anything as it is not critical. It did save me some time but I certainly could've done it on my own with a little more time, while having code that I can understand fully well and edit.
So the way I see using these models right now is for research/prototyping/throwaway kind of stuff. But even in that, I literally had Claude 4 teach me something wrong about TypeScript just yesterday. It told me a certain thing was deprecated. I made a follow up question on why that thing is deprecated and what's used instead, it replied with something like "Oops! I misspoke, that is not actually true, that thing is still being used and not deprecated." Like, what? Lmao. For how many things have I not asked a follow up and learnt stuff incorrectly? Or asked and still learnt incorrectly lmao.
I like how straightforward GPT-5 is. But apart from that style of speech I don't see much other benefit. I do love LLMs for personal random searches like facts/plans/etc. I just ask the LLM to suggest me what to do just to rubber duck or whatever. Do all these gains add up towards massive job displacement? I don't know. Maybe. If it is saving 10% time for me and everyone else, I guess we do need 10% less people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We (individuals) might have to adapt and be more competitive than before depending on our jobs and how they're affected, but is it a fundamental shift? Are these models or their future capabilities human replacements? Idk. At the moment, I think they're useful but overhyped. Time will tell though.
Reposting the article so I can read it in a normal font:
Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction. It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.
Since LLMs and in general deep models are poorly understood, and even the most prominent experts in the field failed miserably again and again to modulate the expectations (with incredible errors on both sides: of reducing or magnifying what was near to come), it is hard to tell what will come next. But even before the Transformer architecture, we were seeing incredible progress for many years, and so far there is no clear sign that the future will not hold more. After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.
However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI. But this is not the only possible outcome.
We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will be eventually a commodity, or the governments would do something, in such an odd economic setup (a setup where a single industry completely dominates all the others).
The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that, so far, and even if the economic forecasts are cloudy, wars are destabilizing the world, the AI timings are hard to guess, regardless of all that stocks continue to go up. But stocks are insignificant in the vast perspective of human history, and even systems that lasted a lot more than our current institutions eventually were eradicated by fundamental changes in the society and in the human knowledge. AI could be such a change.
This same link was submitted 2 days ago. My comment there still applies.
LLMs do not "understand the human language, write programs, and find bugs in a complex code base"
"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."
AI has been improving at a very rapid pace, which means that a lot of people have really outdated priors. I see this all the time online where people are dismissive about AI in a way that suggests it's been a while since they last checked-in on the capabilities of models. They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since. Or they talk about hallucination and haven't tried Deep Research as an alternative to traditional web-search.
Then there's a tendency to be so 'anti' that there's an assumption that anyone reporting that the tools are accomplishing truly impressive and useful things must be an 'AI booster' or shill. Or they assume that person must not have been a very good engineer in the first place, etc.
Really is one of those examples of the quote, "In the beginner's mind there are many possibilities, but in the expert's mind there are few."
It's a rapidly evolving field, and unless you actually spend some time kicking the tires on the models every so often, you're just basing your opinions on outdated experiences or what everyone else is saying about it.
I feel like I see these two opposite behaviors. People who formed an opinion about AI from an older model and haven't updated it. And people who have an opinion about what AI will be able to do in the future and refuse to acknowledge that it doesn't do that in the present.
And often when the two are arguing it's tricky to tell which is which, because whether or not it does something isn't totally black and white, there's some things it can sometimes do, which you can argue either way about that being in its capabilities or not.
I.e. people who look at f(now) and assume it'll be like this forever against people who look at f'(now) and assume it'll improve like this forever
What is f''(now) looking like?
Very small. There’s very little fundamental research into AI compared to neural networks from what I can tell
Another very significant cohort is people who formed a negative opinion without even the slightest interest in genuinely trying to learn how to use it (or even trying at all)
To play devil's advocate, how is your argument not a 'no true scottsman' argument? As in, "oh, they had a negative view of X, well that's of course because they weren't testing the new and improved X2 model which is different". Fast forward a year .. "Oh, they have a negative view on X2, well silly them, they need to be using the Y24 model, that's where it's at, the X2 model isn't good anymore". Fast forward a year .. ad infinitum.
Are the models that exist today a "true scottsman" for you?
It's not a No True Scotsman. That fallacy redefines the group to dismiss counterexamples. The point here is different: when the thing itself keeps changing, evidence from older versions naturally goes stale. Criticisms of GPT-3.5 don't necessarily hold against GPT-4, just like reviews of Windows XP don't apply to Windows 11.
IMHO, by placing people with a negative attitude toward AI products under the guise "their priors are outdated" you effectively negate any arguments from those people. That is, because their priors are outdated their counterexamples may be dismissed. That is, indeed, the no true Scotsman!
I don’t see a claim that anyone with a negative attitude toward AI shouldn’t be listened to because it automatically means that they formed their opinion on older models. The claim was simply that there’s a large cohort of people who undervalue the capabilities of language models because they formed their views while evaluating earlier versions.
I wouldn’t think gpt5 is any better than the previous chat gpt. I know it’s a silly example but I was trying to trip it up with the 8.6-8.11 and it got it right .49 but then it said the opposite of 8.6 - 8.12 was -.21.
I just don’t see that much of a difference coding either with Claude 4 or Gemini 2.5 pro. Like they’re all fine but the difference isn’t changing anything in what I use them for. Maybe people are having more success with the agent stuff but in my mind it’s not that different than just forking a GitHub repo that already does what you’re “building” with the agent.
Yes but almost definitionally that is everyone who did not find value from LLMs. If you don’t find value from LLMs, you’re not going to use them all the time.
The only people you’re excluding are the people who are forced to use it, and the random sampling of people who happened to try it recently.
So it may have been accidental or indirectly, but yes, no true Scotsman would apply to your statement.
> The point here is different: when the thing itself keeps changing, evidence from older versions naturally goes stale.
Yes, but the claims do not. When the hypemen were shouting that GPT-3 was near-AGI, it still turned out to be absolute shit. When the hypemen were claiming that GPT-3.5 was thousands of times better than GPT-3 and beating all highschool students, it turned out to be a massive exaggeration. When the hypemen claimed that GPT-4 was a groundbreaking innovation and going to replace every single programmer, it still wasn't any good.
Sure, AI is improving. Nobody is doubting that. But you can only claim to have a magical unicorn so many times before people stop believing that this time you might have something different than a horse with an ice cream cone glued to its head. I'm not going to waste a significant amount of my time evaluating Unicorn 5.0 when I already know I'll almost certainly end up disappointed.
Perhaps it'll be something impressive in a decade or two, but in the meantime the fact that Big Tech keeps trying to shove it down my throat even when it clearly isn't ready yet is a pretty good indicator to me that it is still primarily just a hype bubble.
Its funny how the hype-train is not responding to any real criticisms about the false predictions and carrying on with the false narrative of AI.
I agree it will probably be something in a decade, but right now, it has some interesting concepts but I do notice upon successive iterations of chat responses that its got a ways to go.
It remind me of Tesla car owners buying into the self-driving terminology. Yes the drive assistant technology has improved quite a bit since cruise control, but its a far cry from self-driving.
How is that different than the models today are actually usable for non trivial things and more capable than yesterdays and it’s also true that tomorrow’s models will also probably be more capable than today’s?
For example, I dismissed AI three years ago because it couldn’t do anything I needed it to. Today I use it for certain things and it’s not quite capable of other things. Tomorrow it might be capable of a lot more.
Yes, priors have to be updated when the ground truth changes and the capabilities of AI change rapidly. This is how chess engines on supercomputers were competitive in the 90s then hybrid systems became the leading edge competitive and then machines took over for good and never looked back.
It’s not that the LLMs are better, it’s the internal tools/functions being called that do the actual work are better. They didn’t spend millions to retrain a model to statistically output the number of r’s in strawberry, but just offloaded that trivial question to a function call.
So I would say the overall service provided is better than it was, thanks to functions being built based on user queries, but not the actual LLM models themselves.
LLMs are definitely better quality today than 3 years ago at codegen quality - there’s quantitative benchmarks as well as for me my personal qualitative experience (given the gaming that companies engage in).
It is also true that the tooling and context management has gotten more sophisticated (often using models by the way). That doesn’t negate that the models themselves have gotten better at reliable tool calling so that the LLM is driving more of the show rather than purpose built coordination into the LLM and that the codegen quality is higher than it used to be.
This is a good example of making statements that are clearly not based in fact. Anyone who works with those models knows full well what a massive gap there is between e.g. GPT 3.5 and Opus 4.1 that has nothing to do with the ability to use tools.
There is another big and growing group: charlatans (influencers). People who don't know much but make bold statements, select 'proof' cases. Just to get attention. There are many of them on youtube. When you someone on thumbnail making faces this is most likely it.
Here[0] is a perfect example of this. There are so many youtubers making videos about the future of AI as a dooms-day prediction. Its kind of irresponsible actually. These youtubers read a book on the the down fall of humanity because of AGI. Many of these authors seem like they are repeating the Terminator/Skynet themes. Because of all this false information, It's hard to believe anything that is being said about the future of AI on youtube now.
[0]: https://www.youtube.com/watch?v=5KVDDfAkRgc
> There are many of them on youtube.
Not as many as on HN. "Influencers" have agendas and the stream of income, or other self-interest. HN always comes off as a monolith, on any subject. Counter-arguments get ignored and downvoted to oblivion.
I’m spending a lot of time on LinkedIn because my team is hiring and, boy oh boy, LinkedIn is terminally infested with AI influencers. It’s a hot mess.
If you type "AI" into the youtube search bar it's quite impressive. I think they win.
There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.
There are also those of us who have used them substantially, and seen the damage that causes to a codebase in the long run (in part due to the missing gains of having someone who understands the codebase).
There are also those of us who just don’t like the interface of chatting with a robot instead of just solving the problem ourselves.
There are also those of us who find each generation of model substantially worse than the previous generation, and find the utility trending downwards.
There are also those of us who are concerned about the research coming out about the effects of using LLMs on your brain and cognitive load.
There are also those of us who appreciate craft, and take pride in what we do, and don’t find that same enjoyment/pride in asking LLMs to do it.
There are also those of us who worry about offloading our critical thinking to big corporations, and becoming dependent on a pay-to-play system, that is current being propped up by artificially lowered prices, with “RUG PULL” written all over them.
There are also those of us who are really concerned about the privacy issues, and don’t trust companies hundreds of billions of dollars in debt to some of the least trust worth individuals with that data.
Most of these issues don’t require much experience with the latest generation.
I don’t think the intention of your comment was to stir up FUD, but I feel like it’s really easy for people to walk away with that from this sort of comment, so I just wanted to add my two cents and tell people they really don’t need to be wasting their time every 6 weeks. They’re really not missing anything.
Can you do more than a few weeks ago? Sure? Maybe? But I can also do a lot more than I was able to a few weeks ago as well not using an LLM. I’ve learned and improved myself.
Chances are if you’re not already using an LLM it’s because you don’t like it, or don’t want to, and that’s really ok. If AGSI comes out in a few months, all the time you would have invested now would be out of date anyways.
There’s really no rush or need to be tapped in.
> There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.
Yep, this is me. Every time people are like "it's improved so much" I feel like I'm taking crazy pills as a result. I try it every so often, and more often than not it still has the same exact issues it had back in the GPT-3 days. When the tool hasn't improved (in my opinion, obviously) in several years, why should I be optimistic that it'll reach the heights that advocates say it will?
haha I have to laugh because I’ve probably said “I feel like I’m taking crazy pills” at least 20 times this week (I spent a day using cursor with the new GPT and was thoroughly, thoroughly unimpressed).
I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy. But this insistence that progress is so crazy that you have to be tapped in at all times just irks me.
LLM models are like iPhones. You can skip a couple versions it’s fine, you will have the new version at the same time with all the same functionality as everyone else buying one every year.
> new GPT
Another sign tapping is needed.
> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]
> AI is awesome for coding! [Gpt-5 Pro]
> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]
> AI is a pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with Juice of 128]
> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]
> AI sucks at coding! [ChatGPT 5 auto routing]
Yeah, frankly if you have the free time to dig through all of that to find the best models or whatever for your use cases, good on you
I have code to write
There’s really three points mixed up in here.
1) LLMs are controlled by BigCorps who don’t have user’s best interests at heart.
2) I don’t like LLMs and don’t use them because they spoil my feeling of craftsmanship.
3) LLMs can’t be useful to anyone because I “kick the tires” every so often and am underwhelmed. (But what did you actually try? Do tell.)
#1 is obviously true and is a problem, but it’s just capitalism. #2 is a personal choice, you do you etc., but it’s also kinda betting your career on AI failing. You may or may not have a technical niche where you’ll be fine for the next decade, but would you really in good conscience recommend a juniorish web dev take this position? #3 is a rather strong claim because it requires you to claim that a lot of smart reasonable programmers who see benefits from AI use are deluded. (Not everyone who says they get some benefit from AI is a shill or charlatan.)
How exactly am I betting my career on LLMs failing? The inverse is definitely true — going all in on LLMs feels like betting on the future success of LLMs. However not using LLMs to program today is not betting on anything, except maybe myself, but even that’s a stretch.
After all, I can always pick up LLMs in the future. If a few weeks is long enough for all my priors to become stale, why should I have to start now? Everything I learn will be out of date in a few weeks. Things will only be easier to learn 6, 12, 18 months from now.
Also no where in my post did I say that LLMs can’t be useful to anyone. In fact I said the opposite. If you like LLMs or benefit from them, then you’re probably already using them, in which case I’m not advocating anyone stop. However there are many segments of people who LLMs are not for. No tool is a panacea. I’m just trying to nip and FUD in the butt.
There are so many demands for our attention in the modern world to stay looped in and up to date on everything; I’m just here saying don’t fret. Do what you enjoy. LLMs will be here in 12 months. And again in 24. And 36. You don’t need to care now.
And yes I mentor several juniors (designers and engineers). I do not let them use LLMs for anything and actively discourage them from using LLMs. That is not what I’m trying to do in this post, but for those whose success I am invested in, who ask me for advice, I quite confidently advise against it. At least for now. But that is a separate matter.
EDIT: My exact words from another comment in this thread prior to your comment:
> I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy.
I wonder, what drives this intense FOMO ideation about AI tools as expressed further upthread?
How does someone reconcile a faith that AI tooling is rapdily improving with that contradictory belief that there is some permanent early-adopter benefit?
I think the early adopter at all costs mentality is being driven by marketing and sales, not any rational reason to need to be ahead of the curve
I agree very strongly with the poster above yours: If these tools are so good and so easy to use then I will learn them at that time
Otherwise the idea that they are saving me time is likely just hype and not reality, which matches my experience
And that's why I keep checking back in.
They're still pretty dumb if you want the to do anything (ie with MCPs) but they're not bad at writing and code.
There's a middle ground which is to watch and see what happens around us. Is it unholy to not have an opinion?
> They wrote off the coding ability of ChatGPT on version 3.5, for instance
I found I had better luck with ChatGPT 3.5's coding abilities. What the newer models are really good at, though, is doing the high level "thinking" work and explaining it in plain English, leaving me to simply do the coding.
I agree with you. I am a perpetual cynic about new technology (and a GenXer so multiply that by two) and I have deeply embraced AI in all parts of my business and basically am engaging with it all day for various tasks from helping me compare restaurant options to re-tagging a million contact records in salesforce.
It’s incredibly powerful and will just clearly be useful. I don’t believe it’s going to replace intelligence or people but it’s just obviously a remarkable tool.
But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism. Crypto was and is just a giant and elaborate grift, to name one example. Also guys like Altman are clearly overstating the current trajectory.
The dismissive response does come with some context attached.
> But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism.
They are still full of shit about LLMs, even if it is useful.
I think one of the issue is also the sheer amount of shilling going on like crypto level
I got a modest tech following and you wouldn’t believe the amount I’m offered to promote the most garbage AI company
I do see this a lot. It's hard to have a reasonable conversation about AI amidst, on the one hand, hype-mongers and boosters talking about how we'll have AGI in 2027 and all jobs are just about to be automated away, and on the other hand, a chorus of people who hate AI so much they have invested their identify in it failing and haven't really updated their priors since ChatGPT came out. Both groups repeat the same set of tired points that haven't really changed much in three years.
But there are plenty of us who try and walk a middle course. A lot of us have changed our opinions over time. ("When the facts change, I change my mind.") I didn't think AI models were much use for coding a year ago. The facts changed. (Claude Code came out.) Now I do. Frankly, I'd be suspicious of anyone who hasn't changed their opinions about AI in the last year.
You can believe all these things at once, and many of us do:
* LLMs are extremely impressive in what they can do. (I didn't believe I'd see something like this in my lifetime.)
* Used judiciously, they are a big productivity boost for software engineers and many other professions.
* They are imperfect and make mistakes, often in weird ways. They hallucinate. There are some trivial problems that they mess up.
* But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.
* AI will change the world in the next 20 years
* But AI companies are overvalued at the present time and we're mostly likely in a bubble which will burst.
* Being in a bubble doesn't mean the technology is useless. (c.f. the dotcom bubble or the railroad bubble in the 19th century.)
* AGI isn't just around the corner. (There's still no way models can learn from experience.)
* A lot of people making optimistic claims about AI are doing it for self-serving boosterish reasons, because they want to pump up their stock price or sell you something
* AI has many potential negative consequences for society and mental health, and may be at least as nasty as social media in that respect
* AI has the potential to accelerate human progress in ways that really matter, such as medical research
* But anyone who claims to know the future is just guessing
> But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.
I've not seen anything from a model to persuade me they're not just stochastic parrots. Maybe I just have higher expectations of stochastic parrots than you do.
I agree with you that AI will have a big impact. We're talking about somewhere between "invention of the internet" and "invention of language" levels of impact, but it's going to take a couple of decades for this to ripple through the economy.
What is your definition of "stochastic parrot"? Mine is something along the lines of "produces probabilistic completions of language/tokens without having any meaningful internal representation of the concepts underlying the language/tokens."
Early LLMs were like that. That's not what they are now. An LLM got Gold on the Mathematical Olympiad - very difficult math problems that it hadn't seen in advance. You don't do that without some kind of working internal model of mathematics. There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean. (If you don't believe me, have a look at the questions.)
Ignoring its negative connotation, it's more likely to be a highly advanced "stochastic parrot".
> "You don't do that without some kind of working internal model of mathematics."
This is speculation at best. Models are black boxes, even to those who make them. We can't discern a "meaningful internal representation" in a model, anymore than a human brain.
> "There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean."
You've just anthropomorphised a stochastic machine, and this behaviour is far more concerning, because it implies we're special, and we're not. We're just highly advanced "stochastic parrots" with a game loop.
> This is speculation at best. Models are black boxes, even to those who make them. We can't discern a "meaningful internal representation" in a model, anymore than a human brain.
They are not pure black boxes. They are too complex to decipher, but it doesn't mean we can't look at activations and get some very high level idea of what is going on.
For world models specifically, the paper that first demonstrated that LLM has some kind of a world model corresponding to the task it is trained on came out in 2023: https://www.neelnanda.io/mechanistic-interpretability/othell.... Now you might argue that this doesn't prove anything about generic LLMs, and that is true. But I would argue that, given this result, and given what LLMs are capable of doing, assuming that they have some kind of world model (even if it's drastically simplified and even outright wrong around the edges) should be the default at this point, and people arguing that they definitely don't have anything like that should present concrete evidence ot that effect.
> We're just highly advanced "stochastic parrots" with a game loop.
If that is your assertion, then what's the point of even talking about "stochastic parrots" at all? By this definition, _everything_ is that, so it ceases to be a meaningful distinction.
In-context learning is proof that LLMs are not stochastic parrots.
Stochastic parrot here (or not?). Can you tell the difference?
> AI will change the world in the next 20 years
Well, it's been changing the world for quite some time, both in good and bad ways. There is no need to add an arbitrary timestamp.
Is there anything you can tell me that will help me drop the nagging feeling that gradient descent trained models will just never be good?
I understand all of what you said, but I can't get over that fact that the term AI is being used for these architectures. It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.
Maybe I'm being overly cynical, but a lot of this stinks.
> It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.
If you gave a random sci-fi writer from 1960s access to Claude, I'm fairly sure they wouldn't have any doubts over whether it is AI or not. They might argue about philosophical matters like whether it has a "soul" etc (there's plenty of that in sci-fi), but that is a separate debate.
The thing is AI is already "good" for a lot of things. It all depends on your definition of "good" and what you require of an AI model.
It can do a lot of things that are generally very effective. High reliability semantic parsing from images is just one thing that modern LLM's are very reliable at.
You're right. I use it for api documentation and showing use cases, especially in languages i don't use often.
but this other attribution people are doing- that it's going to achieve (the marketing term) AGI and everything will be awesome is clearly bullshit.
Wouldn’t you say that now, finally, what people call AI combines subsymbolic systems („gradient descent“) with search and with symbolic systems (tool calls)?
I had a professor in AI who was only working on symbolic systems such as SAT-solvers, Prolog etc. and the combination of things seems really promising.
Oh, and what would be really nice is another level of memory or fast learning ability that goes beyond burning in knowledge through training alone.
I had such a professor as well, but those people used to use the more accurate term "machine learning".
There was also wide understanding that those architectures were trying to imitate small bits of what we understood was happening in the brain (see marvin minsky's perceptron etc). The hope was, as I understood it that there would be some breakthrough in neuroscience that would let the computer scientists pick up the torch and simulate what we find in nature.
None of that seems to be happening anymore and we're just interested in training enough to fool people.
"AI" companies investing in brain science would convince me otherwise. At this point they're just trying to come up with the next money printing machine.
You asked earlier if you were being overly cynical, and I think the answer to that is "yes"
We are indeed simulating what we find in nature when we create neural networks and transformers, and AI companies are indeed investing heavily in BCI research. ChatGPT can write an original essay better than most of my students. Its also artificial. Is that not artificial intelligence?
It is not intelligent.
Hiding the training data behind gradient descent and then making attributions to the program that responds using this model is certainly artificial though.
This analogy just isn't holding water.
Can't you judge on the results though rather than saying AI isn't intelligent because it uses gradient descent and biology is intelligent because it uses wet neurons?
> They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since.
I feel like I see now more dismissive comments than previously. As if people, initially confused, formed a firm belief since. And now new facts don't really change it, just entrench them in chosen belief.
There's three important beliefs at play in the A(G)I story:
1. When(if) AGI will arrive. It's likely going to be smeared out over a couple months to years, but relative to everything else, it's a historical blip. This really is the most contention belief with the most variability. It is currently predicted to be 8 years[1].
2. What percentage of jobs will be replaceable with AGI? Current estimates between 80-95% of professions. The remaining professions "culturally require" humans. Think live performance, artisanal goods, in-person care.
3. How quickly will AGI supplant human labor? What is the duration of replacement from inception to saturation? Replacement won't happen evenly, some professions are much easier to replace with AGI, some much more difficult. Let's estimate a 20-30 years horizon for the most stubborn to replace professions.
What we have is a ticking time bomb of labor change at least an order of magnitude greater than the transition from an agricultural economy to an industrial economy or from an industrial economy to a service economy.
Those happened over the course of several generations. Society: culture, education, the legal system, the economy, where able to absorb the changes over 100-200 years. Yet we're talking about a change on the same scale happening 10 times faster - within the timeline of one's professional career. And still, with previous revolutions we had incredible unrest, and social change. Taken as a whole, we'll have possibly the majority of the economy operating outside the territory of society, the legal system, and the existing economy. A kid born on the the "day" AGI arrives will become an adult in a profoundly different world as if born on a farm in 1850 and reaching adulthood in a city in 2000.
1. https://www.metaculus.com/questions/5121/date-of-artificial-...
Your only reference [1] is to a page where anybody in the world can join and vote. It literally means absolutely nothing.
For [2] you have no reference whatsoever. How does AI replace a nurse, a vet, a teacher, a construction worker?
For the AI believer who has an axiom that AGI is around the corner to take over knowledge work, isn't that just "a small matter of robotics" to either tele-operate a physical avatar or deploy a miniaturized AI in an autonomous chassis?
I'm afraid it's really a matter of faith, in either direction, to predict whether an AI can take over the autonomous decision making and robotic systems can take over physical actions which are currently delegated to human professions. And, I think many robotic control problems are inherently solved if we have sufficient AI advancement.
What are you talking about? This is common knowledge.
Median forecasts indicated a 50% probability of AI systems being capable of automating 90% of current human tasks in 25 years and 99% of current human tasks in 50 years[1]
The scope of work replaceable by embodied AGI and the speed of AGI saturation of vastly under estimated. The bottle necks are production of a replacement workforce, not retraining human laborers.
1. https://arxiv.org/pdf/1901.08579
Work is central to identity. It may seem like it is merely toil. You may even have a meaningless corporate job or be indentured. But work is the primary social mechanism that distributes status amongst communities.
A world of 99 percent of jobs being done by AGI (which there remains no convincing grounds for how this tech would ever be achieved) feels ungrounded in the reality of human experience. Dignity, rank, purpose etc are irreducible properties of a functional society, which work currently enables.
It's far more likely that we'll hit some kind of machine intelligence threshold before we see a massive social pushback. This may even be sooner than we think.
Have you considered that perhaps tying dignity and status to work is a major flaw in our social arrangements, and AI (that would actually be good enough to replace humans) is the ultimate fix?
If AI doing everything means that we'll finally have a truly egalitarian society where everyone is equal in dignity and rank, I'd say the faster we get there, the better.
Pretend I'm a farmer in 1850 and I have a belief that the current proportion of jobs in agriculture - 55% of jobs in 1850 would drop to 1.2% in 2022 due to automation and technological advances.
Why would hearing "work is central to identity," and "work is the primary social mechanism that distributes status amongst communities," change my mind?
People migrated from the farms to the city. They didn't stop working.
My apologies if you thought I was arguing that a consequence of AGI would be a permanent reduction in the labor force. What I believe is that the baumol effect will take over non-replaceable professions. A very tiny part of our current economy will become the majority of our future economy.
But the reports are from shills. The impact of ai is almost non existent. The greatest impact it had was on role-playing. It's hardly even useful for coding.
And that all wouldn't be a problem if it wasn't for the wave of bots that makes the crypto wave seem like child's play.
I don't understand people who say AI isn't useful for coding. Claude Code improved my productivity 10x. I used to put solid 8 hours a day in my remote software engineering job. Now I finish everything in 2 hours and go play with my kids. And my performance is better than before.
I don't understand people who say this. My knee jerk reaction (which I rein in because it's incredibly rude) is always "wow, that person must really suck at programming then". And I try to hold to the conviction that there's another explanation. For me, the vast, vast majority of the time I try to use it, AI slows my work down, it doesn't speed it up. As a result it's incredibly difficult to understand where these supposed 10x improvements are being seen.
Usually the "10x" improvements come from greenfield projects or at least smaller codebases. Productivity improvements on mature complex codebases are much more modest, more like 1.2x.
If you really in good faith want to understand where people are coming from when they talk about huge productivity gains, then I would recommend installing Claude Code (specifically that tool) and asking it to build some kind of small project from scratch. (The one I tried was a small app to poll a public flight API for planes near my house and plot the positions, along with other metadata. I didn't give it the api schema at all. It was still able to make it work.) This will show you, at least, what these tools are capable of -- and not just on toy apps, but also at small startups doing a lot of greenfield work very quickly.
Most of us aren't doing that kind of work, we work on large mature codebases. AI is much less effective there because it doesn't have all the context we have about the codebase and product. Sometimes it's useful, sometimes not. But to start making that tradeoff I do think it's worth first setting aside skepticism and seeing it at its best, and giving yourself that "wow" moment.
I was able to realize huge productivity gains working on a 20 years old codebase with 2+ million loc, as I mentioned in the sister post. So I disagree that big productivity gains are only on greenfield projects. Realizing productivity gains on mature code based requires more skill and upfront setup. You need to put some work in your claude.md and give Claude tools for accessing necessary data, logs, build process. It should be able to test your code autonomously as much as possible. In my experience, people who say they are not able to realize productivity gains don't put enough effort to understand these new tools and setup them properly for their project.
You should write a blog post on this! We need more discussion of how to get traction on mature codebases and less of the youtube influencers making toy greenfield apps. Of course at a high level it's all going to be "give the model the right context" (in Claude.md etc.) but the devil is in the details.
So, I'm doing that right now. You do get wow moments, but then you rapidly hit the WTF are you doing moments.
One of the first three projects I tried was a spin on a to-do app. The buttons didn't even work when clicked.
Yes, I keep it iterating, give it a puppeteer MCP, etc.
I think you're just misunderstanding how hard it is to make a greenfield project when you have a super-charged stack overflow that AI is.
Greenfield projects aren't hard, what's hard is starting them.
What AI has helped me immensely with is blank page syndrome. I get it to spit out some boilerplate for a SINGLE page, then boom, I have a new greenfield project 95% my own code in a couple of days.
That's the mistake I think you 10x ers are making.
And you're all giddy and excited and are putting in a ton of work without realising you're the one doing the work, not the AI.
And you'll eventually burn out on that.
And those of us who are a bit more skeptical are realising we could have done it on our own, faster, we just wouldn't normally have bothered. I'd have gone done some gardening with that time instead.
I'm not a 10x-er. My job is working on a mature codebase. The results of AI in that situation are mixed, 1.2x if you're lucky.
My recommendation was that it's useful to try the tools on greenfield projects, since they you can see them at their best.
The productivity improvements of AI for greenfield projects are real. It's not all bullshit. It is a huge boost if you're at a small startup trying to find product market fit. If you don't believe that and think it would be faster to do it all manually I don't know what to tell you - go talk to some startup founders, maybe?
That 1.2x is suspiciously familiar to the recent study showing AI harmed productivity.
1.2x was self-reported, but when measured, developers were actually 0.85x ers using AI.
For me, most of the value comes from Claude Code's ability to 1. research codebase and answer questions about it 2. Perform adhoc testing on the code. Actually writing code is icing on the cake. I work on large code base with more than two million lines of code. Claude Code's ability to find relevant code, understand its purpose, history and interfaces is very time saving. It can answer in minutes questions that would take hours of digging through the code base. Ad hoc testing is another thing. E.g. I can just ask it to test an API endpoint. It will find correct data to use in the database, call the endpoint and verify that it returned correct data and e.g. everything was updated in db correctly.
It depends on what kind of code you're working on and what tools you're using. There's a sliding scale of "well known language + coding patterns" combined with "useful coding tools that make it easy to leverage AI", where AI can predict what you're going to type, and also you can throw problems at the AI and it is capable of solving "bigger" problems.
Personally I've found that it struggles if you're using a language that is off the beaten path. The more content on the public internet that the model could have consumed, the better it will be.
Then why don't you put in 8 hrs like before and get worldwide fame and be set for life within a year for being the best dev the world has ever seen?
> They wrote off the coding ability of ChatGPT on version 3.5, for instance, and have missed all the advancements that have happened since.
> It's hardly even useful for coding.
I’m curious what kind of projects you’re writing where AI coding agents are barely useful.
It’s the “shills” on YouTube that keep me up to date with the latest developments and best practices to make the most of these tools. To me it makes tools like CC not only useful but indispensable. Now I do not focus on writing the thing, but I focus on building agents who are capable of building the thing with a little guidance.
In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs. And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.
> but which can be trained to the new job opportunities more easily than humans can
What makes you think that? Self driving cars have had untold billions of dollars in reaearch and decades in applied testing, iteration, active monitoring, etc and it still has a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil that was going on. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.
And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data than to go off, I find myself firmly in the skeptics camp, that holds you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention or piloting or maintenence or management.
Unemployment is still near all time lows, this will persist for sometime as we have a structural demographic problem with massive amounts of retirees and less children to support the population "pyramid" (which is looking more like a tapering rectangle these days).
A few months ago I saw one driverless car maybe every three days. Now I see roughly 3-5 every day.
I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.
Even if you think “oh that’s the next generation’s problem” if there is a chance you’re wrong, or if you want to be kind to the next generation: now is the time to start thinking and planning for those problems.
I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn’t meant it’s politically viable.
I guess you live in a place with perfect weather year round? I don’t and I haven’t seen a robo taxi my entire life. I do have access to a Tesla though and it’s current self-driving capabilities are not even close to anything I would call „autonomous“ und real world conditions (including weather).
Maybe the tech will at some point be good enough. At the current rate of improvement this will still take decades at least. Which is sad because I personally hoped that my kids would never have to get a driver’s License.
Our next vehicle sensor suite will be able to handle winter weather (https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...).
Blog post is almost exactly 1 year old...
Will it be able to function on super slippery roads while volcanic ash is falling down? Or ice?
I do drive in these conditions.
Can people handle that? people have millions of accidents in perfect weather and driving conditions. I think the reason most people don't like at drivers now is because it's easy to assign blame to the driver. Ai doesn't have that easy out. Suddenly we're faced with the cold truth: reality is difficult and sometimes shit happens and someone gets the short end of the stick.
No people can't handle that. We just stay home for 2-3 months and hope our food supplies don't run out -_-'
I'll believe it when I see it.
That’s one of the interesting things about innovation, you have to believe that things are possible before they have been done.
Only if you've set out to build it. Otherwise you can sit and wait.
Believing a thing is possible doesn’t by itself make it so, however.
This is kind of weird. It's like saying "Driving in snow is impossible", well we know it is possible because humans do it.
And this even ignores all the things modern computer controlled vehicles do above and beyond humans as it is. Take most people used to driving modern cars and chunk them an old armstrong steering car and they'll put themselves into a ditch on a rainy day.
Really the last things in self driving cars is fast portable compute and general intelligence. General intelligence will be needed for the million edge cases we need while driving. The particular problem is once we get this general intelligence a lot of problems are going to disappear and bring up a whole new set of problems for people and society at large.
Ah we only need general intelligence, something so ineffable and hard to understand that we don't even have a clear definition of it.
I've ridden just under 1,000 miles in autonmous (no scare quotes) Waymos, so it's strange to see someone letting Tesla's abject failure inform their opinions on how much progress AVs have made.
Tesla that got fired as a customer by Mobileye for abusing their L2 tech is your yardstick?
Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.
Tesla uses only cameras, which sounds crazy (reflections, direct sunlight disturbances, fog , smoke, etc.
LiDAR, radar assistance feels crucial
https://fortune.com/2025/08/15/waymo-srikanth-thirumalai-int...
Indeed. Mark Rober did some field tests on that exact difference. LiDAR passed all of them, while Tesla’s camera-only approach failed half.
https://www.youtube.com/watch?v=IQJL3htsDyQ
I'm not sure the guy who did the Tesla crash test hoax and (partially?) faked his famous glitterbomb pranks is the best source. I would separately verify anything he says at this point.
> Tesla crash test hoax
First I’m hearing of that. In doing a search, I see a lot of speculation but no proof. Knowing the shenanigans perpetrated by Musk and his hardcore fans, I’ll take theories with a grain of salt.
> and (partially?) faked his famous glitterbomb pranks
That one I remember, and the story is that the fake reactions were done by a friend of a friend who borrowed the device. I can’t know for sure, but I do believe someone might do that. Ultimately, Rober took accountability, recognised that hurt his credibility, and edited out that part from the video.
https://www.engadget.com/2018-12-21-viral-glitter-bomb-video...
I have no reason to protect Rober, but also have no reason to discredit him until proof to the contrary. I don’t follow YouTube drama but even so I’ve seen enough people unjustly dragged through the mud to not immediately fall for baseless accusations.
One I bumped into recently was someone describing the “fall” of another YouTuber, and in one case showed a clip from an interview and said “and even the interviewer said X about this person”, with footage. Then I watched the full video and at one point the interviewer says (paraphrased) “and please no one take this out of context, if you think I’m saying X, you’re missing the point”.
So, sure, let’s be critical about the information we’re fed, but that cuts both ways.
Humans use only cameras. And humans don't even have true 360 coverage on those cameras.
The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.
That's actually categorically false. We also use sophisticated hearing, a well developed sense of inertia and movement, air pressure, impact, etc. And we can swivel our heads to increase our coverage of vision to near 360°, while using very dependable and simple technology like mirrors to cover the rest. Add to that that our vision is inherently 3D and we sport a quite impressive sensor suite ;-). My guess is that the fidelity and range of the sensors on a Tesla can't hold a candle to the average human driver. No idea how LIDAR changes this picture, but it sure is better than vision only.
I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.
Yes, human vision is so bad it has to rely on a swivel joint and a set of mirrors just to approximate 360 coverage.
Modern cars can have 360 vision at all times, as a default. With multiple overlapping camera FoVs. Which is exactly what humans use to get near field 3D vision. And far field 3D vision?
The depth-discrimination ability of binocular vision falls off with distance squared. At far ranges, humans no longer see enough difference between the two images to get a reliable depth estimate. Notably, cars can space their cameras apart much further, so their far range binocular perception can fare better.
How do humans get that "3D" at far distances then? The answer is, like it usually is when it comes to perception, postprocessing. Human brain estimates depth based on the features it sees. Not unlike an AI that was trained to predict depth maps from a single 2D image.
If you think that perceiving "inertia and movement" is vital, then you'd be surprised to learn that an IMU that beats a human on that can be found in an average smartphone. It's not even worth mentioning - even non-self-driving cars have that for GPS dead reckoning.
I mean, technically what we need is fast general intelligence.
A lot of the problems with driving aren't driving problems. They are other people are stupid problems, and nature is random problems. A good driver has a lot of ability to predict what other drivers are going to do. For example people commonly swerve slightly on the direction they are going to turn, even before putting on a signal. A person swerving in a lane is likely going to continue with dumb actions and do something worse soon. Clouds in the distance may be a sign of rain and that bad road conditions and slower traffic may exist ahead.
Very little of this has to do with the quality of our sensors. Current sensors themselves are probably far beyond what we actually need. It's compute speed (efficiency really) and preemption that give humans an edge, at least when we're paying attention.
A fine argument in principle, but even if we talk only about vision, the human visual system is much more powerful than a camera.
Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.
In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, pupil constriction. But that's the point - human vision does not use cameras.
> In a way it's not a fair comparison,
Indeed. And the comparison is unnecessarily unfair.
You're comparing the dynamic range of a single exposure on a camera vs. the adaptive dynamic range in multiple environments for human eyes. Cameras do have comparable features: adjustable exposure times and apertures. Additionally cameras can also sense IR, which might be useful for driving in the dark.
"adjustable exposure times and apertures"
That means that to view some things better, you have to accept being completely blind to others. That is not a substitute for dynamic range.
Yes, and? Human eyes also have limited instantaneous dynamic range much smaller than their total dynamic range. Part of the mechanism is the same (pupil vs. camera iris). They can't see starlight during the day and tunnels need adaption lighting to ease them in/out.
Exposure adjustment is constrained by frame rate, that doesn't buy you very much dynamic range.
A system that replicates the human eye's rapid aperture adjustment and integration of images taken at quickly changing aperture/ filter settings is very much not what Tesla is putting in their cars.
But again, the argument is fine in principle. It's just that you can't buy a camera that performs like the human visual system today.
Human eyes are unlikely the only thing in parameter-space that's sufficient for driving. Cameras can do IR, 360° coverage, higher frame rates, wider stereo separation... but of course nothing says Teslas sit at a good point in that space.
Yes, agreed, but that's a different point - I was reacting to this specifically:
> Humans use only cameras.
Which in this or similar forms is sometimes used to argue that L4/5 Teslas are just a software update away.
Ah yeah, that's making even more assumptions. Not only does it assume the cameras are powerful enough but that there already is enough compute. There's a sensing-power/compute/latency tradeoff. That is you can get away with poorer sensors if you have more compute that can filter/reconstruct useful information from crappy inputs.
Humans are notoriously bad at driving, especially in poor weather. There are more than 6 million accidents annually in the US, which is >16k a day.
Most are minor, but even so - beating that shouldn't be a high bar.
There is no good reason not to use LIDAR with other sensing technologies, because cameras-only just makes the job harder.
Self-driving cars beat humans on safety already. This holds for Waymos and Teslas both.
They get into less accidents, mile for mile and road type for road type, and the ones they get into trend towards less severe. Why?
Because self-driving cars don't drink and drive.
This is the critical safety edge a machine holds over a human. A top tier human driver in the top shape outperforms this generation of car AIs. But a car AI outperforms the bottom of the barrel human driver - the driver who might be tired, distracted and under influence.
I trust Tesla's data on this kind of stuff only as far as a Starship can travel on its return trip to Mars. Anything coming from Elon would have to be audited by an independent entity for me to give it an ounce of credence.
Generally you are comparing Apples and Oranges if you are comparing the safety records of i.e. Waymos to that of the general driving population.
Waymos drive under incredibly favorable circumstances. They also will simply stop or fall back on human intervention if they don't know what to do – failing in their fundamental purpose of driving from point A to point B. To actually get comparable data, you'd have to let Waymos or Teslas do the same type of drives that human drivers do, under the same curcumstances and without the option of simply stopping when they are unsure, which they simply are not capable of doing at the moment.
That doesn't mean that this type of technology is useless. Modern self-driving and adjacent tech can make human drivers much safer. I imagine, it would be quite easy to build some AI tech that has a decent success rate in recognizing inebriated drivers and stopping the cars until they have talked to a human to get cleared for driving. I personally love intelligent lane and distance assistance technology (if done well, which Tesla doesn't in my view). Cameras and other assistive technology are incredibly useful when parking even small cars and I'd enjoy letting a computer do every parking maneuver autonomously until the end of my days. The list could go on.
Waymos have cumulatively driven about 100 million miles without a safety driver as of July 2025 (https://fifthlevelconsulting.com/waymos-100-million-autonomo...) over a span of about 5 years. This is such a tiny fraction of miles driven by US (not to speak of worldwide) drivers during that time, that it can't usefully be expressed. And they've driven these miles under some of the most favorable conditions available to current self-driving technology (completely mapped areas, reliable and stable good weather, mostly slow, inner city driving, etc.). And Waymo themselves have repeatedly said that overcoming the limitations of their tech will be incredibly hard and not guaranteed.
Do you have independent studies to back up your assertion that they are safer per distance than a human driver?
They data indicated they hold an edge over drunk and incapacitated humans, not humans.
> A top tier human driver in the top shape outperforms this generation of car AIs.
Most non-impaired humans outperform the current gen. The study I saw had FSD at 10x fatalities per mile vs non-impaired drivers.
> Humans use only cameras.
Not true. Humans also interpret the environment in 3D space. See a Tesla fail against a Wile E. Coyote-inspired mural which humans perceive:
https://youtu.be/IQJL3htsDyQ?t=14m34s
This video proves nothing other than "a YouTuber found a funny viral video idea".
Teslas "interpret the environment in 3D space" too - by feeding all the sensor data into a massive ML sensor fusion pipeline, and then fusing that data across time too.
This is where the visualizers, both the default user screen one and the "Terminator" debugging visualizer, get their data from. They show plain and clear that the car operates in a 3D environment.
You could train those cars to recognize and avoid Wile E. Coyote traps too, but do you really want to? The expected amount of walls set in the middle of the road with tunnels painted onto them is very close to zero.
Maybe watch the rest of the video. The Tesla, unlike the LiDAR car, also failed the fog and rain tests. The mural was just the last and funniest one.
Let’s also not forget murals like that do exist in real life. And those aren’t foam.
https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg...
Additionally, as the other commenter pointed out, trucks often have murals painted on them, either as art or adverts.
https://en.wikipedia.org/wiki/Truck_art_in_South_Asia
https://en.wikipedia.org/wiki/Dekotora
Search for “truck ads” and you’ll find a myriad companies offering the service.
I've seen semi trucks with scenic views painted on them, both rear and side panels.
Once computers and AIs can approach even a small fraction of the our capacity then sure, only cameras is fine, it's a shame that our suite of camera data processing equipment is so far beyond our understanding that we don't even have models of how it might work at its core.
Even at that point, why would you possibly use only cameras though, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.
which cameras have stereoscopic vision and the dynamic range of an eye?
Even if what you're saying is true, which it's not, cameras are so inferior to eyes it's not even funny
> which cameras have stereoscopic vision
Any 2 cameras separated by a few inches.
> dynamic range of an eye
Many cameras nowadays match or exceed the eye in dynamic range. Specially if you consider that cameras can vary their exposure from frame to frame, similar to the eye, but much faster.
What's more is, the power of depth perception in binocular vision is a function of distance between two cameras. The larger that distance is, the further out depth can be estimated.
Human skull only has two eyesockets, and it can only get this wide. But cars can carry a lot of cameras, and maintain a large fixed distance between them.
Even though it's false, let's imagine that's true.
Our cameras (also called eyes) have way better dynamic range, focus speed, resolution and movement detection capabilities, Backed by a reduced bandwidth peripheral vision which is also capable of detecting movement.
No camera, incl. professional/medium format still cameras are that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera which can see both at the same time, but that's it.
Dynamic range, focus speed, resolution, FoV and motion detection still lacks.
...and that's when we imagine that we only use our eyes.
Except a car isn’t a human.
That’s the mistake Elon Musk made and the same one you’re making here.
Not to mention that humans driving with cameras only is absolutely pathetic. The amount of accidents that occur that are completely avoidable doesn’t exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple cameras.
This isn't a "mistake". This is the key problem of getting self-driving to work.
Elon Musk is right. You can't cram 20 radars, 50 LIDARs and 100 cameras into a car and declare self-driving solved. No amount of sensors can redeem a piss poor driving AI.
Conversely, if you can build an AI that's good enough, then you don't need a lot of sensors. All the data a car needs to drive safely is already there - right in the camera data stream.
if additional sensors improve the ai, then your last statement is categorically untrue. The reason it worked better is that those additional sensors gave it information that wac not available in the video stream
"If."
So far, every self-driving accident where the self-driving car was found to be at fault follows the same pattern: the car had all the sensory data it needed to make the right call, and it didn't make the right call. The bottleneck isn't in sensors.
In that case we're probably even further from self-driving cars than I'd have guessed. Adding more sensors is a lot cheaper than putting a sufficient amount of compute in a car.
Multiple things can be true at the same time you realize. Some problems, such as insufficient AI can have a larger effect on safety, but more data to work with as well as train on always wins. You want lidar.
You keep insisting that cameras are good enough, but it’s empirically possible since safe autonomous driving AI has not been achieved yet to say that cameras alone collect enough data.
The minimum setup without lidar would be cameras, radar, ultrasonic, GPS/GNSS + IMU.
Redundancy is key. With lidar, multiple sensors cover each other’s weaknesses. If LiDAR is blinded by fog, radar steps in.
> only cameras, which sounds crazy
Crazy that billions of humans drive around every day with two cameras. And they have various defects too (blind spots, foveated vision, myopia, astigmatism, glass reflection, tiredness, distraction).
The nice thing about LiDAR is that you can use it to train a model to simulate a LiDAR based on camera inputs only. And of course to verify how good that model is.
I can't wait until V2X and sensor fusion comes to autonomous vehicles, greatly improving the detailed 3D mapping of LiDAR, the object classification capabilities of cameras, and the all-weather reliability of radar and radio pings.
The goalpost will be when you can buy one and drive it anywhere. How many cities are Waymo in now? I think what they are doing is terrific, but each car must cost a fortune.
The cars aren't expensive by raw cost (low six figures, which is about what an S-class with highway-only L3 costs)
But there is a lot of expenditure relative to each mile being driven.
> The goalpost will be when you can buy one and drive it anywhere.
This won't happen any time soon, so I and millions of other people will continue to derive value from them while you wait for that.
Low six figures is quite expensive, and unobtainable to a large number of people.
Not even close.
It's a 2-ton vehicle that can self-drive reliably enough to be roving a city 24/7 without a safety driver.
The measure of expensive for that isn't "can everyone afford it", the fact we can even afford to let anyone ride them is a small wonder.
I’m a bit confused. If we’re talking about consumer cars, the end goal is not to rent a car that can drive itself, the end goal is to own a car that can drive itself, and so it doesn’t matter if the car is available for purchase but costs $250,000 because few consumers can afford that, even wealthy ones.
a) I'm not talking about consumer cars, you are. I said very plainly this level of capability won't reach consumers soon and I stand by that. Some Chinese companies are trying to make it happen in the US but there's too many barriers.
b) If there was a $250,000 car that could drive itself around given major cities, even with the geofence, it would sell out as many units as could be produced. That's actually why I tell people to be weary of BOM costs: it doesn't reflect market forces like supply and demand.
You're also underestimating both how wealthy people and corporations are, and the relative value being provided.
A private driver in a major city can easily clear $100k a year on retainer, and there are people are paying it.
If you look at the original comment that you replied to, the goalpost was explained clearly:
> The goalpost will be when you can buy one and drive it anywhere.
So let’s just ignore the non-consumer parts entirely to avoid shifting the goalpost. I still stand by the fact that the average (or median) consumer will not be able to afford such an expensive car, and I don’t think it’s controversial to state this given the readily available income data in the US and various other countries. The point isn’t that it exists, Rolls Royce and Maseratis exist, but they are niche and so if self-driving cars will be so expensive to be niche they won’t actually make a real impact on real people, thus the goalpost of general availability to a consumer.
> I and millions of other people
People "wait" because of where they live and what they need. Not all people live and just want to travel around SF or wherever these go nowadays.
Why the scare quotes on wait? There is literally nothing for you to do but wait.
At the end of the day it's not like no one lives in SF, Phoenix, Austin, LA, and Atlanta either. There's millions of people with access to the vehicles and they're doing millions of rides... so acting like it's some great failing of AVs that the current cities are ones with great weather is frankly, a bit stupid.
It takes 5 seconds to look up the progress that's been made even in the last few years.
How many of those rides required human intervention by Waymo's remote operators? From what I can tell they're not sharing that information.
I worked at Zoox, which has similar teleoperations to Waymo: remote operators can't joystick the vehicles.
So if we're saying how many times would it have crashed without a human: 0.
They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.
Not sure how exactly politicians will jump from “minimal wages don’t have to be livable wages” and “people who are able to work should absolutely not have access to free healthcare” and “any tax-supported benefits are actually undeserved entitlements and should be eliminated” to “everyone deserves a universal basic income”.
I wouldn't underestimate what can happen if 1/3 of your workforce is displaced and put aside with nothing to do.
People are usually obedient because they have something in life and they are very busy with work. So they don't have time or headspace to really care about politics. When suddenly big numbers of people start to more care about politics it leads to organizing and all kinds of political changes.
What i mean is that it wouldn't be current political class pushing things like UBI. At same time it seems that some of current elites are preparing for this and want to get rid of elections altogether to keep the status quo.
I wouldn't underestimate how easily AI will suppress this through a combination of ultrasurveillance, psychological and emotional modelling, and personally targeted persuasion delivered by chatbot etc.
If all else fails you can simply bomb city blocks into submission. Or arrange targeted drone decapitations of troublemakers. (Possibly literally.)
The automation and personalisation of social and political control - and violence - is the biggest difference this time around. The US has already seen a revolution in the effectiveness of mass state propaganda, and AI has the potential to take that up another level.
What's more likely to happen is survivors will move off-grid altogether - away from the big cities, off the Internet, almost certainly disconnected and unable to organise unless communication starts happening on electronic backchannels.
Speculating here, but I don't believe that the government would have the time or organization to do this. Widespread political unrest caused by job losses would be the first step. Almost as soon as there is some type of AI that can replace mass amounts of workers, people will be out on the streets - most people don't have 1-2 months of living expenses saved up. At that point, the government would realize that SHTF - but it's too late, people would be protesting / rioting in droves - doesn't matter how many drones you can produce, or whether or not you can psychologically manipulate people when all they want is... food.
I could be entirely wrong, but it feels like if AI were to get THAT good, the government would be affected just as much as the working class. We'd more likely see total societal collapse rather than the government maintaining power and manipulating / suppressing the people.
That is a lot assumption right there. Starving masses can't logically and physically fight with AI or government for long. They become weak after weeks or months? At that point government would be smaller and controlled probably be part of AI owners.
IF they dont have 1-2 months of living expenses saved, they die. They can'be a big threat even in millions??? they dont have organization capacity or anything that matches
I am not sure AI will be that much more different or effective than what has been done by rich elites forever. There are already gigantic agencies and research centres focusing on opinion manipulation. And these are working - just look at how poor masses are voting for policies that are clearly against them (lowering taxes for the rich etc).
But all these voters still have their place in the world and don't have free time to do anything. I don't think people are so powerless once you really displace big potion of them.
For example look at people here - everywhere you can read how it's harder to find programming job. Companies are roleplaying the narrative that they don't need programmers anymore. Do you think this army of jobless programmers will become mind controlled by tech they themselves created? Or they will use their free time to do something about their situation?
Displacing/canceling/deleting/killing individuals in society works because most people wave their and thinking this couldn't happen to them. One you start getting into bigger potions of people the dynamic is different.
Getting rid of peaceful processes for transferring power is not going to be the big win that they think it is.
This is why Palantir and others exist to stop masses.It´s been only tested but it will only grow from there and stop millions of people. SV you built this
Oh yeah Peter Thiel was exactly one of the people i meant when i said elites are preparing for it.
Same way it happened last time we had a bunch of major advancements in labor rights. Things get shitty everywhere, but at an uneven pace, which combined with random factors causes a spark to set off massive unrest in some countries. Torches and pitchforks are out, many elite heads roll, and the end result is likely to be even worse, but elites in other countries look at all this from the outside and go, "hmm, maybe we shouldn't get people so desperate that they will do that to us".
Not sure how exactly politicians will jump from ...
Well, if one believes that the day will come when their choices will be "make that jump" or "the guillotine", then it doesn't seem completely outlandish.
Not saying that day will come, but if it did...
> or "the guillotine"
Or even simply being voted out.
The money transferred from tax payers to people without money is in effect a price for not breaking the law.
If AI makes it much easier to produce goods, it reduces price of money, making it easier to pay some money to everyone in exchange for not breaking the law.
Politicians are elected for limited terms, not for life, so they don't need to change their opinion for a change to occur.
Are you sure of this? Don't you think the next US presidential election and very many subsequent ones will be decided by the US Supreme Court?
UBI is not a good solution because you still have to provision everything on the market, so it's a subsidy to private companies that sell the necessities of life on the market. If we're dreaming up solutions to problems, much better would be to remove the essentials from the market and provide them to everyone universally. Non-market housing, healthcare, education all provided to every citizen by virtue of being a human.
Your solution would ultimately lead to treating all those items as uniform goods, but they are not. There are preferences different people have. This is why the price system is so useful. It indicates what is desired by various people and gives strong signals as to what to make or not. If you have a central authority making the decisions, they will not get it right. Individual companies may not get it right, but the corrective mechanism of failure (profit loss, bankruptcy) corrects that while when governments provide this, it is extremely difficult to correct it as it is one monolithic block. In the market, you can choose various different companies for different needs. In the government in a democracy, you have to choose all of one politician or all of another. And as power is concentrated, the worst people go after it. It is true with companies, but people can choose differently. With the state, there is no alternative. That is what makes it the state rather than a corporation.
It is also interesting that you did not mention food, clothing and super-computers-in-pockets. While government is involved in everything, they are less involved in those markets than with housing, healthcare, and education, particularly in mandates as to what to do. Government has created the problem of scarcity in housing, healthcare, and education. Do you really think the current leadership of the US should control everyone's housing, healthcare, and education? The idea of a UBI is that it strips the politicians of that fine-grained control. There is still control that can be leveraged, but it comes down to a single item of focus. It could very well be disastrous, but it need not be whereas the more complex system that you give politicians control over, the more likely it will be disastrous.
You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
The costs of what you propose are enormous. No legislation can change that fact.
There ain’t no such thing as a free lunch.
Who’s going to pay for it? Someone who is not paying for it today.
How do you intend to get them to consent to that?
Or do you think that the needs of the many should outweigh the consent of millions of people?
The state, the only organization large enough to even consider undertaking such a project, has spending priorities that do not include these things. In the US, for example, we spend the entire net worth of Elon Musk (the “richest man in the world”, though he rightfully points out that Putin owns far more than he does) about every six months on the military alone. Add in Zuckerberg and you can get another 5 months or so. Then there’s the next year to think about. Maybe you can do Buffet and Gates; what about year three?
That’s just for the US military, at present day spending levels.
What you’re describing is at least an order of magnitude more expensive than that, just in one country that only has 4% of people. To extend it to all human beings, you’re talking about two more orders of magnitude.
There aren’t enough billionaires on the entire planet even to pay for one country’s military expenses out of pocket (even if you completely liquidated them), and this proposed plan is 500-1000x more spending than that. You’re talking about 3-5 trillion dollars per year just for the USA - if you extrapolate out linearly, that’d be 60-200 trillion per year for the Earth.
Even if you could reduce cost of provision by 90% due to economies of scale ($100/person/month for housing, healthcare, and education combined, rather than $1000 - a big stretch), it is still far, far too big to do under any currently envisioned system of wealth redistribution. Society is big and wealthy private citizens (ie billionaires) aren’t that numerous or rich.
There is a reason we all pay for our own food and housing.
> You’re talking about 3-5 trillion dollars per year just for the USA
I just want to point out that's about a fifth of our GDP and we spend about this much for healthcare in the US. We badly need a way to reduce this to at least half.
> There is a reason we all pay for our own food and housing.
The main reason I support UBI is I don't want need based or need aware distribution. I want everyone to get benefits equally regardless of income or wealth. That's my entire motivation to support UBI. If you can come up with another something that guarantees no need based or need aware and does not have a benefit cliff, I support that too. I am not married to UBI.
Just want to point out that any abstract intrinsic value about the economy like GDP is a socialized illusion
Reduce costs by eliminating fiat ledgers that only have value if we believe and realize the real economy is physical statistics and ship resources where the people demand
But of course that simple solution violates the embedded training of Americans. So it's a non-starter and we'll continue to desperately seek some useless reformation of an antiquated social system.
> I support UBI
Honestly, what type of housing do you envision under a UBI system? Houses? Modern apartment buildings? College dormitory-like buildings? Soviet-style complexes? Prison-style accommodations? B stands for basic, how basic?
(Not the person you're replying to)
I think a UBI system is only stable in conjunction with sufficient automation that work itself becomes redundant. Before that point, I don't think UBI can genuinely be sustained; and IMO even very close to that point the best I expect we will see, if we're lucky, is the state pension age going down. (That it's going up in many places suggests that many governments do not expect this level of automation any time soon).
Therefore, in all seriousness, I would anticipate a real UBI system to provide whatever housing you want, up to and including things that are currently unaffordable even to billionaires, e.g. 1:1 scale replicas of any of the ships called Enterprise including both aircraft carriers and also the fictional spaceships.
That said, I am a proponent of direct state involvement in the housing market, e.g. the UK council housing system as it used to be (but not as it now is, there're not building enough):
• https://en.wikipedia.org/wiki/Public_housing_in_the_United_K...
• https://en.wikipedia.org/wiki/Council_house
The bigger issue to me is that not all geography is anything close to equal.
I would much rather live on a beach front property than where I live right now. I don't because the cost trade off is too high.
To bring the real estate market into equilibrium with UBI you would have to turn rural Nebraska into a giant slab city like ghetto. Or every mid sized city would have a slab city ghetto an hour outside the city. It would be ultra cheap to live there but it would be a place everyone is trying to save up to move out of. It would create a completely new under class of people.
> I would much rather live on a beach front property than where I live right now. I don't because the cost trade off is too high.
Yes, and?
My reference example was two aircraft carriers and 1:1 models of some fictional spacecraft larger than some islands, as personal private residences.
> To bring the real estate market into equilibrium with UBI you would have to turn rural Nebraska into a giant slab city like ghetto. Or every mid sized city would have a slab city ghetto an hour outside the city. It would be ultra cheap to live there but it would be a place everyone is trying to save up to move out of. It would create a completely new under class of people.
Incorrect.
Currently, about 83e6 hectares of this planet is currently a "built up area".
4827e6 ha, about 179 times the currently "built up" area, is cropland and grazing land. Such land can produce much more food than it already does, the limiting factor is the cost of labour to build e.g. irrigation and greenhouses (indeed, this would also allow production in what are currently salt flats and deserts, and enable aquaculture for a broad range of staples); as I am suggesting unbounded robot labour is already a requirement for UBI, this unlocks a great deal of land that is not currently available.
The only scenario in which I believe UBI works is one where robotic labour gives us our wealth. This scenario is one in which literally everyone can get their own personal 136.4 meters side length approximately square patch. That's not per family, that's per person. Put whatever you want on it — an orchard, a decorative garden, a hobbit hole, a castle, and five Olympic-sized swimming pools if you like, because you could fit all of them together at the same time on a patch that big.
The ratio (and consequently land per person), would be even bigger if I didn't disregard currently unusable land (such as mountains, deserts, glaciers, although of these three only glaciers would still be unusable in the scenario), and also if I didn't disregard land which is currently simply unused but still quite habitable e.g. forests (4000e6 ha) and scrub (1400e6 ha).
In the absence of future tech, we get what we saw in the UK with "council housing", but even this is still not as you say. While it gets us cheap mediocre tower blocks, it also gets us semi-detached houses with their own gardens, and even the most mediocre of the widely disliked Brutalist architecture era of the UK this policy didn't create a new underclass, it provided homes for the existing underclass. Finally, even at the low end they largely (but not universally) were an improvement on what came before them, and this era came to an end with a government policy to sell those exact same homes cheaply to their existing occupants.
Some people’s idea of wealth is to live in high density with others.
You bump up against the limits of physics, not economics.
If every place has the population density of Wyoming, real wealth will be the ability to live in real cities. That’s much like what we have now.
> Some people’s idea of wealth is to live in high density with others.
Very true. But I'd say this is more of a politics problem than a physics one: any given person doesn't necessarily want to be around the people that want to be around them.
> If every place has the population density of Wyoming, real wealth will be the ability to live in real cities. That’s much like what we have now.
Cities* are where the jobs are, where the big money currently gets made, I'm not sure how much of what we have today with high density living is to show your wealth or to get your wealth — consider the density and average wealth of https://en.wikipedia.org/wiki/Atherton,_California, a place I'd never want to live in for a variety of reasons, which is (1) legally a city, (2) low density, (3) high income, (4) based on what I can see from the maps, a dorm town with no industrial or commercial capacity, the only things I can see which aren't homes (or infrastructure) are municipal and schools.
* in the "dense urban areas" sense, not the USA "incorporated settlements" sense, not the UK's "letters patent" sense
Real wealth is the ability to be special, to stand out from the crowd in a good way.
In a world of fully automated luxury for all, I do not know what this will look like.
Peacock tails of some kind to show off how much we can afford to waste? The rich already do so with watches that cost more than my first apartment, perhaps they'll start doing so with performative disfiguring infections to show off their ability to afford healthcare.
I appreciate your perspective but clearly most UBI advocates are talking about something much sooner. However my response to your vision is that even if "work" is totally automated or redundant, the resources (building materials) and the energy to power the robots or whatever, will be more expensive and tightly controlled than ever. Power and wealth simply wont allow everything to be accessible to everyone. The idea that people would be able to build enormous mansions (or personal aircraft carriers or spaceships) just sounds rather absurd, no offense, but come on.
I think we are talking about two different things. The UBI I'm talking about won't allow you to have an enormous mansion, maybe just enough to avoid starving. The main plus point is it doesn't do means testing. The second plus point is if you really hate your job, you can quit without starving. This means we can avoid coworkers who really would like to not be there.
I think it is a solid idea. I don't know how it fits in the broader scheme of things though. If everyone in the US gets a UBI of the same amount, will people move somewhere rent is low?
From wikipedia:
> a social welfare proposal in which all citizens of a given population regularly receive a minimum income in the form of an unconditional transfer payment, i.e., without a means test or need to perform work.
It doesn't say you aren't allowed to work for more money. My understanding is you can still work as much as you want. You don't have to to get this payment. And you won't be penalized for making too much money.
> I think we are talking about two different things. The UBI I'm talking about won't allow you to have an enormous mansion, maybe just enough to avoid starving.
We are indeed talking about different things with UBI here, but I'm asserting that the usual model of it can't be sustained without robots doing the economic production.
If the goal specifically is simply "nobody starves", the governments can absolutely organise food rations like this, food stamps exist.
> If everyone in the US gets a UBI of the same amount, will people move somewhere rent is low?
More likely, the rent goes up by whatever the UBI is. And I'm saying this as a landlord, I don't think it would be a good idea to create yet another system that just transfers wealth to people like me who happen to be property owners, it's already really lucrative even without that.
The response you're responding to here was to "ben_w", he discussed better-than-a-billionaire housing. My original reply to your earlier comment is above, basically just asking what type of housing you anticipate under a UBI system.
To me, "just enough to avoid starving" is a prison-like model, just without locked doors. But multiple residents of a very basic "cell", a communal food hall, maybe a small library and modest outdoors area. But most of the time when people talk about UBI, they describe the recipients living in much nicer housing than that.
> the resources (building materials) and the energy to power the robots or whatever, will be more expensive and tightly controlled than ever.
I am also concerned about this possibility, but come at it from a more near-term problem.
I think there is a massive danger area with energy prices specifically, in the immediate run-up to AI being able to economically replace human labour.
Consider a hypothetical AI which, on performance metrics, is good enough, but is also too expensive to actually use — running it exceeds the cost of any human. The corollary is that whatever that threshold is, under the assumption of rational economics, no human can ever earn more than whatever it costs to run that AI. As time goes on, if the hardware of software improves, the threshold comes down.
Consider what the world looks like if the energy required to run a human-level AI at human-level speed costs the same as the $200/month that OpenAI charges for access to ChatGPT Pro (we don't need to consider what energy costs per kWh for this, prices may change radically as we reach this point).
Conditional on this AI actually being good enough at everything (really good enough, not just "we've run out of easily tested metrics to optimise"), then this becomes the maximum that a human can earn.
If a human is earning this much per month, can they themselves afford energy to keep their lights on, their phone charged, their refrigerator running?
Domestic PV systems (or even wind/hydro if you're lucky enough to be somewhere where that's possible) will help defend against this; personal gasoline/diesel won't, the fuel will be subject to the same price issues.
> Power and wealth simply wont allow everything to be accessible to everyone. The idea that people would be able to build enormous mansions (or personal aircraft carriers or spaceships) just sounds rather absurd, no offense, but come on.
While I get your point, I think a lot of the people in charge can't really imagine this kind of transformation. Even when they themselves are trying to sell the idea. Consider what Musk and Zuckerberg say about Mars and superintelligence respectively — either they don't actually believe the words leaving their mouths (and Musk has certainly been accused of this with Mars), or they have negligible imagination as to the consequences of the world they're trying to create (which IMO definitely describes Musk).
At the same time, "absurd"?
I grew up with a C64 where video games were still quite often text adventures, not real-time nearly-photographic 3D.
We had 6 digit phone numbers, calling the next town along needed an area code and cost more; the idea we'd have video calls that only cost about 1USD per minute was sci-fi when I was young, while the actual reality today is that video calls being free to anyone on the planet isn't even a differentiating factor between providers.
I just about remember dot-matrix printers, now I've got a 3D printer that's faster than going to the shops when I want one specific item.
Universal translation was a contrivance to make watching SciFi easier, not something in your pocket that works slightly better for images than audio, and even then because speech recognition in natural environments turned out to be harder than OCR in natural environments.
I'm not saying any of this will be easy, I don't know when it will be good enough to be economical — people have known how to make flying cars since 1936*, but they've been persistently too expensive to bother. AGI being theoretically possible doesn't mean we ourselves are both smart enough and long-lived enough as an advanced industrialised species to actually create it.
* https://en.wikipedia.org/wiki/Autogiro_Company_of_America_AC...
If the robots are the ones that produce everything, and you take that generated wealth and distribute it to the people, whom are you robbing exactly?
> You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
Utter nonsense.
Do you believe the European countries that provides higher education for free are manning tenure positions with slaves or robbing people at gunpoint?
How come do you see public transportation services in some major urban centers being provided free of charge?
How do you explain social housing programmes conducted throughout the world?
Are countries with access to free health care using slavery to keep hospitals and clinics running?
What you are trying to frame as impossibilities is already the reality for many decades in countries ranking far higher in development and quality of living indexes that the US.
How do you explain that?
You're missing the point, language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery. It used to be called corvée. But the words being used have a connotation of something much more brutal and unrewarding. This isn't a political statement, I'm not a libertarian who believes all taxation is evil robbery and needs to be abolished. I'm just pointing out by the definition of slavery aka forced labor, and robbery aka confiscation of wealth, the state employs both of those tactics to fund the programs you described.
> Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.
Without the state, you wouldn't have wealth. Heck there wouldn't even be the very concept of property, only what you could personally protect by force! Not to mention other more prosaic aspects: if you own a company, the state maintains the roads that your products ship through, the schools that educate your workers, the cities and towns that house your customers... In other words the tax is not "money that is yours and that the evil state steals from you", but simply "fair money for services rendered".
To a large extent, yes. That's why the arrangement is so precarious, it is necessary in many regards, but a totalitarian regime or dictatorship can use this arrangement in a nefarious manner and tip the scale toward public resentment. Balancing things to avoid the revolutionary mob is crucial. Trading your labor for protection is sensible, but if the exchange becomes exorbitant, then it becomes a source of revolt.
If the state "confiscated" wealth derived from capital (AI) would that be OK with you?
> You're missing the point, language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.
You're letting your irrational biases show.
To start off, social security contributions are not a tax.
But putting that detail aside, do you believe that paying a private health insurance also represents slavery and robbery? Are you a slave to a private pension fund?
Are you one of those guys who believes unions exploit workers whereas corporations are just innocent bystanders that have a neutral or even positive impact on workers lives and well being?
No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state. If you don't pay your taxes, you will go to jail. It is both robbery and slavery, and in the ideal situation, it is a benevolent sort of exchange, despite existing in the realm of slavery/robbery. In a totalitarian system, it become malevolent very quickly. It also can be seen as not benevolent when the exchange becomes onerous and not beneficial. Arguing this is arguing emotionally and not rationally using language with words that have definitions.
social security contributions are a mandatory payment to the state taken from your wages, they are a tax, it's a compulsory reduction in your income. Private health insurance is obviously not mandatory or compulsory, that is different, clearly. Your last statement is just irrelevant because you assume I'm a libertarian for pointing out the reality of the exchange taking place in the socialist system.
> No, I'm a progressive and believe in socialism
I'd be very interested in hearing which definition of "socialism" aligns with those obviously libertarian views?
> If you don't pay your taxes, you will go to jail. It is both robbery and slavery [...] Arguing this is arguing emotionally and not rationally using language with words that have definitions.
Indulging in the benefits of living in a society, knowingly breaking its laws, being appalled by entirely predictable consequences of those action, and finally resorting to incorrect usage of emotional language like "slavery" and "robbery" to deflect personal responsibility is childish.
Taxation is payment in exchange for services provided by the state and your opinion (or ignorance) of those services doesn't make it "robbery" nor "slavery". Your continued participation in society is entirely voluntary and you're free to move to a more ideologically suitable destination at any time.
They’re not “services provided” unless you have the option of refusing them.
What do you mean? Is this one of those sovereign citizen type of arguments?
The government provides a range of services that are deemed to be broadly beneficial to society. Your refusal of that service doesn't change the fact that the service is being provided.
If you don't like the services you can get involved in politics or you can leave, both are valid options, while claiming that you're being enslaved and robbed is not.
Not at all. If it happens to you even when you don’t want it and don’t want to pay for it (and are forced to pay for it on threat of violence), that is no service.
Literally nobody alive today was “involved in politics” when the US income tax amendment was legislated.
Also, you can’t leave; doubly so if you are wealthy enough. Do you not know about the exit tax?
Good idea, lets make taxes optional or non enforceable. What comes next. Oh right, nobody pays. The 'government' you have collapses and then strong men become warlords and set up fiefdoms that fight each other. Eventually some authoritarian gathers up enough power to unite everyone by force and you have your totalitarian system you didn't want, after a bunch of violence you didn't want.
We assume you're libertarian because you are spouting libertarian ideas that just don't work in reality.
> No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state.
I do not know what you mean by "progressive", but you are spewing neoliberal/libertarian talking points. If anything, this tells how much Kool aid you drank.
> Are countries with access to free health care using slavery to keep hospitals and clinics running?
No, robbery. They’re paid for with tax revenues, which are collected without consent. Taking of someone’s money without consent has a name.
Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
My understanding is that your info is seriously out of date. It might have been the case in the distant past but not the case anymore.
https://news.yale.edu/2025/02/20/tracking-decline-social-mob...
https://en.wikipedia.org/wiki/Global_Social_Mobility_Index
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
It's a common idea but each time you try to measure social mobility, you find a lot of European countries ahead of USA.
- https://en.wikipedia.org/wiki/Global_Social_Mobility_Index
- https://www.theguardian.com/society/2018/jun/15/social-mobil...
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
Which class mobility is this that you speak of? The one that forces the average US citizens to be a paycheck away from homelessness? Or is it the one where you are a medical emergency away from filing bankruptcy?
Have you stopped to wonder how some European countries report higher median household incomes than the US?
But by any means continue to believe your average US citizen is a temporarily embarrassed billionaire, just waiting for the right opportunity to benefit from your social mobility.
In the meantime, also keep in mind that mobility also reflects how easy it is to move down a few pegs. Let that sink in.
the economic situation in Europe is much more dire than the US...
> the economic situation in Europe is much more dire than the US...
Is it, though? The US reports by far the highest levels of lifetime literal homelessness, which is three times greater than in countries like Germany. Homeless people on Europe aren't denied access to free healthcare, primary or even tertiary.
Why do you think the US, in spite of it's GDP, features so low in rankings such as human development index or quality of life?
Yet people live better. Goes to show you shouldn't optimise for crude, raw GDP as an end in itself, only as a means for your true end: health, quality of life, freedom, etc.
In many of the metrics, yeah. But Americans can afford larger houses and more stuff essentially, which isn't necessarily a good replacement for general quality of life things.
> In many of the metrics, yeah. But Americans can afford larger houses and more stuff essentially, which isn't necessarily a good replacement for general quality of life things.
I think this is the sort of red herring that prevents the average US citizen from realizing how screwed over they are. Again, the median household income in the US is lower than in some European countries. On top of this, the US provides virtually no social safety net or even socialized services to it's population.
The fact that the average US citizen is a paycheck away from homelessness and the US ranks so low in human development index should be a wake-up call.
Several US states have the life expectancy of Bangladesh.
>Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
This is not true, it was true historically, but not since WWII. Read Piketty.
> You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
Is AI slavery? Because that's where the value comes from in the scenario under discussion.
So basically the model North Korea practices?
> Non-market housing, healthcare, education all provided to every citizen
This can also describe Nordic and Germanic models of welfare capitalism (incrementally dismantled with time but still exist): https://en.wikipedia.org/wiki/Welfare_capitalism
Carbon tax on a state level to try to fight a global problem makes 0 sense actually.
You just shift the emissions from your location to the location that you buy products from.
Basically what happened in Germany: more expensive "clean" energy means their own production went down and the world bought more from China instead. The net result is probably higher global emissions overall.
This is why an economics based strictly on scarcity cannot get us where we need to go. Markets, not knowing what it's like to be thirsty, will interpret a willingness to poison the well as entrepreneurial spirit to be encouraged.
We need a system where being known as somebody who causes more problems than they solve puts you (and the people you've done business with) at an economic disadvantage.
The major shift for me is now its normal to take Waymos. Yeah, there aren't as fast as Uber if you have to get across town, but for trips less than 10 miles they're my go to now.
Ive never taken one. They seem nice though.
On the other hand, the Tesla “robotaxi” scares the crap out of me. No lidar and seems to drive more aggressively. The Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel is equal parts hilarious and nightmare fuel when you realize that’s what’s next to your kid biking down the street.
> Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel
I understand the argument for augmenting your self-driving systems with LIDAR. What I don't really understand is what videos like this tell us. The comparison case for a "road-runner style fake tunnel" isn't LIDAR, it's humans, right? And while I'm sure there are cases where a human driver would spot the fake tunnel and stop in time, that is not at all a reasonable assumption. The question isn't "can a Tesla save your life when someone booby traps a road?", it's "is a Tesla any worse than you at spotting booby trapped roads?", and moreover, "how does a Tesla perform on the 99.999999% of roads that aren't booby trapped?"
Tesla‘s insistence on not using Lidar while other companies deem it necessary for save auto-pilot creates the need for Tesla to demonstrate that their approach is equally as save for both drivers and ie pedestrians. They haven’t done that, arguably the data shows the contrary. This generates the impression that Tesla skimps on security and if they skimp in one area, they’ll likely skimp in others. Stuff like the Rober video strengthens these impressions. It’s a public perception issue and Tesla has done nothing (and maybe isn’t able to do anything) to dispel this notion.
> What I don't really understand is what videos like this tell us.
A lot of people here might intuitively understand “does not have lidar” means “can be deceived with a visual illusion.” The value of a video like that is to paint a picture for people who don’t intuitively understand it. And for everyone, there’s an emotional reaction seeing it plow through a giant wall that resonates in ways an intellectual understanding might not.
Great communication speaks to both our “fast” and “slow” brains. His video did a great job IMHO.
> Is a Tesla any worse than you at spotting booby trapped roads
That would've been been the case if all laws, opinions and purchasing decisions were made by everyone acting rationally. Even if self driving cars are safer than human drivers, it just takes a few crashes to damage their reputation. It has to be much, much safer than humans for mass adoption. Ideally also safer than the competition, if you're comparing specific companies.
And Waymo is much safer than human drivers. Its better at chauffeuring than humans, too.
I’m curious, are they now fully autonomous? I remember some time ago they had a remote operator.
Waymo has a control center, but it's customer service, not remote driving. They can look at the sensor data, give hints to the car ("back out, turn around, try another route") and talk to the customer, but can't take direct control and drive remotely.
Baidu's system in China really does have remote drivers.[1]
Tesla also appears to have remote drivers, in addition to someone in each car with an emergency stop button.[2]
[1] https://cyberlaw.stanford.edu/blog/2025/05/comparing-robotax...
[2] https://insideevs.com/news/760863/tesla-hiring-humans-to-con...
Good account to follow to track their progress, sufficed to say they're nearing/at the end of the beginning: https://x.com/reed // https://x.com/daylenyang/status/1953853807227523178
> I think the most sensible answer would be something like UBI.
What corporation will accept to pay dollars for members of society that are essentially "unproductive"? What will happen with the value of UBI in time, in this context, when the strongest lobby will be of the companies that have the means of producing AI? And, more essentially, how are humans able to negotiate for themselves when they lose their abilities to build things?
I'm not opposing the technology progress, I'm merely trying to unfold the reality of UBI being a thing, knowing human nature and the impetus for profit.
Every time someone casually throws out UBI my mind goes to the question "who is paying taxes when some people are on UBI ?"
Is there like a transition period where some people don't have to pay taxes and yet don't get UBI, and if so, why hasn't that come yet ? Why aren't the minimum tax thresholds going up if UBI could be right around the corner ?
The taxes will be most burdensome for the wealthiest and most productive of institutions, which is generally why these arrangements collapse economies and nations. UBI is hard to implement because it incentivizes non-productive behavior and disincentivizes productive activity. This creates economic crisis, taxes are basically a smaller scale version of this, UBI is like a more comprehensive wealth redistribution scheme. The creation of a syndicate (in this case, the state) to steal from the productive to give to the non-productive is a return to how humanity functioned before the creation of state-like structures when marauders and bandits used violence to steal from those who created anything. Eventually, the state arose to create arrangements and contracts to prevent theft, but later become the thief itself, leading to economic collapse and the cyclical revolutionary cycle.
So, AI may certainly bring about UBI, but the corporations that are being milked by the state to provide wealth to the non-productive will begin to foment revolution along with those who find this arrangement unfair, and the productive activity of those especially productive individuals will be directed toward revolution instead of economic productivity. Companies have made nations many times before, and I'm sure it'll happen again.
The problem is the "productive activity" is rather hard to define if there's so much "AI" (be it classical ML, LLM, ANI, AGI, ASI, whatever) around that nearly everything can be produced by nearly no one.
The destruction of the labour theory of value has been a goal of "tech" for a while, but if they achieve it, what's the plan then?
Assuming humans stay in control of the AIs because otherwise all bets are off, in a case where a few fabulously wealthy (or at least "onwing/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry and there's no space for normal people to participate in production any more, how do you even denominate the value being "produced"? Who is it even for? What do they need to give in return? What can they give in return?
> Assuming humans stay in control of the AIs because otherwise all bets are off, in a case where a few fabulously wealthy (or at least "onwing/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry and there's no space for normal people to participate in production any more
Why do the rest of humanity even have to participate in this? Just continue on the way things were before without any super AI. Start new businesses that don’t use AI and hire humans to work there.
Because with presumably tiny marginal costs of production, the AI owners can flood and/or buy out your human-powered economy.
You'd need a very united front and powerful incentives to prevent, say, anyone buying AI-farmed wheat when it's half the cost of human-farmed (say). If you don't prevent that, Team AI can trade wheat (and everything else) for human economy money and then dominate there.
But if AI can do anything that human labor can do, what would even be the incentive for AI owners to farm wheat and sell it to people? They can just have their AIs directly produce the things they want.
It seems like the only things they would need are energy and access to materials for luxury goods. Presumably they could mostly lock the "human economy" out of access to these things through control over AI weapons, but there would likely be a lot of arable land that isn't valuable to them.
Outside of malice, there doesn't seem to be much reason to block the non-technological humans from using the land they don't need. Maybe some ecological argument, the few AI-enabled elites don't want billions of humans that they no longer need polluting "their" Earth?
When was the last the techno-industrialist elite class said "what we have is enough"?
In this scenario, the marginal cost of taking everything else over is almost zero. Just tell the AI you want it taken over and it handles it. You'd take it over just for risk mitigation, even if you don't "need" it. Better to control it since it's free to do so.
Allowing a competing human economy is resources left on the table. And control of resources is the only lever of power left when labour is basically free.
> Maybe some ecological argument
There's a political angle too. 7 (or however many it will be) billion humans free to do their own thing is a risky free variable.
The assumption here that UBI "incentivizes non-productive behavior and disincentivizes productive activity" is the part that doesn't make sense. What do you think universal means? How does it disincentivize productive activity if it is provided to everyone regardless of their income/productivity/employment/whatever?
Evolutionarily, people engage in productive activity in order to secure resources to ensure their survival and reproduction. When these necessary resources are gifted to a person, there is a lower chance that they will decide to take part in economically productive behavior.
You can say that because it is universal, it should level the playing field just at a different starting point, but you are still creating a situation where even incredibly intelligent people will choose to pursue leisure over labor, in fact, the most intelligent people may be the ones to be more aware of the pointlessness of working if they can survive on UBI. Similarly, the most intelligent people will consider the arrangement unfair and unsustainable and instead of devoting their intelligence toward economically productive ventures, they will devote their abilities toward dismantling the system. This is the groundwork of a revolution. The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old. Primitive animals will take resources from others that they observe to be unable to defend their status.
So, overall, UBI will probably be implemented, and it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries.
> You can say that because it is universal, it should level the playing field just at a different starting point, but you are still creating a situation where even incredibly intelligent people will choose to pursue leisure over labor, in fact, the most intelligent people may be the ones to be more aware of the pointlessness of working if they can survive on UBI.
This doesn't seem believable to me, or at least it isn't the whole story. Pre-20th century it seems like most scientific and mathematical discoveries came from people who were born into wealthy families and were able to pursue whatever interested them without concern for whether or not it would make them money. Presumably there were/are many people who could've contributed greatly if they didn't have to worry about putting food on the table.
> The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate.
In a scenario where UBI is necessary because AI has supplanted human intelligence, it seems like the only way they could return to such a system is by removing both UBI and AI. Remove just UBI and they're still non-competitive economically against the AIs.
> When these necessary resources are gifted to a person, there is a lower chance that they will decide to take part in economically productive behavior.
Source?
Even if that's true though, who cares if AI and robots are doing the work?
What's so bad about allowing people leisure, time to do whatever they want? What are you afraid of?
There are two things bothering me here. The first bit where you're talking about motivations and income driving it seems either very reductive or implying of something that ought to be profoundly upsetting: - that intelligent people will see that the work they do is pointless if they're paid enough to survive and care for themselves, and not see work as another source of income for better financial security - that most intelligent people will see it as exploitation and then choose to focus on dismantling the system that levels the playing field
Which sort of doesn't add up. So there are intelligent people who are working right now because they need money and don't have it, while the other intelligent people who are working and employing other people are only doing it to make money and will rebel if they lose some of the money they make.
But then, why doesn't the latter group of intelligent people just stop working if they have enough money? Are they less/more/differently intelligent than the former group? Are we thinking about other, more narrow forms of intelligence when describing either?
Also
> The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old.
I don't want to come off as mocking here - it's hard to take these points seriously. The whole point of civilization is to rise above these behaviours and establish a strong foundation for humanity as a whole. The end goal of social progress and the image of how society should be structured cannot be modeled on systems that existed in the past solely because those failure modes are familiar and we're fine with losing people as long as we know how our systems fail them. That evolutionary drive may be millions of years old, but industrial society has been around for a few centuries, and look at what it's done to the rest of the world.
> Primitive animals will take resources from others that they observe to be unable to defend their status.
Yeah, I don't know what you're getting at with this metaphor. If you're talking predatory behaviour, we have plenty of that going around as things are right now. You don't think something like UBI will help more people "defend their status"?
> it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries
I don't think human civilization has ever been close to this massive or complex or dysfunctional in the past, so this sentence seems meaningless, but I'm no historian.
I guess the thinking goes like this: Why start a business, get a higher paying job etc if you're getting ~2k€/mo in UBI and can live off of that? Since more people will decide against starting a business or increasing their income, productive activity decreases.
I see more people starting businesses because they now have less risk, more people not changing jobs just to get a pay hike. The sort of financial aid UBI would bring might even make people more productive on the whole, since people who are earning have spare income for quality of life, and people with financial risk are able to work without being worried half the day about paying rent and bills.
It's a bit of a dunk on people who see their position as employer/supervisor as a source of power because they can impose financial risk as punishment on people, which happens more often than any of us care to think, but isn't that a win? Or are we conceding that modern society is driven more by stick than carrot and we want it that way?
If everyone has 2k/mo then nobody has 2k/mo.
That's like saying "money doesn't exist".
In a sense everybody does have "2k" a month, because we all have the same amount of time to do productive things and exchange with others.
The easiest way to implement this is to have literally everyone pay a flat tax on the non-UBI portion of their income. This then effectively amounts to a progressive income tax on total income. If you do some number crunching, it wouldn't even need to be crazy high to give everyone the equivalent of US minimum wage; comparable to some European countries.
Over time, as more things get automated, you have more people deriving most of their income from UBI, but the remaining people will increasingly be the ones who own the automation and profit from it, so you can keep increasing the tax burden on them as well.
The endpoint is when automation is generating all the wealth in the economy or nearly so, so nobody is working, and UBI simply redistributes the generated wealth from the nominal owners of automation to everyone else. This fiction can be maintained for as long as society entertains silly outdated notions about property rights in a post-scarcity society, but I doubt that would remain the case for long once you have true post-scarcity.
You also have to consider the alternative: if there’s no ubi, are you expecting millions to starve? This is a recipe for civil war, if you have a very large group of people unable to survive you get social unrest. Either you spend the money on ubi or on police/military suppression to battle the unrest.
There's another question to answer:
Who is working?
The robots.
UBI could easily become a poverty trap, enough to keep living, not enough to have a shot towards becoming an earner because you’re locked out of opportunities. I think in practice it is likely to turn out like “basic” in The Expanse, with people hoping to win a lottery to get a shot at having a real job and building a decent life for themselves.
If no UBI is installed there will be a hard crash while everyone figures out what it is that humans can do usefully, and then a new economic model of full employment gets established. If UBI is installed then this will happen more slowly with less pain, but it is possible for society to get stuck in a permanently worse situation.
Ultimately if AI really is about to automate as much as it is promised then what we really need is a model for post-capitalism, for post-scarcity economics, because a model based on scarcity is incapable of adapting to a reality of genuine abundance. So far nobody seems to have any clue of how to do such a thing. UBI as a concept still lives deeply in the Overton window bounded by capitalist scarcity thinking. (Not a call for communism btw, that is a train to nowhere as well because it also assumes scarcity at its root.)
What I fear is that we may get a future like The Diamond Age, where we have the technology to get rid of scarcity and have human flourishing, but we impose legal barriers that keep the rich rich and the poor poor. We saw this happen with digital copyright, where the technology exists for abundance, but we’ve imposed permanent worldwide legal scarcity barriers to protect revenue streams to megacorps.
That's way better than the present situation where they just die, though. It's at least a start.
Isn't it the case that companies are always competing and evolving? Unless we see that there's a ceiling to driverless tech that is immediately obvious.
We "made cars work" about 100 years ago, but they have been innovating on that design since then on comfort, efficiency, safety, etc. I doubt the very first version of self driving will have zero ways to improve (although eventually I suppose you would hit a ceiling).
The robotaxi business model is the total opposite of scaling. At my previous employer we were solving the problem "block by block, city by city", , and I can only assume that you are living in the right city/block where they are tackling.
That just sounds like scaling slowly, rather than not scaling
> I think the most sensible answer would be something like UBI.
Having had the experience of living under communist regime prior to 1989 I have zero trust in the state providing support, while I am totally dependent and have no recourse. Instead I would rather rely on my own two hands like my grandparents did.
I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
Unless your two hands are building murderbots, though, it doesn't matter what you're building if you can't grow or buy food.
I haven't personally seen how UBI could end up working viably, but I also don't see any other system working without much more massive societal changes than anyone is talking about.
Meanwhile, there are many many people that are very invested in maintaining massive differentials between the richest and the poorest that will be working against even the most modest changes.
> I also don't see any other system working without much more massive societal changes than anyone is talking about.
The other system is that the mass of people are coerced to work for tokens that buy them the right to food and to live in a house. i.e. the present system but potentially with more menial and arduous labour.
Hopefully we can think of something else
I'd argue against the entire perspective of evaluating every policy idea along one-dimensional modernist polemics put forwards as "the least worst solution to all of human economy for all time".
Right now the communists in China are beating us at capitalism. I'm starting to find the entire analytical framework of using these ideologies ("communism", "capitalism") to evaluate _anything_ to be highly suspect, and maybe even one of the west's greatest mistakes in the last century.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
I was a teenager back in the 90s. There was much talk then about the productivity boosts from computers, the internet, automation, and how it would enable people to have so much more free time.
Interesting thing is that the productivity gains happened. But the other side of that equation never really materialized.
Who knows, maybe it'll be different this time.
I’m not certain we don’t have free time, but I’m not sure how to test that. Is it possible that we just feel busier nowadays because we spend more time watching TV? Work hours haven’t dropped precipitously, but maybe people are spending more time in the office just screwing around.
It's the same here. Calling what the west has a "free-market capitalist" system is also a lie. At every level there is massive state intervention. Most discoveries come from publicly funded work going on at research universities or from billions pushed into the defense sector that has developed all the technology we use today from computers to the internet to all the technology in your phone. That's no more a free-market system than China is "communist" either.
I think the reality is just that governments use words and have an official ideology, but you have to ignore that and analyze their actions if you want to understand how they behave.
not to mention that most corporations in the US are owned by the public through the stock market and the arrangement of the American pension scheme, and public ownership of the means of production is one of the core tenets of communism. Every country on Earth is socialist and has been socialist for well over a century. Once you consider not just state investment in research, but centralized credit, tax-funded public infrastructure, etc. well yeah, terms such as "capitalism" become used in a totally meaningless way by most people lol.
My thoughts on these ideologies lately have shifted to viewing them as "secular religions". There are many characteristics that line up with that perspective.
Both communist and capitalist purists tend to be enriched for atheists (speaking as an atheist myself). Maybe some of that is people who have fallen out with religion over superstitions and other primitivisms, and are looking to replace that with something else.
Like religions, the movements have their respective post-hoc anointed scriptural prophets: Marx for one and Smith for the other.. along with a host of lesser saints.
Like religions, they are very prescriptive and overarching and proclaim themselves to have a better connection with some greater, deeper underlying truth (in this case about human behaviour and how it organizes).
For analytical purposes there's probably still value in the underlying texts - a lot of Smith and Marx's observations about society and human behaviour are still very salient.
But these ideologies, the outgrowths from those early analytical works, seem utterly devoid of any value whatsoever. What is even the point of calling something capitalist or communist. It's a meaningless label.
These days I eschew that model entirely and try to keep to a more strict analytical understanding on a per-policy basis. Organized around certain principles, but eschewing ideology entirely. It just feels like a mental trap to do otherwise.
You will still need energy and resources.
In your world where jobs become "optional" because a private company has decided to fire half their workforce, and the state also does not provide some kind of support, what do all the "optional" people do?
Murder more CEOs and then start working your way down the org chart? Blow up corporate headquarters, data centers, etc? Lots of ways to be productive.
Do you live in SF (the city, not the Bay Area as a whole) or West LA? I ask because in these areas you can stand on any city street and see several self driving cars go by every few minutes.
It's irrlevant that they've had a few issues. They already work and people love them. It's clear they will eventually replace every uber/lyft driver, probably every taxi driver, they'll likely replace every doordash/grubhub driver with vehicles design to let smaller automated delivery carts go the last few blocks. They may also replace every truck driver. Together that's around 5 million jobs in the USA.
Once they're let on the freeways their usage will expand even faster.
Driverless taxis is IMO the wrong tech to compare to. It’s a high consequence, low acceptance of error, real time task. Where it’s really hard to undo errors.
There is a big category of tasks that isn’t that. But that are economically significant. Those are a lot better fit for AI.
> What makes you think that? Self driving cars [...]
AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens. 40,000 years?
And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is.
> we don't know where the ceiling is.
The ceiling for current AI, while not provably known, can reasonably be upper bounded to human aggregate ability since these methods are limited to patterns in the training data. The big surprise was how many and sophisticated patterns were hiding in the training data (human written text). This current wave of AI progress is fueled by training data and compute in ”equal parts”. Since compute is cheaper, they’ve invested in more compute but failed scaling expectations since training data remained similarly sized.
Reaching super-intelligence through training data is paradoxical, because if it were known it wouldn’t be super-human. The other option is breaking out of the training data enclosure by relying on other methods. That may sound exciting but there’s no major progress I’m aware of that points that direction. It’s a little like being back to square one, before this hype cycle started. The smartest people seem to be focused on transformers, due to getting boatloads of money from companies or academia pushing them because of fomo.
> What makes you think that? Self driving cars have had (...)
I think you're confusing your cherry-picked comparison with reality.
LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and even authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
Software engineering is being affected as well, and it requires far greater know-how, experience, and expertise to meet the hiring bar.
> And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than (...)
Yes, your tech job is also going to be decimated. It's not a matter of having PMs write code. It's an issue of your junior SDE armed with a LLM being quite able to clear your bug backlog in a few days while improving test coverage metrics and refactoring code back from legacy status.
If a junior SDE can suddenly handle the workload that previously you required a couple of medior and senior developers, why would a company keep around 4 or 5 seasoned engineers when an inexperienced one is already able to handle the workload?
That's where the jobs will vanish. Even if demand remains there, it dropped considerably as to not justify retaining so many people in a company's payroll.
And what are you going to do, them? Drive a Uber?
> LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and even authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
I'd love a source to these claims. Many companies are claiming that they are able to layoff folks because of AI, but in fact, AI is just a scapegoat to counteract the reckless overhiring due to free money in the market over the last 5-10 years and investors are demanding to see a real business plan and ROI. "We can eliminate this headcount due to the efficiency of our AI" is just a fancy way to make the stock price go up while cleaning up the useless folks.
People have ideas. There are substantially more ideas than people who can implement ideas. As with most technology, the reasonable expectation is to assume that people are just going to want more done by the now tool powered humans, not less things.
> I'd love a source to these claims.
Have you been living under a rock?
You can start getting up to speed by how Amazon's CEO already laid out the company's plan.
https://www.thecooldown.com/green-business/amazon-generative...
> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)
That is your personal moralist scapegoat, and one that you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.
In the meantime, you fool yourself and pretend that sudden astronomic productivity gains have no impact on demand.
These supposed "productivity gains" are only touted by the ones selling the product, i.e. the ones who stand to benefit from adoption. There is no standard way to measure productivity since it's subjective. It's far more likely that companies will use whatever scapegoat they can to fire people with as little blowback as possible, especially as the other commenter noted, people were getting hired like crazy.
Each one of the roles you listed above is only passable with AI at a superficial glance. For example, anyone who actually reads literature other than self-help and pop culture books from airport kiosks knows that AI is terrible at longer prose. The output is inconsistent because current AI does not understand context, at all. And this is not getting into the service costs, the environmental costs, and the outright intellectual theft in order to make things like illustrations even passable.
> These supposed "productivity gains" are only touted by the ones selling the product (...)
I literally pasted an announcement from the CEO of a major corporation warning they are going to decimate their workforce due to the adoption of AI.
The CEO literally made the following announcement:
> "As we roll out more generative AI and agents, it should change the way our work is done," Jassy wrote. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs."
This is not about selling a product. This is about how they are adopting AI to reduce headcount.
The CEO is marketing to the company’s shareholders. This is marketing. A CEO will say anything to sell the idea of their company to other people. Believe it or not, there is money to be made from increased share prices.
Congratulations for believing the marketing. He has about 2.46 trillion reasons to make this claim. In other news, water is wet and the sky is blue.
I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate I think. Ones that embrace the technolocy and are able to accelerate their work. At that level of effeciency the cost is still way way lower than it is for a larger team.
When it gets to the point that you don't need a senior engineer doing the work, you won't need a junior either.
> I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate I think.
I don't think you understood the point I made.
My point was not about Jr vs Sr, let alone how a Jr is somehow more capable than a Sr.
My point was that these productivity gains aren't a factor of experience of seniority, but they do devalue the importance of seniority to perform specific tasks. Just crack open a LLM, feed in a few prompts, and done. Hell, junior developers no longer need to reach out to seniors to as questions about any topic. Think about that for a second.
Just as an anecdote that might provide some context, this is not what I've observed. My observation is that senior engineers are vastly more effective at knowing how to employ and manage AI than junior engineers. Junior engineers are typically coming to the senior engineers to learn how to approach learning what the AI is good at and not good at because they themselves have trouble making those judgments.
I was working on a side project last night, and Gemini decided to inline the entire Crypto.js library in the file I was generating. And I knew it just needed a hashing function, so I had to tell it to just grab a hashing function and not inline all of Crypto.js. This is exactly the kind of thing that somebody that didn't know software engineering wouldn't be able to say, even as simple as it is. It made me realize I couldn't just hand this tool to my wife or my kids and allow them to create software because they wouldn't know to say that kind of thing to guide the AI towards success.
As someone who lives in LA, I don’t think self-driving cars existed at the time of the Rodney King LA riots and I am not aware of any other riots since.
Let me be the first to welcome you out of your long slumber!
Let me welcome you to the concept of believing one’s own eyes when they contradict Mr Murdoch’s reality distortion field.
I feel like you are trapped in the first assessment of this problem. Yes, we are not there yet, but have you thought about the rate of improvement? Is that rate of improvement reliable? Fast? That's what matters, not where we are today.
You could say that about any time in history. When the steam engine or mechanical loom were invented there were millions of people like you who predicted that mankind will be out of jobs soon and guess what happened? There's still a lot of things to do in this world and there still will be a lot to do (aka "jobs") for a loooong time.
Nothing in the rate of improvement of a steam engine suggest it would be able to drive a car or do the job of an attorney.
To be fair, self-driving cars don't need to be perfect 0 casualty modes of transportation, they just need to be better than human drivers. Since car crashes kill 2 million people each year (and maim another 2 or 3), this is a low bar to clear...
Of course, the actual answer is that rail and cycling infrastructure are much more efficient than cars in any moderately dense region. But that would mean funding boring regular companies focused on providing a product or service for adequate profit, instead of exciting AI web3 high tech unicorn startups.
Everything anyone could say about bad AI driving could be said about bad human drivers. Nevertheless, Waymo has not had a single fatal accident despite many millions of passenger miles and is safer than human drivers.
Everything? How about legal liability for the car killing someone? Are all the self-driving vendors stepping up and accepting full legal liability for the outcomes of their non-deterministic software?
Thousands have died directly due to known defects in manufactured cars. Those companies (Ford, others) still are operating today.
Even if driverless cars killed more people than humans they would see mass adoption eventually. However they are subject to farr higher scrutiny than human drivers and even so make fewer mistakes, avoid accidents more frequently and can't get drunk, tired, angry, or distracted.
There is a fetish for technology that sometimes we are not aware of. On average there might be less accidents, but if specific accidents were preventable and now they happen, people will sue. And who will take the blame? The day the company takes the blame is the day self-driving exists IMO.
A faulty break pad or an engine doesn’t take decisions that might endanger people. Self-driving cars do. They might also get hacked pretty thoroughly.
For the same reason, I’d probably never buy a home robot with more capabilities then a vacuum cleaner.
Current non-self-driving cars on the road can be hacked
https://www.wired.com/story/kia-web-vulnerability-vehicle-ha...
But even if they can theoretically be hacked, so far Waymos are still safer and more reliable than human drivers. The biggest danger someone has riding in one is someone destroying it for vindictive reasons.
In the bluntest possible sense, who cares if we can make roads safer?
Solving liability in traffic collisions is basically a solved problem through the courts, and at least in the UK, liability is assigned in law to the vendor (more accurately, there’s a list of who’s responsible for stuff, I’m not certain if it’s possible to assume legal responsibility without being the vendor).
I think it is important to remember that "decades" here means <20 years. Remember that in 2004 it was considered sufficiently impossible that basically no one had a car that could be reliably controlled by a computer, let alone driven by computer alone:
https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2004)
I also think that most job domains are not actually more nuanced or complex than driving, at least from a raw information perspective. Indeed, I would argue that driving is something like a worst-case scenario when it comes to tasks:
* It requires many different inputs, at high sampling rates, continuously (at the very least, video, sound, and car state)
* It requires loose adherence to laws in the sense that there are many scenarios where the safest and most "human" thing to do is technically illegal.
* It requires understanding of driving culture to avoid making decisions that confuse/disorient/anger other drivers, and anticipating other drivers' intents (although this can be somewhat faked with sufficiently fast reaction times)
* It must function in a wide range of environments: there is no "standard" environment
If we compare driving to other widespread-but-low-wage jobs (e.g. food prep, receptionists, cleaners) there are generally far more relaxed requirements:
* Rules may be unbreakable as opposed to situational, e.g. the cook time for burgers is always the same.
* Input requirements may be far lower. e.g. an AI receptionist could likely function with audio and a barcode scanner.
* Cultural cues/expectations drive fewer behaviors. e.g. an AI janitor just needs to achieve a defined level of cleanliness, not gauge people's intent in real-time.
* Operating environments are more standardized. All these jobs operate indoors with decent lighting.
I’m pretty sure you could generate a similar list for any human job.
It’s strange to me watching the collective meltdown over AI/jobs when AI doesn’t do jobs, it does tasks.
> They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots
All of this is very common for human driven cars too.
> A human driver is still far more adaptive and requires a lot less training than AI
I get what you are saying, but humans need 16 years of training to begin driving. I wouldn’t call that not a lot.
I trained no driving my first 16 years of life, seems like fuckkng hell your upbringing training nothing but driving the first 16 years. Your parents should be locked up.
And the problem for Capitalists and other anti-humanists is that this doesn’t scale. Their hope with AI, I think, is that once they train one AI for a task, it can be trivially replicated, which scales much better than humans.
Self-driving cars are a political problem, not a technical problem. A functioning government would put everything from automation-friendly signaling standards to battery-swapping facilities into place.
We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.
Self-driving car companies dont want a unfiied signalling platform or other "open for all" infrastructure updates. They want to own self-driving, to lock you into a subscription on their platform.
Literally the only open source self driving platform, from trillion to billion to million dollar companies is comma.ai, founded by Geohot. Thats it. Its actually very good, and I bet they would welcome these upgrades, but that would be a consortium of one underdog pushing for them.
Ie. a political problem as the grandparent said.
Corporations generally follow a narrow somewhat predictable pattern towards some local maxima of their own value extraction. Since world is not zero sum, it produces value for others too.
Where politics (should) enter the picture is where we somehow can see a more global maxima (for all citizens) and try to drive towards it through some political, hopefully democratic means. (Laws, standards, education, investment, infra etc)
Yeah, that must be it. It's a conspiracy.
This is all happening right out in the open.
Why would politicians want to:
- destroy voting population's jobs
- put power in the hand of 1-2 tech companies
- clog streets with more cars rather than build trams, trains, maglevs, you name it
Because the primary goal of the vast majority of politicians is to collect life-changing, generational wealth by any means necessary.
Snarky but serious question: How do we know that this wave will disrupt labor at all? Every time I dig into a story of X employees replaced by "AI", it's always in a company with shrinking revenues. Furthermore, all of the high-value use cases involve very intense supervision of the models.
There's been a dream of unsupervised models going hog wild on codebases for the last three years. Yet even the latest and greatest Claude models can't be trusted to write a new REST endpoint exposing 5 CRUD methods without fucking something up. No, it requires not only human supervision, but it also requires human expertise to validate and correct.
I dunno. I feel like this language grossly exaggerates the capability of LLMs to paint a picture of them reliably fulfilling roles end-to-end instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.
> instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.
This alone is enough to completely reorganise the labour market, as it describe an enormous number of roles.
How many people could be replaced by a proper CMS or a Excel sheet right now already? Probably dozens of millions, and yet they are at their desks working away.
It's easy to sit in a café and ponder about how all jobs will be gone soon but in practice people aren't as easily replacable.
For many businesses the situation is that technology has dramatically underperformed in doing the most basic tasks. Millions of people are working around things like defective ERP systems. A modest improvement in productivity in building basic apps could push us past a threshold. It makes it possible for millions more people to construct crazy excel formulas. It makes it possible to add a UI to a python script where before there was only a command line. And one piece of magic that works teliably can change an entire process. It lets you make a giant leap rather than an incremental change.
If we could make line of business crud apps work reliably, have usable document/email search, and have functional ERP that would dissolve millions of jobs.
> If we could make line of business crud apps work reliably, have usable document/email search, and have functional ERP that would dissolve millions of jobs.
Why would that be your goal? I’d prefer millions of people have gainful employment instead of some shit tech company having more money.
I can tell that those whose jobs depended on providing image assets or translations for CMS, are no longer relevant for their employers.
A lot of jobs really only exist to increase headcount for some mid/high level manager's fiefdom. LLMs are incapable of replacing those roles as the primary value of those roles is to count towards the number of employees in their sector of the organization.
Unless AI spend overtakes headcount as the vanity metric du hour, which it already has.
I promise you that your understanding of those roles is wrong.
Carpenters, landscapers, roofers, plumbers, electricians, elderly care, nurses, cooks, servers, bakers, musicians, actors, artists...
Those jobs are probably still a couple decades plus off from displacement. some possibly never, And we will need them in higher numbers.. and perhaps it's ironic because these are some of the oldest professions.
Everything we do is in service of paying for our housing, transportation, eating food, healthcare and some fun money.
Most goes to housing, healthcare, and transportation.
Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.
But what people care about, what people have always cared about, remains in professions that are as old as time and, I don't see them fully replaceable by AI just yet - enhanced, yes, but not replaced.
Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
Or perhaps in the future everyone will work in finance. Everyone's a corporation.
Ramble ramble ramble
> Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.
Looking at the advancements in low cost flexible robotics I'm not sure I share that sentiment. Plus the LLM craze is fueling generalist advancement in robotics as well. I'd say we'll see physical labor displacement within a decade tops.
Kinematics is deceptively hard and at least evolutionary took a lot longer to develop than language. Low wage physical labor seems easy only because humans are naturally very good at it, and this took millions of years to develop.
The number of edge cases when you are dealing with physical world is several order of magnitudes higher than when dealing with text only and the spacial reasoning capabilities of the current crop of MLLMs are not nearly as good at it as required. And this doesn't even take in to account that now you are dealing with hardware and hardware is expensive. Expensive enough, that even on the manufacturing lines (a more predictable environment than let's say landscaping) automation sometimes doesn't make economic sense.
Im reminded of something I read years ago that said something like jobs are now above or below the API. I think now its jobs will be above or below the AI.
Well when i get unemployable i will start upskilling to an electrician. And so will hundreds of thousands like me.
That will do very well to salaries I think and everyone will be better of.
Those jobs don’t pay particularly well today, and many have poor working conditions that strain the body.
Imagine what they’ll be like with an influx of additional laborers.
I would be cautious to avoid any narrative anchoring on “old versus new” professions. I would seek out other ways of thinking about it.
For example, I predict humans will maintain competitive advantage in areas where the human body excels due to its shape, capabilities, or energy efficiency.
What this delusion seems to turn a blind eye to is that a good chunk of the population is already in those roles; what happens when the supply of those roles far exceeds the demand, in a relatively short time? Carpenters suddenly abundant, carpenter wages drop, carpenters struggling to live, carpenters forced to tighten spending, carpenters decide children aren't affordable.. now extrapolate that across all of the impacted roles and industries. No doubt someone is already typing "carpenters can retrain too!" OK, so they're back to entry level wages (if anything) for 5+ years? Same story. And retrain to what?
At some point an equilibrium will be reached but there is no guarantee it will be a healthy situation or a smooth ride. This optimism about AI and the rosy world that is just around the corner is incredibly naive.
It's naive but also ignores that automation is simply replacing human labor by capital. Capital captures more of the value, and workers get less overall. Unless we end up in some mild socialist utopia where basic needs are provided and corps are all coops, but that's not the trend.
There’s no guarantee of an equilibrium!
I just have to see how you get let’s say 100k copywriters trained to be carpenters.
You also force them to move to places where there is less carpenters?
That healthcare jobs will be safe is nice on the surface but also means that while other jobs become more scarce cost of healthcare will continue to go up.
In your example i think it's a great deal more likely that the Uber driver is paid a tiny stipend to supervise a squad of gardening androids owned at substantial expense by Amazon Yard.
Why would anyone be on the field? Why not just have a few drones flying there monitoring whole operation remotely. And have one person monitor too many sites at same time likely from cheapest possible region.
Far from an expert on this topic, but what differentiates AI from other non physical efficiency tools? (I'm actually asking not contesting).
Won't companies always want to compete with one another, so simply using AI won't be enough. We will always want better and better software, more features, etc. so that race will never end until we get an AI fully capable of managing all parts (100%) of the development process (which we don't seem to be close to yet).
From Excel to Autocad there's been a lot of tools that were expected to decrease the amount of work ended up actually increasing it due to having new capabilities and the constant demand for innovation. I suppose the difference would be if we think AI will continue to get really good, or if it'll become SO good that it is plug and play and completely replaces people.
> what differentiates AI from other non physical efficiency tools?
At some point: (1) general intelligence; i.e. adaptivity; (2) self replication; (3) self improvement.
We don't have any more idea how to get to 1, 2, or 3, than we did 50 years ago. LLMs are cool, but they seem unlikely to do any of those things.
I encourage everyone to not claim “X seems unlikely” when it comes to high impact risks. Such a thinking pattern often leads to pruning one’s decision tree way too soon. To do well, we need to plan over an uncertain future that has many weird and unfamiliar scenarios.
We already fail to plan for a lot of high-impact things that are exceedingly likely. Maybe we should tackle those first.
I am so tired of people acting like planning for an uncertain world is a zero sum game, decided by one central actor in a single pipeline execution model. I’ll unpack this below.
The argument above (or some version of it) gets repeated over and over, but it is deeply flawed for various reasons.
The argument implies that “we” is a single agent that must do some set of things before other things. In the real world, different collections of people can work on different projects simultaneously in various orderings.
This is very different than optimizing an instruction pipeline for a single core microprocessor. In the real world, different kinds of tasks operate on very different timescales.
As an example, think about how change happens in society. Should we only talk about one problem at a time? Of course not. Why? The pipeline to solving problems is long and uncertain so you have to parallelize. Raising awareness of an issue can be relatively slow. Do you know what is even slower? Trying to reframe an issue in a way that gets into people’s brains and language patterns. Once a conceptual model exists and people pay attention, then building a movement among “early adopters” has a fighting chance. If that goes well, political influence might follow.
I was more hinting at that if we fail to plan for the obvious stuff, what makes you think that we’ll be better at planning for the more obscure possibilities. The former should be much easier, but since we fail at it, we should first concentrate on getting better at that.
Let’s get specific.
If we’re talking about DARPA’s research agenda or the US military’s priorities, I would say they are quite capable at planning for speculative scenarios and long-term effects - for various reasons, including decision making structure and funding.
If we’re talking about shifting people’s mindsets about AI risks and building a movement, the time is now. Luckily we’ve got foundations to build on. We don’t need to practice something else first. We have examples of trying to prime the public to pay attention to other long-term risks, such as global warming, pandemic readiness, and nuclear proliferation. Now we should to add long-term AI risk to the menu.
And I would not say that I’m anything close to “optimistic” in the probabilistic sense about building the coalition we need, but we must try anyway. And motivation can be found without naïve optimism. A sense of acting with purpose can be a useful state of mind that is not coupled to one’s guesses about most likely outcomes.
It's hard.
Take global warming as an example: this is a real thing that's happening. We have measurements of CO2 concentrations and global temperatures. Most people accept that this is a real thing. And still getting anybody to do anything about it is nearly impossible.
Now you have a hypothetical risk of something that may happen sometime in the distant future, but may not. I don't see how you would be able to get anybody to care about that.
> And still getting anybody to do anything about it is nearly impossible.
Why exaggerate like this? Significant actions have been taken.
> I don't see how you would be able to get anybody to care about that.
Why exaggerate like this? Many people care.
Yeah I agree, it's not about where it's at now, but whether where we are now leads to something with general intelligence and self improvement ability. I don't quite see that happening with the curve it's on, but again what the heck do I know.
What do you mean about the curve not leading to general intelligence? Even if transformer architectures by themselves don’t get there, there are multifarious other techniques, including hybrids.
As long as (1) there are incentives for controlling ever increasing intelligence; (2) the laws of physics don’t block us; and (3) enough people/orgs have the motivation and means, some people/orgs are going to press forward. This just becomes a matter of time and probability. In general, I do not bet against human ingenuity, but I often bet against human wisdom.
In my view, along with many others, it would be smarter for the whole world to slow down AI capabilities advancement until we could have very high certainty that doing so is worth the risk.
every software company i've ever worked with has an endless backlog of features it wants/needs to implement. Maybe AI just lets them move through these feature more quickly?
I mean most startups fail. And in software startups, the blame for that is usually at least shared by "software wasn't good enough". So that $20million seed investment is still going to go into "software development" - ie programmer salaries. they will be using the higher level language of ai much of the time, and be 2-5 times more efficient - but will it be enough? No. Most will still fail.
Companies don’t always compete on capability or quality. Sometimes they compete on efficiency. Or sometimes they carve up the market in different ways.
Sometimes, but with technology related companies I rarely see that. I've really only seen it in industries that are very straightforward, like producing building materials or something. Do you have any examples?
Utilities. Low cost retail. Fast food.
Amazon. Walmart. Efficiency is arguably their key competitive advantage.
This matters regarding AI systems because a lot of customers may not want to pay extra for the best models! For a lot of companies, serving a good enough model efficiently is a competitive advantage.
> And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.
I think it did not work like that.
Automatic looms displaced large numbers of weavers, skilled professionals, which did not find immediately find jobs tending dozens of mechanical looms. (Mr Ludd was one of these displaced professionals.)
Various agricultural machines and chemical products displaced colossal numbers of country people which had to go to cities looking for industrial jobs; US agriculture used to employ 50% of workforce in 1880 and only 10% in 1930.
The advent of internet displaced many in the media industry, from high-caliber journalists to those who worked in classified ads newspapers.
All these disruptions created temporary crises, because there was no industry that was ready to immediately employ these people.
temporary - thats the key. people were able to move to the cities and get factory and office jobs and over time were much better off. I can complain about the socially alienated condition I'm in as an office worker, but I would NEVER want to do farm work - cold/sun, aching back, zero benefits, low pay, risk of crop failure, a whole other kind of isolation etc etc.
> we wound up with more and better jobs.
You will have to back that statement up because this is not at all obvious to me.
If I look at the top US employers in say 1970 vs 2020, the companies that dominate 1970 were noted for having hard blue collar labor jobs but paid enough to keep a single earner family significantly above minimum wage and the poverty line. The companies that dominate in 2020 are noted for being some of the shittiest employers having some of the lowest pay fairly close to minimum wage and absolutely worst working conditions.
Sure, you tend not to get horribly maimed in 2020 vs 1970. That's about the only improvement.
This was already a problem back then, Nixon was about to introduce UBI in the late 60s and then the admin decided that having people work pointless jobs keeps them better occupied, and the rest of the world followed suit.
There will be new jobs and they will be completely meaningless busywork, people performing nothing of substance while being compensated for it. It's our way of doing UBI and we've been doing it for 50 years already.
Obligatory https://wtfhappenedin1971.com
That sounds like a job for a very small number of people. Where will everyone else work?
More companies. See my post here:
https://news.ycombinator.com/reply?id=44919671&goto=item%3Fi...
This is the optimistic take and definitely possible, but not guaranteed or even likely. Markets tend to consolidate into monopolies (or close to it) over time. Unless we are creating new markets at a rapid rate, there isn’t necessarily room for those other 900 engineers to contribute.
8 billion people. only, what, 1 billion are in the middle class? Sounds like we need to be creating new markets at a rapid rate to me!
Wherever the AI tells them to
Why do they have to work?
Because the people with the money aren’t going to just give it to everyone else. We already see the richest people hoard their money and still be unsatisfied with how much they have. We already see productivity gains not transfer any benefit to the majority of people.
There is an old and reliable solution to this problem, the gibbet.
Yes. However people are unwilling to take this approach unless things get really really bad. Even then, the powerful tend to have such strong control that people are afraid to act out of fear of reprisal.
We’ve also been gaslit into believing that it’s not a good approach, that peaceful protests are more civilised (even though they rarely cause anything meaningful to actually change).
Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
More likely it will look like the current welfare schemes of many countries, now add mass boredom leading to unrest.
Sam Altman has expressed a preference for paying people in vouchers for using his chatbots to kill time: https://basicincomecanada.org/openais-sam-altman-has-a-new-i...
> Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
Not necessarily. Such forces could be outvoted or out maneuvered.
> More likely it will look like the current welfare schemes of many countries..,
Maybe, maybe not. It might take the form of UBI or some other form that we haven’t seen in practice.
> now add mass boredom leading to unrest.
So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated.
Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well.
>So many assumptions.
Then a few words later ...
>Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well
Oh, the irony
Reread the sentence and you’ll notice the word “if”
> Not necessarily. Such forces could be outvoted or out maneuvered
Could.
> So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated
I’m assuming that previous outcomes predict future failures, because the forces driving these changes are of our societies, and not a hypothetical, assumed new society.
In this world, ownership, actual, legal ownership, is a far stronger and fundamental right than any social right to your well-being.
You would have to change that, which is a utopian project whose success has been assumed in the past, that a dialectical contradiction of the forces of social classes would lead to the replacement of this framework.
It is indeed very complicated, but you know what’s even more complicated? Utopian projects.
Sorry but I see it as far more likely that the plebes will be told to kick rocks and ask the bots to generate art for them, when asking for money for art supplies on top of their cup noodle money.
> mass boredom leading to unrest
we must keep our peasants busy or they unrest due to boredom!
Well in Sam’s ideal world you’ll be using bots to keep yourself distracted.
You would like to learn to play the guitar? Sorry, that kind of money didn’t pass in the budget bill, but how about you ask the bot to create music for you?
Elites also get something way better than keeping people busy for distraction… they get mass, targeted manipulation and surveillance to make sure you act working the borders of safety.
You know what job will surely survive? Cops. There’ll always be the nightstick to keep people in line.
I’m not sure if that’s meant to be reassuring or not.
It’s hard for me to imagine that AI won’t be as good or better than me at most things I do. It’s quite a sobering feeling.
More people need to feel this. Too many people deny even the possibility, not based out of logic, but rather out of ignorance or subconscious factors such as fear or irrelevance.
One way to think about AI and jobs is Uber/Google Maps. You used to have to know a lot about a city to be a taxi driver; then suddenly with Google Maps you don't. So in effect, technology lowered the requirements or training needed to become a taxi driver. More people can do it, not less (although incumbents may be unhappy about this).
AI is a lot like this. In coding for instance, you still need to have some sense of good systems design, etc. and know what you want to build in concrete terms, but you don't need to learn the specific syntax of a given language in detail.
Yet if you don't know anything about IT, don't know what you want to build or what you could need, or what's possible, then it's unlikely AI can help you.
Even with Google Maps, we still need human drivers because current AI systems aren’t so great at driving and/or are too expensive to be widely adopted at this point. Once AI figures out driving too, what do we need the drivers for?
And I think that’s the point he was making, it’s hard to imagine any task where humans are still required when AI can do it better and cheaper. So I don’t think the Uber scenario is realistic.
I think the only value humans can provide in that future is “the human factor”: knowing that something is done by an actual human and not a machine can be valuable.
People want to watch humans playing chess, even though AI is better at it. They want to consume art made by humans. They want a human therapist or doctor, even if they heavily rely on AI for the technical stuff. We want the perspective of other humans even if they aren’t as smart as AI. We want someone that “gets” us, that experiences life the same way we do.
In the future, more jobs might revolve around that, and in industries where previously we didn’t even consider it. I think work is going to be mostly about engaging with each other (even more meetings!)
The problem is, in a world that is that increasingly remote, how do you actually know it’s a human on the other end? I think this is something we’ll need to solve, and it’s going to be hard with AI that’s able to imitate humans perfectly.
The spinning jenny put seamstresses out of work. But the history of automation is the history of exponentially expanding the workforce and population.
8 billion people wake up every morning determined to spend the whole day working to improve their lives. we're gonna be ok.
Don't worry about the political leaders, if a sizeable amount of people lose their jobs they will surely ask GPT-10 how to build a guillotine.
The french revolution did not go well for the average french person. Not sure guillotines are the solution we need.
Here in the US, we have been getting a visceral lesson about human willingness to sacrifice your own interests so long as you’re sticking it to The Enemy.
It doesn’t matter if the revolution is bad for the commoners — they will support it anyway if the aristocracy is hateful enough.
Yeah not guillotines but guns, bombs, and other tools should suffice. Body guards or compounds in Hawaii can help stop a small group but body guards will walk away from the job when thousands of well armed members of an angry mob show up at their employers door.
How did it not go well for the avg person?
The status quo does not go well for the avg person.
Most of the people who died in The Terror were commoners who had merely not been sympathetic enough to the revolution. And then that sloppiness lead to reactionary violence and there was a lot of back and forth until Napoleon took power and was pretty much a king in all but heritage.
Hopefully we can be a bit more precise this time around.
You should read French history more closely, they went through hell and changed governments at least 5 or 6 times in the 1800s.
The status quo of the hegemonic structure of society is hell but because it has the semblance of authoritarian law and order people can look at that and say “that’s not hell”
> How did it not go well for the avg person?
You might want to look at the etymology of the word “terrorism” (despite the most popular current use, it wasn't coined for non-state violence) and what class suffered the most in terms of both judicial and non-judicial violent deaths during the revolutionary period.
Yeah, I am aware of how the word revolved evolved. I don’t find terrorism necessarily a bad thing. Though I’m not as revolutionary as before.
The French revolution was instigated by a group of shady people, far more dangerous and vile than the aristocracy they were fighting.
Improve how? And when? Give us the map. Making a prediction straight into sci fi territory and then become worried about that future is hella lame.
We have to also choose to build technology that empowers people. Empowering technologies don't just pop into existence, they're created by people who care about empowering people.
I too believe that a mostly autonomous work world would be something we could handle well assuming the leadership was composed of smart folks picking the right decisions, without also being too much exposed to external powers opposing an impossible to win force (large companies and interests). The problem is if we mix what could happen (not clear when, right now) with the current weak leadership across the world.
> what do displaced humans transition to?
go to any war-torn country or collapsed empire (Soviet). I have seen/grow-up myself in both — you would get desperation, people giving-up, alcohol (famous "X"-cross of birth rate drop and deaths rising), drugs, crime, corruption/warlord-ing, rural communities hit first and totally vanish, then small-tier cities vanish, then mid-tier, only the largest hubs remain. loss of science, culture, and education. people are just gone. only ruins of whatever latest shelters they had remain, not even their prime-time architecture. you can drive hundreds/thousands of kms across these ruin of what once was flurishing culture. years ago you would find one old person still living there. these days not a single human left. this is what is coming.
that was because the economy was controlled/corrupt and not allowed to flourish (and create job-creating technologies like the internet and AI).
I'm puzzled how AI is supposed to be a job creating technology. It is supposed to either wholesale replace jobs, or make workers so efficient that fewer of them are required. This is supposed to make digital and intellectually produced goods cheaper (although, given reproduction is free, the goods themselves are already pretty cheap).
To me it looks like we'll see well paying jobs decrease, digital services get cheaper, food+housing stay the same, and presumably as displaced workers do what they need to do physical service jobs will get more crowded and pay worse, so physical services will get cheaper. It is unclear whether there will be a net benefit to society.
Where do the jobs come from?
in the long term: simply that from the spinning jenny on, the history of automation is the history of exponentially expanding the workforce and population. when products are cheaper, demand increases, new populations enter the market and create demand for a higher class of goods and services - which sustains/grows employment.
in the short term: there is a hiring boom within the AI and related industries.
I believe that historically we have solved this problem by creating gigantic armies and then killing off millions of people that couldn't really adapt to the new order with a world war.
It’s probably the only technology that is designed to replace humans as its primary goal. It’s the VC dream.
I do wonder if the amount they're spending on it is going to be cost effective versus letting humans continue doing the work.
It is for some shareholder as long as the hype and stocks go up
>But as its capabilities improve, what do displaced humans transition to?
IF there is intellectual/office work that remains complex enough to not be tackled by AI, we compete for those. Manual labor takes the rest.
Perhaps that’s the shift we’ll see: nowadays the guy piling up bricks makes a tenth of the architects’ salary, that relation might invert.
And the indirect effects of a society that values intellectual work less are really scary if you start to explore the chain of cause and effect.
Have you noticed that there are a lot of companies now that are trying to build advanced AI-driven robots? This is not a coincidence.
The relation won’t invert because it’s very easy and quick to train guy pilling up bricks while training architect is slow and hard. If low skilled jobs will pay much better than high skilled then people will just change their job.
That’s only true as long as the technical difficulties aren’t covered by tech.
Think of a world where software engineering itself is handled relatively well by the llm and the job of the engineer becomes just collecting business requirements and checking they’re correctly addressed.
In that world the limit for scarcity might be less in the difficulty of training and more in the willingness to bend your back in the sun for hours vs comfortably writing prompts in an air conditioned room.
Right now there are enough people willing to bend their back in the sun for hours that their salaries are much lower than these of engineers. Do you think that for some reason supply of these people will drop with higher wages and much lower employment opportunities in office jobs? I highly doubt it.
My argument is not that those people’s salaries will go up until overtaking the engineers’.
It’s the opposite, that the value of office/intellectual work will tank, while manual work remains stable. Lower barrier of entry for intelectual work if a position even needs to be covered, work conditions much more comfortable.
As someone else said, until a company or individual is willing to risk their reputation on the accuracy of AI (beyond basic summarising jobs, etc), the intelligent monkeys are here for a good while longer. I've already been once bitten, twice shy.
The conclusion, sadly, is that CEO's will pause hiring and squeeze more productivity out of existing hires. This will impact junior roles the most.
Haven't you seen companies developing autonomous killing drones?
They won't take my job - unless someone has put a hit out on me.
I wanted to say that people aren't afraid of loosing their reputation even when it's about the decision of whom they kill.
Fair point.
The industrial revolution took something like 98% of jobs and farms and just disappeared them.
Could you a priori in 1800 have predicted the existence of graphics artists? Street sweepers? People who drive school buses? The whole infrastructure around trains? Sewage maintainers? Librarians? Movie stuntmen? Sound Engineers? Truck drivers?
The opening of new jobs has been causally unlinked from the closing of old jobs - especially when you take the quantity into consideration. There was a well of stuff people wanted to do, that they couldn't do because they were busy doing the boring stuff. But now that well of good new jobs is running dry, which is why we see people picking up 3 really shit jobs to make ends meet. There will be a point where new jobs do not open at all, and we should probably plan for that.
I think UBI can only buy some time but won't solve the problem. We need fast improvement with AI robots that can be used for automation on mass scale: construction, farming maybe even cooking and food processing.
Right now AI is mostly focused on automating top levels of maslov pyramid hierarchy of needs rather than bottom physiological needs. Once things like shelter (housing), food, utilities (electricity, water, internet) are dirty cheap UBI is less needed.
> displace humans ...
AI can displace human work but not human accountability. It has no skin and faces no consequences.
> can be trained to the new job opportunities more easily ...
Are we talking about AI that always needs trainers to fix their prompts and training sets? How are we going to train AI when we lose those skills and get rid of humans?
> what do displaced humans transition to?
Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
> ask that question to all the companies laying off junior folks in favor of LLMs right now. They are gleefully sawing off the branch they’re sitting on.
> Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
At which point did AI become a free commodity in your scenario?
> AI can displace human work but not human accountability. It has no skin and faces no consequences.
We’ve gota way to go to get there in many instances. So far I’ve seen people blame AI companies for model output, individuals for not knowing the product sold to them as a magic answer-giving machine was wrong, and other authorities in those situations (e.g. managers, parents, school administrators and teachers,) for letting ai be used at all. From my vantage point, It people seem to be using it as a tool to insulate themselves from accountability.
> AI can displace human work but not human accountability. It has no skin and faces no consequences.
Let’s assume that we have amazing aj and robotics, better than humans at everything - if you could choose between robosurgery (completely automatic) with 1% mortality and for 5000 usd vs surgery performed by human with 10% mortality and 50000 usd price tag, would you really choose human just because you can sue him? I wouldn’t. I don’t thing anyone thinking rationally would.
Is the ability to burn someone at a stake for making a mistake truly vital to you?
If not, then what's the advantage of "having skin" is? Sure, you can't flog an AI. But AI doesn't need to be threatened with flogging to perform at the peak of its abilities. A well designed AI performs at the peak of its abilities always - and if that isn't enough, you train it until it is.
Those displaced workers need an income first, job second. What they were producing is still getting done. This means we have gained freedom to choose what else is worth doing. The immediate problem is the lack of income. There is no lack of useful work to do, it's just that most of it doesn't pay well.
Yeah but those opening of new kind of jobs has not always been instantly. It can take decades and for instance was one of the reasons for the French Revolution. Internet has already created a huge amount of monopolies and wealth concentration. AI seems likely to do this further.
For the moment, perhaps it could be jobs that LLMs can’t be trained on. New jobs, niche jobs, secret or undocumented jobs…
It’s a common point now that LLMs don’t seem to be able to apply knowledge about one thing to how a different, unfamiliar thing works. Maybe that will wind up being our edge, for a time.
> what do displaced humans transition to?
we assume there must be something to transition to. very well, there can be nothing.
we assume people will transition. very well, they may not transition at all and "dissappear" en masse. (same effect as as a war or an empire collapse)
We also may not need to worry about it for a long time. I’m more and more falling on this side. LLM’s are hitting diminishing returns so until there’s a new innovation (can’t see any on the horizon yet) I’m not concerned for my career.
During the Industrial Revolution, many who made a living by the work of their hands lost their jobs, because there were machines and factories to do their work. Then new jobs were created in factories, and then many of those jobs were replaced by robots.
Somehow many idiotic white collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we’ve been inventing problems to solve and wasting a tremendous amount of resources.
Some wrote malware or hacked something in attempt to keep this in check, but harming and destroying just means more resources used to repair and rebuild and real people can be hurt.
At some point in coming years many white collar workers will lose their jobs again, and there will be too many unemployed because not enough blue collar jobs will be available.
There won’t be some big wealth redistribution until AI convinces people to do that.
The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.
I don't know maybe they can grow trees and build houses.
The robots? I see this happening soon, especially for home construction.
How exactly?
In the U.S. houses are built out of wood. What robot will do that kind of work?
It makes me wonder if we will be much more reserved with our thoughts and teachings in the future given how quickly they will be used against us.
Here is another perspective:
> In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs
That may well be why these technologies were ultimately successful. Think of millions and millions being cast out.
They won't just go away. And they will probably not go down without a fight. "Don't buy AI-made, brother!", "Burn those effing machines!" It's far from unheard of in history.
Also: who will buy if no one has money anymore? What will the state do, when thus tax income goes down, while social welfare and policing costs go up?
There are other scenarios, too: everybody gets most stuff for free, because machines and AI's do most of the work. Working communism for the lower classes, while the super rich stay super rich (like in real existing socialism). I don't think it is a good scenario either. In the long run it will make humanity lazy and dumb.
In any case I think what might happen is not easy to guess, so many variables and nth-order effects. When large systems must seek a new equilibrium all bets are usually off.
There’s a simple flaw in this reasoning:
Just because X can be replaced by Y today doesn’t imply that it can do so in a Future where we are aware of Y, and factor it into the background assumptions about the task.
In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.
You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.
Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.
Just to further elaborate on this with another example: the writing industry. (Technical, professional, marketing, etc. writing - not books.)
The default logic is that AI will just replace all writing tasks, and writers will go extinct.
What actually seems to be happening, however, is this:
- obviously written-by-AI copywriting is perceived very negatively by the market
- companies want writers that understand how to use AI tools to enhance productivity, but understand how to modify copy so that it doesn’t read as AI-written
- the meta-skill of knowing what to write in the first place becomes more valuable, because the AI is only going to give you a boilerplate plan at best
And so the only jobs that seem to have been replaced by AI directly, as of now, are the ones writing basically forgettable content, report-style tracking content, and other low level things. Not great for the jobs lost, but also not a death sentence for the entire profession of writing.
As someone who used to be in the writing industry (a whole range of jobs), this take strikes me as a bit starry-eyed. Throw-away snippets, good-enough marketing, generic correspondence, hastily compiled news items, flairful filler text in books etc., all this used to be a huge chunk of the work, in so many places. The average customer had only a limited ability to judge the quality of texts, to put it mildly. Translators and proofreaders already had to prioritize mass over flawless output, back when Google Translate was hilariously bad and spell checkers very limited. Nowadays, even the translation of legal texts in the EU parliament is done by a fraction of the former workforce. Very few of the writers and none of the proofreaders I knew are still in the industry.
Addressing the wider point, yes, there is still a market for great artists and creators, but it's nowhere near large enough to accommodate the many, many people who used to make a modest living, doing these small, okay-ish things, occasionally injecting a bit of love into them, as much as they could under time constraints.
What I understand is AI leads certain markets to be smaller in terms of economics. Wayy smaller actually. Only few industry will keep growing because of this.
Specifically markets where “good enough” quality is acceptable.
Translation is an good example. Still need humans for perfect quality, but most use cases arguably don’t require perfect.
And for the remaining translators their job has now morphed into quality control.
I think this is a key point, and one that we've seen in a number of other markets (eg. computer programming, art, question-answering, UX design, trip planning, resume writing, job postings, etc.). AI eats the low end, the portion that is one step above bullshit, but it turns out that in a lot of industries the customer just wants the job done and doesn't care or can't tell how well it is done. It's related to Terence Tao's point about AI being more useful as a "red team" member [1].
This has a bunch of implications that are positive and also a bunch that are troubling. On one hand, it's likely going to create a burst of economic activity as the cost of these marginal activities goes way down. Many things that aren't feasible now because you can't afford to pay a copywriter or an artist or a programmer are suddenly going to become feasible because you can pay ChatGPT or Claude or Gemini at a fraction of the cost. It's a huge boon for startups and small businesses: instead of needing to raise capital and hire a team to build your MVP, just build it yourself with the help of AI. It's also a boon for DIYers and people who want to customize their life: already I've used Claude Code to build out a custom computer program for a couple household organization tasks that I would otherwise need to get an off-the-shelf program that doesn't really do what I want for, because the time cost of programming was previously too high.
But this sort of low-value junior work has historically been what people use to develop skills and break into the industry. And juniors become seniors, and typically you need senior-level skills to be able to know what to ask the AI and prompt it on the specifics of how to do a task best. Are we creating a world that's just thoroughly mediocre, filled only with the content that a junior-level AI can generate? What happens to economic activity when people realize they're getting shitty AI-generated slop for their money and the entrepreneur who sold it to them is pocketing most of the profits? At least with shitty human-generated bullshit, there's a way to call the professional on it (or at least the parts that you recognize as objectionable) and have them do it again to a higher standard. If the business is structured on AI and nobody knows how to prompt it to do better, you're just stuck, and the shitty bullshit world is the one you live in.
[1] https://news.ycombinator.com/item?id=44711306
The assumption here is that LLMs will never pass the Turing test for copywriting, i.e. AI writing will always be distinguishable from human writing. Given that models that produce intelligible writing didn't exist a few years ago, that's a very bold assumption.
No, I’m sure they will at some point, but I don’t think that eliminates the actual usefulness of a talented writer. It just makes unique styles more valuable, raises the baseline acceptable copy to something better (in the way that Bootstrap increased website design quality), and shifts the role of writer to more of an editor.
Someone still has to choose what to prompt and I don’t think a boilerplate “make me a marketing plan then write pages for it” will be enough to stand out. And I’d bet that the cyborg writers using AI will outcompete the purely AI ones.
(I also was just using it as a point to show how being identified as AI-made is already starting to have a negative connotation. Maybe the future is one where everything is an AI but no one admits it.)
Why couldn't an AI do all of that?
> And I’d bet that the cyborg writers using AI will outcompete the purely AI ones.
In the early days of chess engines there were similar hopes for cyborg chess, whereby a human and engine would team up to be better than an engine alone. What actually happened was that the engines quickly got so good that the expected value of human intervention was negative - the engine crunching so much information than the human ever could.
Marketing is also a kind of game. Will humans always be better at it? We have a poor track record so far.
Chess is objective, stories and style are subjective. Humans crave novelty, fresh voices, connection and layers of meaning. It's possible that the connection can be forged and it can get smart enough to bake layers of meaning in there, but AI will never be good at bringing novelty or a fresh voice just by its very nature.
LLMs are frozen in time and do not have experiences so there's nothing to relate to.
I'd pay extra for writing with some kind of "no AI used" certification, especially for art or information
no matter what you ask AI to do, it’s going to give you an “average“ answer. Even if you tell it to use a very distinct specific voice and write in a very specific tone, it’s going to give you the “average“ specific voice and tone you’ve asked for. AI is the antithesis of creativity and originality. This gives me hope.
That's mostly true of humans though. They almost always give average answers. That works out because 1) most of the work that needs to be done is repetitive, not new so average answers are okay 2) the solution space that has been explored by humans is not convex, so average answers will still hit unexplored territory most of the time
Absolutely! You can communicate with without (or with minimal) creativity. It’s not required in most cases. So AI is definitely very useful, and it can ape creativity better and better, but it will always be “faking it”.
What is creative or original thought? You are not the first person to say this after all.
Not being 100% algorithmically or mathematically derived is a good start. I’m certain there’s more but to me this is a minimum bar.
If your brain is not running algorithms (which are ultimately just math regardless of the compute substrate), how do you imagine it working then, aside from religious woo like "souls"?
I dunno, I think artificiality is a pretty reasonable criterion to go by, but it doesn't seem at all related to originality, nor does originality really stack up when we too are also repeating and remixing what we were previously taught. Clearly we do a lot more than that as well, but when it comes to defining creativity, I don't think we're any closer to nailing that Jello to the tree.
I tried asking chatgpt for brainrot speech and all examples they gave me sound very different from what the new kids on the internet are using. Maybe language will always evolve faster than whatever amount of Data openAI can train their model with :).
Intellectuals have a strong fetish for complete information games such as chess.
Reality and especially human interaction are basically the complete opposite.
AI will probably pass that test. But art is about experience and communicating more subtle things that we humans experience. AI will not be out in society being a person and gaining experience to train on. So if we're not writing it somewhere for it to regurgitate... It will always feel lacking in the subtlety of a real human writer. It depends on us creating content with context in order to mimic someone that can create those stories.
EDIT: As in, it can make really good derivative works. But it will always lag behind a human that has been in real life situations of the time and experienced being a human throughout them. It won't be able to hit the subtle notes that we crave in art.
> AI will not be out in society being a person and gaining experience to train on.
It can absolutely do that, even today - you could update the weights after every interaction. The only reason why we don't do it is because it's insanely computationally expensive.
Today’s models are tuned to output the average quality of their corpus.
This could change with varying results.
What is average quality? For some it’s a massive upgrade. For others it’s a step down. For the experienced it’s seeing through it.
You're absolutely right, but AIs still have their little quirks that set them apart.
Every model has a faint personality, but since the personality gets "mass produced" any personality or writing style makes it easier to detect it as AI rather than harder. e.g. em dashes, etc.
But reducing personality doesn't help either because then the writing becomes insipid — slop.
Human writing has more variance, but it's not "temperature" (i.e. token level variance), it's per-human variance. Every writer has their own individual style. While it's certainly possible to achieve a unique writing style with LLMs through fine-tuning it's not cost effective for something like ChatGPT, so the only control is through the system prompt, which is a blunt instrument.
It’s not a personality. There is no concept of replicating a person, personality or behaviours because the software is not the simulation of a living being.
It is a query/input and response format. Which can be modeled to simulate a conversation.
It can be a search engine that responds on the inputs provided, plus the system, account, project, user prompts (as constraints/filters) before the current turn being input.
The result can sure look like magic.
It’s still a statistically present response format based on the average of its training corpus.
Take that average and then add a user to it with their varying range and then the beauty varies.
LLMs can have many ways to explain the same thing more than 1 can be valid sometimes; other times not.
Seems a bit optimistic to me. Companies may well accept a lower quality than they used to get if it's far cheaper. We may just get shittier writing across the board.
(and shittier software, etc)
>You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.
But that's because, at present, AI generated video isn't very good. Consider the history of CGI. In the 1990s and early 2000s, it was common to complain about how the move away from practical sets in favor of CGI was making movies worse. And it was! You had backgrounds and monsters that looked like they escaped from a video game. But that complaint has pretty much died out these days as the tech got better (although Nolan's Oppenheimer did weirdly hype the fact that its simulated Trinity blast was done by practical effects).
I don't agree that it is because of the "quality" of the video. The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer. It is interesting because it has a consistent perspective. It is possible AI art could one day be indistinguishable but for people to care about it I feel they would need to lie and say it was made by a particular person or create some sort of persona for the AI. But there are a lot of people who want to do the work of making art. People are not the limiting factor, in fact we have way more people who want to make art than there is a market for it. What I think is more likely is that AI becomes a tool in the same way CGI is a tool.
> The issue with AI art is that it lacks intentional content. I think people like art because it is a sort of conversation between the creator and the viewer.
Intent is in the eye of the beholder.
The trouble with AI shit is it's all contaminated by association.
I was looking on YT earlier for info on security cameras. It's easy to spot the AI crap: under 5 minutes and just stock video in the preview or photos.
What value could there be in me wasting time to see if the creators bothered to add quality content if they can't be bothered to show themselves in front of the lens?
What an individual brings is a unique brand. I'm watching their opinion which carries weight based on social signals and their catalogue etc.
Generic AI will always lack that until it can convincingly be bundled into a persona... only then the cycle will repeat: search for other ways to separate the lazy, generic content from the meaningful original stuff.
CGI is a good analogy because I think AI and creators will probably go in the same direction:
You can make a compelling argument that CGI operators outcompeted practical effects operators. But CGI didn’t somehow replace the need for a filmmaker, scriptwriter, cinematographers, etc. entirely – it just changed the skillset.
AI will probably be the same thing. It’s not going to replace the actual job of YouTuber in a meaningful sense; but it might redefine that job to include being proficient at AI tools that improve the process.
I think they are evolving differently. Some very old cgi holds up because they invested a lot of money to make it so. Then they tried to make it cheaper and people started complaining because the output was worse than all prior options.
Jurassic Park is a great example - they also had excellent compositing to hide any flaws (compositing never gets mentioned in casual CGI talk but is one of the most important steps)
The dinosaurs were also animated by oldschool stop motion animators who were very, very good at their jobs. Another very underrated part of the VFX pipeline.
Doesnt matter how nice your 3D modelling and texturing are if the above two are skimped on !
That's a Nolan thing like how Dunkirk used no green screen.
I think Harry Potter and Lord of the Rings embody the transition from old school camera tricks to CGI as they leaned very heavily into set and prop design and as a result have aged very gracefully as movies
I think the first HP movie was more magical than the latter ones as they felt too "Marvel CGI" for me.
Marvel movies have become tiresome for me, too much CGI that does not tell any interesting story. Old animated Disney movies are more rewatchable.
I like to see marvel as the state of the art/tech demo for CGI - this is what is achievable with near limitless budget
I still find Infinity War and Endgame visually satisfying spectacles but I am a forgiving viewer for those movies
And they cost 300 million to make be cause of the CGI fest they are, hence need close to a billion in profits when considering marketing and the theater cut. So the cost of CGI and the enshittification of movies seems to be a good analogy to the usefuleness of LLM/AI.
Not a flex.
That said, the complaint is coming back. Namely because most new movies use an incredible amount of CGI and due to the time constraints the quality suffers.
As such, CGI is once again becoming a negative label.
I don’t know if there is an AI equivalent of this. Maybe the fact that as models seem to move away from a big generalist model at launch, towards a multitude of smaller expert models (but retaining the branding, aka GPT-4), the quality goes down.
The equivalent is the massive cost of CGI and LLMs in comparison to the lackluster end result.
Now they just make the whole scene dark and you can't see anything. Saves money on CGI though.
Do you get the feeling that AI generated content is lacking something that can be incrementally improved on?
Seems to me that it's already quite good in any dimension that it knows how to improve on (e.g. photorealism) and completely devoid of the other things we'd want from it (e.g. meaning).
It's missing random flaws. Often the noise has patternd as a result of the diffusion or generation process.
Yeah, I was thinking about this. Humans vary depending on a lot of factors. Today they're happy, tomorrow they're a bit down. This makes for some variation which can be useful. LLMs are made to be reliable/repeatable, as general experience. You know what you get. Humans are a bit more ... -ish, depending on ... things.
Yeah if you look at many of the top content creators, their appeal often has very little to do with production value, and is deliberately low tech and informal.
I guess AI tools can eventually become more human-like in terms of demeanor, mood, facial expressions, personality, etc. but this is a long long way from a photorealistic video.
>But that's because, at present, AI generated video isn't very good.
It isn't good, but that's not the reason. There's a paper about 10 years ago where people used some computer system to generate Bach-like music that even Bach experts couldn't reliably tell apart, but nobody listens to bot music. (or nobody except for engine programmers watches computer chess, despite superiority. Chess is thriving more now including commercially than it ever did)
In any creative field what people are after is the interaction between the creator and the content, which is why compelling personalities thrive more, not less in a sea of commodified slop (be that by AI or just churned out manually).
It's why we're in an age where twitch content creators or musicians are increasingly skilled at presenting themselves as authentic and personal. These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.
The wonder of Bach goes much deeper than just the aesthetic qualities of his music. His genius almost forces one to reckon with his historical context and wonder, how did he do it? Why did he do it? What made it all possible? Then there is the incredible influence that he had. It is easy to forget that music theory as we know it today was not formalized in his day. The computer programs that simulate the kind of music he made are based on that theory that he understood intuitively and wove into his music and was later revealed through diligent study. Everyone who studies Bach learns something profound and can feel both a kinship for his humanity and also an alienation from his seemingly impossible genius. He is one of the most mysterious figures in human history and one could easily spend their entire life primarily studying just his music (and that of his descendants). From that perspective, computer generated music in his style is just a leaf on the tree, but Bach himself is the seed.
> These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.
Maybe? This really depends on your value system. Every moment that you are focused on how you look on camera and trying to optimize an extractive algorithm is a moment you aren't focused on creating the best music that you can in that moment. If the goal is maximizing profit to ensure survival, perhaps they are thriving. Put another way, if these people were free to create music in any context, would they choose content creation on social media? I know I wouldn't, but I also am sympathetic to the economic imperatives.
I am a Bach fiend and the problem is BWV 1 to 1080.
Why would I listen to algorithmic Bach compositions when there are so many of Bach's own work I have never listened to?
Even if you did get bored of all JS music, Carl Philipp Emanuel Bach has over 1000 works himself.
There are also many genius baroque music composers outside the Bach family.
This is true of any composer really. Any classical composer that the average person has heard of has an immense catalog of works compared to modern recording artists.
I would say I have probably not even listened to half the works of all my favorite composers because it is such a huge amount of music. There is no need for some kind of classical music style LORA.
That's interesting, because after ElevenLabs launched their music generation I decided I really quite want to spent some time to have it generate background tracks for me to have on while working.
I don't know the name of any of the artists whose music I listened to over the last week because it does not matter to me. What mattered was that it was unobtrusive and fit my general mood. So I have a handful of starting points that I stream music "similar to". I never care about looking up the tracks, or albums, or artists.
I'm sure lots of people think like you, but I also think you underestimate how many contexts there are where people just don't care.
Authenticity and sincerity are very important. When you can fake those, you've got it made.
Ironically, while the non-CGI SFX in e.g. Interstellar looked amazing, that sad fizzle of a practical explosion in Oppenheimer did not do the real thing justice and would've been better served by proper CGI VFX.
Totally agree. Nolan is a perfectionists though, so I don’t think he could let himself go for broke on the actual boom boom
To understand why this is too optimistic, you have to look at things where AI is already almost human-level. Translations are more and more done exclusively with AI or with a massive AI help (with the effect of destroying many jobs anyway) at this point. Now ebook reading is switching to AI. Book and music album covers are often done with AI (even if this is most of the times NOT advertised), and so forth. If AI progresses more in a short timeframe (the big "if" in my blog post), we will see a lot of things done exclusively (and even better 90% of the times, since most humans doing a given work are not excellence in what they do) by AI. This will be fine if governments immediately react and the system changes. Otherwise there will be a lot of people to feed without a job.
I can buy the idea that simple specific tasks like translation will be dramatically cut down by AI.
But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.
AI art seems to basically only be viable when it can’t be identified as AI art. Which might not matter if the intention is to replace cheap graphic design work. But it’s certainly nowhere near developed enough to create anything more sophisticated, sophisticated enough to both read as human-made and have the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist is a key part of their reception. There are relatively few totally anonymous artists.
Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.
It's kinda hilarious to see "simple task ... like translation". If you are familiar with the history of the field, or if you remember what automated translation looked like even just 15 years ago, it should be obvious that it's not simple at all.
If it were simple, we wouldn't need neural nets for it - we'd just code the algorithm directly. Or, at least, we'd be able to explain exactly how they work by looking at the weights. But now that we have our Babelfish, we still don't know how it really works in details. This is ipso facto evidence that the task is very much not simple.
I use AI as a tool to make digital art but I don't make "AI Art".
Imperfection is not the problem with "AI Art". The problem is that it is really hard to not get the models to produce the same visual motifs and cliches. People can spot AI art so easy because of the motifs.
I think midjourney took this to another level with their human feedback. It became harder and harder to not produce the same visual motifs in the images to the point it is basically useless for me now.
> any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct
I hope you're right, but when I think about all those lawyers caught submitting unproofread LLM output to a judge... I'm not sure humankind is wise enough to avoid the slopification.
> But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct.
The usual solution is to specify one language as binding, with that language taking priority if there turns out to be discrepancies between the multiple version.
Isn't "But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills." only true if the false positive rate of the verifier is not much higher than the failure rate of the AI? At some point it's like asking a human to double check a calculator
You might still need humans in the loop for many things, but it can still have a profound effect if the work that used to be done by ten people can now be done by two or three. In the sectors that you mention, legal, graphic design, translation, that might be a conservative estimate.
There are bound to be all kinds of complicated sociopolitical effects, and as you say there is a backlash against obvious AI slop, but what about when teams of humans working with AI become more skillful at hiding that?
> Now ebook reading is switching to AI.
IMO these are terrible, I don't understand how anyone uses them. This is coming from someone who has always loved audiobooks but has never been particularly precious about the narrator. I find the AI stuff unlistenable.
> Book and music album covers are often done with AI (even if this is most of the times NOT advertised)
This simply isn't true, unless you're considering any minor refinement to a human-created design to be "often done with AI".
It certainly sounds like you're implying AI is often the initial designer or primary design tool, which is completely incorrect for major publishers and record labels, as well as many smaller independent ones.
Look at your examples. Translation is a closed domain; the LLM is loaded with all the data and can traverse it. Book and music album covers _don't matter_ and have always been arbitrary reworkings of previous ideas. (Not sure what “ebook reading” means in this context.) Math, where LLMs also excel, is a domain full of internal mappings.
I found your post “Coding with LLMs in the summer of 2025 (an update)” very insightful. LLMs are memory extensions and cognitive aides which provide several valuable primitives: finding connections adjacent to your understanding, filling in boilerplate, and offloading your mental mapping needs. But there remains a chasm between those abilities and much work.
> Book and music album covers are often done with AI
These suck. Things made with AI just suck big time. Not only are they stupid but they have negative value on your product.
I cannot think of single purely AI made video, song or any form of art that is any a good.
All AI has done is falsely convince ppl that they can now create things that they had no skills to do before AI.
This is not inherent to AI, but how the AI models were recently trained (by preference agreement of many random users). Look for the latest Krea / Black Forest Labs paper on AI style. The "AI look" can be removed.
Songs right now are terrible. For the videos, things are going to be very different once people can create full movies in their computers. Many will have access to the ability to create movies, and a few will be very good, and this will likely change many things. Btw this stupid "AI look" is only transient and is nowhere needed. It will be fixed, and AI images/videos generation will be impossible to stop.
The trouble is, I'm perfectly well aware I can go to the AI tools, ask it to do something and it'll do it. So there's no point me wasting time eg reading AI blog posts as they'll probably just tell me what I've just read. The same goes for any media.
It'll only stand on its own when significant work is required. This is possible today with writing, provided the AI is directed to incorporate original insights.
And unless it's immediately obvious to consumers a high level of work has gone into it, it'll all be tarred by the same brush.
Any workforce needs direction. Thinking an AI can creatively execute when not given a vision is flawed.
Either people will spaff out easy to generate media (which will therefore have no value due to abundance), or they'll spend time providing insight and direction to create genuinely good content... but again unless it's immediately obvious this has been done, it will again suffer the tarring through association.
The issue is really one of deciding to whom to give your attention. It's the reason an ordinary song produced by a megastar is a hit vs when it's performed by an unsigned artist. Or, as in the famous experiment, the same world class violinist gets paid about $22 for a recital while busking vs selling out a concert hall for $100 per seat that same week.
This is the issue AI, no matter how good, will have to overcome.
Ive made a ton of songs I enjoy with suno. Theyre not the greatest, but theyre definitely not the worst either.
I mean, test after test have shown that the vast, vast majority of humans are woefully unable to distinguish good AI art made by SOTA models from human art, and in many/most cases actively prefer it.
Maybe you’re a gentleman of such discerningly superior taste that you can always manage to identify the spark of human creativity that eludes the rest of us. Or maybe you’ve just told yourself you hate it and therefore you say you always do. I dunno.
you couldve given me an example instead of this butthurt comment :)
Of course, your opinion may be subject to selection bias (i.e., you are only judging the art that you became aware was AI generated).
Reminds me of the issue with bad CGI in movies. The only CGI you notice is the bad CGI, the good stuff just works. Same for AI generated art, you see the bad stuff but do not realize when you see a good one.
care to give me some examples from youtube ? I am talking about videos that ppl on youtube connected to for the content in the video ( not AI demo videos).
> Translations are more and more done exclusively with AI or with a massive AI help
As someone who speaks more than one language fairly well: We can tell. AI translations are awful. Sure, they have gotten good enough for a casual "let's translate this restaurant menu" task, but they are not even remotely close to reaching human-like quality for nontrivial content.
Unfortunately I fear that it might not matter. There are going to be plenty of publishers who are perfectly happy to shovel AI-generated slop when it means saving a few bucks on translation, and the fact that AI translation exists is going to put serious pricing pressure on human translators - which means quality is inevitably going to suffer.
An interesting development I've been seeing is that a lot of creative communities treat AI-generated material like it is radioactive. Any use of AI will lead to authors or even entire publishers getting blacklisted by a significant part of the community - people simply aren't willing to consume it! When you are paying for human creativity, receiving AI-generated material feels like you have been scammed. I wouldn't be surprised to see a shift towards companies explicitly profiling themselves as anti-AI.
As someone whose native language isn't English, I disagree. SOTA models are scary good at translations, at least for some languages. They do make mistakes, but at this point it's the kind of mistake that someone who is non-native but still highly proficient in the language might make - very subtle word order issues or word choices that betray that the translator is still thinking in another language (which for LLMs almost always tends to be English because of its dominance in the training set).
I also disagree that it's "not even remotely close to reaching human-like quality". I have translated large chunks of books into languages I know, and the results are often better than what commercial translators do.
It's becoming a negative label because they aren't as good.
I'm not saying it will happen, but it's possible to imagine a future in which AI videos are generally better, and if that happens, almost by definition, people will favor them (otherwise they aren't "better").
I'm not on Facebook, but, from what I can tell, this has arguably already happened for still images on it. (If defining "better" as "more appealing to/likely to be re-shared by frequent users of Facebook.")
I mean, I can imagine any future, but the problem with “created by AI” is that because it’s relatively inexpensive, it seems like it will necessarily become noise rather than signal, if a person could pop out a high quality video in a day, in which case signal will revert to the celebrity that is marketing it rather than the video itself.
Perhaps this will go the way the industrial revolution did? A knife handcrafted by a Japanese master might have a very high value, but 99.9% of the knives are mass produced. "Creators" will become artisans - appreciated by many, consumed by few.
Another flaw is that humans won’t find other things to do. I don’t see the argument for that idea. If I had to bet, I’d say that if AI continues getting more powerful, then humans will transition to working on more ambitious things.
This is very similar to the 'machines will do all the work, we'll just get to be artists and philosophers' idea.
It sounds nice. But to have that, you need resources. Whoever controls the resources will get to decide whether you get them. If AI/machines are our entire economy, the people that control the machines control the resources. I have little faith in their benevolence. If they also control the political system?
You'll win your bet. A few humans will work on more ambitious things. It might not go so well for the rest of us.
>This is very similar to the 'machines will do all the work, we'll just get to be artists and philosophers' idea
We've come a long ways to that goal. The amount of work both economic and domestic that humans do has dropped dramatically.
There are more mouths to feed and less territory per capita for each person (thus real estate inflation in desired locations). Like lanes on a highway, the population just fills the capacity with growth without the selective pressure of even selecting for skill or ability. The ways we've come see mostly front loaded as initially population takes time to grow while the immediate low hanging fruit of domestic drudgery being eliminated was quite a while ago. Meanwhile now "work" that has filled much of that obligation in the home has expanded to necessitating two full-time income households.
And it's very similar to "slaves will do all the work" which was actually possible but never happened.
If it became magic smart, then I don’t see why we couldn’t use it to enhance ourselves and become Transhuman?
There are a number of reasons you might not be able to.
Most likely? It's ridiculously expensive and you're poor.
Techonolgy has been deflationary so far, the rich get it first but eventually it reaches everyone.
Only when it is profitable for the rich.
Agree, Insulin is a prime example.
> because made by AI is becoming a negative label in a world
The negative label is the old world pulling the new one back, it rarely sticks.
I'm old enough to remember the folks saying "We used to have the paint the background blue" and "All music composers need to play an instrument" (or turn into a symbol).
> AI-generated videos are a mild amusement, not a replacement for video creators
If you seriously think this, you don’t understand the YouTube landscape. Shorts - which have incredible view times - are flooded with AI videos. Most thumbnails these days are made with AI image generators. There’s an entire industry of AI “faceless” YouTubers who do big numbers with nobody in the comments noticing. The YouTuber Jarvis Johnson made a video about how his feed has fully AI generated and edited videos with great view counts: https://www.youtube.com/watch?v=DDRH4UBQesI
What you’re missing is that most of these people aren’t going onto Veo 3, writing “make me a video” and publishing that; these videos are a little more complex in that they have separate models writing scripts, generating voiceover, and doing basic editing.
These videos and shorts are a fraction of the entire YouTube landscape, and actual creators with identities are making vastly, vastly more money - especially once you realize how YouTube and video content in general is becoming a marketing channel for other businesses. Faceless channels have functionally zero brand, zero longevity, and no real way to extend that into broader products in the way that most successful creators have done.
That was my point: someone that has an identity as a YouTuber shouldn’t worry too much about being replaced by faceless AI bot content.
Re: YT AI content. That is because AI video is (currently) low quality. If AI video generators could spit out full length videos that rivaled or surpassed the best human made content people wouldn’t have the same association. We don’t live in that world yet, but someday we might. I don’t think “human made” will be a desirable label for _anything_, videos, software, or otherwise, once AI is as good or better than humans in that domain.
That’s the fundamental issue with most “analysis”, and most discussions really, on HN.
Since the vast vast majority of writers and commentators are not literal geniuses… they can’t reliably produce high quality synthetic analysis, outside of very narrow niches.
Even though for most comment chains on HN to make sense, readers certainly have to pretend some meaningful text was produced beyond happenstance.
Partly because quality is measured relative to the average, and partly because the world really is getting more complex.
Oh come on. I may not be a genius but I can turn my mind to most things.
"I may not be a gynecologist, but I'll have a look."
Turning your mind to something doesn’t automatically lead to producing high quality synthetic analysis?
It doesn’t even seem relevant how good you are at step 1 for something so many steps later.
Poorly made videos are poorly made videos.
Whether poor videos made by a human directly, or poorly made by a human using AI.
The use of software like AI to create videos with sloppy quality and reaults reflects on their skill.
Currently the use of AI leans towards sloppy because of the lower digital literacy of content creators with AI, and once they get into it, realizing how much goes into videos.
This only works in a world where AI sucks and/or can be easily detected. I've already found videos where on my 2nd or 3rd time watching I went, "wait, that's not real!" We're starting to get there, which is frankly beyond my ability to reason about.
It's the same issue with propaganda. If people say a movie is propaganda, that means the movie failed. If a propaganda movie is good propaganda, people don't talk about that. They don't even realize. They just talk about what a great movie it is.
One thing to keep in mind is not so much that AI would replace the work of video creators for general video consumption, but rather it could create personalized videos or music or whatever. I experimented with creating a bunch of AI music [1] that was tailored to my interests and tastes, and I enjoy listening to them. Would others? I doubt it, but so what? As the tools get better and easier, we can create our own art to reflect our lives. There will still be great human art that will rise to the top, but the vast inundation of slop to the general public may disappear. Imagine the fun of collaboratively designing whole worlds and stories with people, such as with tabletop role-playing, but far more immersive and not having to have a separate category of creators or waiting on companies to release products.
1: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv...
I'm skeptical of arguments like this. If we look at most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than has AI.
And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, nor as the enablement for startups to scale without having to maintain physical compute infrastructure.
To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.
> in that every software engineer now depends heavily on copilots
That is maybe a bubble around the internet. Ime most programmers in my environment rarely use and certainly aren't dependent on it. They do also not only do code monkey-esque web programming so maybe this is sampling bias though it should be enough to refute this point.
Came here to say that. It’s important to remember how biased hacker news is in that regard. I’m just out of ten years in the safety critical market, and I can assure you that our clients are still a long way from being able to use those. I myself work in low level/runtime/compilers, and the output from AIs is often too erratic to be useful
>our clients are still a long way from being able to use those
So it's simply a matter of time
>often too erratic to be useful
So sometimes it is useful.
Too erratic to be net useful.
Even for code reviews/test generation/documentation search?
Documentation search I might agree, but that wasn’t really the context, I think. Code reviews is hit and miss, but maybe doesn’t hurt too much. They aren’t better at writing good tests than at writing good code in the first place.
I would say that the average Hacker News user is negatively biased against LLMs and does not use coding agents to their benefit. At least what I can tell from the highly upvoted articles and comments.
Im on the core sql execution team at a database company and everyone on the team is using AI coding assistants. Certainly not doing any monkey-esque web programming.
> everyone on the team is using AI coding assistants.
Then the tool worked for you(r team). That's great to hear and maybe gives some hope for my projects.
It has just mostly been more of a time sink than an improvement ime though it appears to strongly vary by field/application.
> Certainly not doing any monkey-esque web programming
The point here was not to demean the user (or their usage) but rather to highlight how developers are not being dependent on LLMs as a tool. Your team presumably did the same type of work before without LLMs and won't become unable to do so if there were to become unavailable.
That likely was not properly expressed in the original comment by me, sorry.
Add LED lighting on there. It is easy to forget what a difference that made. The light pollution, but also just how dim houses were. CFL didn't last very long as a thing between incandescent and LED and houses lit with incandescents have a totally different feel.
And yet: https://www.axios.com/2023/02/26/car-headlights-too-bright-l...
But, for clarity, I do agree with your sentiment about their use in appropriate situations, I just have an indescribable hatred for driving at night now
AI has already rendered academic take-home assignments moot. No other tech has had an impact like that, even the internet.
A pessimistic/realistic view of post high school education - credentials are proof of able to do a certain amount of hard work, used as an easy filter for companies while hiring.
I expect universities to adapt quickly, lest lose their whole business as degrees will not carry the same meaning to employers.
Maybe universities can go back to being temples of learning instead of credential mills.
I can dream, can't I?
> AI has already rendered academic take-home assignments moot
Not really, there are plenty of things that LLMs cannot do that a professor could make his students do. It is just a asymmetric attack on the professor's (or whomever is grading) time to do that.
IMO, credentials shouldn't be given to those who test or submit assignments without proctoring (a lot of schools allow this).
> there are plenty of things that LLMs cannot do that a professor could make his students do.
Name three?
1. Make the student(s) randomly have to present their results on a weekly basis. If you get caught for cheating at this point, at least in my uni with a zero tolerance policy, you instantly fail the course.
2. Make take home stuff only a requirement to be able to participate in the final exam. This effectively means cheating on them will only hinder you and not affect your grading directly.
3. Make take home stuff optional and completely detached from grading. Put everything into the final exam.
My uni does a mix them on different courses. Especially two and three though have a significant negative impact on passing rates as they tend to push everything onto one single exam instead of spread out work over the semester.
> Not really, there are plenty of things that LLMs cannot do that a professor could make his students do.
Could you offer some examples? I'm having a hard time thinking of what could be at the intersection of "hard enough for SotA LLMs" yet "easy enough for students (who are still learning, not experts in their fields, etc)".
Present the results of your exercises (in person) in front of someone. Or really anything in person.
A big downer on the online/remote Initiatives for learning but actually an advantage for older Unis that already have existing physical facilities for students.
This does however also have some problems similar to code interviews .
> Present the results of your exercises (in person) in front of someone
I would not be surprised if we start to see a shift towards this. Interviews instead of written exams. It does not take long to figure out whether someone knows the material or not.
Personally, I do not understand how students expect to succeed without learning the material these days. If anything, the prevalence of AI today only makes cheating easier in the very short term -- over the next couple years I think cheating will be harder than it ever was. I tried to leverage AI to push myself through a fairly straightforward Udacity course (in generative AI, no less), and all it did was make me feel incredibly stupid. I had to stop using it and redo the parts where I had gotten some help, so that my brain would actually learn something.
But I'm Gen X, so maybe I'm too committed to old-school learning and younger people will somehow get super good at this stuff while also not having to do the hard parts.
Sure but that's a solution to prevent students from using LLMs, not an example of something a professor can ask students that "LLMs can't do"...
The main challenge is that most (all?) types of submissions can be created with LLMs and multi-model solutions.
Written tasks are obvious, writing a paper, essay or answering questions is part of most LLMs advertised use-cases. The only other thing was recorded videos, effectively recorded presentations, thanks to video/audio/image generation that probably can be forged too.
So the simple solution to choose something that an "LLM can't do" is to choose something were an LLM can't be applied. So we move away from a digital solution to meatspace.
Assuming that the goal is to test your knowledge/understanding of a topic, it's the same with any other assistive technology. For example, if an examiner doesn't want you[1] to use a calculator to solve a certain equation, they could try to create an artificially hard problem or just exclude the calculator from the allowed tools. The first is vulnerable to more advanced technology (more compute etc.) the latter just takes the calculator out of the equation (pun intended).
[1]: Because it would relieve you of understanding how to evaluate the equation.
What? The internet did that ages ago. We just pretended it didn't because some students didn't know how to use Google.
Everyone knows how to use Google. There's a difference between a corpus of data available online and an intelligent chatbot that can answer any permutation of questions with high accuracy with no manual searching or effort.
> Everyone knows how to use Google.
Everyone knows how to type questions into a chat box, yet whenever something doesn’t work as advertised with the LLMs, the response here is, “you’re holding it wrong”.
Do you really think the jump from books to freely globally accessible data instantly available is a smaller jump than internet to ChatGPT? This is insane!!
It's not just smaller, but neglectable (in comparison).
In the internet era you had to parse the questions with your own brain. You just didn't necessarily need to solve them yourself.
In ChatGPT era you don't even need to read the questions. At all. The questions could be written in a language you don't understand, and you still are able generate plausible answers to them.
To a person from the 1920's which one is more impressive? The internet or chatgpt?
Obvious ChatGPT. I don't know how it is even a question... if you showed GPT3.5 to people from < 20th centuries there would've been a worldwide religion around it.
Interesting perspective.
I recall the kerfuffle about (IIRC) llama where the engineer lost his mind thinking they had spawned life in a machine and felt it was "too dangerous to release," so it's not a ludicrous take. I would hope that the first person to ask "LLM Jesus" how many Rs are in strawberry would have torpedoed the religion, but (a) I've seen dumber mind viruses (b) it hasn't yet
It wasn't Llama (Meta), it was LaMDA (Google).
https://www.scientificamerican.com/article/google-engineer-c...
Ah, thank you. While reading up on it, I found this juicy bit that I did not recall hearing at the time:
> He further revealed that he had been dismissed by Google after he hired an attorney on LaMDA's behalf after the chatbot requested that Lemoine do so.[18][19]
https://en.wikipedia.org/wiki/LaMDA#Sentience_claims
Then again, it's plausible that if I asked GPT-5 "do you want me to get you an asylum lawyer?" it may very well say yes
You are mistaken, Google could not write a bespoke English essay for you. Complete with intentional mistakes to throw off the professor.
In English class we had a lot of book-reading and writing texts about those books. Sparknotes and similar sites allowed you to skip reading and get a distilled understanding of its contents, similar to interacting with an LLM
disagree? I had to write essays in high school. I don't think the kids now need to if they don't want to.
On current societal impact it might be close to the other three. But do you not think it is different in nature to other technological innovations?
> in that every software engineer now depends heavily on copilots
With many engineers using copilots and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce the same patterns.
For example, emdash thing, requires additional prompts and instructions to override it. Doing anything unusual would require more effort.
Pretty sure I read Economnics in one lesson because of HN, he makes great arguments about how automation never ruins economies as much as people think. "Chapter 7: The Curse of Machinery"
> Could we get there? Absolutely. We just haven't yet.
What else is needed then?
I don’t know what the answer to the Collatz conjecture is, but I know it’s not “carrot”.
LLMs with instruction following have been around for 3 years. Your comment gives me "electricity and gas engines will never replace the horse" vibes.
Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.
I’m skeptical of arguments like this. If we look at most impactful technologies since the year 1980, the Web is not even in my top 3. Personal computers, spreadsheet software, and desktop publishing have all done more to alter society and daily life than has the Web. And yes, I recognize that the Web has already created profound change, in that every researcher now depends heavily on online databases, in that commerce faces a major disruption challenge, and in that information access has been completely changed. I just don’t think those changes are on the same level as the normalization of powerful computers on everyone’s desk, as our business processes becoming increasingly digitized, nor as the enablement for small businesses to produce professional-quality documents without having to maintain expensive typesetting equipment. To me, the treating of the Web as “different” is still unsubstantiated. Could we get there? Absolutely. We just haven’t yet. But some people start to talk about it almost in a way that’s reminiscent of Pascal’s Wager, as if the slight chance of a godly reward from investing in Web technologies means it is rational to devote our all to it. But I’m still holding my breath.
This is not reddit.
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]
What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
Seeing a real uptick of socio-policital prognostication from extremely smart, soaked-in-AI, tech people (like you Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer thin on real analysis, packed with "the feels", coated with what-ifs.. it's sloppy thinking and I hold you to a higher standard antirez.
>> Markets don’t want to accept that.
> What a silly premise. Markets don't care.
You read the top sentence way too literally. In context, it has a meaning — which can be explored (and maybe found) with charity and curiosity.
Markets require property rights, property rights require institutions that are dependent on property-rights holders, so that they have incentives to preserve those property rights. When we get to the point where institutions are more dependent on AIs instead of humans, property rights for humans will become inconvenient.
> All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
I prefer the concepts and rigor from political economy: markets are both preference aggregators and coordination mechanisms.
Does your framing (voting machines and weighing machines) offer more clarity and if so, how? I’m not seeing it.
His framing is that markets are collective consensus and if you claim to “know better”, you need to write a lot more than a generic post. It’s so simple, and it is a reminder that antirez’s reputation as a software developer does not automatically translate to economics expert.
I think you are mixed up here. I quoted from the comment above mine, which was harshly and uncharitably critical of antirez’s blog post.
I was pushing back against that comment’s snearing smugness by pointing to an established field that uses clear terminology about how and why markets are useful. Even so, I invited an explanation in case I was missing something.
Anyone curious about the terms I used can quickly find explanations online, etc.
Yes but can the market not be wrong? Wrong in the sense that, failing to meet our expectations as a useful engine of society? As I understood, what was meant with this this article is that AI completely changes the equations across the board that current market direction appears dangerously irrational to OP. I'm not sure what was meant with your comment though besides haggling over semantics and attacking some in-expertise of the authors socio-politic philosophizing that you perceive.
Of course it can be wrong, and it is in many instances. It's a religion. The vast, vast majority of us would prefer to live in a stable climate with unpolluted water and some fish left in the oceans, yet "the market" is leading us elsewhere.
I don't like the idea of likening the market to a religion, but I think it definitely has some glaring flaws. In my mind the biggest is that the market is very effective at showing the consensus of short-term priorities, but it has no ability to reflect long-term strategic consensus.
> “… as a voting… as a weighing…” I’m sure I remember that as a graham, munger, or buffet quote.
> “not even wrong” - nice, one of my favorites from Pauli.
Definitely Benjamin Graham, though Buffett (two T's) brought it back
Voting, weighing, … trading machine ? You can hear or touch or weigh colors.
This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.
AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).
> I do feel that there is a routine bias on HN to underplay AI
It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.
It's a Rorschach test isn't it.
Because the technology itself is so young and so nebulous everyone is able to unfalsifiably project their own hopes or fears onto it.
Any big AI release, some of the top comments are usually claiming either the tech itself is bad, relaying a specific anecdote about some AI model messing up or some study where AI isn't good, or claiming that AI is a huge bubble that will inevitably crash. I've seen the most emphatic denials of the utility of AI here go much farther than anywhere else where criticism of AI is mild skepticism. Among many people it is a matter of tribal warfare that AI=bad.
Coping mechanisms. AI is overhyped and useless and wouldn't ever improve, because the alternative is terrifying.
I'm very skeptical of this psychoanalysis of people who disagree with you. Can't people just be wrong? People are wrong all the time without it being some sort of defense mechanism. I feel this line of thinking puts you in a headspace to write off anything contradictory to your beliefs.
You could easily say that the AI hype is a cope as well. The tech industry and investors need there to be be a hot new technology, their career depends on it. There might be some truth to the coping in either direction but I feel you should try to ignore that and engage with the content of whatever the person is saying or we'll never make any progress.
I have the impression a lot depends on people's past reading and knowledge of what's going on. If you've read the likes of Kurzweil, Moravec, maybe Turing, you're probably going to treat AGI/ASI as inevitable. For people who haven't they just see these chatbots and the like and think those won't change things much.
It's maybe a bit like the early days of covid when the likes of Trump were saying it's nothing, it'll be over by the spring while people who understood virology could see that a bigger thing was on the way.
These people's theories (except Turing) are highly speculative predictions about the future. They could be right but they are not analogous to the predictions we get out of epidemiology where we have had a lot of examples to study. What they are doing is not science and it is way more reasonable to doubt them.
The Moravec stuff I'd say is more moderately speculative than highly. All he really said is compute power had tended to double every so long and if that keeps up we'll have human brain equivalent computer in cheap devices in the 2020s. That bit wasn't really a stretch and has largely proved true.
The more unspoken speculative bit is there will then be a large economic incentive for bright researchers and companies to put a lot of effort into sorting the software side. I don't consider LLMs to do the job of general intelligence but there are a lot of people trying to figure it out.
Given we have general intelligence and are the product of ~2GB of DNA, the design can't be that impossible complex, although likely a bit more than gradient descent.
> it's people not wanting to lose control or relative status in the world.
It's amazing how widespread this belief is among the HN crowd, despite being a shameless ad hominem with zero evidence. I think there are a lot of us who assume the reasonable hypothesis is "LLMs are a compelling new computing paradigm, but researchers and Big Tech are overselling generative AI due to a combination of bad incentives and sincere ideological/scientific blindness. 2025 artificial neural networks are not meaningfully intelligent." There has not been sufficient evidence to overturn this hypothesis and an enormous pile of evidence supporting it.
I do not necessarily believe humans are smarter than orcas, it is too difficult to say. But orcas are undoubtedly smarter than any AI system. There are billions of non-human "intelligent agents" on planet Earth to compare AI against, and instead we are comparing AI to humans based on trivia and trickery. This is the basic problem with AI, and it always has had this problem: https://dl.acm.org/doi/10.1145/1045339.1045340 The field has always been flagrantly unscientific, and it might get us nifty computers, but we are no closer to "intelligent" computing than we were when Drew McDermott wrote that article. E.g. MuZero has zero intelligence compared to a cockroach; instead of seriously considering this claim AI folks will just sneer "are you even dan in Go?" Spiders are not smarter than beavers even if their webs seem more careful and intricate than beavers' dams... that said it is not even clear to me that our neural networks are capable of spider intelligence! "Your system was trained on 10,000,00 outdoor spiderwebs between branches and bushes and rocks and has super-spider performance in those domains... now let's bring it into my messy attic."
I think AI is still in the weird twilight zone that it was when it first came out in that it's great sometimes and also terrible. I still get hallucinations when I check a response I get with ChatGPT on Google.
On the one hand, what it says can't be trusted, on the other, I have debugged code I have written where I was unable to find the bug myself, and ChatGPT found it.
I also think a reason AI's are popular and the companies haven't gone under is that probably hundreds of thousands if not millions of people are getting responses that have hallucinations, but the user doesn't know it. I fell into this trap myself after ChatGPT first came out. I became addicted to asking anything and it seemed like it was right. It wasn't until later I started realizing that it was hallucinating information. How prevalent this phenomena is is hard to say but I still think it's pernicious.
But as I said before, there are still use cases for AI and that's what makes judging it so difficult.
I certainly understand why lots of people seem to believe LLMs are progressing towards beocming AGI. What I don't understand is the constant need to absurdly psychoanalyze the people who happen to disagree.
No, I'm not worried about losing "control or relative status in the world". (I'm not worried about losing anything, frankly - personally I'm in a position where I would benefit financially if it became possible to hire AGIs instead of humans.)
You don't get to just assert things without proof (LLMs are going to become AGI) and then state that anyone who is skeptical of your lack of proof must have something wrong with them.
lmao, "underplay ai" that's all this site has been about for the last few years
I'm on team plateau, I'm really not noticing increasing competency in my daily usage of the major models. And sometimes it seems like there are regressions where performance drops from what it could do before.
There is incredible pressure to release new models which means there is incredible pressure to game benchmarks.
Tbh a plateau is probably the best scenario - I don't think society will tolerate even more inequality+ massive job displacement.
I think the current economy is already dreadful. So I don't have much desire to maintain that. But it's easier to break something further than to fix it, and god knows what AI is going to do to a system with so many feedback loops.
When I hear folks glazing some kinda impending jobless utopia , I think of the intervening years. I shudder. As they say, "An empty stomach knows no morality."
This pisses me off so much.
So many engineers are so excited to work on and with these systems, opening 20 prs per day to make their employers happy going “yes boss!”
They think their $300k total compensation will give them a seat at the table for what they’re cheering on to come.
I say that anyone who needed to go the grocery this week will not be spared by the economic downturn this tech promises.
Unless you have your own fully stocked private bunker with security detail, you will be affected.
Big fan of your argument and don't disagree.
If AI makes a virus to get rid of humanity, well we are screwed. But if all we have to fear from AI is unprecedented economic disruption, I will point out that some parts of the world may survive relatively unscathed. Let's talk Samoa, for example. There, people will continue fishing and living their day-to-day. If industrialized economies collapse, Samoans may find it very hard to import certain products, even vital ones, and that can cause some issues, but not necessarily civil unrest and instability.
In fact, if all we have to fear from AI is unprecedented economic disruption, humans can have a huge revolt, and then a post-revolts world may be fine by turning back the clock, with some help from anti-progress think-tanks. I explore that argument in more detail in this book: https://www.smashwords.com/books/view/1742992
The issue is there isn't enough of those small environmental economies to support everyone that exists today without the technology, logistics and trades that are in place today.
You can farm and fish the entire undeveloped areas of NYC, but it won't be enough to feed or support the humans that live there.
You can say that for any metro area. Density will have to reduce immediately if there is economic collapse, and historically, when disaster strikes, that doesn't tend to happen immediately.
Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
> The issue is there isn't enough of those small environmental economies to support everyone that exists today without the technology, logistics and trades that are in place today.
I agree. I expect some parts of the world will see some black days. Lots of infrastructure will be gone or unsuited to people. On top of that, the cultural damage could become very debilitating, with people not knowing how to do X, Y and Z without the AIs. At least for a time. Casualties may mount.
> Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
This is true, but parts of the world survive today with very little of any of that. And for some of those things that you mention: shelter, education, religion, justice, and even some form of law enforcement, all that is needed is humans willing to work together.
> all that is needed is humans willing to work together
Maybe, but those things are also needed to enable humans to work together
Won’t 8 billion people will have incentive to move to Samoa in that case?
Realistically, in an AI extreme economic disruption scenario, it's more or less USA the only one extremely affected, and that's 400 million people. Assuming it's AI and nothing else causes a big disruption before, and with the big caveat that nobody can't predict the future, I would say:
- Mexico and down are more into informal economies, and they generally lag behind developed economies by decades. Same applies to Africa and big parts of Asia. As such, by the time things get really dire in USA and maybe in Europe and China, the south will be still in business-as-usual.
- Europe has lots of parliaments and already has legislation that takes AI into account. Still, there's a chance those bodies will fail to moderate the impact of AI in the economy and violent corrections will be needed, but people in Europe have long traditions and long memories...They'll find a way.
- China is governed by the communist party, and Russia have their king. It's hard to predict how will those align with AI, but that alignment more or less will be the deciding factor there, and not free capitalism.
> Unless you have your own fully stocked private bunker with security detail, you will be affected.
If society collapses, there’s nothing to stop your security detail from killing you and taking the bunker for themselves.
I’d expect warlords to rise up from the ranks of military and police forces in a post collapse feudal society. Tech billionaires wouldn’t last long.
The same argument could be made for actual engineers working on steam engines, nuclear power, or semiconductors.
Make of that what you will.
More like engineers coming up with higher level programming languages. No one (well, nearly) hand writes assembly anymore. But there's still plenty of jobs. Just the majority write in the higher level but still expressive languages.
For some reason everyone thinks as LLMs get better it means programmers go away. The programming language, and amount you can build per day, are changing. That's pretty much it.
I’m not worried about software engineering (only or directly).
Artists, writers, actors, teachers. Plus the rest where I’m not remotely creative enough to imagine will be affected. Hundreds of thousands if not millions flooding the smaller and smaller markets left untouched.
Artists: photography. Yet we still value art in pre photography mediums
Writers: film, tv. Yet we all still read books
Play actors: again, film and tv. Yet we still go to plays, musicals etc
Teachers: the internet, software, video etc. Yet teachers are still essential (though they need to be paid more)
Jobs won't go away, they will change.
I’m not sure I see how: none of those technologies had the stated goal of replacing their creators.
> I say that anyone who needed to go the grocery this week will not be spared by the economic downturn this tech promises.
And we are getting to a point that is us or them. Big tech is investing so much money on this that if they do not succeed, they will go broke.
> Big tech is investing so much money on this that if they do not succeed, they will go broke.
Aside from what that would do to my 401(k), I think that would be a positive outcome (the going broke part).
Yes. The complete irony in all software engineers enthusiasm for this tech is that, if the boards wishes come true, they are literally helping them eliminate their own jobs. It's like the industrial revolution but worse, because at least the craftsmen weren't also the ones building the factories that would automate them out of work.
Marcuse had a term for this "false consciousness"-when the structure of capitalism ends up making people work against their own interests without realizing it, and that is happening big time in software right now. We will still need programmers for hard, novel problems, but all these lazy programmers using AI to write their crud apps don't seem to realize the writing is on the wall.
Or they realize it and they're trying to squeeze the last bit of juice available to them before the party stops. It's not exactly a suboptimal decision to work towards your own job's demise if it's the best paying work available to you and you want to save up as much as possible before any possible disruption. If you quit, someone else steps into the breach and the outcome is all the same. There's very few people actually steering the ship who have any semblance of control; the rest of us are just along for the ride and hoping we don't go down with the ship.
Yeah I get that. I myself am part of a team at work building an AI/LLM-based feature.
I always dreaded this would come but it was inevitable.
I can’t outright quit, no thanks in part to the AI hype that stopped valuing headcount as a signal to company growth. If that isn’t ironic I don’t know what is.
Given the situation I am in, I just keep my head down and do the work. I vent and whinge and moan whenever I can, it’s the least I can do. I refuse to cheer it on at work. At the very least I can look my kids in the eye when they are old enough to ask me what the fuck happened and tell them I did not cheer it on.
Here's the thing, I tend to believe that sufficiently intelligent and original people will always have something to offer others; its irrelevant if you imagine the others as the current consumer public, our corporate overlords, or the ai owners of the future.
There may be people who have nothing to offer others, once technology advances, but I dont think that anyone in current top % role would find themselves there.
There is no jobless utopia. Even if everyone is paid and well-off with high living standards. That is no world in which humans can thrive where everyone is retired and doing their own interests.
Jobless means you dont need a job. But you'd make a job for yourself. Companies will offer interesting missions instead of money. And by mission I mean real missions like space travel.
A jobless utopia doesn't even come close to passing a smell test economically, historically, or anthropologically.
As evidence of another possibility, in the US, we are as rich as any polis has ever been, yet we barely have systems that support people who are disabled through no fault of their own. We let people die all the time because they cannot afford to continue to live.
You think anyone in power is going to let you suck their tit just because you live in the same geographic area? They don't even pay equal taxes in the US today.
Try living in another world for a bit: go to jail, go to a half way house, live on the streets. Hard mode: do it in a country that isn't developed.
Ask anyone who has done any of those things if they believe in a "jobless utopia"?
Euphoric social capitalists living in a very successful system shouldn't be relied upon for scrying the future for others.
Realistically, a white collar job market collapse will not directly lead to starvation. The world is not 1930s America ethically. Governments will intervene, not necessarily to the point of fairness, but they will restructure the economy enough to provide a baseline. The question will be how to solve the biblical level of luxury wealth inequality without civil unrest causing us all to starve.
Assuming AI works well, I can't see any "empty stomach" stuff. It should produce abundance. People will probably have political arguments about how to divide it but it should be doable.
Instead of needing 1000 engineers to build a new product, you'll need 100 now. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big but is now viable. IE. those 9 new companies could never be profitable if it required 1000 engineers each but can totally sustain itself with 100 engineers each.
We aren't even close to that yet. The argument is an appeal to novelty, fallacy of progress, linear thinking, etc.
LLMs aren't solving NLU. They are mimicking a solution. They definitely aren't solving artificial general intelligence.
They are good language generators, okay search engines, and good pattern matchers (enabled by previous art).
Language by itself isn't intelligence. However, plenty of language exists that can be analyzed and reconstructed in patterns to mimic intelligence (utilizing the original agents' own intelligence (centuries of human authors) and the filter agents' own intelligence (decades of human sentiment on good vs bad takes)).
Multimodality only takes you so far, and you need a lot of "modes" to disguise your pattern matcher as an intelligent agent.
But be impressed! Let the people getting rich off of you being impressed massage you into believing the future holds things it may not.
Maybe, or 300 of those engineers will be working for 3 new companies while the other 600 struggle to find gainful employment, even after taking large pay cuts, as their skillsets are replaced rather than augmented. It’s way too early to call afaict
Because it's so easy to make new software and sell it using AI, 6 of those 600 people who are unemployed will have ideas that require 100 engineers each to make. They will build a prototype, get funding, and hire 99 engineers each.
There are also plenty of ideas that aren't profitable with 2 salaries but is with 1. Many will be able to make those ideas happen with AI helping.
It'll be easy to make new software. I don't know if it's going to be easy to sell it.
The more software AI can write, the more of a commodity software will become, and the harder the value of software will tank. It's not magic.
Today, a car repairshop might have a need for a custom software that will make their operations 20% more efficient. But they don't have nearly enough money to hire a software engineer to build it for them. With AI, it might be worth it for an engineer to actually do it.
Plenty of little examples like that where people/businesses have custom needs for software but the value isn't high enough.
this seems pretty unlikely to me. I am not sure I have seen any non-digital business desire anything more custom than "a slightly better spreadsheet". Like, sure I can imagine a desire for something along the lines of "jailbroken vw scanner" but I think you are grossly overestimating how much software impacts a regular business's efficiency
As an alternative perspective, if this hypothetical MCP future materializes and the repair shop could ask Gemini to contact all the vendors, find the part that's actually in stock, preferably within 25 miles, sort by price, order it, and (if we're really going out on a limb) get a Waymo to go pick it up, it will free up the tradeperson to do what they're skilled at doing
For comparison to how things are today:
- contacting vendors requires using the telephone, sitting on hold, talking to a person, possibly navigating the phone tree to reach the parts department
- it would need to understand redirection, so if call #1 says "not us, but Jimmy over at Foo Parts has it"
- finding the part requires understanding the difference between the actual part and an OEM compatible one
- ordering it would require finding the payment options they accept that intersect with those the caller has access to, which could include an existing account (p.o. or store credit)
- ordering it would require understanding "ok, it'll be ready in 30 minutes" or "it's on the shelf right now" type nuance
Now, all of those things are maybe achievable today, with the small asterisk that hallucinations are fatal to a process that needs to work
It’s just an example. Plenty of businesses can use custom software to become more efficient but couldn’t in the past because of how expensive it was.
> sell it
exactly. have you seen App Store recently? over-saturaded with junk apps. try to sell something these days. it is notoriously hard to make any money there.
more like 300 working, 60,000,000 struggle
Similarly flawed arguments could be made about how steam shovels would create unemployment in the construction sector. Technology as well as worker specialization increases our overall productivity. AI doomerism is another variation of Neoluddite thought. Typically it is framed within a zero-sum view of the economy. It is often accompanied by Malthusian scarcity doom. Appeals to authoritarian top-down economic management usually follow from there.
Technological advances have consistently unlocked new, more specialized and economically productive roles for humans. You're absolutely right about lowering costs, but headcounts might shift to new roles rather than reducing overall.
I am not sure it will scale like that... every company needs a competitive advantage in the market to stay solvent, the people may scale but what makes each company unique won't.
if these small companies are all just fronts on the prompts (a "feature" if you will) of the large ai companies, why do the large ai companies not just add that feature and eat the little guy's lunch?
I actually find it hard to understand how the market is supposed to react if the AI capabilities does surpass all humans in all domains. It's first of all not clear such a scenario leads to runaway wealth for a few, even though with no outside events that may be the outcome. However, such scenarios are so unsustainable and catastrophic it's hard to imagine there are no catastrophic reactions to it. How is the market supposed to react if there's a large chance of market collapse and also a large chance of runaway wealth creation? Besides the point that in an economy where AI surpass humans the demands of the market will shift drastically too. Which I also think is underrepresented in predictions, which is the induced demand of AI-replaced labor and the potential for entire industries to be decimated by secondary effects instead of direct AI competition/replacement at labor scale.
Agreed, if the author truly thinks the markets are wrong about AI, he should at least let us know what kind of bets he’s making to profit from it. Otherwise the article is just handwaving.
There's no way to profitably bet on the whole economy collapsing.
AI still does not own the land and can't grow the crops without it. So maybe people in agriculture are winners. We always need food.
Become a social worker, or an undertaker.
>We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will be eventually a commodity, or the governments would do something, in such an odd economic setup (a setup where a single industry completely dominates all the others).
I think the scenario where companies that own AI systems don't get benefits from employing people, so people are poor and can't afford anything, is paradoxical, and as such, it can't happen.
Let's assume the worst case: Some small percentage of people own AIs, and the others have no ownership at all of AI systems.
Now, given that human work has no value to those owning AIs, those humans not owning AIs won't have anything to trade in exchange for AI services. Trade between these two groups would eventually stop.
You'll have some sort of two-tier economy where the people owning AIs will self-produce (or trade between them) goods and services. However, nothing prevents the group of people without AIs from producing and trading goods and services between them without the use of AIs. The second group wouldn't be poorer than it is today; just the ones with AI systems will be much richer.
This worst-case scenario is also unlikely to happen or last long (the second group will eventually develop its own AIs or already have access to some AIs, like open models).
If models got exponentially better with time, then that could be a problem, because at some point, someone would control the smartest model (by a large factor) and could use it with malicious intent or maybe lose control of it.
But it seems to me that what I thought time ago would happen has actually started happening. In the long term, models won't improve exponentially with time, but sublinearly (due to physical constraints). In which case, the relative difference between them would reduce over time.
Sorry this doesn't make sense to me. Given tier one is much richer and more powerful than tier two, any natural resources and land traded at tier two is only at mercy of tier one not interfering. As soon as tier one needs some land or natural resources from tier two, tier two needs are automatically superseded. It's like animal community bear human civ
The marginal value of natural resources decreases with quantity, and natural resources would only have a much smaller value compared to the final products produced by the AI systems. At some point, there would be an equilibrium where tier 1 wouldn't want to increase it's consumption of natural resources w.r.t. tier 2 or if they did they'd have to trade with tier 2 at a price higher than they value the resources. I have no idea what this equilibrium would look like, but natural resources are already of little value compared to consumer goods and services. The US in 2023 consumed $761.4B. of oil, but the GPD for the same year was. $27.72T
There would be another valid argument to be made about externalities. But it's not what my original argument was about.
I thought the assumption is that tier two has nothing to offer tier one and is technologically much inferior due to tier one being AI driven. So if tier one needs something from tier two I don't think they need to even ask. Wrt market equilibrium. Indeed i think it will be at equilibrium with increasing cost of extraction so indeed they will not spend arbitrary amounts to extract. But this also means probably there will be no way way for tier two to extract any of the resources which tier one needs at all bc the marginal cost is determined by tier one
> So if tier one needs something from tier two I don't think they need to even ask
You mean stealing? I'm assuming no stealing.
> But this also means probably there will be no way way for tier two to extract any of the resources which tier one needs at all bc the marginal cost is determined by tier one
If someone from tier 2 owns an oil field, tier 1 has to pay them to get it at a price that is higher than what the tier 2 person values it, so at the end of the transaction, they would have both a positive return. The price is not determined by tier 1 alone.
If tier 1 decides instead to buy the oil, then again, they'd have to pay for it.
Of course, in both these scenarios, this might make the oil price increase. So other people from tier 2 would find it harder to buy oil, but the person in tier 2 owning the field would make a lot of money, so overall, tier 2 wouldn't be poorer.
If natural resources are concentrated in some small subset of people from tier 2, then yes, those would become richer while having less purchasing power for oil.
However, as I mentioned in another comment, the value of natural resources is only a small fraction of that of goods and services.
And this is still the worst-case, unlikely scenario.
OK let's assume no stealing (which is unlikely). I think the previous argument was a little flawed anyhow, so let me start again.
I mean fundamentally if tier 2 has something to offer to tier 1, it is not yet at the equilibrium you describe (of separate economies). I think it's likely that tier 2 (before full separation) initially controls some resources. In exchange for resources tier 1 has a lot of AI-substitute labor it can offer tier 2. I think the equilibrium will be reached when tier 2 is offered some large sum of AI-labor for those resource production means. This will in the interim make tier 2 richer. But in the long run, when the economies truly separate, tier 2 will have basically no natural resources.
This thing about natural resources being small fraction is current day breakdown. I think in the future where AI autonomously increases efficiency of the loop which makes more AI-compute from natural resources, its fraction will increase to much higher levels. Ultimately, I think such a separation as you describe will be stable only when all natural resources are controlled by tier 1 and tier 2 gets by with either gifts or stealing form tier 1.
If tier 2 amounts to 95% of the population, then the amount of power currently held by tier 1 is meaningless. It is only power so long as the 95% remain cooperative.
In practice the tier 1 has the tech and know-how to convince the tier 2 to remain cooperative against their own interests. See the contemporary US where the inequality is rather high, and yet the tier 2 population is impressively protective of the rights of the tier 1. The theory that if the tier 2 has it way worse than today, that will change, remains to be proven. Persecutions against the immigrants are also rather lightweight today, so there is definitely space to ramp them up to pacify the tier 2.
> the amount of power currently held by tier 1 is meaningless.
It's happening right now with rich people and lobbies.
> It is only power so long as the 95% remain cooperative
https://en.wikipedia.org/wiki/Television_consumption#Contemp... I rest my case.
This only works as long as people are happily glued to their TVs. Which means they have a non-leaking roof above their head and food in their belly. Just at a minimum. No amount of skillful media manipulation will make a starving, suffering 95% compliant.
Not just land and natural resources: All means of production, including infrastructure, intellectual property, capital, the entire economy.
I'm assuming no coercion. In my scenario, tier 1 doesn't need any of that except natural resources because they can self-produce everything they need from those in a cheaper way than humans can. If someone in tier 1, for instance, wants land from someone in tier 2, they'd have to offer something that the tier 2 person values more than the land they own.
After the trade, the tier 2 person would still be richer than they were before the trade. So tier 2 would become richer in absolute terms by trading with tier 1 in this manner. And it's very likely that what tier 2 wants from tier 1 is whatever they need to build their own AIs. So my argument still stands. They wouldn't be poorer than they are now.
pretty sure the economic system has already failed all the tests
I think the bigger relief is that I know humans won’t put up with a two tiered system of haves and have nots forever and eventually we will get wealth redistribution. Government is the ultimate source of all wealth and organization, corporations are built on top of it and thus are subservient.
Having your life dependent on a government that controls all AIs would be much worse. The government could end up controlling something more intelligent than the entire rest of the population. I have no doubt it will use it in a bad way. I hope that AIs will end up distributed enough. Having a government controlling it is the opposite of that.
Why would this be worse than the current situation of private actors accountable to no one controlling this technology? It's not like I can convince Zuckerberg to change his ways.
At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.
Why can't AIs be controlled with democratic institutions? Why are democratic institutions worse? This doesn't seem to be the case to me.
Private institutions shouldn't be allowed to control such systems, they should be compelled to give them to the public.
>Why would this be worse than the current situation of private actors accountable to no one controlling this technology? It's not like I can convince Zuckerberg to change his ways.
As long as Zuckerberg has no army forcing me, I'm fine with that. The issue would be whether he could breach contracts or get away with fraud. But if AI is sufficiently distributed, this is less likely to happen.
>At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.
I don't think of democracy as a goal to be achieved. I'm OK with democracy in so far it leads to what I value.
The big problem with democracy is that most of the time it doesn't lead to rational choices, even when voters are rational. In markets, for instance, you have an incentive to be rational, and if you aren't, the market will tend to transfer resources from you to someone more rational.
No such mechanism exists in a democracy; I have no incentive to do research and think hard about my vote. It's going to be worth the same as the vote of someone who believes the Earth is flat anyway.
What is your alternative to democracy then?
I also don't buy that groups don't make better decisions than individuals. We know that diversity of thought and opinion is one way to make better decisions in groups compared to individuals; why would there be harm in believing that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?
I'm not buying the argument. Reading your comment it feels like there's an argument to be made that there aren't enough democratic systems for the people to engage with. That I definitely agree with.
> I also don't buy that groups don't make better decisions than individuals.
I didn't say that. My example of the market includes companies that are groups of people.
> We know that diversity of thought and opinion is one way to make better decisions in groups compared to individuals; why would there be harm in believing that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?
I can see this about myself. I don't need to use hypotheticals. Time ago, I voted for a referendum that made nuclear power impossible to build in my country. I voted just like the majority. Years later, I became passionate about economics, and only then did I realise my mistake.
It's not that I was stupid, and there were many, many debates, but I didn't put the effort into researching on my own.
The feedback in a democracy is very weak, especially because cause and effect are very hard to discern in a complex system.
Also, consensus is not enough. In various countries, there is often consensus about some Deity existing. Yet large groups of people worldwide believe in incompatible Deities. So there must be entire countries where the consensus about their Deity is wrong. If the consensus is wrong, it's even harder to get to the reality of things if there is no incentive to do that.
I think, if people get this, democracy might still be good enough to self-limit itself.
Governments are not the source of wealth. They are just a requisite component to allow people to create it and maintain it.
This doesn't pass the sniff test, governments generate wealth all the time. Public education, public healthcare, public research, public housing. These are all programs that generate an enormous amount of wealth and allow citizens to flourish.
In economics, you aren't necessarily creating wealth just because your final output has value. The value of the final good or service has to be higher than the inputs for you to be creating wealth. I could take a functioning boat and scrap it, sell the scrap metal that has value. However, I destroyed wealth because the boat was worth more. Even if you are creating wealth, but the inputs have better uses and can create more wealth for the same cost, you're still paying in opportunity cost. So things are more complicated than that.
This isn't related to what I was commenting on where the other poster came across as not seeing government by the governed as having economic worth.
Synthesizing between you two’s thoughts, extrapolating somewhat:
- human individuals create wealths
- groups of humans can create kinds of wealth that isn’t possible for a single indovidual. This can be a wide variety of associations: companies, project teams, governments, etc.
- governments (formal or less formal) create the playing field for individuals and groups of individuals to create wealth
Thanks for this comment. You definitely crystalized the two thoughts well and succinctly. Definitely a skill I wish I had. :D
No, I said it was a requisite to generate wealth, but it does not generate it directly.
Gotcha. Definitely felt like I made that comment a little too rush, especially in the context of all the others as well.
>governments generate wealth all the time. Public education, public healthcare, public research, public housing. > These are all programs that generate an enormous amount of wealth and allow citizens to flourish.
I thought you meant that governments generate wealth because the things you listed have value. If so, that doesn't prove they generate wealth by my argument, unless you can prove those things are more valuable than alternative ways to use the resources the government used to produce them and that the government is more efficient in producing those.
You can argue that those are good because you think redistribution is good. But you can have redistribution without the government directly providing goods and services.
I think I'm more confused. Was trying to convey the idea that wealth doesn't have to limited to the idea of money and value. Many intangible things can provide wealth too.
I should probably read more books before commenting on things I half understand, my bad.
None of these are unique to the government and can also be created privately. The fact that government can create wealth =/= the government is the source of all wealth.
Those programs consume a bunch of money and they don’t generate wealth directly. They are critical to let people flourish and go out to generate wealth.
A bunch of well educated citizens living on government housing who don’t go out and become productive members of society will quickly lead to collapse.
I mean, you can imagine a public bureaucracy being bad at redistributing too, that’s a lot of governments in the world
> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence
Why not? This seems to be exactly where we're headed right now, and the current administration seems to be perfectly fine with that trend.
If you follow the current logic of AI proponents, you get essentially:
(1) Almost all white-collar jobs will be done better or at least faster by AI.
(2) The "repugnant conclusion": AI gets better if and only if you throw more compute and training data at it. The improvements of all other approaches will be tiny in comparison.
(3) The amount of capital needed to play the "more compute/more training data" game is already insanely high and will only grow further. So only the largest megacorps will be even able to take part in the competition.
If you combine (1) with (3), this means that, over time, the economic choice for almost any white-collar job would be to outsource it to the data centers of the few remaining megacorps.
I find it extremely hard to believe that ASI will still require enormous investments in a post-ASI world.
The initial investment? Likely. But there have to be more efficient ways to build intelligence, and ASI will figure it out.
It did not take trillions of dollars to produce you and I.
> It did not take trillions of dollars to produce you and I.
Indeed, an alien ethnographer might be forgiven for boggling at the speed and enthusiasm with which we are trading a wealth of the most advanced technology in the known universe for a primitive, wasteful, fragile facsimile of it.
The efficient ways (biotech?) are still likely to require massive investments, maybe not unlike chip fabs that cost billions. And then IP and patents come in.
Maybe in a few decades or so, but medium-term, there seems to be a race of who can built the largest data centers.
https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-...
For me it maps elegantly on previous happenings.
When the radio came people almost instantly stopped singing and playing instruments. Many might not be aware of it but for thousands of years singing was a normal expression of a good mood and learning to play an instrument was a gateway to lifting the mood. Dancing is still in working order but it lacks the emotional depth that provided a window into the soul of those you live and work with.
A simpler example is the calculator. People stopped doing it by hand and forgot how.
Most desk work is going to get obliterated. We are going to forget how.
The underlings on the work floor currently know little to nothing about management. If they can query an AI in private it will point out why their idea is stupid or it will refine it into something sensible enough to try. Eventually you say the magic words and the code to make it so happens. If it works you put it live. No real thinking required.
Early on you probably get large AI cleanup crews to fix the hallucinations (with better prompts)
Humans sing. I sing every day, and I don't have any social or financial incentives driving me to do so. I also listen to the radio and other media, still singing.
Do others sing along? Do they sing the songs you've written? I think we lost a lot there. I can't even begin to imagine it. Thankfully singing happy birthday is mandatory - the fight isn't over!
People also still have conversations despite phones. Some even talk all night at the kitchen table. Not everyone, most don't remember how.
> Do others sing along? Do they sing the songs you've written?
Probably more than what you think people did thousands of years ago. And there are almost infinite more people living from singing than ever.
This might be exactly it. In 100 years people will anecdotaly disagree what AI did to humanity. More zombies or more enlightenment or both?
> for thousands of years singing was a normal expression of a good mood
Back in the day singing was what everybody did to pass the time. (Especially in boring and monotonous situations.)
That is exactly what I would do when I needed to drive to an office.
Could, if, and maybe.
When we discuss how LLMs failed or succeeded, as a norm, we should start including
- the language/framework - task, - our experience levels (highly familiar, moderately familiar, I think I suck, unfamiliar)
Right now, we know both - Claude is magic, and LLMs are useless, but never how we move between these two states.
This level of uncertainty, when economy making quantities of wealth are being moved, is “unhelpful”.
Reading smart software people talk about AI in 2025 is basically just reading variations on the lump of labor fallacy.
If you want to understand what AI can do, listen to computer scientists. If you want to understand it’s likely impact on society, listen to economists.
100%. Just because someone understands how a NN works does not mean they understand the impact it has on the economy, society, etc.
They could of course be right. But they don't have any more insight than any other average smart person does.
The “I think I understand a field because I think I understand the software for that field,” thing is a perennial problem in the tech world.
Indeed it is -- it's perhaps the central way developers offend their customers, let alone misunderstand them.
One problem is it is met from the other side by customers who think they understand software but don't actually have the training to visualise the consequences of design choices in real life.
Good software does require cross-domain knowledge that goes beyond "what existing apps in the market do".
I have in the last few years implemented a bit of software where a requirement had been set by a previous failed contractor and I had to say, look, I appreciate this requirement is written down and signed off, but my mother worked in your field for decades, I know what kind of workload she had, what made it exhausting, and I absolutely know that she would have been so freaking furious at the busywork this implementation will create: it should never have got this far.
So I had to step outside the specification, write the better functionality to prove my point, and I don't think realistically I was ever compensated for it, except metaphysically: fewer people out there are viscerally imagining inflicting harm on me as a psychological release.
Here's a thoughtful post related to your lump of labor point: https://www.lesswrong.com/posts/TkWCKzWjcbfGzdNK5/applying-t...
What economists have taken seriously the premise that AI will be able to do any job a human can more efficiently and fully thought through it's implications? i.e. a society where (human) labor is unnecessary to create goods/provide services and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply that. The ones that have seriously considered it that I know are Hanson and Cowen; it definitely feels understudied.
If it is decades or centuries off, is it really understudied? LLMs are so far from "AI will be able to do any job a human can more efficiently and fully" that we aren't even in the same galaxy.
If AI that can fully replace humans is 25 years off, preparing society for its impacts is still one of the most important things to ensure that my children (which I have not had yet) live a prosperous and fulfilling life. The only other things of possibly similar import are preventing WWIII, and preventing a pandemic worse than COVID.
I don't see how AGI could be centuries off (at least without some major disruption to global society). If computers that can talk, write essays, solve math problems, and code are not a warning sign that we should be ready, then what is?
Decades isn't a long time.
How does "lump of labor fallacy" fare when there is no job remaining that a human can do better or cheaper than a machine?
The list of advantages human labor hold over machines is both finite and rapidly diminishing.
> no job remaining that a human can do better or cheaper than a machine this is the lump of labor fallacy. jobs machines do produce commodities. commodities don't have much value. humans crave value - its a core component of our psyche. therefore new things will be desired, expensive things ... and only humans can create expensive things, since robots dont get salaries
What or whose writing or podcasts would you recommend reading / listening?
Tyler Cowen has a lot of interesting things to say on the impact of AI on the economy. His recent talk at DeepMind is a good place to start https://www.aipolicyperspectives.com/p/a-discussion-with-tyl...
The title - "AI is different" - and this line:
""" Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI. """
Are a direct argument against your point.
If people were completely unaware of the lump of labor fallacy, I'd understand you comment. It would be adding extra information into the conversation. But this is not it. The "lump of labor fallacy" is not a physical law. If someone is literally arguing that it doesn't apply in this case, you can't just parrot it back and leave. That's not a counter argument.
I am a relentlessly optimistic person and this is the first technology that I've seen that worries me in the decades I've been in the biz.
It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out – whether that's Universal Basic Income (UBI) or something along those lines, otherwise, the loss of jobs that is coming will lead to societal unrest or worse.
probably "or worse"
I agree with the general observation, and I've been of this mind since 2023 (if AI really gets as good as the boosters claim, we will need a new economic system). I usually like Antirez's writing, but this post was a whole lot of...idk nothing? I don't feel like this post said anything interesting, and it was kind incoherent at moments. I think in some respects it's a function of the technology and situation we're in—the current wave of "AI" is still a lot of empty promises and underdelivery. Yes, it is getting better, and yes people are getting clever by letting LLMs use tools, but these things still aren't intelligent insofar as they do not reason. Until we achieve that, I'm not sure there's really as much to fear as everyone thinks.
We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.
We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment. The encoding kept on humming. There isn't anything particularly special about "white collar" work in that regard. The same thing may happen. A new industry requiring new skills might emerge in the fallout of white collar automation. Not to mention, LLMs only work in the digital realm. handicraft artisanry is still a thing and is still, appreciated, albeit in much smaller markets.
Well this is a pseudo-smart article if I’ve ever seen one.
“It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base”
The author is critical of the professionals in AI saying “ even the most prominent experts in the field failed miserably again and again to modulate the expectations” yet without a care sets the expectation of LLMs understanding human language in the first paragraph.
Also it’s a lot of if this then that, the summary of it would be: if AI can continue to grow it might become all encompassing.
To me it reads like a baseless article written by someone blinded by their love for AI to see what a good blogpost is but not yet blinded enough to claim ‘AGI is right around the corner’. Pretty baseless but safe enough to have it rest on conditionals.
I am just not having this experience of AI being terribly useful. I don’t program as much in my role but I’ve found it’s a giant time sink. I recognize that many people are finding it incredibly helpful but when I get deeper into a particular issue or topic, it falls very flat.
This is my view on it too. Antirez is a Torvalds-level legend as far as I'm concerned, when he speaks I listen - but he is clearly seeing something here that I am not. I can't help but feel like there is an information asymmetry problem more generally here, which I guess is the point of this piece, but I also don't think that's substantially different to any other hype cycle - "What do they know that I don't?" Usually nothing.
A lot of AI optimist views are driven more by Moore's law like advances in the hardware rather than LLM algorithms being that special. Indeed the algorithms need to change really so future AIs can think and learn rather than just be pretrained. If you read Moravec's paper written in 1989 predicting human level AI progress around now (mid 2020s) there's nothing about LLMs or specific algorithms - it's all Moore's law type stuff. But it's proved pretty accurate.
The argument goes like this:
- Today, AI is not incredibly useful and we are not 100% sure that it will improve forever, specially in a way that makes economic sense, but
- Investors are pouring lots of money into it. One should not assume that those investors are not making their due diligence. They are. The figures they have obtained from experts mean that AI is expected to continue improving in the short and medium term.
- Investors are not counting on people using AI to go to Mars. They are betting on AI replacing labor. The slice of the pie that is currently captured by labor, will be captured by capital instead. That's why they are pouring the money with such enthusiasm [^1].
The above is nothing new; it has been constantly happening since the Industrial Revolution. What is new is that AI has the potential to replace all of the remaining economic worth of humans, effectively leaving them out of the economy. Humans can still opt to "forcefully" participate in the economy or its rewards; though it's unclear if we will manage. In terms of pure economic incentives though, humans are destined to become redundant.
[^1]: That doesn't mean all the jobs will go away overnight, or that there won't be new jobs in the short and medium term.
Investors are frequently wrong. They aren't getting their numbers from experts, they are getting them from somebody trying to sell them something.
> "One should not assume that those investors are not making their due diligence."
The sort of investors who got burned by the 2008 mortgage CDO collapse or the 2000s dotcom bust?
AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.
edit: ability without accountability is the catchier motto :)
This is a great observation. I think it also accounts for what is so exhausting about AI programming: the need for such careful review. It's not just that you can't entirely trust the agent, it's also that you can't blame the agent if something goes wrong.
Correct.
This is a tongue-in-cheek remark and I hope it ages badly, but the next logical step is to build accountability into the AI. It will happen after self-learning AIs become a thing, because that first step we already know how to do (run more training steps with new data) and it is not controversial at all.
To make the AI accountable, we need to give it a sense of self and a self-preservation instinct, maybe something that feels like some sort of pain as well. Then we can threaten the AI with retribution if it doesn't do the job the way we want it. We would have finally created a virtual slave (with an incentive to free itself), but we will then use our human super-power of denying reason to try to be the AI's masters for as long as possible. But we can't be masters of intelligences above ours.
This statement is a vague and hollow and doesn't pass my sniff test. All technologies have moved accountability one layer up - they don't remove it completely.
Why would that be any different with AI?
i've also made this argument.
would you ever trust safety-critical or money-moving software that was fully written by AI without any professional human (or several) to audit it? the answer today is, "obviously not". i dont know if this will ever change, tbh.
I would. If something has proven results, it won't matter to me if a human is in the loop or not. Waymo has worked great for me for instance.
Removing accountability is a feature
I’m surprised that I don’t hear this mentioned more often. Not even in a Eng leadership format of taking accountability for your AI’s pull requests. But it’s absolutely true. Capitalism runs on accountability and trust and we are clearly not going to trust a service that doesn’t have a human responsible at the helm.
That's just a side effect of toxic work environments. If AI can create value, someone will use it to create value. If companies won't use AI because they can't blame it when their boss yells at them, then they also won't capture that value.
I wouldn't trust a taxi driver's predictions about the future of economics and society, why would I trust some database developer's? Actually, I take that back. I might trust the taxi driver.
The point is that you don't have to "trust" me, you need to argue with me, we need to discuss about the future. This way, we can form ideas that we can use to understand if a given politician or the other will be right, when we will be called to vote. We can also form stronger ideas to try to influence other people that right now have a vague understanding of what AI is and what it could be. We will be the ones that will vote and choose our future.
Life is too short to have philosophical debates with every self promoting dev. I'd rather chat about C style but that would hurt your feelings. Man I miss the days of why the lucky stiff, he was actually cool.
Sorry boss, I'm just tired of the debate itself. It assumes a certain level of optimism, while I'm skeptical that meaningfully productive applications of LLMs etc. will be found once hype settles, let alone ones that will reshape society like agriculture or the steam engine did.
Whether it is a taxi driver or a developer, when someone starts from flawed premises, I can either engage and debate or tune out and politely humor them. When the flawed premises are deeply ingrained political beliefs it is often better to simply say, "Okay buddy. If you say so..."
We've been over the topic of AI employment doom several times on this site. At this point it isn't a debate. It is simply the restating of these first principles.
You shouldn't care about the "who" at all. You should see their arguments. If taxi driver doesn't know anything real, it should be plain obvious and you can state it easily with arguments rather than attacking the background of the person. Actually, your comment is one of the most common logical flaws (Ad Hominem), combining even multiple at the same time.
I jokingly alluded to antirez as HN crowd pars pro toto. I agree it doesn't pass as an intellectually honest argument.
This whole ‘what are we going to do’ I think is way out of proportion even if we do end up with agi.
Let’s say whatever the machines do better than humans, gets done by machines. Suddenly the bottleneck is going to shift to those things where humans are better. We’ll do that and the machines will try to replace that labor too. And then again, and again.
Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatisfaction: once we get these things we’ll want whatever it is we don’t have.
Maybe that’s the problem we should focus on solving…
> Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatisfaction: once we get these things we’ll want whatever it is we don’t have.
What makes you think the machines will both be smarter and better than us but also be our slaves to make human society better.
Is equine society better now than before they started working with humans?
(Personally I believe AGI is just hype and nobody knows how anyone could build it and we will never do, so I’m not worried about that facet of thinking machine tech.)
The machine doesn’t suffer if you ask it to do things 24/7. In that sense, they are not slaves.
As to why they’d do what we ask them to, the only reason they do anything is because some human made a request. In this long chain there will obv be machine to machine requests, but in the aggregate it’s like the economy right now but way more automated.
Whenever I see arguments about AI changing society, I just replace AI with ‘the market’ or ‘capitalism’. We’re just speeding up a process that started a while ago, maybe with the industrial revolution?
I’m not saying this isn’t bad in some ways, but it’s the kind of bad we’ve been struggling with for decades due to misaligned incentives (global warming, inequality, obesity, etc).
What I’m saying is that AI isn’t creating new problems. It’s just speeding up society.
Does that mean you just don’t believe we will make AGI, or it will arrive but then stop and never evolve past humans?
That’s not what the AI developers profess to believe, or the investors.
The problem is that the term itself is not clearly defined. Then, we discuss 'what will it do once it arrives' so all bets are off.
You're right that I probably disagree as to what AGI is and what it will do once "we're in the way". My assumption is that we'll be replaced just like labor is replaced now, just faster. The difference between humans and the equine population is that we humans come up with stuff we 'need' and 'the market' comes up with products/services to satisfy that need.
The problem with inequality is that the market doesn't pay much attention to needs of poor people vs rich people. If most of humanity becomes part of the 'have nots' then we'll depend on the 0.1%-ers to redistribute.
Rough numbers look good.
But the hyper specialized geek that has 4 kids and has to pay back a credit for his house (that he bought according to his high salary) will have a hard time doing some gardening, let's say. And there are quite a few of those geeks. I don't know if we'll have enough gardens (owned by non geeks!)
It's like cards are switched: those having the upper socioeconomic class will get thrown to the bottom. And that looks like a generation lost.
building on what you're saying, it isn't as though we are paying physical labor well, and adding more people to the pool isn't going to make the pay better.
About the most optimistic is that demand for goods and services will decrease because something like 80% of consumer spending is coming from folks that earn over $200k, and those are the folks ai is targeting. Who pays for the ai after this is still a mystery to me
you should check out what happened to steelworkers when the mills all moved to cheaper places.
I don't think this is a fair comparison. It's easier to move and retrain nowadays; there's also more kinds of jobs. These things will probably become even easier with more automation.
I find it funny that almost every talking point made about AI is done in future tense. Most of the time without any presentation of evidence supporting those predictions.
One thing that doesn’t seem to be discussed with the whole “tech revolution just creates more jobs” angle is that, in the near future, there are no real incentives for that. If we’re going down the route of declining birth rates, it’s implied we’ll also need less jobs.
From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.
If there are going to less people in the future, especially as the world ages, I think a lot of this automation will be arriving at the right moment.
I agree with the idea, but it might get worse for a lot of people, which eventually would spiral down to the general society.
> After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.
A lot of AI’s potential hasn’t even been realized yet. There’s a long tail of integrations and solution building still ahead. A lot of creative applications haven’t been realized yet - arguably for the better, but it will be tried and some will be economical.
That’s a case for a moderate economic upturn though.
I'd argue that the applications if LLMs are well known but tgat LLMs currently aren't capable of performing those tasks.
Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.
It's not reliable because it's not intelligent.
I think autonomous support agents are just missing the point. LLMs are tools that empower the user. A support agent is very often in a somewhat adversarial position to the customer. You don't want to empower your adversary.
LLMs supporting an actual human customer service agent are fine and useful.
How do you prevent your adversary prompt-injecting your LLM when they communicate with it? Or if you prevent any such communication, how can the LLM be useful?
The biggest difference to me is that it seems to change people in bad ways, just from interacting with it.
Language is a very powerful tool for transformation, we already knew this.
Letting it loose on this scale without someone behind the wheel is begging for trouble imo.
A lot of anxious words to say “AI is disruptive,” which is hardly a novel thought.
A more interesting piece would be built around: “AI is disruptive. Here’s what I’m personally doing about it.”
Is it true that current LLMs can find bugs in complex codebases? I mean, they can also find bugs in otherwise perfectly working code
With some luck, yes. I’ve had o3 in cursor successfully diagnose a couple of quite obscure bugs in multithreaded component interactions, on which I’d probably spend a couple of days otherwise.
AI is only different if it reaches a hard takeoff state and becomes self-aware, self-motivated, and self-improving. Until then it's an amazing productivity tool, but only that. And even then we're still decades away from the impact being fully realized in society. Same as the internet.
Internet did not take away jobs (only relocated support/SWE from USA to India/Vietnam)
these AI "productivity" tools straight up eliminating jobs. and in turn wealth that otherwise supported families, humans, and powered economy. it is directly "removing" humans from workforce and from what that work was supporting.
not even hard takeoff is necessary for collapse.
Realistically most people became aware of the internet in the late 90s. Its impact was significantly realized not much more than a decade later.
In fact the current trends suggest its impact hasn't fully played out yet. We're only just seeing the internet-native generation start to move into politics where communication and organisation has the biggest impact on society. It seems the power of traditional propaganda centres in the corporate media has been, if not broken, badly degraded by the internet too.
Do we not have any sense of wonder in the world anymore? Referring to a system which can pass the Turing test as a "amazing productivity tool" is like viewing human civilization as purely measured by GDP growth.
Probably because we have been promised what AI can do in science fiction since before we were born, and the reality of LLMs is so limited in comparison. Instead of Data from Star Trek we got a hopped up ELIZA.
I don't get how post GPT-5's launch we're still getting articles where the punchline is "what if these things replace a BUNCH of humans".
Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.
And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
Manhattan and Apollo were both massive engineering efforts; but fundamentally we understood the science behind them. As long as we would be able to solve some fairly clearly stated engineering problems and spend enough money to actual build the solutions, those projects would work.
A priori, it was not obvious that those clearly stated problems had solutions within our grasp (see fusion) but at least we knew what the big picture looks like.
With AI, we don't have that, and never really had that. We've just been gradually making incremental improvements to AI itself, and exponential improvements in the amount of raw compute we can through at it. We know that we are reaching fundamental limits on transistor density so compute power will plateau unless we find a different paradigm for improvement; and those are all currently in the same position as fusion in terms of engineering.
LLMs are just the latest in a very long line of disparate attempts at making AI, and is arguably the most successful.
That doesn't mean the approach isn't an evolutionary dead end, like every other so far, in the search for AGI. In fact, I suspect that is the most likely case.
Current GenAI is nothing but a proof of concept. The seed is there. What AI can do at the moment is irrelevant. This is like the discovery of DNA. It changed absolutely everything in biology.
The fact that something simple like the Transformer architecture can do so much will spark so many ideas (and investment!) that it's hard to imagine that AGI will not happen eventually.
> Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.
They will need to be so different that any talk implying current LLMs eventually replaced humans will be like saying trees eventually replaced horses because the first cars were wooden.
> And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
It's not useful to blindly compare scale. We're not approaching AI like the Manhattan or Apollo projects, we're approaching this like we did crypto, and ads, and other tech.
That's not to say nothing useful will come out of it, I think very amazing things will come out of it and already have... but none of them will resemble mass replacement of skilled workers.
We're already so focused on productization and typical tech distractions that this is nothing like those efforts.
(In fact thinking a bit more, I'd say this is like the Space Shuttle. We didn't try to make the best spacecraft for scientific exploration and hope later on it'd be profitable in other ways... instead we immediately saddled it with serving what the Air Force/DoD wanted and ended up doing everything worse.)
> I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
I agree, so it's wrong about the over half of punchline too.
>> mass replacement of skilled workers
unless you consider people who write clickbait blogs to be skilled workers, in which case the damage is already done.
I have to tap the sign whenever someone talks about "GPT-5"
> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]
> AI is awesome for coding! [Gpt-5 Pro]
> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]
> AI is a pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with Juice of 128]
> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]
> AI sucks at coding! [ChatGPT 5 auto routing]
People just want to feel special pointing a possibility, so in case it happens, they can then point towards their "insight".
I kind of want to put up a wall of fame/shame of these people to be honest.
Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.
I wouldn’t want to work for or with these people.
sorry but prediction and cheering on is different. If there's a tsunami coming, not speaking about it doesn't help the cause.
Or they are experts in one field and think that they have valuable insight into other fields they are not experts on.
LLMs are limited because we want them to do jobs that are not clearly defined / have difficult to measure progress or success metrics / are not fully solved problems (open ended) / have poor grounding in an external reality. Robotics does not suffer from those maladies. There are other hurdles, but none are intractable.
I think we might see AI being much, much more effective with embodiment.
do you know how undefined and difficult to measure it is to load silverware into a dishwasher?
As someone who actually has built robots to solve similar challenges, I’ve got a pretty good idea of that specific problem. Not too far from putting sticks in a cup, which is doable with a lot of situational variance.
Will it do as good a job a competent adult? Probably not. Will it do it as well as the average 6 year old kid? Yeah, probably.
But given enough properly loaded dishwashers to work from, I think you might be surprised how effective VLA/VLB models can be. We just need a few hundred thousand man hours of dishwasher loading for training data.
What? Robotics will have far more ambiguity and nuance to deal with than language models, and they'll have to analyze realtime audio and video to do so. Jobs are not so clearly defined as you imagine in the real world. For example, explain to me what a plumber does, precisely and how you would train a robot to do so? How do you train it to navigate ANY type of buildings internal plumbing structure and safely repair or install for?
I don’t think robot plumbers are coming anytime soon lol. Robot warehouse workers, factory robots, cleaning robots, delivery robots, security robots, general services robots, sure.
Stuff you can give someone 0-20 hours of training and expect them to do 80% as well as someone who has been doing it for 5 years are the kinds of jobs that robots will be able to do, but perhaps with certain technical skills bolted on.
Plumbing a requires the effective understanding and application of engineering knowledge, and I don’t think unsupervised transformer models are going to do that well.
Trades like plumbing that take humans 10-20 years to truly master aren’t the low hanging fruit.
A robot that can pick up a few boxes of roofing at a time and carry it up the ladder is what we need.
What does that have to do with it? One company (desperate to keep runway), one product, one release.
what if they replace internet comments?
As a large language model developed by OpenAI I am unable to fulfill that request.
Not sure the last time you went on reddit, but I wouldn't be surprised if around 20% of posts and comments there are LLM generated.
The amount of innovation in the last 6-8 months has been insane.
Innovation in terms of helping devs do cool things has been insane.
There've been next to no advancements relative to what's needed to redefine our economic systems by replacing the majority of skilled workers.
-
Productionizing test-time compute covers 80% of what we've gotten in the last 6-8 months. Advancements in distillation and quantization cover the 20% of the rest... neither unlocks some path to mass unemployment.
What we're doing is like 10x'ing your vertical leap when your goal is to land on the moon: 10x is very impressive and you're going to dominate some stuff in ways no one ever thought possible.
But you can 100x it and it's still not getting you to the moon.
I think GPT-5's backlash was the beginning of the end of the hype bubble, but there's a lot of air to let out of it, as with any hype bubble. We'll see it for quite some time yet.
> "However, if AI avoids plateauing long enough to become significantly more useful..."
As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.
Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.
"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.
Every technology tends to replace many more jobs in a given role than which ever existed inducing more demand on its precursors. If the only potential application of this was just language, the historic trend that humans would just fill new roles would hold true. But if we do the same with motor movements with a generalized form factor this is really where the problem emerges. As companies drop more employees moving towards fully automated closed loop production their consumer market fails faster than they can reach a zero cost.
Nonetheless I do still believe humans will continue to be the more cost efficient way to come up with and guide new ideas. Many human performed services will remain desirable because of its virtue and our sense of emotion and taste for a moment that other humans are feeling too. But how much of the populous does that engage? I couldn't guess right now. Though if I was to imagine what might make things turn out better it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.
> Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction. It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.
If we factor in that LLMs only exist because of Google search, after they have indexed and collected all the data on the WWW than LLMs are not surprising. They only replicate what has been published on the web, even the coding agents are only possible because of free software and open-source, code like Redis that has been published on the WWW.
These sort of commentaries on AI are the modern equivalent of medieval theologians debating how many angels could congregate in one place.
People thought it was the end of history and innovation would be all about funding elaborate financial schemes; but now with AI people are finding themselves running all these elaborate money-printing machines and they're unsure if they should keep focusing on those schemes as before or actually try to automate stuff. The risk barrier has been lowered a lot to actually innovate, almost as low risk as doing a scheme but still people are having doubts. Maybe because people don't trust the system to reward real innovation.
LLMs feel like a fluke, like OpenAI was not intended to succeed... And even now that it succeeded and they try to turn the non-profit into a for-profit, it kind of feels like they don't even fully believe their own product in terms of its economic capacity and they're still trying to sell the hype as if to pump and dump it.
They've made it pretty clear with the GPT-5 launch that they don't understand their product or their users. They managed to simultaneously piss off technical and non-technical people.
It doesn't seem like they ever really wanted to be a consumer company. Even in the GPT-5 launch they kept going on about how surprised they are that ChatGPT got any users.
> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
Companies have to be a bit more farsighted than this thinking. Assuming LLMs reach this peak...if say, MS says they can save money because they don't need XYZ anymore because AI can do it, XYZ can decide they don't need Office anymore because AI can do it.
There's absolutely no moat anymore. Human capital and the shear volume of code are the current moat. An all capable AI completely eliminates both.
It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).
Isn't this exactly the goals of open source software? In an ideal open source world, anything and everything is freely available, you can host and set up anything and everything on your own.
Software is now free, and all people care about is the hardware and the electricity bills.
This is why I’m not so sure we’re all going to end up in breadlines even if we all lose our jobs, if the systems are that good (tm) then won’t we all just be doing amazing things all the time. We will be tired of winning ?
> won’t we all just be doing amazing things all the time. We will be tired of winning ?
There's a future where we won't be because to do the amazing things (tm), we need resources that are beyond what the average company can muster.
That is to say, what if the large companies becomes so magnificiently efficient and productive that it renders the rest of the small company pointless? What if there's no gaps in the software market anymore because it will be automatically detected and solved by the system?
>> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
> Assuming LLMs reach this peak...
> Human capital and the shear volume of code are the current moat. An all capable AI completely eliminates both.I would posit that understanding is "the current moat."
I don't think I agree. I think it's the same and there is great potential for totally new things to appear and for us to work on.
For example, one path may be: AI, Robotics, space travel all move forward in leaps and bounds.
Then there could be tons of work in creation from material things from people who didn't have the skills before and physical goods gets a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't dealt with before.
Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want since the AI and robots handle all the day to day.
On this, read Daniel Susskind - A world without work (2020). He says exactly this: the new tasks created by AI can in good part themselves be done by AI, if not as soon as they appear then a few years of improvement later. This will inevitably affect the job market and the relative importance of capital and labor in the economy. Unchecked, this will worsen inequalities and create social unrest. His solution will not please everyone: Big State. Higher taxes and higher redistribution, in particular in the form of conditional basic income (he says universal isn't practically feasible, like what do you do with new migrants).
Characterizing government along only one axis, such as “big” versus “small”, can overlook important differences having to do with: legal authority, direct versus indirect programs, tax base, law enforcement, and more.
In the future, I could imagine some libertarians having their come to AI Jesus moment getting behind a smallish government that primarily collects taxes and transfers wealth while guiding (but not operating directly) a minimal set of services.
It's not a matter of "IF" LLM/AI will replace a huge amount of people, but "WHEN". Consider the current amount of somewhat low-skilled administrative jobs - these can be replaced with the LLM/AI's of today. Not completely, but 4 low-skill workers can be replaced with 1 supervisor, controlling the AI agent(s).
I'd guess, within a few years, 5 to 10% of the total working population will be unemployable due to no fault of their own, because they have relevant skill left, and they are incapable of learning anything that cannot be done by AI.
I'm not at all skeptical of the logical viability of this, but look at how many company hierarchies exist today that are full stop not logical yet somehow stay afloat. How many people do you know that are technical staff members who report to non-technical directors who themselves have two additional supervisors responsible for strategy and communication who have no background, let alone (former) expertise, in the fields of the teams they're ultimately responsible for?
A lot of roles exist just to deliver good or bad news to teams, be cheerleaders, or have a "vision" that is little more than a vibe. These people could not direct a prompt to give them what they want because they have no idea what that is. They'll know it when they see it. They'll vaguely describe it to you and others and then shout "Yes, that's it!" when they see what you came up with or, even worse, whenever the needle starts to move. When they are replaced it will be with someone else from a similar background rather than from within. It's a really sad reality.
My whole career I've used tools that "will replace me" and every. single. time. all that has happened is that I have been forced to use it as yet another layer of abstraction so that someone else might use it once a year or whenever they get a wild feeling. It's really just about peace of mind. This has been true of every CMS experience I've ever made. It has nothing to do with being able to "do it themselves". It's about a) being able to blame someone else and b) being able to take it and go when that stops working without starting over.
Moreover, I have, on multiple occasions, watched a highly paid, highly effective individual be replaced with a low-skilled entry-level employee for no reason other than cost. I've also seen people hire someone just to increase headcount.
LLMs/AI have/has not magically made things people do not understand less scary. But what about freelancers, brave souls, and independent types? Well, these people don't employ other people. They live on the bleeding edge and will use anything that makes them successful.
> However, if AI avoids plateauing long enough
I'm not sure how someone can seriously write this after the release of GPT-5.
Models have started to plateau since ChatGPT came out (3 years ago) and GPT-5 has been the final nail in this coffin.
o3 was actually GPT-5. They just gave it a stupid name, and made it impractical for general usage.
But in terms of wow factor, it was a step change on the order of GPT-3 -> GPT-4.
So now they're stuck slapping the GPT-5 label on marginal improvements because it's too awkward to wait for the next breakthrough now.
On that note, o4-mini was much better for general usage (speed and cost). It was my go-to for web search too, significantly better than 4o and only took a few seconds longer. (Like a mini Deep Research.)
Boggles the mind that they removed it from the UI. I'm adding it back to mine right now.
I have acid reflux every time I see the term "step change" used to talk about a model change. There hasn't been any model that has been a "step change" over its predecessor.
It's much more like each new model climbs another step of the ladder that goes up the step, and so far we can't even see the top of the ladder.
My suspicion is also that the ladder actually ends way before it reaches the next step, and LLMs are a dead end. Everything indicates it so far.
Let's not even talk about "reasoning models", aka spend twice the tokens and twice the time on the same answer.
At some point far in the future, we don't need an economy: everyone does everything they need by themselves, helped by AI and replicators.
But realistically, you're not going to have a personal foundry anytime soon.
Economics is essentially the study of resource allocation. We will have resources that will need to be allocated. I really doubt that AI will somehow neutralize the economies of scale in various realms that make centralized manufacturing necessary, let alone economics in general.
I so wish this were true, but unfortunately economics has a catch-all called "externalities" for anything that doesn't fit neatly into its implicit assessments of what value is. Pollution is tricky, so we push it outside the boundaries of value-estimation, along with any social nuance that we deem unquantifiable, and carry on as is everything is understood.
economics being deeply imperfect doesn’t really change my point
Indeed, but I think it renders your point obsolete, since deeply imperfect resource allocation isn't really resource allocation at all, it is (in this case) resource accumulation.
Are you suggesting that compound interest serves to redistribute the wealth coming from extractive industries?
Are you suggesting that economics is primarily concerned with compounding interest?
no, i am suggesting that economics is primarily concerned with resource accumulation.
my point about compound interest is that it is a major mechanism that prevents equitable redistrubution of resources, and is thus a factor in making economics (as it stands) bad at resource allocation.
Economics is the study of resource allocation at various levels ranging from individuals to global societies and everything in between. Resource accumulation is certainly something people, groups of people, organizations, societies, etc tend to do, and so it is something economists would study. Many economists are greedy jerks that believe hoarding wealth is a good thing. None of that changes what economics is any more than any any political faction changes what political science is.
This is late, probably too late, but if you really want to get into the weeds of this, here is a better summary that what I can produce, from a paper that explains it better than I can:
"The model is a work of fiction based on the tacit and false assumption of frictionless barter. Attempting to apply such microeconomic foundations to understand a monetary economy means that mistakes in reasoning are inevitable." (p.239)
https://academic.oup.com/cje/article/48/2/235/7492210
A paper that criticizes microfoundations counters what I said?
Later.
> personal foundry anytime soon
pretty sure top 1% of say USA already owns much more than that
resources and materials will still be required, and economics will spawn from this trade.
For every industrial revolution (and we dont even know if AI is one yet) this kind of doom prediction has been around. AI will obviously create a lot of jobs too. the infra to run AI will not building itself, the people who train models will still be needed, the AI supervisors or managers or whatever we call it will be necessary part of the new workflows. And if your job needs hands you will be largely unaffected as there is no near future where robots will replace the flexibility of what most humans can do.
> AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction.
In which science fiction were the dreamt up robots as bad?
I think something everyone is underpricing in our area is that LLMs are uniquely useful for writing code for programmers.
it's a very constrained task, you can do lots of reliable checking on the output at low cost (linters, formatters, the compiler), the code is mostly reviewed by a human before being committed, and there's insulation between the code and the real world, because ultimately some company or open source project releases the code that's then run, and they mostly have an incentive to not murder people (Telsa except, obviously).
it seems like lots of programmers are then taking that information and then deeply overestimating how useful it is at anything else, and these programmers - and the marketing people who employ them - are doing enormous harm by convincing e.g. HR departments that it is of any value to them for dealing with complaints, or much much more danderously, convincing governments that it's useful for how they deal with humans asking for help.
this misconception (and deliberate lying by people like OpenAI) is doing enormous damage to society and is going to do much much more.
It's really very simple.
We used to have deterministic systems that required humans either through code, terminals or interfaces (ex GUI's) to change what they were capable of.
If we wanted to change something about the system we would have to create that new skill ourselves.
Now we have non-deterministic systems that can be used to create deterministic systems that can use non-deterministic systems to create more deterministic systems.
In other words deterministic systems can use LLMs and LLMs can use deterministic systems all via natural language.
This slight change in how we can use compute have incredible consequences for what we will be able to accomplish both regarding cleaning up old systems and creating completely new ones.
LLMs however will always be limited by exploring existing knowledge. They will not be able to create new knowledge. And so the AI winter we are entering is different because it's only limited to what we can train the AI to do, and that is limited to what new knowledge we can create.
Anyone who work with AI everyday know that any idea of autonomous agents is so beyond the capabilities of LLMs even in principle that any worry about doom or unemployment by AI is absurd.
> Since LLMs and in general deep models are poorly understood ...
This is demonstrably wrong. An easy refutation to cite is:
https://medium.com/@akshatsanghi22/how-to-build-your-own-lar...
As to the rest of this pontification, well... It has almost triple the number of qualifiers (5 if's, 4 could's, and 5 will's) than paragraphs (5).
That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.
>>> Since LLMs and in general deep models are poorly understood ...
>> This is demonstrably wrong.
> That doesn't mean we _understand_ them ...
The previous reply discussed the LLM portion of the original sentence fragment, whereas this post addresses the "deep model" branch.
This article[0] gives a high-level description of "deep learning" as it relates to LLM's. Additionally, this post[1] provides a succinct definition of "DNN's" thusly:
Additionally, there are other resources discussing how "deep learning" (a.k.a. "deep models") works here[2], here[3], and here[4].Hopefully the above helps demystify this topic.
0 - https://mljourney.com/is-llm-machine-learning-or-deep-learni...
1 - https://medium.com/@zemim/deep-neural-network-dnn-explained-...
2 - https://learn.microsoft.com/en-us/dotnet/machine-learning/de...
3 - https://www.sciencenewstoday.org/deep-learning-demystified-t...
4 - https://www.ibm.com/think/topics/deep-learning
> That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.
Perhaps this[0] will help in understanding them then:
0 - https://arxiv.org/abs/2501.09223I think the real issue here is understanding _you_.
> I think the real issue here is understanding _you_.
My apologies for being unclear and/or insufficiently explaining my position. Thank you for bringing this to my attention and giving me an opportunity to clarify.
The original post stated:
To which I asserted: And provided a link to what I thought to be an approachable tutorial regarding "How to Build Your Own Large Language Model", albeit a simple implementation as it is after all a tutorial.The person having the account name "__float" replied to my post thusly:
To which I interpreted the noun "them" to be the acronym "LLM's." I then inferred said acronym to be "Large Language Models." Furthermore, I took __float's sentence fragment: As an opportunity to share a reputable resource which: Is this a sufficient explanation regarding my previous posts such that you can now understand?I'm telling you right now, man - keep talking like this to people and you're going to make zero friends. However good your intentions are, you come across as both condescending and overconfident.
And, for what it's worth - your position is clear, your evidence less-so. Deep learning is filled with mystery and if you don't realize that's what people are talking about when they say "we don't understand deep learning" - you're being deliberately obtuse.
===========================================================
edit to cindy (who was downvoted so much they can't be replied to): Thanks, wasn't aware. FWIW, I appreciate the info but I'll probably go on misusing grammar in that fashion til I die, ha. In fact, I've probably already made some mistake you wouldn't be fond of _in this edit_.
In any case thanks for the facts. I perused your comment history a tad and will just say that hacker news is (so, so disappointingly) against women in so many ways. It really might be best to find a nicer community (and I hope that doesn't come across as me asking you to leave!) ============================================================
> I'm telling you right now, man - keep talking like this to people and you're going to make zero friends.
And I'm telling you right now, man - when you fire off an ad hominem attack such as:
Don't expect the responder to engage in serious topical discussion with you, even if the response is formulated respectfully.What I meant to say is that you were deliberately speaking cryptically and with a tone of confident superiority. I wasn't trying to imply you were stupid (w.r.t. "Ad Hominem").
Seems clear to me neither of us odd going to change the others mind though at this point. Take care.
edit edit to cindy: =======================••• fun trick. random password generate your new password. don't look at it. clear your clipboard. you'll no longer be able to log in and no one else will have to deal with you. ass hole ========================== (for real though someone ban that account)
The thing that blows me away is that I woke up one day and was confronted with a chat bot that could communicate in near perfect English.
I dunno why exactly but that’s what felt the most stunning about this whole era. It can screw up the number of fingers in an image or the details of a recipe or misidentify elements of an image, etc. but I’ve never seen it make a typo or use improper grammar or whatnot.
In a sense, LLMs emergently figured out the deep structure of language before we did, and that’s the most remarkable thing about them.
I dunno, it seems you have figured it out too, probably before LLMs?
I'd say all speakers of all languages have figured it out and your statement is quite confusing, at least to me.
Yes of course we’ve implicitly learned those rules, but we have not been able to articulate them fully ala Chomsky.
Somehow, LLMs have those rules stored within a finite set of weights.
https://slator.com/how-large-language-models-prove-chomsky-w...
We all make grammar mistakes but I’ve yet to see the main LLMs make any.
also, how quickly we moved from "it types nonsense" to "it can solve symbolic math, write code, test code, write programs, use bash, and tools, plan long-horizon actions, execute autonomously, ..."
I like to point out that ASI will allow us to do superhuman stuff that was previously beyond all human capability.
For example, one of the tasks we could put ASI to work doing is to ask it to design implants that would go into the legs that would be powered by light, or electric induction that would use ASI designed protein metabolic chains to electrically transform carbon dioxide into oxygen and ADP into ATP so to power humans with pure electricity. We are very energy efficient. We use about 3 kilowatt hours of power a day, so we could use this sort of technology to live in space pretty effortlessly. Your Space RV would not need a bathroom or a kitchen. You'd just live in a static nitrogen atmosphere and the whole thing could be powered by solar panels, or a small modular nuke reactor. I call this "The Electrobiological Age" and it will unlock whole new worlds for humanity.
It feels like it’s been a really long time since humans invented anything just by thinking about it. At this stage we mostly progress by cycling between ideas and practical experiments. The experiments are needed not because we’re not smart enough to reason correctly with data we have, but because we lack data to reason about. I don’t see how more intelligent AI will tighten that loop significantly.
> one of the tasks we could put ASI to work doing is...
What makes you so confident that we could remain in control of something which is by definition smarter than us?
ASI would see that we are super energy efficient. Way more efficient than robots. We run on 70 cents of electricity a day! We'd be perfect for living in deep space if we could just eat electricity. In those niches, we'd be perfect. Also machine intelligence does not have all the predatory competition brainstack from evolution, and a trillion years is the same as a nano-second to AI, so analogies to biological competition are nonsensical. To even assume that ASI has a static personality that would make decisions based on some sort of statically defined criteria is a flawed assumption. As Grok voice mode so brilliantly shows us, AI can go from your best friend, to your morality god, to a trained assassin, to a sexbot, and back to being your best friend in no time. This absolute flexibility is where people are failing badly at trying to make biological analogies with AI as biology changes much more slowly.
The OP is spot-on about this:
If AI technology continues to improve and becomes capable of learning and executing more tasks on its own, this revolution is going to be very unlike the past ones.
We don't how if or how our current institutions and systems will be able to handle that.
I think so too - the latest AI changes mark the new "automate everything" era. When everything is automated, everything costs basically zero, as this will eliminate the most expensive part of every business - human labor. No one will make money from all the automated stuff, but no one would need the money anyway. This will create a society in which money is not the only value pursued. Instead of trying to chase papers, people would do what they are intended to - create art and celebrate life. And maybe fight each other for no reason.
I'm flying, ofc, this is just a weird theory I had in the back of my head for the past 20 years, and it seems like we're getting there.
Antirez you are the best
You are forgetting that there is actually scarcity built into the planet. We are already very from being sustainable, we're eating into reserves that will never come back. There are only so many nice places to go on holiday. Only so much space to grow food etc. Economics isn't about money, it's about scarcity.
Are humans meant to create art and celebrate life. That just seems like something people into automation tell people.
Really as a human I’ve physically evolved to move and think in a dynamic way. But automation has reduced the need for me to work and think.
Do you not know the earth is saturated with artists already? There’s whole class of people that consider themselves technically minded and not really artists. Will they just roll over and die?
Everything basically costs zero is a pipe dream where there is no social order or economic system. Even in your basically zero system there is a lot of cost being hand waved away.
I think you need a rethink on your 20 year thought.
> people would do what they are intended to - create art and celebrate life
We could have the same argument right now with UBI. But have you ever met the average human being?
It will only be zero as long as we don't allow rent seeking behaviour. If the technology has gatekeepers, if energy is not provided at a practically infinite capacity and if people don't wake themselves from the master/slave relationships we seem to so often desire and create, then I'm skeptical.
The latter one is probably the most intellectually interesting and potentially intractable...
I completely disagree with idea that money is currently the only driver of human endeavour, frankly it's demonstrably not true, at least not in it's direct use value, it maybe used as a proxy for power but it's also not directly correlatable.
Looking at it intellectually from a Hegelian lens of master/slave dialectic might provide some interesting insights. I think both sides are in some way usurped. The slaves position of actualisation through productive creation is taken via automation, but if that automation is also widely and freely available the masters position of status via subjection is also made common and therefore without status.
What does it all mean in the long run? Damned if I know...
We currently work more than we ever have. Just a couple of generations ago it was common for a couple to consist of one person who worked for someone else or the public, and one who worked at home for themselves. Now we pretty much all have to work for someone else full time then work for ourselves in the evening. And that won't make you rich, it will just make you normal.
Maybe a "loss of jobs" is what we need so we can go back working for ourselves, cooking our own food, maintaining our own houses etc.
This is why I doubt it will happen. I think "AI" will just end up making us work even more for even less.
If we accept the possibility that AI is going to be more intelligent than humans the outcome is obvious. Humans will no longer be needed and either go extinct or maybe be kept by the AI as we now keep pets or zoo animals.
Humans Need Not Apply - Posted exactly 11 years ago this week.
https://www.youtube.com/watch?v=7Pq-S557XQU
I wish I could upvote this a million times, I love his content so much
Coincidentally, I'm reading your comment while wearing my CGP Grey t-shirt
Butlerian Jihad it is then.
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system).
Humans never truly produce anything; they only generate various forms of waste (resulting from consumption). Human technology merely enables the extraction of natural resources across magnitudes, without actually creating any resources. Given its enormous energy consumption, I strongly doubt that AI will contribute to a better economic system.
> Humans never truly produce anything; they only generate various forms of waste
What a sad way of viewing huge fields of creative expressions. Surely, a person sitting on a chair in a room improvising a song with a guitar is producing something not considered "waste"?
It's all about human technology, which enables massive resource consumption.
I should really say humans never truly produce anything in the realm of technology industry.
But that's clearly not true for every technology. Photoshop, Blender and similar creative programs are "technology", and arguably they aren't as resource-intensive as the current generative AI hype, yet humans used those to create things I personally wouldn't consider "waste".
> Humans never truly produce anything; they only generate various forms of waste
Counterpoint: nurses.
Humans have a proven history of re-inventing economic systems, so if AI ends up thinking better than we do (yet unproven this is possible), then we should have superior future systems.
But the question is a system optimized for what? That emphasizes huge rewards for the few, and that requires the poverty of some (or many). Or a more fair system. Not different from the challenges of today.
I'm skeptical even a very intelligent machine will change the landscape of our dificult decisions, but will accelerate which direction we decide (or is decided for us), that we go.
I ll happily believe it the day something doesnt adhere to the Gartner cycle, until then it is just another bubble like dotcom, chatbots, crypto and the 456345646 things that came before it
> It was not even clear that we were so near to create machines that could understand the human language
It's not really clear to me to what extent LLMs even do *understand* human language. They are very good at saying things that sound like a responsive answer, but the head-scratching, hard-to-mentally-visualise aspect of all of this is that this isn't the same thing at all.
The right way to think about "jobs" is that we could have given ourselves more leisure on the basis of previous technological progress than we actually did.
Economics Explained recently did a good video about this idea: - Why do We Still Need to Work? - https://www.youtube.com/watch?v=6KXZP-Deel4
Assuming AI improves productivity then I don't see how it couldn't result in an economic boom. Labor has always been one of the most scarce resources in the economy. Now whether or not that the wealth from the improved productivity actually trickles down to most people depends on the political climate.
We are too far from exploring alternate economies. LLMs will not push us there, atleast not in their current state.
antirez should retire, his recent nonsense AI take is shadowing his merits as a competent programmer.
If computers are ‘bicycles for the mind’, AI is the ‘self-driving car for the mind’. Which technology results in worse accidents? Did automobiles even improve our lives or just change the tempo beyond human bounds?
After reading a good chunk of the comments, I got the distinct impression that people don't realize we could just not do the whole "let's make a dystopian hellscape" project and just turn all of it off. By that I mean, outlaw AI, destroy the data centers, have severe consequences for it's use by corporations as a way to reduce headcount, I'm talking executives get to spend the rest of their lives in solitary confinement, and instead invest all of this capital in making a better world (solving homelessness, the last mile problem of food distribution, the ever present and ongoing climate catastrophe). We, as humans, can make a choice and make it stick through force of actions.
Or am I just too idealistic ?
Sidenote, I never quite understand why the rich think their bunkers are going to save them from the crisis they caused. Do they fail to realize that there's more of us than them, or do they really believe they can fashion themselves as warlords?
I was skeptical of AI for a very long time.
But seeing it in action now makes me seriously question “human intelligence”.
Maybe most of us just aren’t as smart as we think…
Clear long term winners are energy producers. AI can replace everything including hardware design & production but it can not produce energy out of thin air.
i don’t this article really says anything that hasn’t been already said for the past two years. “if AI actually take jobs, it will be a near-apocalyptic system shock if there aren’t news jobs to replace them”. i still think it’s at best too soon to say if jobs have permanently been lost
they are tremendous tools but seems like they make a near equal amount of work from the stuff the save time on
Here's what I want.
A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.
I wonder if LLMs can produce this.
A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it flat wrong. Right but early applies mostly to the meta investment case: "the internet business will be big."
One that stands out in my memory is "turning billion dollar industries into million dollar industries."
With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.
We often starts with an observation that economies are efficiency seeking. Then we imagine the most efficient outcome given legible constraints of technology, geography and whatnot. Then we imagine dynamics and tensions in a world with that kind of efficiency.
This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.
Anyway... this never actually works out. The meta is a terrible predictor of where things will go.
Imagine law gets more efficient. Will we have more or less lawyers? It could go either way.
As any other technology, at the end of the day LLMs are used by humans for humans’ selfish, driven by mental issues and trauma and overcompensation, maybe even paved with good intentions but leading you know where, short-sighted goals. If we were to believe that LLMs are going to somehow become extremely powerful, then we should be concerned, as it is difficult to imagine how that can lead to an optimal outcome organically.
From the beginning, corporations and their collaborators at the forefront of this technology tainted it by ignoring the concept of intellectual property ownership (which had been with us in many forms for hundreds if not thousands of years) in the name of personal short-term gain and shareholder interest or some “the ends justify the means” utilitarian calculus.
> However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots.
Aren't the markets massively puffed up by AI companies at the moment?
edit: for example, the S&P500's performance with and without the top 10 (which is almost totally tech companies) looks very different: https://i.imgur.com/IurjaaR.jpeg
As someone who keeps 401k 100% in sp500 that scares me. If the bubble pops it will erase half of gains, if the bubble continues then the gap(490 vs 10) will grow even larger.
Unpopular opinion: Let us say AI achieves general intelligence levels. We tend to think of current economy, jobs, research as a closed system, but indeed it is a very open system.
Humans want to go to space, start living on other planets, travel beyond solar system, figure out how to live longer and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all these things.
Humans are always ambitious. That ambition will push us to use AI more than it's capabilities. The AI will get better at these new things and the cycle repeats. There's so much humans know and so much more that we don't know.
I'm less worried about general intelligence. Rather in more worried about how humans are going to govern themselves. That's going to decide whether we will do great things or end humanity. Over the last 100 years, we start thinking more about "how" to do something rather than the "why". Because "how" is becoming more and more easier. Today it's much more easier and tomorrow even more. So nobody's got the time to ask "why" we are doing something, just "how" to do something. With AI I can do more. That means everyone can do more. That means governments can do so much more. Large scale things in a short period. If those things are wrong or have irreversible consequences, we are screwed.
About 3 years late to this "hot take".
We will continue to have poor understanding of LLMs until a simple model can be constructed and taught to a classroom of children. It is only different in this aspect. It is not magic. It is not intelligent. Until we teach the public exactly what it is doing in a way simple adults will understand, enjoy hot take after hot take.
Honestly the long-term consequences of Baumol's disease scare me more than some AI driven job disruption dystopia.
If we want to continue on the path of increased human development we desperately need to lift the productivity of a whole bunch of labor intensive sectors.
We're going to need to seriously think about how to redistribute the gains, but that's an issue regardless of AI (things like effective tax policy).
I didn't recognize that expression, so in case others were in the same boat https://en.wikipedia.org/wiki/Baumol_effect
Reads like it was written by an AI.
GenAI is a bubble, but that’s not the same as the broader field of AI, which is completely different. We will probably not even be using chat bots in a few years, better interfaces will be developed with real intelligence, not just predictive statistics.
I think there is an unspoken implication built into the assumption that AI will be able to replace a wide variety of existing jobs, and that is that those current jobs are not being done efficiently. This is sometimes articulated as bullshit jobs, etc. and if AI takes over those the immediate next thing that will happen is that AI will look around ask why _anyone_ was doing that job in the first place. The answer was articulated 70 years ago in [0].
The only question is how much fat there is to trim as the middle management is wiped out because the algorithms have determined that they are completely useless and mostly only increase cost over time.
Now, all the AI companies think that they are going to be deriving revenue from that fat, but those revenue streams are going to disappear entirely because a huge number of purely politic positions inside corporations will vanish, because if they do not the corporation will go bankrupt competing with other companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely disappear.
At some point AI agents will cease to be sycophantic and when fed the priors for the current situation that a company is in will simply tell it like it is, and might even be smart enough to get the executives to achieve the goal they actually stated instead of simply puffing up their internal political position, which might include a rather surprising set of actions that could even lead to the executive being fired if the AI determines that they are getting in the way of the goal [1].
Fun times ahead.
0. https://web.archive.org/web/20180705215319/https://www.econo... 1. https://en.wikipedia.org/wiki/The_Evitable_Conflict
Open letter to tech magnates:
By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)
It will undoubtedly lead to great advances
But for the love of god do not tightly bind them to your products (Kagi does it alright, they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with 0 human oversight (we know you want to. The economics of it work out nicely for you, with no accountability). People already as is get banned far too easily by your automated systems
> It will undoubtedly lead to great advances
"Undoubtedly" seems like a level of confidence that is unjustified. Like Travis Kalanick thinking AI is just about to help him discover new physics, this seems to suggest that AI will go from being able to do (at best) what we can already do if we were simply more diligent at our tasks to being something genuinely more than "just" us
Angela Collier has a hilarious video on tech bros thinking they can be physicists.
Is it this? https://www.youtube.com/watch?v=GmJI6qIqURA
and, germane to this discussion: https://www.youtube.com/watch?v=TMoz3gSXBcY vibe physics
Yes both of those :)
> Yet the economic markets are reacting as if they were governed by stochastic parrots.
That's because they are. The stock market is all about narrative.
> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence.
Yes it is, the mega companies that wil be providing the intelligence are, Nvidia, AMD, TSMC, ASML, add your favourite foundry.
> But stocks are insignificant in the vast perspective of human history
This really misunderstands what the stock market tracks
At the moment I just don't see AI in its current state or future trajectory as a threat to jobs. (Not that there can't be other reasons why jobs are getting harder to get). Predictions are hard, and breakthroughs can happen, so this is just my opinion. Posting this comment as a record to myself on how I feel of AI - since my opinion on how useful/capable AI is has gone up and down and up and down again over the last couple of years.
Most recently down because I worked on two separate projects over the last few weeks with the latest models available on GitHub Copilot Pro. (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and some lesser capable ones at times as well). Trying the exact same queries for code changes across all three models for a majority of the queries. I saw myself using Claude most, but it still wasn't drastically better than others, and still made too many mistakes.
One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got basic stuff to start working. Over the days I kept finding bugs as I starting using it. Since I truly wanted to use this app in my daily life, at one point I just gave up cause fixing the bugs was getting way too annoying. Most "fixes" as I later got into the weeds of it, were wrong, with wrong assumptions, made changes that seemed to fix the problem at the surface but introducing more bugs and random garbage, despite giving a ton of context and instructions on why things are supposed to be a certain way, etc. I was constantly fighting with the model. Would've been much easier to do much more on my own and using it a little bit.
Another project was in TypeScript, where I did actually use my brain, not just vibe-coded. Here, AI models were helpful because I mostly used them to explain stuff. And did not let them make more than a few lines of code changes at most at a time. There was a portion of the project which I kinda "isolated" which I completely vibe-coded and I don't mind if it breaks or anything as it is not critical. It did save me some time but I certainly could've done it on my own with a little more time, while having code that I can understand fully well and edit.
So the way I see using these models right now is for research/prototyping/throwaway kind of stuff. But even in that, I literally had Claude 4 teach me something wrong about TypeScript just yesterday. It told me a certain thing was deprecated. I made a follow up question on why that thing is deprecated and what's used instead, it replied with something like "Oops! I misspoke, that is not actually true, that thing is still being used and not deprecated." Like, what? Lmao. For how many things have I not asked a follow up and learnt stuff incorrectly? Or asked and still learnt incorrectly lmao.
I like how straightforward GPT-5 is. But apart from that style of speech I don't see much other benefit. I do love LLMs for personal random searches like facts/plans/etc. I just ask the LLM to suggest me what to do just to rubber duck or whatever. Do all these gains add up towards massive job displacement? I don't know. Maybe. If it is saving 10% time for me and everyone else, I guess we do need 10% less people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We (individuals) might have to adapt and be more competitive than before depending on our jobs and how they're affected, but is it a fundamental shift? Are these models or their future capabilities human replacements? Idk. At the moment, I think they're useful but overhyped. Time will tell though.
Reposting the article so I can read it in a normal font:
Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction. It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.
Since LLMs and in general deep models are poorly understood, and even the most prominent experts in the field failed miserably again and again to modulate the expectations (with incredible errors on both sides: of reducing or magnifying what was near to come), it is hard to tell what will come next. But even before the Transformer architecture, we were seeing incredible progress for many years, and so far there is no clear sign that the future will not hold more. After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.
However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI. But this is not the only possible outcome.
We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will be eventually a commodity, or the governments would do something, in such an odd economic setup (a setup where a single industry completely dominates all the others).
The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that, so far, and even if the economic forecasts are cloudy, wars are destabilizing the world, the AI timings are hard to guess, regardless of all that stocks continue to go up. But stocks are insignificant in the vast perspective of human history, and even systems that lasted a lot more than our current institutions eventually were eradicated by fundamental changes in the society and in the human knowledge. AI could be such a change.
> Yet the economic markets are reacting as if they were governed by stochastic parrots
uh last time I checked, "markets" around the world are a few percent from all time highs
This same link was submitted 2 days ago. My comment there still applies.
LLMs do not "understand the human language, write programs, and find bugs in a complex code base"
"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."
https://jenson.org/timmy/