The LLM capabilities/usefulness debate seems so ridiculously polarized. On the one hand you have a gullible C-suite gulping down the "on the brink of super-intelligence" Kool-Aid and making plans to ditch 80% of their workforce in the next 2 years. On the other side you have the Ivory Tower academics still parroting the "stochastic parrot" good-for-nothing memes.
The serious middle ground, appreciating the new tools for what they can offer while staying aware of their limitations and fitting them into useful slots in business processes, feels very underrepresented.
I'm interacting on this with lots of people on both sides of the aisle and am constantly amazed by the almost caricatural shallowness of understanding on either side.
This same phenomenon seems to happen often. My go-to explanation is that the "serious middle ground" you mentioned is actually using the new tools to do some actual work, just a little more efficiently, without much fuss.
That usually does not make it into the headlines, and doesn't "drive engagement metrics" or whatever, so it also isn't in the interest of the news to push that middle ground narrative, and so we get the ridiculous polarization.
The academics are correct, and they are very serious. Your "serious middle ground" is itself a rather extreme viewpoint, and that's just not how this debate should be defined. Here's how I'd define it:
* Extreme pro-LLM: sure these tools are ethical, super useful, and should be adopted by virtually everyone into most workflows.
* Moderate middle ground: these tools are largely unethical and extremely problematic on socio-economic & philosophical grounds—we should publicise these numerous issues and craft strict regulation around their use.
* Extreme anti-LLM: we should sabotage the data centers powering these existential threats to humanity and the environment. If governments won't outlaw the technology, we'll take matters into our own hands.
FWIW, I'm somewhere on the spectrum between moderate and extreme anti-LLM.
I sympathize, but there has not been a single time in history when people opposed to a new technology have halted or reversed its progress. Once the cat is out of the bag, all you can do is deal with the cat; you can't put the cat back in.
Car-free & car-limited urban centers around the world would disagree with you. Smoking-free indoor establishments in many corners of the globe would also disagree with you. I can name numerous other examples.
People can make choices to put cats back in bags. They've already done it.
Not really. What cats did they return to bags? The cats/cars are still out there, prowling around. The car-limited urban centres are still surrounded by suburban areas. I used to live in Copenhagen and we were the only people in our little suburb without a car. We had the most cats, though.
What you're describing is just dealing with the cats being around. Personal motorised transport isn't going away just because there are areas where it looks different (electric bikes, for example). And I think that's the point - we have LLMs now, they're not going away, so we have to live with them one way or another.
It's an intellectual wasteland, to be sure. I think the problem is it became yet another thing to take a side on, which inevitably ends up becoming an end in and of itself.
Sadly, I think this person is wrong, but not for straightforward reasons. The fact is that attention is limited, and you can grab a large swath of it with crap. Just look at Netflix's entire content strategy, which used to be about prestige drama and is now just about throwing as much content at the wall to see what sticks. With micro-targeting and an unlimited amount of AI-generated content, that's a lot less $$ for "good" content.
He kinda says it in the piece:
"This pattern repeats. It’s not that AI can’t be helpful in talking about humanities concepts. If the level of understanding you’re looking for is high school or maybe undergraduate, these tools can teach you a lot, and for a lot of people, that’s more than enough. But if your aim is graduate level analysis and output—a level surely included in “almost all cognitive tasks a human can do”—you’re going to quickly be led astray."
The whole world is moving towards high school and maybe undergraduate levels of doing things, if that. Most business tasks can probably be handled by someone with a high school education, and certainly by someone with an undergraduate education.
I want to live in the world Aaron Ross Powell seems to live in, but this article has not convinced me that we do.
I don't think any of this is incompatible with the argument I made in the essay. This paragraph speaks to it:
The same holds for art. AI can, right now, produce pretty passable mediocre art. Which is a threat to plenty of artists, writers, etc., because plenty of artists, writers, etc., produce mediocre art. I’m pretty confident existing frontier LLM models could come up with an episode of the ABC drama 9-1-1 indistinguishable from the output of that show’s writing room. But, again, “almost all cognitive tasks a human can do” aims a bit higher than 9-1-1.
So the claim I'm making isn't "AI can't generate art that satisfies many or most people." Rather it's that Roose is wrong to believe it is capable, or will become capable, of "almost all cognitive tasks a human can do," which is a different standard. That lots of people are happy to accept less than the best doesn't mean that something that is capable only of achieving that less-than-the-best level is actually capable of achieving the best. It just means that lots of people are happy to accept less than the best.
Maybe this will bifurcate business strategies. Maybe "almost all human cognitive tasks" describes breadth, not depth: it can only do 20% of the things that humans as a whole are capable of, but humans as a whole spend 80% of their time on the easiest 20%, and you can re-engineer your business to do without it. We kinda see that with outsourcing and other strategies, which have downsides in terms of employee longevity that can lead to brain drain. But really, I'd much rather be wrong :-) So, thanks for your submission!
This article presents a good explanation of the conundrum I face when I encounter so much praise of LLM output: "How is somebody impressed by this?"
I've been pondering -- at the same relative point, which were people more bearish on, the 2008 housing bubble or the 2025 AI bubble? This doesn't mean that AGI isn't real, but there are plenty of examples in history where everyone sort of believed the same lie. I wonder -- what are the true "original" sources of opinions, and how many of the opinions you hear repeated in the news are incestuous? Most journalists' opinions are certainly not the opinions of people with on-the-ground knowledge; I'm thinking of the recent NY Times opinion column by Kevin Roose. And so everyone says "I talked to a bunch of people, and they all said." And then it's "You haven't seen what's in the OpenAI labs."
And what we have now is amazing, but like, what is the tipping point?
I had an otherwise very competent VP of Engineering who was enamored with a former employee. One day during our 1-on-1 we got onto the topic of what we know for sure vs. what we don't. He said that he tried to avoid thinking about that, because this former employee of his had told him of his own adventures with that train of thought and that it "took him to a very dark place".
Apparently he had never heard of Descartes? It was a great lesson in the fact that hubris is very real and just because you're good at one thing doesn't make you worth listening to on any other topic.
He may be right about the quality of the current state of the art.
But like most of us, I think he struggles to comprehend the implications of exponential improvement. What if 2 or 4 years from now the LLMs have improved by a factor of 10 or 100? Will they still not be as good as the best humans at writing a movie script? What about 2 years after that, with a similar rate of improvement?
Are there enough examples of the "best humans at writing a movie script"? Is there an objective measure of "best movie scripts"? I think LLMs are probably already at the level where they can generate enough quality to grab people's attention, but I think there is a human component, not so much to the quality of a film as to the ability to sell people on something, that is probably just not common, consistent, or replicable enough to emulate. Like, I don't think objective quality can tell you why Taylor Swift sold 3 times as many records as the Rolling Stones. Nor do I think that kind of fame would be replicable by AI.
And actually, here's what makes me somewhat sure about this. If AI is good enough to replicate the popularity of Taylor Swift, it's probably good enough to do it 1000 times, and microtarget, diluting its own popularity. Is an artist that's 1/1000th as popular as Taylor Swift as good as Taylor Swift if there's no objective measure of "good"?
Now, could AI capture 98% of the attention on TikTok through experiments and algorithms? Absolutely. If not today, very soon.
Trend lines don't always keep going up, in absolute terms or in their rate. And I'm skeptical that the nature of LLM technology specifically can achieve AGI, because it's at base a prediction model (though an incredibly sophisticated and powerful one) and not actual intelligence. To think that LLMs can grow into AGI is, thus, a category mistake, even if they get better at what they do, and even if they prove super useful in all kinds of ways.
Current LLMs are already far beyond what a "prediction model" might be capable of.
> But like most of us I think he struggles to comprehend the implications of exponential improvement.
Or he successfully comprehends the implications of sigmoid improvement.
Consider self-driving cars: there were a few years of huge progress, and it looked like the end of human driving was right around the corner. And then… the exponential part of the curve stopped, and it reverted to a much slower rate of improvement. We’ll get there eventually, but… it’s not around the corner.
Where are we now on the LLM sigmoid? How can you tell?
That’s the neat thing: you can’t. You get explosive improvement, and then suddenly you hit an invisible inflection point and they slow to a crawl.
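For what it's worth, here's a minimal numerical sketch of that point (all parameters are made up for illustration; only the shape matters): an exponential curve and a logistic (sigmoid) curve tuned to match early on are nearly indistinguishable until you approach the inflection point, and only diverge sharply afterwards.

    import numpy as np

    # Illustrative, made-up parameters: an exponential curve and a logistic
    # (sigmoid) curve tuned so their early growth coincides.
    t = np.linspace(0, 10, 101)
    rate = 0.8        # hypothetical growth rate
    ceiling = 100.0   # hypothetical capability ceiling for the sigmoid
    y0 = 1.0          # starting level

    exponential = y0 * np.exp(rate * t)
    logistic = ceiling / (1 + (ceiling / y0 - 1) * np.exp(-rate * t))

    # Relative gap between the two curves at a few points in time.
    for ti in (1, 2, 3, 5, 8):
        i = int(np.argmin(np.abs(t - ti)))
        gap = abs(exponential[i] - logistic[i]) / logistic[i]
        print(f"t={ti}: exp={exponential[i]:7.1f}  sigmoid={logistic[i]:5.1f}  gap={gap:.0%}")

    # Early on the gap is a few percent: riding the curve, you can't tell whether
    # you're on an exponential or the lower half of a sigmoid. Only past the
    # inflection point do the two diverge dramatically.

Run it and the gap is roughly 1% at t=1 but several hundred percent by t=8, which is the whole problem: the early data looks the same under both hypotheses.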
I pretty much agree, because all really creative art will have some kind of novelty to it which an LLM won't be able to generate, because it's outside the training data -- except, of course, the initial burst of novelty from when the tech was new, which is more novelty being introduced into the field than most artists will ever accomplish.
Article gets at something that's worth discussing: the feedback loops between being a "techbro" and social networking. There is a very low cost to professing to be on team AI, and a big upside in the form of social currency and connections among fellow team members. The "positive vibes" from a shared belief manifest as increased activity, which can cause the algorithms to favor such posts, which then has the effect of further entrenching belief (because you're on record of professing as such) and pulling others in.
People looking to get on the hype train interact with other people of a similar temperament, and end up creating their own reality-distorting bubble of positivity. This is the exact same playbook that the rationalist techies would accuse religious people of falling into!
FWIW
Try reading that article (or anything at that site) with the Dark Reader extension. I had to turn it off to see anything at all.
I think people want to believe AI is capped out or won't come for the things they ascribe value to.
We don't know, but articles like this aren't an analysis based on facts or a theory (or even an opinion with supporting arguments) of what AI will be capable of in the near future so much as they are an expression of anxiety, and of hope that there is something special about the things they want to continue to believe are special. That's a perfectly reasonable and very normal reaction whenever the world starts to change quickly and in uncomfortable ways, but there is practically zero information content here toward understanding the situation with the technology (however, it is very informative wrt the situation with people and the sociology of disruption).
Aaron Ross Powell isn't an expert or even a highly informed amateur. He's an internet influencer who calls himself a radical liberal philosopher. He seems quite good at what he does, but what he does lends no real information as to what AI is or will be capable of at any point.
> The second feature is a basic lack of taste. That Sam Altman thinks his chatbot’s short story is brilliant tells us much more about Altman’s literary sophistication than it does the nearness of AGI.
ARP's book of short stories can be found here: https://amazon.com/Animus-Six-Tales-Crime-Terror-ebook/dp/B0... It has 3 reviews and they are poor. Glass houses, etc. The one good review is from a reviewer who says he is as good as Stephen King; that reviewer's other reviews are exclusively for baby toys. "Sophistication"
I don't think it's fear.
It's more of a cultural thing about something getting extremely popular too quickly.
When something rises to an absurd amount of popularity really quickly, it becomes stupid. People like to talk shit about it. It happens with bands and Crocs.
People have forgotten just how revolutionary the LLM is. The hype is justified.
I'm telling you, if somebody next week invents cheap space cars with lightspeed drives that cost only $4... within a year those space cars will be referred to as junk and overhyped by the very people who use them every day.
That's how humans are.
I still disagree. People SHOULD be afraid of AI. Not just because it might turn the world into paperclips, but because it can already do this blog author's job almost as well as he can, at arbitrarily larger volume and thousands of times faster. It is very likely that in a year or two it will be better than him at his 'witty' and 'cutting' shallow commentary, but even if it's only 85% as good it will be available way, way sooner and on every single topic you can imagine. He's toast. A lot of us are toast.
Edit: AI doesn't make Stephen King redundant; AI isn't better than the best humans, or even close. Let's assume it never will be that good (not a good assumption, but let's assume). Most of us are very mediocre, even at the things we make our living at. Most software engineers are poor or mediocre (a statistical tautology), and AI is better than a lot of them already. Most authors aren't very good and nobody will remember their work, but they can still find work and get paid for it. A huge fraction of those jobs are just going to go away.
Not sure if I'm ready to buy into the "be afraid" mentality yet.
But what articles like this often seem to completely gloss over is that while, yes, AI might not today be the force that makes every single short-story writer redundant, tech like this seldom goes backwards in capability and instead tends to get better... Cumulative small improvements over, say, 10-15 years do wonders, as the advent of the Internet showed.
I'm afraid of it in a very abstract way. I'm sure the danger is real but I feel most people just aren't that scared.
It's like global warming. Global warming will probably make a lot of us toast. Yet I still drive a gas-guzzling car and I'm not screaming out of my mind.
It's totally irrational and even I'm like that. Do you sort of get what I'm saying?
Yea, I get what you're saying. I think your global warming analogy is :chef's kiss:. I think AI has within it the looming artificial god. Buuut there's also a much shorter-term risk that global warming doesn't have, which is that starting about 2 years from now you might just not be able to do work that anyone is willing to pay for, if you are an artist, or designer, or internet blog author who isn't famous. Computer programmer? We don't know, but a lot of people who can sell their labor right now will no longer be able to. A lot of people will have to find a new niche for themselves, and they are probably quite comfortable in their current niche, which they worked hard to make for themselves.
> Aaron Ross Powell isn't an expert or even a highly informed amateur
Yeah, it was embarrassing how hard he missed the mark while at the same time lashing out at "tHe TeCH BRos" and their inadequate taste twice in every sentence...
Overestimated or oversold?
Both.
I have been seeing models get better, and each time the SOTA improves, the upgrade seems amazing.
Bonus irony: this is despite the fact that I've personally drawn attention to the exact same trend in the increasing quality of 3D rendering over the 90s, where each improvement was seen as "photorealistic" only to be forgotten 6 months later when a better engine (or even tool) got the same acclaim.
Both. The article also takes up the latter, with the last 4 paragraphs.
I said this before: AI only exists because business hates creative people and wants to automate us away at any cost. And because most businesspeople don't have good taste in art (they never bothered to build their taste to begin with!), gen AI looks good enough for them.
That's it.
Without even reading the article, I'm guessing it's because Tech Bros have zero creativity themselves and/or zero media literacy and/or taste
* because most of 'em aren't all that creative *
When have tech bros ever correctly estimated anything?
Absolutely never? Like when Bill Gates estimated the success of Microsoft, his estimate failed and Microsoft also failed? Like, it's cool to say tech bros never estimate anything right, but honestly the statement isn't at all universally true.
Sarcasm aside, AGI may not be 1 to 3 years away but I think it could be about a decade away. Within our lifetimes. The last decade till now was a huge jump. The next decade should be an even bigger jump.
I think, more than the hype, it's the absolute negativity around AI. Like, there's definitely unreasonable excitement among entrepreneurs who bet on AI. That's expected, but that's a minority of the population.
Most people are really negative about AI. And I think the negativity is overblown, as overblown as the positivity from the AI entrepreneurs.
It's very generic. Anything that gets too popular too fast becomes stupid really quickly as well. Have we forgotten just how revolutionary an LLM even is? I could not have predicted we would get to this point in my lifetime.
I'm telling you, this negativity pervades the culture and influences our opinions more than the reality does. If AI gets to the point where it can write a novel as well as a human, our negativity will delude most of us into thinking it's still trash. It's sort of like the opposite of modern art. Modern art is so obscure it's good even though it's really trash. AI is so crazy popular it's trash even though it's completely revolutionary.
Bill Gates was never a tech bro.
What is the point of just asserting something like this when it's SOOOO obviously an issue with definitions?
Unless you just want to be difficult and disagreeable, the usual format would be "hypothesis, supporting argument", not just stating the opposite.
Here I'll get you started: Bill Gates wasn't a tech bro because ... (you fill in the ...)
Edit: I think that if you write down your criteria for 'who is' and 'who isn't' a tech bro, we might not agree on those criteria but we can at least argue with evidence whether Bill Gates fits your definition or not.
Bill Gates wasn't a tech bro because "tech bro" refers to a very specific subculture that Gates was never a part of. It didn't even exist in his day. I'm not sure that's any better.
In any case, you're right about it being definitional. I was just surprised that anybody would think of Gates as a tech bro. That's just an alien concept to me and I never expected to hear it asserted.
But it doesn't really matter. Whether or not I agree with anyone about what a "tech bro" is (and clearly there is disagreement about that) is irrelevant.
Bill Gates was not a tech bro, he was a tech nerd.
If you want to argue for someone who was a tech bro and was also very successful, Steve Jobs might be a better fit, as he never did tech work; he just sold things.
tech bros don't actually do tech?
Bill Gates was one of the first tech bros. Either way, pick any successful startup founded by a tech bro... same story: the startup succeeded based off of a correct estimate.
We must have very, very different definitions of "tech bro". To me, it indicates a very specific subculture and Gates was never part of that.