This was worth reading for the Snowflake section alone. I’ve seen it happen in real life.
> a savvy negotiator and manipulator
You give them too much street cred. I'm not convinced they're even good at that.
I think the author leaves out one important point, which is that most people sound like idiots when put on the spot and asked to talk about things outside their core competency, and for these men that core competency is business. It's entirely possible they're bad at that as well, but a priori you would probably expect them to do a lot better.
It's the Gell-Mann amnesia effect applied to people rather than newspapers: we listen to CEOs talk about science, tech, and engineering and they sound like morons because we know these fields. But their audience is specifically people who don't-- and to them, thanks to the effect, they sound like they know what they're talking about.
Zitron is correct. He reminds me of Linux advocates, who are also correct.
I'm in the camp now that asks the question: if AI is so good, why are we still tethered to big tech? Why hasn't an untrained human prompted out a product that is 10 times better than anything big tech has to offer? After all, intelligence is free :-)
It's not even that. Why haven't trained humans who are now 10 times more productive created amazing new operating systems, programming languages, game engines, browsers, etc.? We should have seen a lot of outstanding products since the AI hype started, but instead every company except the few doing foundation models seems to be just stuck and confused.
If you're referring specifically to the Altman brothers, ask them where you can find a soda and a sunduh for a qwarter on Rowte Farty-Far.
Teasing over the various Midwestern accents is sort of like dealing with boxing great Joe Louis: you can run, but you just can't hide.
Ed Zitron's writing is inflammatory, but his points are extremely easy to grasp. I always find those reads very therapeutic, as looking at too much genAI discussion can make it seem like I'm the crazy one.
GenAI has all of the media attention in the world, all the capital in the world, and a huge amount of human resources put into it. I think (or at least hope) that this fact isn't controversial to anyone, be they for or against it. We can then ask ourselves whether having models that can write an e-shop API really is an acceptable result, given the near-incomprehensible amounts spent on it.
One could say "It has also led to advances in [other field of expertise]", but couldn't a fraction of that money have achieved greater results if invested directly in that field? To build actual specialized tools and structures? That's an unfalsifiable hypothetical, but reading Sam Altman's "Gentle Singularity[1]" blogpost, it seems like wild guesses are a perfectly fair arguing ground.
On a small tangent about "Gentle Singularity", I think it's not fair to scoff at Ed's delivery when Sam Altman also pulls a lot of sneaky tricks when addressing the public:
> being amazed that it can make life-saving medical diagnoses
The classification model that spots tumors has nothing to do with his product category; I find it very dishonest to sandwich this one example between two examples of generative AI.
> A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools.
"the world wants a lot more of both" doesn't quite justify the flood of slop on every single art-sharing platform. That's like saying the world wants a lot more of communication to hand wave the 10s of spam calls you get each day. "As long as they embrace the new tools" is just parroting the "adapt or die Luddite!" argument. As the CEO of the world's foremost AI company I expect more than the average rage bait comment you'd see on a forum, the fact that it's somehow an "improvement" is taken for granted, even though Sam is talking about fields he's never even dabbled in.
This probably doesn't carry much weight since my biases are transparent, but I believe there's just so much more intellectual honesty in Ed's arguments than in much of what Sam Altman says.
[1] https://blog.samaltman.com/the-gentle-singularity
I find any claim that superintelligence helps with physics to be a hoot.
Dark matter is the most notable contradiction in physics today: there is a complete mismatch between the physics we see in the lab, in the Solar System, and in globular clusters and the physics we see at the galactic scale. Contrast that with Newton's unified treatment of gravity on Earth and in the Solar System.
There is no lack of darkon candidates or MOND ideas [1]; what is lacking is an experiment or observation that can confirm one or the other. Similarly, a 1000x bigger TeraKamiokande or GigaKATRIN could constrain proton decay or put some precision on the neutrino mass, but both of these are basically blue-collar problems.
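To make "blue-collar" concrete, here is a standard back-of-the-envelope (textbook counting statistics, not anything from this thread): a background-free proton-decay search that observes zero candidate events sets a 90% CL partial-lifetime limit of roughly

  $\tau/B \;>\; N_p \,\varepsilon\, T \,/\, 2.3$

where $N_p$ is the number of protons in the fiducial volume, $\varepsilon$ the detection efficiency, $T$ the live time, and 2.3 the Poisson 90% upper bound on a mean given zero observed events. The reach scales linearly with detector mass, so a 1000x detector buys roughly 1000x the limit: money and engineering rather than new ideas.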
[1] I used to like MOND, but the more I've looked at it the more I've adopted the mainstream view of "this dark matter has a galaxy in it" as opposed to "this galaxy has dark matter in it". MOND fever is driven by a revisionist history in which dark matter was discovered by Vera Rubin, not Zwicky [2], and that privileges galactic rotation curves (which MOND does great at; see the sketch after these notes) over many other kinds of evidence for DM.
[2] ... which I'd love to believe since Rubin did her work at my Uni!
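For readers wondering why MOND "does great" on rotation curves, here is the standard deep-MOND back-of-the-envelope (textbook MOND, not anything from this thread; $a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m/s^2}$ is the usual fitted scale). Below $a_0$ the true acceleration is boosted relative to the Newtonian one:

  $a = \sqrt{a_N a_0}$, with $a_N = GM/r^2$ and $a = v^2/r$, so $v^2/r = \sqrt{GMa_0}/r$, i.e. $v^4 = GMa_0$

The radius cancels: rotation curves go flat, and $v^4$ tracks the baryonic mass $M$ (the baryonic Tully-Fisher relation) with no dark matter at all, which is exactly the regime the rotation-curve evidence lives in.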
That's not the point of the article though.
Altman claimed superintelligence would revolutionize physics. This is one of many bullshit statements attributed to Altman, just one I feel qualified to counter. I could say plenty about software dev too.
Lots of things were said about AI. I could take this sort of discussion to any subject I want.
The article tries to cast these personalities as "master manipulator" figures. It doesn't matter that 99% of the text actually _criticizes_ them.
What matters is the takeaway a typical reader would get from reading it. It's along the lines of "they're not tech geniuses, they're manipulators".
This takeaway is carefully designed to cater to selected audiences (well, the author claims to be a media trainer, so fair game, I guess).
I think the intent is to actually _promote_ these personalities. I know it sounds contradictory, but as I said, it gives them too much credit for "being good manipulators".
Which audiences are catered to and how they are expected to react is an exercise I'll leave for the reader.
That's a general overview of what this article does. Nothing related to actual claims.
Yeah, it's like the way you have to ask leftists "What is your expectation for how many votes this post will move the next election D or R?" with the recognition that their radical posturing might really be something the Koch Organization should fund.
I don't understand your comparison.
There's a theory that "the meaning of a communication is its effect". If I make a message that I think is a left-wing message but it causes sufficient right-wing backlash to motivate opponents to oppose me, my message wasn't really a left-wing message but a right-wing message from that viewpoint -- one that benefits my enemies.
Isn't that what you're saying here? The author pretty obviously thinks that Altman and company are horrible bullshitters and that's a bad thing, but you seem to think that somebody could come to the conclusion that they are actually really good bullshitters.
My original comment included a political example, but I removed it before posting precisely to avoid confusion between what I said and simple polarization schemes.
It's not about backlash. Media has several ways to deliver a different message, to different audiences, using a single piece.
What I mean by "audiences" is much more granular than "against" or "in favor".
The article attempts to do that (there are hints of this granularity of target profiles all over it), but it's not very subtle at doing it.
Author makes a number of valid and valuable points, but desperately needed to edit this down for concision out of respect for everyone's time (including their own), taking their own advice ("clearly articulate what you're saying"). Don't make using an LLM to extract your point seem like such a good idea, eh?
I read the entire thing. It’s nothing more than a hateful screed by someone who tears down, but does not build; who is in a permanent state of (incorrectly) claiming that the sky is falling. What’s so interesting and urgent about that? It’s one of a million similar cries of faux-despair from certain over-privileged overreacting regions of the Internet. Yawn.
> It’s also the frustrated, desperate plea of someone who has been in the trenches for long enough to have seen the same nihilistic response to their pleas.
Not sure what trenches you are talking about. The author is a journalist and the founder of a PR agency who rose to prominence by ranting against the AI industry. I must admit, as much as I empathise with the author's stated sentiment toward gen AI, I couldn't read any of his pieces to the end either. It's not just the tone; I don't feel I'm getting anything useful from them, not even food for thought, as opposed to the writings of Nikhil Suresh, for instance, which also have the benefit of being well written.
The sentiment highlighted in various comments on this post isn't that the blog piece is 'too long'. The sentiment is that it is incoherent, illegible and borderline unhinged. It clashes so strongly with what I consider to be calm, reasonable writing that I can't justify continuing to spend my time reading it.
> It clashes so strongly with what I consider to be calm, reasonable writing that I can't justify continuing to spend my time reading it.
Which makes the author’s point for them. It wasn’t calm, reasonable writing at all, and it’s your mistake to assume all writing must be calm and reasonable in order for it to be valid or of import. It was written with passion, zeal, distress, and rage. It was shaking the reader and demanding an answer: do they not see what the author does? Do they not connect the puzzle pieces in the same way? Is the reader not seeing the same visage as the author?
That’s the entire point of its length, its tone, its language. It was meant to appeal to humans, not machines, and to be an exasperated plea for others to either acknowledge what the author sees and act upon it, or to rebuke the author’s viewpoint with supporting evidence.
Instead, the response is a bunch of “too long and emotions are bad” whinging. It neither refutes the author nor critiques the piece as written, simply vapidly complains the message wasn’t delivered in a way they personally preferred.
Which, if I were to extrapolate, is its own damning indictment of a technology industry so focused on designing around niche edge cases and shareholder demands that it neglects fundamentals. If that also holds true, then it's no wonder LLM summaries are so celebrated:
Content without context, in as dry and inoffensive a format as possible. Never a threat to someone’s thinking, never a challenge to their positions, never complex enough to warrant consideration.
Style matters. If you submit a manuscript full of typographical errors, no one is ever going to publish your novel, no matter how ground-breaking it might be. If you write a mathematical paper but scribble all your math notation in pencil rather than LaTeX, no one will ever read it. And if the only way you can get your point across is through an incoherent rant, no one will take you seriously.
There are plenty of comments here that go deeper than "too long and emotions are bad", see [1][2]. I find it fascinating that you don't seem to be able to either see or comprehend their critique, and instead reduce it to something far more infantile. When you say "The fact you’re crying over length", it is apparent to anyone that you are not trying to have a conversation, you're trying to score points - but your opponent isn't here to debate with you, because he doesn't exist.
[1]: https://news.ycombinator.com/item?id=44426139 [2]: https://news.ycombinator.com/item?id=44428303
Well said. Detractors specifically whinging about the tone and appeal of the piece are exactly who the piece is criticizing.
They want to be catered to. To have their hype built up. Negatives must not be allowed in their worldview, and if reality has negatives then to hell with reality.
It’s bleak, as the author so eloquently describes.
You really are saying, on Hacker News, that "negatives must not be allowed?" Hacker News, probably the most negative news site on the Internet, the place that hates everything, the place where the Dropbox 'Show HN' post got dismissive comments about how it could be done trivially? That's the place you're saying that negatives can't be allowed?
You think that's more likely than that this is just a bad article?
I agree that the tone is a bit much in places, but he does address this:
> I have learned to accept who I am — that I am not like most people — and people conflate my passion and vigor with anger or hate, when what they’re experiencing is somebody different who deeply resents what the powerful have done to the computer.
Agreed. OP says "I think it’s because we live in Hell." and then goes on to describe how sometimes buttons on apps don't work or he gets notifications that aren't well-targeted. Really? That's Hell? Hell must have gotten a lot better since the last time I checked.
I agree with your concern for the author.
The other kind of ChatGPT psychosis?
> You and the rest of the Technosphere who has been so blindly absorbed with startup culture and CEBro worship for the past twenty years
Making a wild unfounded assumption like this makes the rest of your argument impossible to engage with.
That’s fine. Your complete dismissal of the OP’s point, because you couldn’t be bothered to empathize with distressful emotion and understand the gravity of the point being made, should have been my cue that you were similarly impossible to engage with.
Guess I learned something new from this exchange after all.
Doubtful you did.
Has this guy ever heard the American president speaking? Compared to Trump, Altman et al. are geniuses.
I was almost tempted to run this lengthy article through ChatGPT for a summary, but the irony was too much, so I just stopped reading.
So the author can't conceive of a situation where a scientist can use AI to reduce his busy work by 4 hours a day? He's the one who sounds stupid here.
Like, to write grant reports? That's already happening at scale.
Any productivity gained in writing grant reports is lost on the back end evaluating them.
I'd bet AI is used there as well.
I read the first few paragraphs, but when I saw the size of my scrollbar, I decided the author is putting way too much thought and effort into owning a VC-funded tech CEO playing the game you play when you're running a VC-funded tech company.
"Our product is the second coming of Christ and if you give me money now you'll 100000x your investment!" is the correct answer to all questions when you're in that position. I'm not saying it's admirable, but it's what you do to keep money coming in for the time being. It's not that deep.
In that case everyone should just answer "piss off, salesman" and walk away, because their words are of no value.