I also find these features annoying and useless and wish they would go away. But that's not because LLMs are useless, nor because the public isn't using them (as daishi55 pointed out here: https://news.ycombinator.com/item?id=44479578)
It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.
petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)
The weird thing about AI is that it doesn't learn over time but just in context. It doesn't get better the way a 12 year old learning to play the saxophone gets better.
But using it heavily has a corollary effect: engineers learn less as a result of their dependence on it.
Less learning all around equals enshittification. Really not looking forward to this.
The major AI gatekeepers, with their powerful models, are already experiencing capacity and scale issues. This won't change unless the underlying technology (LLMs) undergoes a fundamental shift. As more and more things become AI-enabled, how dependent will we be on these gatekeepers and their computing capacity? And how much will they charge us for prioritised access to these resources? And we haven't really gotten to the wearable devices stage yet.
Also, everyone who requires these sophisticated models now needs to send everything to the gatekeepers. You could argue that we already send a lot of data to public clouds. However, there was no economically viable way for cloud vendors to read, interpret, and reuse my data — my intellectual property and private information. With more and more companies forcing AI capabilities on us, it's often unclear who runs those models, who receives the data, and what really happens to it.
This aggregation of power and centralisation of data worries me as much as the shortcomings of LLMs. The technology is still not accurate enough. But we want it to be accurate because we are lazy. So I fear that we will end up with many things of diminished quality in favour of cheaper operating costs — time will tell.
Just moments ago I noticed for the first time that Gmail was giving me a summary of email I had received.
Please don't. I am going to read this email. Adding more text just makes me read more.
I am sure there's a common use case of people who get a ton of faintly important email from colleagues. But this is my personal account and the only people contacting me are friends. (Everyone else should not be summarized; they should be trashed. And to be fair I am very grateful for Gmail's excellent spam filtering.)
I agree with the general gist of this piece, but the awkward flow of the writing style makes me wonder if it itself was written by AI…
There are open-source or affordable paid alternatives for everything the author mentioned. However, there are many places where you must use these things due to social pressure or lock-in with a service provider (a health insurance company, perhaps), and yes, unfortunately, I see some of these things as unavoidable now or soon.
Another commenter mentioned that ChatGPT is one of the most popular websites on the internet and therefore users clearly do want this. I can easily think of two points that refute that:
1. The internet has shown us time and time again that popularity doesn’t indicate willingness to pay (which paid social networks had strong popularity…?)
2. There are many extremely popular websites that users wouldn’t want to be woven throughout the rest of their personal and professional digital lives
This article is spot on. There is a small market for mediocre cheaters; for the rest of us, "AI" is spam (glad that the article finally calls it out).
It is like Clippy, which no one wanted. Hopefully, like Clippy, "AI" will be scrapped at some point.
>Everybody wanted the Internet.
I don't think this is true. A lot of people had no interest until smartphones arrived. Doing anything on a smartphone is a miserable experience compared to using a desktop computer, but it's more convenient. "Worse but more convenient" is the same sales pitch as for AI, so I can only assume that AI will be accepted by the masses too.
It's like talking into a void. The issue with AI is that it is too subtle: it is too easy to get acceptable junk answers, and too few people realize we've made a universal crib sheet, software developers included, perhaps one of the worst populations given how weakly they communicate as a community. To be repeatedly successful with AI, one has to exert mental effort to prompt it effectively, but pretty much nobody is willing to even consider that. Attempts to discuss the language aspects of using an LLM get ridiculed as 'prompt engineering is not engineering' and dismissed, when that is exactly what it is: prompt engineering in a new software language, natural language, which the industry refuses to take seriously but which is in fact an extremely technical programming language, so subtle that few realize it, or the power embodied by it within LLMs. They are incredible, and they are subtle, to the degree that the majority think they are a fraud.
But why are the CEOs insisting so much on AI? Because stock investors prefer to invest in anything with "AI inside". So the "AI business model" won't collapse, because it is what investors want. It is a bubble. It will be bubbly for a while, until it isn't.
It's simply a money grab. You get this feature you don't need or want and hey, we're going to raise your price because of this. See for instance this - priceless - email:
Dear administrator,
We recently added the best of Google AI to Workspace plans to help your teams accomplish more, faster. In addition, we added new, simple to use security insights and controls to help you keep your business data safe.
We also announced updated subscription pricing. Your subscription will be subject to this updated pricing starting July 7, 2025.
We’ve provided additional information below to guide you through this change.
What this means for your organization
New Workspace features
Your updated pricing reflects the many new features now included in your Google Workspace edition. With these changes, you can:
Summarize long email threads, draft replies, and compose professional emails faster and easier with Help me write in Gmail
Write and refine documents with Gemini in Docs
Generate charts and insights with Gemini in Sheets
Automatically capture meeting notes so you can focus on the conversation with Take notes for me in Meet
Get AI assistance with brainstorming, researching, coding, data analysis, and more with Gemini Advanced
Accelerate learning by uploading your docs, PDFs, videos, websites, and more to get instant insights and podcast-style Audio Overviews with NotebookLM Plus
Enhance your organization’s security with security advisor, a new set of insights and tools. Use security advisor for threat defense with app access protection, account security with Gmail Enhanced Safe Browsing, and data protection capabilities
Customize email campaigns in Gmail. Add color schemes, logos, images, and other design elements
Starting as early as July 7, 2025, your Google Workspace Business Plus subscription price will be automatically updated to $22.00* per user, per month with an Annual/Fixed-Term Plan (or $26.40 if you have a monthly Flexible Plan).
The specific date that your subscription price will increase depends on your plan type, number of user licenses, and other factors.
*Prices will be updated in all local payment currencies.
If you have an Annual/Fixed-Term Plan, your subscription will be subject to updated pricing on your next plan renewal starting July 7, 2025. We will provide you with more specific information at least 30 days before updates to your Google Workspace plan pricing are made.
What you need to do
No action is required from you. Features have already rolled out to Google Workspace Business Plus subscriptions, including AI features in many additional languages, and subscription prices will be updated automatically starting July 7, 2025.
We know that data security and compliance are top priorities for business leaders when adopting AI, and we are committed to helping you keep your data safe. You can understand how to effectively utilize generative AI in your organization, and learn how to keep your data confidential and protected.
We’re here to help
If you wish to make changes to your subscription or payment plan, please visit the Admin console. Find which edition and payment plan you have on Google Workspace Admin Help.
Refer to the Help Center for details regarding the AI features and price updates, including updated local currency pricing.
ChatGPT is the 5th most-visited website on the planet and growing quickly. That's one of many popular products. I'd hardly call that unwilling. I bet only something like 8% of Instagram users say they would pay for it. Are we to take this to mean that Instagram is an unpopular product that is being forced on an unwilling public?
I looked for the right term, but force-feeding is what it is. Yesterday I also changed my default search engine from DuckDuckGo to Ecosia, as they seem to be the only one left that doesn't serve flaky AI summaries.
In fact, I also tried the communication part (outside of Outlook), but people don't like superficial AI polish.
Remembering the failure of Google+, I wonder if hostilely forcing a product on your users makes it less likely to succeed.
As far as I can tell, the AI hate is most prominent in tech circles (and creative circles too, though they mostly object to media generation and largely embrace text).
It seems here on the ground in non-tech bubble land, people use ChatGPT a ton and lean hard on AI features.
When Google judges the success of bolted-on AI, they are looking at how Jane and John General Public use it, not how xleet007 uses it (or doesn't).
There is also the fact that AI is still just being bolted onto things now. The next iteration of this software will be AI native, and the revisions after that will iron out big wrinkles.
When settings menus and ribbon panels are optional because you can just tell the program what to do in plain English, that will be AI integration.
Having AI features force-fed is annoying, I imagine, but it has come about because much of the public likes some AI: apparently ChatGPT now has 800 million weekly users (https://www.digitalinformationworld.com/2025/05/chatgpt-stat...), so competing companies think they should try to keep up.
I say I imagine it's annoying because I've yet to actually be annoyed much, but I get the idea. I actually quite like the Google AI bit - you can always not read it if you don't want to. AI-generated content on YouTube is a bit of a mixed bag - it tends to be kinda bad, but you can click stop and play another video. My Office 2019 is gloriously out of date and does the stuff I want without the recent nonsense.
Even worse: they are using the data you input into these programs to continuously train their models. That's an even bigger violation, since it breaches data privacy.
My fintech bank, Qube, is running some sort of crowdfunded investment round to add AI. It's super interesting to me in a number of ways. https://www.startengine.com/offering/qube-money
The top of the list has got to be that one of their testimonials presented to investors is from "DrDeflowerMe". It's also interesting to me because they list financials which position them as unbelievably tiny: 6,215 subscribing accounts, 400 average new accounts per month, which to me sounds like they have a lot of churn.
I'm in my third year of subscribing and I'm actively looking for a replacement. This "Start Engine" investment makes me even more confident that's the right decision. Over the years I've paid nearly $200/year for this and watched them fail to deliver basic functionality. They just don't have the team to deliver AI tooling. For example: 2 years ago I spoke with support about the screen that shows you your credit card numbers being nearly unreadable (very light grey numbers on a white background), which still isn't fixed. Around a year ago a bunch of my auto transfers disappeared, causing me hundreds of dollars in late fees. I contacted support and they eventually "recovered" all the missing auto-transfers, but it ended up with some of them doubled up, and support stopped responding when I asked them to fix that.
I question whether they'll be able to implement the changes they want, let alone support those features if they do.
> Before proceeding let me ask a simple question: Has there ever been a major innovation that helped society, but only 8% of the public would pay for it?
Highways.
I am moderately hyped for AI, but I treat these corporate intrusions into my workflows the same as ads or age verification: pointing uBlock at elements that are easy to point-and-click block, and writing quick browser plugins and Tampermonkey scripts, for example to intercept my Google searches and redirect them away from the All/AI results page (see the sketch just below). And when I can, it does amuse me to have Gemini write the plugins that block Google's ads and inconveniences.
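To make that concrete, here is a minimal Tampermonkey-style userscript sketch. It assumes Google still honors the undocumented udm=14 parameter for the plain "Web" results view (which currently has no AI Overview); that parameter is observed behavior, not a documented API, and may change at any time.

    // ==UserScript==
    // @name         Skip Google AI results
    // @match        https://www.google.com/search*
    // @run-at       document-start
    // @grant        none
    // ==/UserScript==

    // Sketch: bounce every Google search over to the plain "Web" results
    // view (udm=14), which currently omits the AI Overview panel.
    // udm=14 is observed, undocumented behavior and may stop working.
    (function () {
      const url = new URL(window.location.href);
      if (url.searchParams.get("udm") !== "14") {
        url.searchParams.set("udm", "14");
        window.location.replace(url.toString());
      }
    })();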
Companies didn't ask your opinion when they offshored manufacturing to Asia. They didn't ask your opinion when they offshored support to call centers in Asia. Companies don't ask your opinion; they do what they think is best for their financial interest, and that is how capitalism works.
Once upon a time, not too long ago, there was someone who would bag your groceries and someone who would clean your window at the gas station. Now you do self-checkout. Did anyone ask for this? Your quality of life is worse, and the companies are automating away humanity into something they think is more profitable for them.
In a society without government protection for such companies, other companies providing a better service would win through competition. But when you have a fat, corrupt government, lobbying makes sense, and crony capitalism births monopolies that cannot face any competition. Then they do whatever they want to you and society at large, and they don't owe you; you owe them. Your tax dollars sponsor all of this even more than your direct payments do.
They force more and more AI into everything so that AI can continue to learn.
Also, the requests aren't answered locally. Your data is forwarded to the AI vendor's data center, processed, and the answer returned. You can be absolutely certain that they keep a copy of your data.
I noticed that some of his choices contributed to his problem. I haven't been forced into accepting AI (so far) while I've been using duckduckgo for search, libreoffice, protonmail, and linux.
I feel an urge to build personal local AI bots that would act as personal spam filters. AI filtering AI, fight fire with fire, something like the sketch below. Mostly because the world OP wants is never coming back. Everything will be AI, and it's everywhere.
I also feel an urge to build spaces on the internet just for humans, with some 'turrets' to protect against AI invasion and exploitation. I just don't know what content would be shared in those spaces, because AI is already everywhere in content production.
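As a rough illustration of the "AI filtering AI" idea, here is a minimal TypeScript sketch that asks a locally hosted model whether a message looks like generated spam. It assumes an Ollama server on localhost:11434 and a model named "llama3" (both assumptions; swap in whatever local runtime and model you actually use), with the request shaped after Ollama's /api/generate endpoint.

    // Hypothetical spam gate: a local model votes SPAM or OK on each message,
    // and your own mail rules decide what to do with the verdict.
    async function looksLikeAISpam(subject: string, body: string): Promise<boolean> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llama3", // assumed local model name
          stream: false,
          prompt:
            "Answer with exactly one word, SPAM or OK. " +
            "Is the following email mass-generated marketing or AI slop?\n\n" +
            `Subject: ${subject}\n\n${body}`,
        }),
      });
      const data = await res.json();
      return /\bSPAM\b/i.test(data.response ?? "");
    }

    // Example: quarantine anything the local model flags.
    looksLikeAISpam("We added the best of AI to your plan", "...").then((spam) =>
      console.log(spam ? "quarantine" : "inbox"),
    );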
But that's exactly the problem with proprietary software. It's not force-feeding you anything, it's working exactly as intended.
Software is loyal to its owner. If you don't own your software, the software won't be loyal to you. It can be convenient for you, but as time passes and interests change, software you don't own can turn against you. And you shouldn't blame Microsoft or its utilities. It doesn't owe you anything just because you put effort into it and invested time in it. It'll work according to who it's loyal to: who owns it.
If it bothers you, choose software you can own. If you can't choose software you own now, change your life so you can in the future. And if you just can't, you have to accept the consequences.
The issue really is that the AI isn’t good enough that people actually want it and are willing to pay for it.
It's like IPv6: if it really were a huge benefit to the end user, we'd have adopted it already.
"Any sufficiently advanced AI technology is indistinguishable from bullshit."
- me, a few years ago.
I find the whole situation with regard to AI utterly ridiculous and boring. While those algos might have some interesting applications, they're not as earth-shattering as we are made to believe, and their utility is, to me at least, questionable.
I mostly agree with TFA, with one glaring exception: the quality of Google search results has regressed so badly in recent years (gamed by SEO experts) that AI was actually a welcome improvement.
So, are there any EU citizens around who are willing to create and run the needed European Citizens' Initiative to get this ball rolling? :)
As a data point, the "Stop Killing Games" one has passed the needed 1M signatures, so it is in good shape:
https://www.stopkillinggames.com
And on an unwilling workforce. Everyone I know is being made to drop what they were working on a year ago and stuff AI into everything.
Some are excited about it. Some are actually making something cool with AI. Very few are both.
I think there’s a difference between the tool that helps you do work better and the service that generates the end result.
People would be less upset if AI were shown to support the person. This also allows that person to curate the output and ignore it if needed before sharing it, so it's a win/win.
But is the big money in revolution?
As a note on Microsoft's obnoxious Copilot push, I too got the "Your 365 subscription price is increasing because we're forcing AI on you".
Only when I went to cancel[1] did they suddenly make me aware that there was a "classic" subscription at the normal price, without Copilot. So they basically just upsized everyone's plan to try to force uptake.
[1] - I'm in the AI business and am a user and abuser of AI daily, but I don't need it built directly into every app. I already have AI subscriptions, local models, and solutions.
If people are stupid enough to fall for the subscription model, they likely need AI.
Excellent Frank Zappa reference in The Famous Article: "I'm the Slime"[1].
The thing that really chafes me about this AI, irrespective of whether it is awesome or not, is that it sends all of the information to some unknown server. To go with another Zappa reference, AI becomes The Central Scrutinizer[2].
I predict an increasing use of Free Software by discerning people who want to maintain more control of their information.
[1] https://www.youtube.com/watch?v=JPFIkty4Zvk
[2] https://en.wikipedia.org/wiki/Joe%27s_Garage#Lyrical_and_sto...
Are you not concerned that force-feeding might be unduly disparaged by your comparison?
This guy calls himself an honest broker, but his articles are just expressions of status anxiety. The kind of media he loves to write about is becoming less relevant, and so he lashes out at everything new, from AI to TikTok.
Just a quick quibble…the subtitle of the article calls this problem tyranny.
Tyranny is a real thing which exists in the world and is not exemplified by “product manager adding text expansion to word processor.”
The natural state of capitalism is trying things which get voted on by money. It’s always subject to boom-bust cycles and we are in a big boom. This will eventually correct itself once the public makes its position clear and the features which truly suck will get fixed or removed.
I agree Copilot for answering emails is negative value. But I find Google AI search results very useful; I can't see how they will monetise this, but I can't complain for now.
You guys are lying if you don’t use ChatGPT instead of Google now
I honestly can't think of reasons to use AI. At work I have to give myself reminders to show my bosses that I used the internal AI tool so I don't get in shit.
I don't see the utility; all I see is slop and constant notifications in Google.
You can say skill issue, but that's kind of the point: this was all dropped on me by people who don't understand it themselves. I didn't ask or want to build the skills to understand AI. Nor did my bosses: they are just following the latest wave. We are the blind leading the blind.
Like crypto, AI will prove to be a dead-end mistake that only enabled grifters.
The title can be shortened to "force-feeding an unwilling public", which is a fairly reasonable description of our current economic system. We went from "supply and demand", to "we can supply demand" (the heyday of hype and advertising), to "surprise! Like it or lump it."
"Shut up, buddy, and chew on your rock."
I assume you've been happy with the other slop Microsoft and Google fed you for years.
You may agree or disagree with the OP, but this passage is spot-on:
"I don’t want AI customer service—but I don’t get a choice.
I don’t want AI responses to my Google searches—but I don’t get a choice.
I don’t want AI integrated into my software—but I don’t get a choice.
I don’t want AI sending me emails—but I don’t get a choice.
I don’t want AI music on Spotify—but I don’t get a choice.
I don’t want AI books on Amazon—but I don’t get a choice."
It’s not force-feeding. It’s rape and assault.
I said no. Respect my preferences.
Why do people who attempt to critique AI lean on "no one wants this, everyone hates this" instead of just making their point? If your arguments are strong, you don't need to wrap them in false statistics.
I’ve observed the opposite—not enough people are leveraging AI, especially in government institutions. Critical time and taxpayer money are wasted on tasks that could be automated with state-of-the-art models. Instead of embracing efficiency, these organizations perpetuate inefficiency at public expense.
The same issue plagues many private companies. I've seen employees spend days drafting documents that a free tool like Mistral could generate in seconds, leaving them 30-60 minutes to review and refine. There's a lot of resistance from the public; they're probably thinking their jobs will be saved if they refuse to adopt AI tools.
"There ought to be a law" is why we have nanny-state government. I imagine that is why there have been "no spitting" and "no chewing gum" laws on the books.
People are going to lord it over others in pursuit of what they think is proper.
Society is over-rated, once it gets beyond a certain size.
Along the same lines, I am currently starting my morning by blocking ranges of IP addresses to get Internet service back, thanks to someone's current desire to SYN-flood my web server, which, being hosted in my office, takes my office Internet down with it.
It may soon come to a point where I choose to block all IP addresses except a few to get work done.
People gonna be people.
sigh.