I find that AI is very useful for getting me past the 'blank page' writing block, but inevitably it writes in ways I would never, and so I end up editing it heavily. But, for me, a boy with ADHD, editing something is infinitely easier than writing it from scratch.
I think this is the opposite of how most people tend to use LLMs, and I actually think my way is the "better" way. My issue has never been the act of writing well, or clearly expressing what I mean... it has been the inertia of putting words on a page at all.
(and an LLM had nothing to do with this comment :P)
I can relate to the inclination, but so many new insights and moments of inspiration are necessarily confined to that painstaking iterative line-by-line process of real writing. When you are simply prompting and editing, you will fill the page (and it might even sound like “you”), but you will not have that delightful experience of encountering something unexpected along the way to filling it.
There's nothing stopping you from doing that with an LLM. I get more insights refining a draft through prompts than I ever did writing because there's more of it. The end stage of that process rarely sees the light of day because the artifact wasn't the point.
For writing as thinking with trouble starting from scratch, LLMs are the most important technology to emerge in my lifetime. Microblogging filled that gap in a way, but it had too many downsides.
Similar for me, I find it's an absolutely amazing "creative unblocker".
It generally has enough "activation energy" to get me over the hump of wherever I've been mentally stuck.
Yes this is my use-case for it too - it's great to generate a structure which I will keep but I always end up reworking all the actual content so it sounds like me. It is a great way to get past the 'getting started' hurdle though.
You're the first articulating my exact use case with AI as well! It really helps get me in 'the zone'. I actually now dictate as well and then the AI rewrite it and then I start editing. To lower the barrier even more.
The way the post is written, I wonder if the author is working for a company going through a growth spurt and where, through sheer size, everything is becoming more "corporate".
There's a huge difference between having AI clean up a text you send privately to someone you have worked closely with for years, versus a broad spectrum text sent by a VP to hundreds of people or more. The first case is reprehensible, for the reasons the author lays out. But as for the second case, corporate doublespeak has been a meme since long before the advent of AI and it would remain even in some AI-pocalypse. Just because your boss puts out sanitized language in a mass communication, doesn't inherently mean your boss won't still be present and real with you in a more private setting.
Yes, the more personal the context, the more the humanity aspect / being relatable matters.
I really don't mind text filtered through an LLM per se. But I prefer high signal-to-token so to speak. The way humans talk and write means that the seemingly extraneous text they add often provides an interesting insight into the thought patterns of the person, and therefore mistakes or even pointless monologues can be interesting.
This is not always true. Once there was an online reaction to short content that made people treat "long-form" content as desirable entirely due to its length. I rather like reading books and the New Yorker's fiction section when I still subscribed, but much of this "long-form" content was token-expansion of a formulaic nature which I did not enjoy. LLMs have mastered this kind of long-form token-expansion.
This is assuming people are using an LLM in good faith, obviously. One day, perhaps LLMs will learn to express what someone is saying in an elegant way that is enjoyable for people like me to read. But even then, I will have the difficulty of distinguishing whether this is a human speaking through an LLM in good faith or a human who has set up a machine that is set up to mimic a human.
The latter is undesirable to me because I have access to the best such machines at a remarkably low cost. Were I to desire a conversation with an LLM, it is trivial for me to find one. I'm not coming here for that[0].
A sufficiently insightful LLM which prompts my thinking in certain ways wouldn't be unwelcome to me, I suppose. I have a couple of my friends for whom I still go on Twitter to read what they say even after I have stopped using the site routinely. If I found out the posts were entirely an LLM I think I would still read them simply because I find the posts useful and with sufficiently high signal-to-token.
0: Certainly, if every place only spoke about things I was interested in and never in things I was not interested in, I wouldn't need separation of interest spaces at all. But the variation of interest vectors for different humans has made this impossible.
> The way humans talk and write means that the seemingly extraneous text they add often provides an interesting insight into the thought patterns of the person, and therefore mistakes or even pointless monologues can be interesting.
I heavily use LLMs for internal communication.
I receive a dozen requests per day from colleagues asking me very specific things by mail or Teams: about processes, setups, master data, my particular experiences with approaches, for contacts within our big corp, or just general knowledge questions and how I would recommend tackling certain problems, like setting up conditions in SAP, where to find certain info, or just sending them current setups. They also ask me for strategic advice. I use my personal knowledge base to automatically prepare drafts of the answers based on previous answers to other colleagues. Before the LLM era I could barely help all of them; now I am many times more productive. I then digest the emails back into my knowledge system.
People have no problem receiving obviously LLM-written answers. But because of the particular domain knowledge, they know it can only come from me.
Excuse my writing, this did not go through the same system :)
Edit: And now I forgot the most important part. When the knowledge the LLM retrieved is insufficient to answer a colleague's question, or the agent skill cannot execute the requested task, it just asks me for the missing info or skill, and with me (the human) in the loop the work gets done many times faster. Eventually it will replace me and all my colleagues one day. Looking forward to doing other stuff then.
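For the curious, here is a minimal, self-contained sketch of the loop being described, with an in-memory knowledge base, a trivial word-overlap retriever, and a stub where the LLM drafting would go. Every question, answer, and function name is invented for illustration; it is not the commenter's actual setup.

    from typing import Optional

    # Hypothetical personal knowledge base: previous answers keyed by question.
    KNOWLEDGE_BASE = {
        "how do i set up pricing conditions in sap":
            "Copy the template condition setup and adjust the validity dates.",
        "who is the contact for master data changes":
            "Mail the master data team inbox and include the material number.",
    }

    def retrieve(question: str) -> Optional[str]:
        """Return the stored answer whose question shares the most words, if any."""
        words = set(question.lower().rstrip("?").split())
        best, best_score = None, 0
        for stored_question, answer in KNOWLEDGE_BASE.items():
            score = len(words & set(stored_question.split()))
            if score > best_score:
                best, best_score = answer, score
        return best if best_score >= 3 else None

    def answer_request(question: str) -> str:
        note = retrieve(question)
        if note is None:
            # Human in the loop: ask the author for the missing info
            # instead of letting the model guess.
            return f"[escalated to human] Missing knowledge for: {question}"
        # In the real workflow an LLM would turn the note into a polished
        # draft; a plain wrapper keeps this example self-contained.
        return f"Draft reply (for human review): {note}"

    print(answer_request("How do I set up pricing conditions in SAP?"))
    print(answer_request("Can you approve my vacation request?"))

The escalation branch is the "human in the loop" part: when retrieval comes back empty, the request goes to the person rather than to a guessed answer.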
> People have no problem receiving obviously LLM-written answers.
If I asked you for your particular experience on something and got an obvious LLM reply, I might say nothing or I might ask if it was an LLM, but either way I’m unlikely to ask you something or trust you ever again. Which also works for you, I guess, since it’d be one fewer person taking up your time. But if you had instead told me “I’m too swamped to help right now” I would’ve instead offered to help take some burden off your back.
This sounds like a very odd and very lonely job to me. Reading your description I pictured a comically tiny room with only one opening for incoming requests and another one for outgoing responses. Obviously silly, but in an abstract sense maybe not that far from the truth?
It also sounds like you were overworked, and when you started to use LLMs you stripped yourself of the chance to work with a colleague.
I totally agreed with you. I'm French (nobody is perfect ^^), I'm not so fluent in english and I'm dyslexic, that why I often write my message, then I ask to Claude to translate it in english because i'm feeling I will lose the credibility of my message if there is too much mistake... But you're right, so this message is not translated by LLM :D
> I will lose the credibility
There are grammatical mistakes and then there is sloppiness. Only the second makes me disregard someone's comment.
> I will lose the credibility of my message if there is too much mistake...
The correct way to write this is "if there are too many mistakes", because mistakes are countable and plural. And it's fine to make grammatical mistakes if English is not your native language. You can only get better by practising :-)
Same situation here. English is not my first language and I use Claude constantly to clean up my writing. Not to sound like someone else, just to make sure what I'm trying to say actually comes across clearly. The irony is that the more polished text sometimes gets less engagement because people assume it's fully AI generated, while a messier version with obvious non-native patterns would feel more human.
I'm curious, why would you use an LLM to translate French to English? Why not use a dedicated translator such as DeepL, which will not only save you tokens/energy, but will also be much closer to your personal phrasing?
Yeah, some colleagues started using ChatGPT for internal communication as well. While we don’t like to mandate or prohibit any tools, we did need to make it really clear to everyone that this is not productive. Using Grammarly to make small corrections in messages to external recipients is fine. Using ChatGPT to “polish” your message is not. If you’re not sure about your English abilities, we offer free English lessons and encourage giving each other feedback in chats.
LLMs shouldn’t be used for communication at all if you want any form of authenticity.
You can take it one step further: let users write in their own language, and then you figure out how to make sense of it.
i do that when i don't trust the person's ability to translate to english without error. if they are using a tool to translate to english, then i might as well use that tool myself, with the benefit that i then also have the original untranslated message and can use it to get a second opinion if the translation doesn't make sense. if all i have is the translation then i am stuck with that.
The hard truth is that at work there is no authenticity.
Definitely not getting any better if everyone starts using ChatGPT for private communications.
Bad thing X has been happening for a while. Let's all work towards making it worse.
Why play this word game that has nothing to do with their point? I can write an email about TPS reports in my own voice without caring about the subject matter. That's authentic. I care about performing my job well and with individuality and (no pun intended) agency.
This is starting to become my latest pet peeve, people using Claude to write their messages in Slack. I'm going to just stop communicating via text with these people.
It's one thing to have Claude polish a message and another thing for it to write out an entire message.
I have noticed this in GitHub issues too. Where many long paragraphs used to indicate high quality, now it's the opposite.
I feel the same and I experience less pressure when writing because for the first time it seems being a bit sloppy can be advantageous.
The only thing is that my anecdata contradicts it. My AI-cleaned-up writing seems to fare much better, and this seems to be true across all channels. To be clear, I do not mean AI generated, just AI cleaned, that is spelling, punctuation, grammar mainly, the occasional word order change.
In the end it's about getting the message across first and "get to know me" second, and proper, clear expression helps a lot with the first.
> spelling, punctuation, grammar mainly, the occasional word order change.
All of those you could already achieve with tools before LLMs.
I don't often use AI to cleanup my texts, but when I do, I fully own the output. I make a conscious decision whether to leave in every AI suggestion or not. The final text _is_ what I want to say.
The point of the article is it is not what you would've said. Even though you take responsibility for the result, you were never 100% the origin.
and the reason for that is that we passively understand more than we actively use, but when reading something we often can not distinguish our active and passive knowledge of an expression. so when you read a filtered text, it will sound fine because you are familiar with the expressions used, but you don't realize that some of those expressions are not actually in your active vocabulary.
I largely reached the same conclusion recently => https://stephencagle.dev/posts-output/2025-10-14-you-should-...
It feels so disrespectful sometimes too, having to read a long paragraph that conveys so little meaning, knowing full well the original prompt was probably very short and that I'm now wasting extra time parsing the hollow LLM text expansion.
Easy fix: use an LLM to summarize it.
(only half-joking, a part of me fears that this is the reality we’re moving towards)
That's absolutely what's happening already: "write for me" for the writer, "summarise this for me" for the reader. At some point it will become clear how absurdly wasteful we're being (right now, we're being paid to ignore that waste).
> "write for me" for the writer, "summarise this for me" for the reader.
It's funny though. For computer-to-computer conversation, we invented (deflate+inflate) algorithms to save bandwidth, time, and money.
On the other hand, for human-to-human communication, we are in the process of inventing an (inflate+deflate) method, and at the same time we are spending insane amounts of time, money & bandwidth to make it possible!
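The computer-to-computer half of that really is a one-liner; here's a quick sketch with Python's standard zlib module (the message text is made up):

    import zlib

    # A repetitive corporate message compresses well: deflate shrinks it on
    # the wire, inflate restores it losslessly on the other end.
    message = b"Please review the attached TPS report by end of day. " * 8
    compressed = zlib.compress(message)      # deflate
    restored = zlib.decompress(compressed)   # inflate

    assert restored == message               # nothing lost in the round trip
    print(f"{len(message)} bytes -> {len(compressed)} bytes")

The LLM pipeline runs the arrows the other way: a short prompt is inflated into prose, then deflated back into a summary, with no guarantee the round trip is lossless.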
We need to come up with a catchy buzzword salad to market to executives. Something like "increased communication efficiency between workers by direct brain-email-brain interface"
That’s exactly why I’ve refused to use autocomplete on smartphone keyboards from the very beginning. I want to express myself in my own words.
In a work context, of course, things are a bit different: I want to move the project forward and not jeopardize my future paychecks. Authenticity tends to take a back seat there. However, I’d be more concerned about inefficiency. Is it really necessary to run every piece of communication through ChatGPT to refine the wording? Are you sure nothing gets lost in the process? Doesn’t that end up wasting a lot of work time without adding any real value?
And on top of that, it leads to alienation and frustration. If you talk to me as if you were an LLM, don’t be surprised if I talk to you as if you were an LLM.
I used to use LLMs to 'clean up' my own writings, and in the end I agree with the author here: it doesn't really help. The reader will have this impression of 'too perfect', and will have a diminished feeling of value, of honesty. I think we would benefit from a standardized way of signaling text and content that is exclusively human. Say, some sort of logo that says 'genuine', 'untouched by the hand of AI'. I'll be thinking about a way to do this.
To me the rhythm of the text makes it clear whether I'm reading something AI generated or not, usually.
Otherwise, not using em dashes, adding some mistakes and writing more like how you think/talk helps :)
Imagine going to work or a social meeting where everyone looks and sounds the same (or just within a limited set), all with the same perfect tone, body language, and communication style. Sounds like a nightmare, and I would find it hard to relate and get that "perspective" when there is nothing to differentiate a person.
I guess everyone using LLMs for text is similar to that. If everyone uses the same LLM style, it's hard to understand where the other person is coming from. This is not a problem for technical and precise communication, though (the choice of LLMs in that context has other risks).
It is also strictly not an LLM capability problem, because they can mimic or retain the original style and just "polish" with enough hints, but that takes time and investment, and people take the path of least resistance. So we all end up with similar text full of the typical AI-isms.
There are other reasons to dislike LLM text like padding and effort asymmetry that have been discussed here enough.
I recently heard a new (to me) excuse:
In the middle of a group text chat, someone replied with AI-generated blather. It was dead clear, with the usual sterile vocab, structured buzzphrases, and other LLM "tells".
I politely called him out and asked him to use his own voice. In public he insisted that it was his voice and that he used AI only for "formatting". But in private he admitted that he had created a "gem to assist with multicultural comms" that generated it. He claimed he did it because "not everyone can take the native American English well". A load of bovine manure. I nicely told him to cut this crap and just write as it comes to him. (Basic spell- and grammar-check is fine.)
I think there was an SMBC comic about this topic, but I don't think I can find it, and the site doesn't exactly make it easy. I don't even remember if it was pre-2020 or not.
It was about how people would get a thing (a robot?) that would repeat whatever they said but in a fancier way (or something along those lines), to make them sound smarter. Then people would start depending on these robots to communicate at all, to the point that their speech degrades and they start making unintelligible noises that the robots still translate into actual speech.
EDIT: Found it, from 2014: https://smbc-comics.com/index.php?id=3576
When I wrote a snarky mail to the MD and I couldn’t suppress my anger, Claude did a great job smoothing it out while keeping it pointy.
Last time I did that, I got called out as an ESL speaker and got insulted and laughed at.
Honestly that's a sign you shouldn't stay around those people. If you're financially dependent on it and can't leave, okay, exception granted, but that kind of behavior isn't ok.
Sounds like terrible people. I've worked with plenty of people who didn't start with English, and if you give them time they usually excel.
I once asked Claude to guess what prompt had generated a mail. Didn't work, unfortunately.
There are two ways to write an email. One is to keep it short and to the point so that there are obviously no errors; the other is to waffle on and obfuscate the message with an LLM so that the reader's eyes glaze over... or something like that.
"I would have written a shorter letter, but did not have the time."
i can ramble without an LLM, and i suppose you can ask an LLM to keep it short. but both are results of not taking the time to craft an appropriate message.
In emails...whatever. I can tell it's there but fine whatever, we're just trying to get a message across LLM or otherwise.
But this was the first year I saw it in performance review write-ups which frankly was jarring. Here is feedback supposedly 1:1 that massively affects this person's life and their perception of "worth" so to speak...and it's just AI.
Notably, it was split by geography: EU countries were closest to organic, India was a slop trainwreck, and the US was in the middle.
Sorta made me conclude "ok i guess that's the end of performance reviews that vaguely mean anything & actually get read"
I use ChatGPT for communication. It started with "please fix typos" and now it's "write me a Slack message about this and that". This is mostly an effect of the communication environment we created - taking risks is rarely rewarded, and mistakes can be very costly. Remember, you're always one misunderstood message away from being fired. Of course there are people whom I trust and I'd never offend them with AI-generated slop, but for the rest of humanity - it is what it is, LLMs help me a lot.
> Remember, you're always one misunderstood message away from being fired.
If this is true, you really want to be fired. That is a horrendous work environment, and you should quit if at all possible.
Most workplaces (and certainly any good workplace) will seek to understand, not fire you immediately.
Blessed are those who haven't worked corporate.
I've worked corporate jobs all my life, and I was never one misunderstood message away from being fired. Instead they would've talked to me and, even if they figured it was my fault, they would've given me a warning since it was the first time. No worthwhile employer is firing people for the first offense, corporate or otherwise.
> It robs me of getting to know you.
Ugh, you are not entitled to get to know me. There is a threshold between all that I share with the world and the rest of me. Hell, not every person gets the same picture, and that's deliberate and healthy--my customers don't get to know what my proctologist knows. My mother doesn't get to know what my wife knows.
You don't get to know all of me, because I don't trust you.
This post comes across as sweet, and innocent. It also comes across as absurdly self-entitled, and it's not an OK posture to take towards the world. It's not OK when the police take this posture, it's not OK when private companies take this posture, and it's not OK when strangers on the internet take this posture.
You are entitled to withdraw from relationships that don't fulfill your emotional needs. A reasonable audience for this missive is your girlfriend, your child (who relies on you), or your employer (to whom you are vulnerable).
Weaponised therapy speak is gross. This article was not asking you to spill your life story to every person you meet, it was asking you to speak with your own voice, which is a perfectly normal and in no way entitled thing to be asking.
What are you rambling about? It’s not about your doctor using ChatGPT for his newsletter, it’s about your colleagues using ChatGPT on Slack or email.
I personally think that the people who can’t be bothered to actually write authentic messages, and assume that everyone will just read their word salad full of repetitive AI patterns, are being the ones acting entitled.
It is, because of the baked-in asymmetry. "I couldn't be bothered to write it, but you have to read it". Unless your expectation is that I'm going to have my chatbot summarize the messages from your chatbot, in which case, maybe we should just both ride off into the sunset.
It's not "getting to know you" in that sense, it's getting to know the public face you present, whether I can trust you, and how I can interact with you most smoothly. If you're my coworker and you don't ever want to talk about your family or friends or personal interests or problems or anything, that's fine.
in some cultures getting to know you is a crucial part of a business relationship. no connection -> no business.
likewise for friends (not just your girlfriend), getting to know you is part of developing friendship.
so family, friends, work, business, that pretty much covers everyone you deal with on a regular basis.
i would go as far as saying that if you don't trust me then you have no business even communicating with me unless the interaction is incidental.
> Ugh, you are not entitled to get to know me.
If your comment is at all indicative of how you are in real life, I really don't think you have to worry about people wanting to get to know you.
I'm so tired of hearing that word online.
True: Nobody is entitled to be treated nicely. Nobody is entitled to an open, friendly relationship. Nobody is entitled to get to know you. If we only did what we were entitled to do, and received what we were entitled to receive, the world would be an even shittier place than it already is. We have enough people walking around with the "You're not entitled to me being nice, so I'm not gonna be! nyaaaaa!" attitudes.
and actually i believe the opposite is true: we are entitled to be treated nicely. we are entitled to an open and friendly relationship. and while i agree that we are not entitled to get to know you, i'd prefer to deal with an authentic person, because hiding behind a generic facade makes it easier for someone to impersonate you, putting you at risk of becoming a victim of identity theft.