In my view the author is putting the cart before the horse here. His primary argument seems to be that people already think in bullet points, so the fluff around them is unnecessary and can be excised without destroying the original message. But that fluff is there for many reasons. It adds context, it allows us to commingle our meaningful and valid emotions alongside our facts, and ultimately, it lets us tell a human story.
The way in which we create and consume information has a direct effect on our experience of the world, and I think there is a deeper point to be made here about the way we use communication technology. The endless firehose of information is drowning our brains to the point that we are compelled to find a way to cope. But I would argue that the way to do that is to rate limit receipt of messages so that only the quality stuff gets through, rather than letting everything through and destroying every human aspect of it in the process. It’s Twitter’s 140 character limit argument from last decade all over again; the medium becomes the message, so we must be careful which mediums we use.
> But that fluff is there for many reasons. It adds context, it allows us to commingle our meaningful and valid emotions alongside our facts, and ultimately, it lets us tell a human story.
That was the case prior to the availability of LLMs. However, the practice of senders expanding content with LLMs and recipients summarizing it with LLMs is only about to become prevalent. Once it reaches some sort of saturation point, people will either forgo LLMs entirely, or move to the other forms of communication you speak of, where this sort of social convention won't be needed at all.
In my case, I predict that this is going to make people interact a lot more in meatspace, superseding Internet communication in the same way email has been displaced for many people by channels such as Discord, WhatsApp, etc.
It's a little bit sad that people are wasting all these kWh of compute power to save (apparently) a few seconds' hassle of communication. I don't know how to translate this into English, but LLM-driven emailing is a real "gas factory": an absurd level of complexity and effort (albeit hidden) for a trivial task.
Why not just adopt a more direct style of communication? Why fret so much over emails? You wouldn't leave an actually important e-mail to an LLM, I would hope. And, if it's more casual (even in a work setting), just write as you usually do; the other person most likely knows you and doesn't give a shit.
How much email activity is directly valuable to these individual workers vs boss- and owner-mandated low-value nonsense?
It feels like blaming emissions on commuters as individuals instead of the ones making RTO mandates.
At my previous employer, there was a kind of a top-down mandate to "use AI in your daily work to become efficient", which senior management would interpret in all sorts of braindead ways.
This, of course, led to hilariously nonsensical emails, such as an IT security team member sending out an LLM-generated email about how users should be reading CVE reports for the software they use, instead of doing the right thing and locking out people running old software.
Isn't it kinda already the case that you send emails and people want you to jump on a video call to explain them?
Or have I just had bad luck?
None of the places I've worked at required sending an email -- although I've been on the receiving end of people asking to set up calls for trivial issues.
I don’t think it’s just you. “This site visit could have been an online meeting which could have been a phone call which could have been an email which could have been a text.”
> It adds context, it allows us to commingle our meaningful and valid emotions alongside our facts, and ultimately, it lets us tell a human story.
You are correct, of course. The fluff is the "delivery vehicle", but that also introduces errors.
The problem is, this kind of extra information is hard to do effectively, and it's not really taught formally. Worse still, it's very hard to do if there are cultural or neurological differences (i.e. being on the spectrum, or a Dutch person talking to an English person).
I agree, I think he overstated the desire to add fluff. It's true that LLMs now add a bunch of fluff by default, but I imagine that will get better.
I actually do the reverse of the example the author gave (bullet points → fluff). I give my train of thought to an LLM about what I want to communicate and instruct it to use no BS and make it tight, and it gives me a well-structured short message with clear action items, logically organized. It adds a minimal amount of fluff (e.g. "Thanks for taking the time to speak with me today") but it makes the message a lot easier for the other person to comprehend, without them needing an LLM.
Yeah totally agree, the fluff is important.
The "fluff" is a careful orchestration driven by our desire to maintain social standing. 5,000 years ago rejection by the tribe meant death, and that fear is subconsciously influencing all our communications and actions. That is why a bullet point email of what is wrong with a coworker is not acceptable, it lacks all the nuance that allows you to read into the other persons past perceptions and future intents.
There is a huge range of cultural influence though. Americans, on average, have quite a different communication style from, let's say, Germans or Scandinavians. Fluff will be present everywhere, but to vastly different degrees.
> That said, is it a beneficial assignment to ask a student to read a long book? I haven't a clue. Is it beneficial to ask a student to write an essay? More on that in a moment.
As far as I can tell, he dropped this thread, and there wasn't "more on that in a moment."
But I'm kinda surprised that he "hasn't a clue" that the answer to that question is a clear "yes." The whole point of school is to teach kids to use their brains, and reading something complicated, understanding it, and being able to synthesize something about it is a pretty important brain-use.
> Here we can think of the long email with meaningless fluffy padding as being the business speak protocol that office employees communicate with today. And we can think of the bullet points as how we actually think.
You've got to be careful to not overgeneralize this kind of observation, a problem that software engineers are especially prone to. Sometimes the "fluff" is meaningless, and "bullet points" are all that's really there. Other times the "fluff" is important stuff that the person making the judgement just doesn't understand or has trouble comprehending.
Software engineer types are often pretty ignorant, without really realizing it, and often assume their superiority because "intelligence" or some dumb shit like that.
Yes, LLM expansion of bullet points is fluff that wastes everyone's time, but that doesn't mean all or even most true thought can be compressed into concise bullet points. Furthermore, even when it can, there are often good reasons why the most concise and compact form is not the most desirable presentation.
> Software engineer types are often pretty ignorant, without really realizing it, and often assume their superiority because "intelligence" or some dumb shit like that.
It’s clearly on display here. He made me skeptical when he pooh-poohed the importance of reading books and writing about them in school, and lost me when he casually claimed that all people think in bullet points. These things are incredibly egocentric and stereotypically engineer.
I think the problem is the disconnect between learning vs passing. The goal of writing a book report is supposed to be to develop your brain and improve some skills. But society cannot simply certify knowledge without some kind of testing, so there must be an exam. And you have curricula where students are "required" to take a list of classes. Not all students are deeply excited about every class on that list (or its teacher, or textbook), so some students are in some classes purely to tick a checkbox. That means to them, whatever skill is taught there is useless, so they'll happily use the LLM and cheat in other ways.
The first part of the problem is that we need to stop cookie-cutter course lists. Forcing people to take a course they don't care about is a futile activity. Back in the day it was easy to do, but now it has gotten harder due to LLMs and the reliance on exams as a compliance tool. Yes, this will make it harder to say someone has a degree in X. Instead you will have to handle a bit more nuance and discuss what specific topics they studied.
The second part is that we need to dial down the credentialism. Treating third-party exam grades as an indicator of ability is no longer feasible in the LLM world. The only viable way is to have an extremely controlled exam environment, but that greatly restricts what sort of things you can examine. A lot of knowledge is relevant on a timescale of days or longer, not a few hours, and you can't detain people for days just for an exam grade.
Both of these are challenging for sure, but I don't think it's impossible. The programming industry has dealt with this for decades. When someone has a degree in CS or a related area, it doesn't mean all that much in practice, and the GPA in that degree is also a weak indicator. Instead, the industry found other ways to directly evaluate ability. Sure, they're not perfect, but not exactly hopeless I would say.
>so some students are in some classes purely to tick a checkbox
As a student I was forced to take classes I would never have willingly chosen, and yet I still learned from them. I worked for an A and didn't consider cheating an option. I'm not really sure why; I can answer why I wouldn't today, but I can't particularly say why my yesteryear self was so against it. Yet it remains a key point in my gaining a very useful education.
>Forcing people to take a course they don't care about is a futile activity.
While I think sometimes we include too many unrelated courses, I also don't agree with the idea of only giving someone courses they are interested in. I would have been weaker for it. I think the issue is the culture that encourages cheating as a valid response, but where does that come from and how to fight it are massive problems.
>The only viable way is to have an extremely controlled exam environment, but that greatly restricts what sort of things you can examine.
I think oral exams are great at testing knowledge, but they suffer other problems. They don't scale at all, and they leave more room for bias than other forms of exams. I'm sure there are other problems, but those two are enough to start with. If only there was some option that had the benefits of an oral exam with an expert without the issues (this sounds like I'm hinting there is such a solution, but I promise I'm not, it is just wistful thinking).
>But I'm kinda surprised that he "hasn't a clue" that the answer to that question is a clear "yes." The whole point of school is to teach kids to use their brains, and reading something complicated, understanding it, and being able to synthesize something about it is a pretty important brain-use.
I also believe that there's something to be said about being able to sit and focus on something like reading a book for a long period of time. Patience and focus seem to be diminished these days.
That, and it's only reading A LOT that builds the proficiency needed to absorb, evaluate, comprehend, and respond to the other documents you will eventually come upon, whether it is reading for enjoyment and allowing a virtual world to wash over you while scanning your eyes across a page, or reading a scientific paper to understand the research topic and results well enough to know when to review and reread for context and understanding.
Some level of long focus and deeper awareness of written material is necessary to create anything substantially new and massage it into what you actually desire.
I'm not sure this is as self evident as you believe it is. Outside of school, I don't think I've read an actual book and gotten use out of it. I read for pleasure but the actual useful information in my life has been accumulated from reading many many short form (<10 page) things over a long time.
I don't mean to criticize you but... my God, this is such a sad statement to me.
I understand where you're coming from when it comes to technical stuff - I would agree that I haven't gotten very much of my engineering knowledge from books, but rather from shorter-form stuff.
But there are so very many things I've read over the years that have stuck with me, that have informed the way I think about the world, that have provided comfort or food for thought or new lenses through which to view things. My life would be so much poorer without the books I've read, they all mix around in my brain and come to me at random times, and give me a lot of use, even if it's the kind of "value" that is hard to quantify and pin down.
The author ironically uses a lot of words to say very little, though I agree with the conclusion. It’s already annoying to have someone use a lot of words to say very little (especially in a business context). Now it’s free and easily accessible for anyone, whereas before it at least took some social stamina.
So people will do it, people will be annoyed by it, and people will gravitate toward more efficient communicators.
Therefore, since brevity is the soul of wit,
And tediousness the limbs and outward flourishes,
I will be brief. Your noble son is mad.
or
“That,” replied Hardin, “is the interesting thing. The analysis was the most difficult of the three by all odds. When Houk, after two days of steady work, succeeded in eliminating meaningless statements, vague gibberish, useless qualifications—in short all the goo and dribble—he found he had nothing left. Everything canceled out. Lord Dorwin, gentlemen, in five days of discussion didn't say one damned thing, and said it so that you never noticed.”
A perfect summary: https://marketoonist.com/wp-content/uploads/2023/03/230327.n...
It's lossy expansion. The worst of both worlds!
And society requires efficiency, as attention spans have been reduced to near zero, with TikTok videos and YouTube Shorts being the norm now.
Maybe we end up back at hieroglyphs.
We should invent a language, like stenography or algebra, in which it would be impossible to express anything that is not either a fact or an implication. Then we could see at first sight whether a text is dense or not.
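For fun, here is a minimal sketch of that idea in Python (all names are mine, not a real system): the grammar has exactly two node types, so anything that is not a fact or an implication is unrepresentable, and density can be read off as propositions per word.

```python
from dataclasses import dataclass

# The whole grammar: a sentence is either a fact or an implication.
# Padding, pleasantries, and hedging have no node type, so they are
# unrepresentable by construction.

@dataclass(frozen=True)
class Fact:
    claim: str

@dataclass(frozen=True)
class Implies:
    premise: "Sentence"
    conclusion: "Sentence"

Sentence = Fact | Implies

def propositions(s: Sentence) -> int:
    if isinstance(s, Fact):
        return 1
    return propositions(s.premise) + propositions(s.conclusion)

def density(word_count: int, sentences: list[Sentence]) -> float:
    """Propositions per word of the original prose: dense or not, at a glance."""
    return sum(propositions(s) for s in sentences) / max(word_count, 1)

# Lord Dorwin, formalized: five days of discussion, zero sentences survive.
print(density(50_000, []))  # 0.0
```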
A bullet-point list of ways you screwed up communicates something entirely different than a long form email filled with flowery "fluff" (as the author puts it).
In fact, if the author feels confident in this theory, I suggest they replace the blog post with this AI-generated bullet-point summary I just made...
> it’s already annoying to have someone use a lot of words to say very little
Totally agreed. I'd like to cut to the point with a bit of pleasantry at the beginning just because I have to.
> now it’s free and easily accessible for anyone
At least everyone can also let it be summarized for us then.
It's still terser than a pg essay!
“I didn't have time to write a short letter, so I wrote a long one instead.”
"more time shorter letter"
Personally I want to double down on this approach I wrote about a couple of weeks ago: https://www.sealambda.com/blog/this-post-passed-unit-tests/
Which is, to keep using LLMs as reviewers, rather than as writers.
> One day we'll just send bullet points as emails. We'll reach business speak protocol version 2.0. That which was verbose becomes terse. No more time wasted translating thoughts and injecting platitudes.
I'll celebrate the day this happens and gets widespread. Conversing with Americans is painful compared to Germans, because Americans insist on being coddled all the time and the very second you don't they'll complain behind your back to your boss.
Fun fact: that cultural difference was also a huge part of why Wal-Mart failed to gain traction here in Germany. German consumers really didn't like staff welcoming them with a forced smile; that, and bad press from violated labor laws, was their downfall.
>staff welcoming them with a forced smile
Walmarts notorious "people greeters" are there to prevent theft, and as far as I can tell, the people greeter reason seems to be purely speculative, and less explanatory than more solid political reasons (established competitors conspired, just like Walmart does to incumbents in the US).
> Walmart's notorious "people greeters" are there to prevent theft
How is some old pensioner sitting (or worse: standing) at the entrance going to deter theft? Thieves don't care unless there are actual security guards on patrol.
Smile isn’t that bad, smiling with a pistol to the back of their head is :)
I agree that LLMs turn short prompts into long code blocks, but I don't agree that it's fluff in the same way that email pleasantries are fluff.
The short prompt leaves a lot of room for interpretation. The code itself leaves zero room for interpretation (assuming the behavior of the programming language is well understood). I don't agree that AI will let us start relying on code that isn't fully defined just because it might let our emails drop fluff that didn't contribute to the meaning at all.
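A toy example of that asymmetry, with a hypothetical one-line prompt and real standard-library calls; every question the prompt leaves open, the code has to answer explicitly:

```python
from datetime import datetime, timezone

# Prompt: "parse the date the user typed"
# The prompt doesn't say: which format? which timezone? what on failure?
# Any code that fulfills it has to pick an answer to each question.

def parse_user_date(text: str) -> datetime:
    # Decision 1: accept ISO 8601 only (not "3/4/2025", which is
    # March 4 in the US and April 3 almost everywhere else).
    dt = datetime.strptime(text.strip(), "%Y-%m-%d")
    # Decision 2: interpret the bare date as midnight UTC.
    return dt.replace(tzinfo=timezone.utc)
    # Decision 3: on bad input, let ValueError propagate to the caller.

print(parse_user_date("2025-03-04"))
```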
> How are professionals expected to communicate with each other? Usually with empty platitudes to kick things off like "Hey! How's it going? How's the family? How are you doing?" Messages are expected to be written across several coherent English sentences, neatly organized into paragraphs, finally with some sort of signature. In the programming world we refer to this as boilerplate. What is it that we are really trying to communicate? Usually a few short ideas that could be represented as bullet points, but that we need to fluff up with meaningless words so that we don't sound rude. Of course, this changes by culture and by language and is not applicable to many parts of the world, but it is definitely a thing in American English.
Do the rest of you really do this? I can’t recall receiving any Slack messages or emails with this sort of thing over the 25+ years I’ve been working in a business environment, even though all of them have been in American English. I certainly don’t use that sort of “boilerplate”. I just jump straight into the topic.
It is like https://marketoonist.com/2023/03/ai-written-ai-read.html being written down at length.
I do agree that this is an annoying phenomenon. It took me a while to understand that there are people who are not just using LLMs to write emails in this style; those people are the source of the training data for LLMs.
The solution is "simple", to move aways from such people and stick to genuine communicators.
And if that's not an option, another solution is to use AI to summarize those long emails. In the end both parties will be writing and reading terse little emails, with the verbose text in the middle as the world's most ridiculous communication protocol.
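A sketch of that protocol, with llm as a placeholder for whichever completion API each side happens to use (not a real library):

```python
def llm(prompt: str) -> str:
    """Placeholder for whatever completion API either side uses."""
    raise NotImplementedError("wire up a provider here")

def send(bullets: list[str]) -> str:
    # Sender: inflate the actual content into polite prose.
    return llm("Expand these into a friendly, professional email:\n"
               + "\n".join(f"- {b}" for b in bullets))

def receive(email_body: str) -> list[str]:
    # Recipient: deflate the prose back into the actual content.
    return llm("Summarize this email as terse bullet points:\n"
               + email_body).splitlines()

# The verbose text on the wire is pure framing: ideally
# receive(send(bullets)) == bullets, minus whatever the round trip mangled.
```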
This is why I am starting a new aI company: the a is for automatic and the I is for identity. We will automate the application of the identity matrix multiplication/solve operation to produce an extremely performant implementation of the bullet point/prose/bullet point protocol.
Wow, you have the tech not only to apply the identity, but to solve for it? That is something!
A major flaw in your approach is that the identity is potentially too large; you should instead project something onto a very small space, and then extrapolate in some more or less random (but pretty) manner back to the full-dimensional space.
We’re still optimizing. We tried to use a low rank approximation but for some reason it didn’t work. We’ll probably use a banded or sparse representation.
Progress would be faster, but for some reason we can’t retain a mathematician after showing them our experiments.
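For anyone who wants to reproduce the core technology at home, a sketch with numpy; the second half is the "improved" low-rank pipeline, which may hint at why the mathematicians keep leaving:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=512)            # the bullet points, as a vector

# Flagship product: multiply by the identity. Zero loss, blazing speed.
assert np.allclose(np.eye(512) @ x, x)

# The rival pipeline: project onto a tiny space, extrapolate back up.
k = 8                               # 512 dimensions of nuance -> 8
P = rng.normal(size=(k, 512)) / np.sqrt(k)
summary = P @ x                     # "summarize"
restored = P.T @ summary            # "expand", more or less at random

# Lossy expansion in action: the nuance does not survive the round trip.
print(np.linalg.norm(restored - x) / np.linalg.norm(x))  # large, not 0
```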
It’s something so ridiculous it’s difficult to even parody. I guess the real-world equivalent is ordering a single microSD card and getting it in a plastic case, in a blister pack, in an envelope, in a mailer, inside a Jiffy bag that doesn’t fit through your letter box because none of the middlemen cared enough to think about it.
I consider myself a “genuine communicator” insofar as I write messages thoughtfully and thoroughly.
Though I suspect many of my emails go unread, and people have confessed that they ran my personal messages to them through an LLM to generate something for them to report up in some spreadsheet or similar tool.
Or in the case of some managers for whom the critical technical detail goes over their head, they just re-ask their questions in a call and try to get to “is it done yet” and “how many engineers can I add to make it go faster”.
I think I might be on the chopping block if a move was made to get rid of overly thorough, verbose communicators, genuine or not.
But it sounds like you might be filling your emails with actual facts, rather than the sort of fluff that an LLM could do for you.
I have a new rule: if someone sends me an "AI" message, I will probably denylist them.
Not only hasn't the person thought about it or invested in it, but now they're jamming my ability to read into exactly what that one person said, and how they said it.
Many people have made the observation/joke that meaning->LLM->LLM->meaning is silly, but I don't recall anyone pointing out that information that a skilled reader/listener can discern from direct expression is lost.
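In filter form the rule is nearly trivial; the hard part is the detection heuristic, which is entirely hypothetical here (the "tells" below are guesses, not a real classifier):

```python
denylist: set[str] = set()

def looks_llm_generated(body: str) -> bool:
    """Hypothetical heuristic; in practice this is the unsolved part."""
    tells = ("i hope this email finds you well", "delve", "furthermore,")
    return sum(t in body.lower() for t in tells) >= 2

def triage(sender: str, body: str) -> str:
    if sender in denylist:
        return "drop"
    if looks_llm_generated(body):
        denylist.add(sender)   # they didn't write it; I won't read it
        return "drop"
    return "inbox"             # a human actually thought about this one
```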
> One day we'll just send bullet points as emails. We'll reach business speak protocol version 2.0. That which was verbose becomes terse. No more time wasted translating thoughts and injecting platitudes.
I'm not sure about you all, but as a non-manager at [big tech company] I can count on one hand the number of actual emails I've sent in the past year.
Everything is IM chats. It's actually pretty nice.
I don't think the final point about programming languages makes much sense.
In the overall software development process, lots of people contribute different things to create the product.
The job of the software developer is to bring the amount of ambiguity in the specification to zero, because computers can only run a program with zero ambiguity.
There have been lots of high level programming languages that abstract away certain things or give the programmer control over those things. The real thing that you want to do is pick a programming language that allows you control over the things you care about. Do you care about when memory is allocated and deallocated? Do you care about how hardware is used (especially GPUs and ML accelerators) or do you want the hardware completely abstracted away? Do you care more about runtime or dev iteration time? Does your program need to exist in a certain tech context?
There's no single programming language that lets everyone, whatever it is they care about, choose which of those things to deal with and which to ignore.
Let's discuss this spot on the wall while the building we are in is being excavated from beneath.
Yes, AI is going to change the world, though not in the way anyone seems to be discussing. We've created interactive logical assistance for everything and anything intellectual anyone does. The ramification of this is that ambitions requiring intellectual difficulty are going to skyrocket, thanks to the magical thinking that is already rampant in all aspects of society. The net result will be significantly higher demands on anyone with a knowledge-based career; after all, "you have AI assistance now, why are you not 10X?"
We will all be forced to become adept at using AI, and not just casually. We will be required to operate on our intellectual edge, or we will find ourselves unemployable.
I actually got a very similar automated response to a take-home (that I spent 12 hours of my time on) in an interview process. Some feedback was good, but other feedback was not (one example: the feedback mentioned not enforcing a Node version, while such enforcement was in the package.json file). That, combined with the formatting, made me realize it was 100% copy-pasted output based on some prompts, maybe with two or three words tuned, without any checking of the facts.
A very distressing experience that prompted me to change how I approach take home assignments.
These are not set in stone and may be overruled if a dream company comes along, but:
At least one of the following MUST be true:
- Take home takes 2 hours or less
- Take home has a follow-up stage with a human being, to allow me to defend/discuss/expand the take home. Of course this assumes I deliver something that is not just the word "farts" in a txt.
- Take home is monetarily compensated.
These are heuristics that do not imply I am an impressive IC or something like that. It's just self defense.
I mostly agree with what the author is saying with regard to "the current state of things." I do not feel, however, that it is a particularly large concern, especially long-term, given the second step (summarizing) is probably already integrated into everyone's e-mail client, and if not -- it will be at some point. The "smoothing out of difficult communications", however, may end up being worth the whole "having to read a summarized response."
The reason it won't matter "long term" is that e-mail clients are solving/will solve[0] the "give me 'the point' of this e-mail" problem. If my couple of decades of experience across multiple employers is any indication[1], the vast majority of people in software development fall into one of two camps: (a) they don't have the basics of written communication down. It's not a matter of misspellings, improper semicolons, or em-dashes. It's all lower case[2] with no punctuation, or with "..." (no spaces between, either) in place of every other form of punctuation. Or (b) they are generally grumpy people who write in a manner that fits their personality.
Conveying tone, correctly, via written text is hard unless the tone you're trying to convey is "frustration/anger/impatience". And, of course, the same folks who can't figure out punctuation tend to respond tersely. Between co-workers who work closely together, that's preferred. When my boss has to tell me something minor about my performance and sends it in a five-word e-mail, it comes off like I need to start looking for new work. Prior to AI, "good managers who were writing-challenged" would find templates online and replace words. It never sounded genuine. AI brings us a lot closer to that, while not requiring an enormous amount of effort on the part of the writer. It'll be a matter of time before a lot of that process happens within the client (if it doesn't, already). I know tone detection is a common feature on communications tools I use[3].
[0] Not entirely sure; I use e-mail so infrequently, but thinking about the chat app we use at work, it provides AI summaries of the day's chats in each channel.
[1] Anecdata, I know, but it's all I've got
[2] Including the first letter of every meeting invite and subject; if I have OCD, that triggers it.
AI will change the world, but not in the way the OP (Thomas Hunter) thinks.
--
The first statement, AI will change the world, is low surprise and clearly true already.
The second statement, not in the way X thinks, is also low surprise, because most technologies have very unpredictable impacts, especially if it is "close to singularity" or the singularity.
There was a moment when Google introduced autocomplete, and it was a game changer.
LLMs are still waiting for their autocomplete moment: when they become an extension of the keyboard and complete our thoughts so fast that I could write this article in 2 minutes. That will feel magical.
> LLMs can write boilerplate code at least an order of magnitude faster than I can.
This is my biggest fear with everyone adopting LLMs without considering the consequences.
In the past, I used "Do I have to write a lot of boilerplate here?" as a sort of litmus test for figuring out when to refactor. If I spend the entire day just writing boilerplate, I'm 99% sure I'm doing the wrong thing, at least most of the time.
But now, junior developers won't even get the intuition that if they're spending the entire day just typing boilerplate, something is wrong; instead they'll just get the LLM to do it, and there are no careful thoughts/reflections about the design and architecture.
Of course, reflection is still possible, but I'm afraid it won't be as natural and "in your face" which kind of forces you to learn it, instead it'll only be a thing for people who consider it in the first place.
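A deliberately trivial sketch of what that litmus test looks like in practice (hypothetical endpoint code; the repetition is the point):

```python
# Before: a day of this, endpoint after endpoint. The boredom is a signal.
def create_user(req: dict) -> dict:
    if "name" not in req:
        return {"status": 400, "error": "missing field: name"}
    if "email" not in req:
        return {"status": 400, "error": "missing field: email"}
    return {"status": 201}

# After: the itch to stop typing is what produces this helper.
# An LLM happily generates the "before" version forever and never itches.
def require(req: dict, *fields: str) -> dict | None:
    missing = [f for f in fields if f not in req]
    if missing:
        return {"status": 400, "error": f"missing fields: {', '.join(missing)}"}
    return None

def create_order(req: dict) -> dict:
    return require(req, "user_id", "items") or {"status": 201}

print(create_user({"name": "Ada"}))                  # 400, email missing
print(create_order({"user_id": 7, "items": ["x"]}))  # 201
```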
The question to ask yourself is what value one gets from that refactor. Is the software faster? Does it have more functionality? Is it cheaper to operate? Does it reduce time to market? These would be benefits to the user, and I'd venture to say the refactor does not impact the user.
The refactor will impact the developer. Maybe the code is now more maintainable, or easier to integrate, or easier to test. But this is where I expect LLMs will make a lot of progress: they will not need clean, well-structured code. So the refactor, in the long run, is not useful to a developer with an LLM sidekick.
I agree with you but as a thought exercise: does it matter if there is a lot of boilerplate if ultimately the code works and is performant enough?
Fwiw, most of the time I like writing code and I don't enjoy wading through LLM-generated code to see if it got it right. So the idea of using LLMs as reviewers resonates. I don't like writing tests though so I would happily have it write all of those.
But I do wonder if eventually it won't make sense to ever write code and it will turn into a pastime.
> I agree with you but as a thought exercise: does it matter if there is a lot of boilerplate if ultimately the code works and is performant enough
Yeah, it matters, because it is almost guaranteed that eventually a human will have to interact with the code directly, so it should still be good quality code.
> But I do wonder if eventually it won't make sense to ever write code and it will turn into a pastime
Even the fictional super-AI of Star Trek wasn't so good that the engineers didn't have to deeply understand the underlying work that it produced.
Tons of Trek episodes deal with the question of "if the technology fails, how do the humans who rely on it adapt?"
In the fictional stories we see people who are absolute masters of their domain solve the problems and win the day
In reality we have glorified chatbots, nowhere near the abilities of the fictional super-AI, and we already have people asking "do people even need to master their domains anymore?"
I dunno about you but I find it pretty discouraging
No, that is not the only thing missing. Right now AI continues to make mistakes a child learns not to make by age 10: don't make shit up. And by the time you're an adult, you figure out how to manage not to forget (by whatever means necessary) the first thing your boss told you to do after he added a new ask.
> AI can solve math puzzles better than 99.9% of population
I studied electronic engineering and then switched to software engineering as a career, and I can say the only time I've been exposed to math puzzles was in academic settings. The knowledge is nice and helps with certain kinds of problem solving, but you can be pretty sure I will reach for a textbook and a calculator before trying to brute-force one such puzzle.
The most important thing in my daily life is understanding the given task, doing it correctly, and reporting on what I've done.
Yep this exactly. If I ever feel that I'm solving a puzzle at work, I stop and ask for more information. Once my task is clear, I start working on it. Learned long ago that puzzle solving is almost always solving the wrong problem and wasting time.
Maybe that's where the disconnect comes from. For me, understanding comes before doing, and coding for me is doing. There may be cases where I assumed my understanding was complete and the feedback (errors during compiling and testing) told me I was wrong, but that just triggers another round of research to seek understanding.
Puzzle solving is only for when information is not available (reverse engineering, closed systems, ...), but there's a lot of information out there for the majority of tasks. I'm amazed when people spend hours trying to vibe-code something, when they could spend just a few minutes reading about the system and come up with a working solution (or find something that already works).
While I do see this argument made quite frequently, doesn't all professional effort center on procedures employed particularly to avoid mistakes? Isn't this really the point of professional work (including professional liabilities)?
I feel like the opposite/something else is missing. I can write lots of text quickly, if I just write down my unfiltered stream of thoughts, both together with and without an LLM, but what's the point?
What takes a really long time is making a large text contain just the important parts. Saying less takes longer, at least for me, but hopefully saves time and effort for the people reading it. At least that's the idea.
The quote "If I had more time, I would have written a shorter letter" comes to mind.
You can do this in Cursor already. Write a README or a SPEC file and the LLM will try to "complete your thoughts", except that it's off often enough. I find this hugely distracting, as if someone always interrupts your train of thought with random stuff.
By definition, if I knew how AI would change the world, I would invest in/build things to that end. The fact that we still don't have a great AI product outside of ChatGPT shows that no one knows what will happen.
The author claims to be able to tell AI content. I am wondering: is there any test to help me train to distinguish AI content? Like: this paragraph was written by a human, this one by an AI, and then see how well we all do?
> Messages are expected to be written across several coherent English sentences, neatly organized into paragraphs, finally with some sort of signature. In the programming world we refer to this as boilerplate.
"This HTTP call has no message body and therefore no content, and can thus be ignored" he said confidently, not noticing the status code. The verbiage is where you find out useful information, like whether the speaker is on your side and whether they understand the problem the same way as you and whether they're dumb. It's not as if the reason we didn't adopt bullet-points-only ten years ago is that it required better AI.
More generally, I submit that when faced with a long email thread, skim reading is superior to LLM summaries in all cases (except maybe the one where the reader is too inexperienced to do it well). It's faster, captures more detail, and (probably most important) avoids the problem of the people on the thread coming away with subtly different understandings of the conversation.
Frankly I don't think the end users will notice much difference.
Excel '98 probably covers 90% of what the average Excel user needs, yet here we are with a grossly bloated SaaS Excel app in 2025. Constant SaaS feature-pack enshittification that people are either pushed or tricked into.
I think software is in a for a big disruption when people realize they can prompt an LLM "Make me a simple app for tracking employee payroll for my small business. Just like lotus 1-2-3 was more than capable of doing 40 years ago."
I distinctly remember a review in a 1996 computer magazine calling the new version of MS Word bloated, overcomplicated and slow. If only they knew how bad things were going to get.
TL;DR: AI summaries and bullet points for everything will change human communication to that format.
The problem with this post is that, despite mentioning programming languages in the title, the examples are about writing emails. The author forgets to address programming at all, which is very much a use case and will remain one as processors only run compiled machine code and not bullet points.
While there is more talk about emails, programming is addressed toward the end: "[…] this will affect computer programming languages as well. Right now I can open a codebase, […]"
I'm not so sure that's a missing point or just an actual point.
I'm not convinced myself that AI will have much impact on the developer landscape, beyond better autocompletion and doc generation.
All these fancy AI developers are cute, but we have had the cheaper-vs-lower-quality trade-off available for quite some time now with outsourcing to emerging countries. When I was a student, we heard all the predictions of the destruction of software engineers because of outsourcing, the need to become an architect instead of a developer or you'll be replaced, etc.
I've seen none of that happen over the years, except for very low skilled automation / CRUD jobs.
> I'm not convinced myself that AI will have much impact on the developer landscape, beyond better autocompletion and doc generation.
I’ve been a developer for 20 years. Using an LLM to generate context-specific prototypes has completely changed my productivity. It has allowed me to start and finish projects that were previously relegated to “maybe someday” (read: never).
It’s not clear what the net result will be in the long run, but it’s already changing what and how I build things, and I’m not nearly as bearish as I was before a few successful projects.
I don’t think AI is going to replace good engineers. It’s going to make good engineers better engineers. This alone will lead to some interesting outcomes I think.
The world is full of unsolved problems and many of them are just time-expensive to fix. Significantly lowering the activation energy and time investment required totally changes the calculus.
I wrote about one concrete use case I built to help me solve a long-standing clutter problem in a comment on another thread [0]. Not only did this unblock a project I'd been trying to get going for a while (decluttering my place), it taught me about Linux Kernel Gadgets and started a cascade of new future project ideas.
It also makes trying harder things lower risk, i.e. I can fail quickly vs. having to spend 20 hours to see if it's worth pursuing.
Setting aside basic productivity, as a high functioning depressed person, activation energy is a huge deal. Having the ability to get an idea to a stage that I can actually see results and work on it is a game changer.
In what way is "I can solve real problems faster than before" analogous or related to companies building frivolous/unnecessary cloud-connected features?
That is not true; see "mental health disorders", which I do suffer from. Me not doing something does not mean it never needed to be done. The opposite is true, too: something I have done does not mean it had to be done, if that makes sense.
> When I was a student, we heard all the predictions of the destruction of software engineers because of outsourcing, the need to become an architect instead of a developer or you'll be replaced, etc.
A lot of low-level stuff has been outsourced to India over the last decades, and more if you count second- or even third-level support (and that doesn't even include first-level support that's not required to be on-site).
C-levels only see the expense savings, but how their employees feel about internal support being utter dogshit can't be quantified in a language that beancounters speak...
In my view the author is putting the cart before the horse here. His primary argument seems to be that people already think in bullet points, so the fluff around them is unnecessary and can be excised without destroying the original message. But that fluff is there for many reasons. It adds context, it allows us to commingle our meaningful and valid emotions alongside our facts, and ultimately, it lets us tell a human story.
The way in which we create and consume information has a direct effect on our experience of the world, and I think there is a deeper point to be made here about how the way we use communication technology. The endless firehose of information is drowning our brains to the point that we are compelled to find a way to cope. But I would argue that the way to do that is to rate limit receipt of messages so that only the quality stuff gets through, rather than letting everything through and destroy every human aspect of them in the process. It’s Twitter’s 140 character limit argument from last decade all over again; the medium becomes the message, so we must be careful what mediums we use.
> But that fluff is there for many reasons. It adds context, it allows us to commingle our meaningful and valid emotions alongside our facts, and ultimately, it lets us tell a human story.
That was the case prior to the availability of LLMs. However, the practice of sending over LLM-expanded content from the sender to the recipient and the use of LLM-aided summarization on the recipient's side is only about to become prevalent. Once it reaches some sort of saturation point, people would either forego LLMs entirely, or move to other forms of communication that you speak of where this sort of social convention won't be needed entirely.
In my case, I predict that this is going to make people interact a lot more in meatspace and supersede Internet communication in the same way email has been relegated for many people over channels such as Discord, WhatsApp, etc.
It's a little bit sad that people are wasting all these kWh of compute power to save a few seconds hassle (apparently) of communication. I don't know how to translate this in English, but LLM-driven emailing is a real "gas factory", an absurd level of complexity and effort (albeit hidden) for a trivial task.
Why not just adopt a more direct style of communication? Why fret so much over emails? You wouldn't leave an actually important e-mail to an LLM, I would hope not. And, if it's more casual (even in a work setting), just write as you usually do, the other person most likely knows you and doesn't give a shit.
How much email activity is directly valuable to these individual workers vs boss and owner-mandated low-value nonsense?
It feels like blaming emissions on commuters as individuals instead of the ones making RTO mandates
At my previous employer, there was a kind of a top-down mandate to "use AI in your daily work to become efficient", which senior management would interpret in all sorts of braindead ways.
This, of course, lead to hilariously nonsensical emails, such as a IT security team member sending out a LLM-generated email about how users should be reading CVE reports for the software they use, instead of doing the right thing and lock out people running old software.
isn't it kinda already the case that you send emails and people want you to jump on a video call to explain it?
or have I just had bad luck
None of the places I've worked at required sending an email -- although I've been on the receiving end of people asking to set up calls for trivial issues.
I don’t think it’s just you. “This site visit could have been an online meeting which could have been a phone call which could have been an email which could have been a text.”
> It adds context, it allows us to commingle our meaningful and valid emotions alongside our facts, and ultimately, it lets us tell a human story.
You are correct of course, The fluff is the "delivery vehicle" but that also introduces errors.
the problem is, this kind of extra information is hard to do effectively, and its not really taught formally. Worse still its very hard to do if there are cultural or neurological differences(ie being on the spectrum, or a Dutch person talking to someone English)
I agree, I think he overstated the desire to add fluff. It's true that LLMs now add a bunch of fluff by default, but I imagine that will get better.
I actually do the reverse of the example the author gave (bullet points - fluff). I give train of thought to an LLM about what I want to communicate and instruct it to use no BS and make it tight and it gives me a well structure short message with clear action items logically structured. It adds the minimal amount of fluff (e.g. "Thanks for taking the time to speak with me today") but it makes it a lot easier for the other person to comprehend, without using an LLM.
Yeah totally agree, the fluff is important.
The "fluff" is a careful orchestration driven by our desire to maintain social standing. 5,000 years ago rejection by the tribe meant death, and that fear is subconsciously influencing all our communications and actions. That is why a bullet point email of what is wrong with a coworker is not acceptable, it lacks all the nuance that allows you to read into the other persons past perceptions and future intents.
There is a huge range of cultural influence though. Americans, on average, have quite a different communication to let's say Germans or Scandinavians. Fluff will be present everywhere, but to vastly different degrees.
> That said, is it a beneficial assignment to ask a student to read a long book? I haven't a clue. Is it beneficial to ask a student to write an essay? More on that in a moment.
As far as I can tell, he dropped this thread, and there wasn't, "more on that in a moment."
But I'm kinda surprised why he "hasn't a clue" that the answer to that question is a clear "yes." The whole point of school is to teach kids to use their brains, and reading something complicated, understanding it, and being able to synthesize something about it is a pretty important brain-use.
> Here we can think of the long email with meaningless fluffy padding as being the business speak protocol that office employees communicate with today. And we can think of the bullet points as how we actually think.
You've got to be careful to not overgeneralize this kind of observation, a problem that software engineers are especially prone to. Sometimes the "fluff" is meaningless, and "bullet points" are all that's really there. Other times the "fluff" is important stuff that the person making the judgement just doesn't understand or has trouble comprehending.
Software engineer types are often pretty ignorant, without really realizing it, and often assume their superiority because "intelligence" or some dumb shit like that.
Yes, LLM expansion of bullet points is fluff that wastes everyone's time, but that doesn't mean all or even most true thought can be compressed into concise bullet points. Furthermore, even when it can, there's are often good reasons why the most concise and compact form is not the most desirable presentation.
> Software engineers types are often pretty ignorant, without really realizing it, and often assume their superiority because "intelligence" or some dumb shit like that.
It’s clearly on display here. He made me skeptical when he pooh-poohed the importance of reading books and writing about them in school, and lost me when he casually claimed that all people think in bullet points. These things are incredibly egocentric and stereotypically engineer.
I think the problem is the disconnect between learning vs passing. The goal of writing a book report is supposed to be to develop your brain and improve some skills. But society cannot simply give away knowledge without some kind of testing, so there must be an exam. And you have curricula where students are "required" to take a list of classes. Not all students are deeply excited every class on that list (or their teacher, or textbook) so some students are in some classes purely to tick a checkbox. That means to them, whatever skill is taught there is useless, so they'll happily use the LLM and cheat in other ways.
First part of the problem is we need to stop cookie cutter course lists. Forcing people to take a course they don't care about is a futile ability. Back in the day it was easy to do it, but now it has gotten harder due to LLMs and reliance on exams as a compliance tool. Yes, this will make it harder to say someone has a degree in X. Instead you will have to handle a bit more nuance and discuss what specific topics they studied.
Second part is we need to dial down the credentialism. Treating third party exam grades as an indicator of ability is no longer feasible in the LLM world. The only viable way is to have a extremely controlled exam environment, but that greatly restricts what sort of things you can examine. A lot of knowledge is relevant on a timescale of days or longer, not a few hours, and you can't detain people for days just for an exam grade.
Both of this are challenging for sure but I don't think it's impossible. The programming industry has dealt with this for decades. When someone has a degree in CS or related area, it doesn't mean all that much in practice, and the GPA in that degree is also a weak indicator. Instead, the industry found other ways to directly evaluate ability. Sure, they're not perfect, but not exactly hopeless I would say.
>so some students are in some classes purely to tick a checkbox
As a student I was forced to take classes I would have never willingly chosen to take, and yet I still learned from them. I worked for an A and didn't consider cheating an option. I'm not really sure why, I can answer why I wouldn't today, but I can't particularly say why my yesteryear self was so against it, yet it remains as a key point in me gaining a very useful education.
>Forcing people to take a course they don't care about is a futile ability.
While I think sometimes we include too many unrelated courses, I also don't agree with the idea of only giving someone courses they are interested in. I would have been weaker for it. I think the issue is the culture that encourages cheating as a valid response, but where does that come from and how to fight it are massive problems.
>The only viable way is to have a extremely controlled exam environment, but that greatly restricts what sort of things you can examine.
I think oral exams are great at testing knowledge, but they suffer other problems. They don't scale at all, and they leave more room for bias than other forms of exams. I'm sure there are other problems, but those two are enough to start with. If only there was some option that had the benefits of an oral exam with an expert without the issues (this sounds like I'm hinting there is such a solution, but I promise I'm not, it is just wistful thinking).
>But I'm kinda surprised why he "hasn't a clue" that the answer to that question is a clear "yes." The whole point of school is to teach kids to use their brains, and reading something complicated, understanding it, and being able to synthesize something about it is a pretty important brain-use.
I also believe that there's something to be said about being able to sit and focus on something like reading a book for a long period of time. Patience and focus seem to be diminished these days.
That and it's only in reading A LOT that builds the proficiency needed to absorb, evaluate, comprehend, and respond to other documents you will eventually come upon. Whether it is reading for enjoyment and allowing a virtual world to wash over you, while simultaneously scanning your eyes across a page, or reading a scientific paper to understand the research topic and results well enough, to know when to review and reread for context and understanding.
Some level of long focus and deeper awareness of written material is necessary to create anything substantially new and massage it into what you actually desire.
I'm not sure this is as self evident as you believe it is. Outside of school, I don't think I've read an actual book and gotten use out of it. I read for pleasure but the actual useful information in my life has been accumulated from reading many many short form (<10 page) things over a long time.
I don't mean to criticize you but... my God, this is such a sad statement to me.
I understand where you're coming from when it comes to technical stuff - I would agree that I haven't gotten very much of my engineering knowledge from books, but rather from shorter-form stuff.
But there are so very many things I've read over the years that have stuck with me, that have informed the way I think about the world, that have provided comfort or food for thought or new lenses through which to view things. My life would be so much poorer without the books I've read, they all mix around in my brain and come to me at random times, and give me a lot of use, even if it's the kind of "value" that is hard to quantify and pin down.
the author ironically uses a lot of words to say very little, though I agree with the conclusion. it’s already annoying to have someone use a lot of words to say very little (especially in a business context). now it’s free and easily accessible for anyone, whereas before it at least took some social stamina
so people will do it, people will be annoyed by it, people will prioritize to more efficient communicators
Therefore, since brevity is the soul of wit, And tediousness the limbs and outward flourishes, I will be brief. Your noble son is mad.
or
“That,” replied Hardin, “is the interesting thing. The analysis was the most difficult of the three by all odds. When Houk, after two days of steady work, succeeded in eliminating meaningless statements, vague gibberish, useless qualifications—in short all the goo and dribble—he found he had nothing left. Everything canceled out. Lord Dorwin, gentlemen, in five days of discussion didn't say one damned thing, and said it so that you never noticed.
A perfect summary: https://marketoonist.com/wp-content/uploads/2023/03/230327.n...
It's lossy expansion. The worst of both worlds!
and society requires efficiency as attention spans have been reduced to near 0 with tiktok videos and youtube shorts being the norm now.
maybe we end up back at hieroglyphs.
We should invent a language like stenography or algebra, in which it would be impossible not to express something if it is not either a fact or an implication. Then we can see at first sight whether it’s dense or not.
> it’s already annoying to have someone use a lot of words to say very little
Totally agreed. I'd like to cut to the point with a bit of pleasantry at the beginning just because I have to.
> now it’s free and easily accessible for anyone
at least everyone can also let it be summarized for us then
It's still terser than a pg essay!
“I didn't have time to write a short letter, so I wrote a long one instead.”
"more time shorter letter"
Personally I want to double down on this approach I wrote about a couple of weeks ago: https://www.sealambda.com/blog/this-post-passed-unit-tests/
Which is, to keep using LLMs as reviewers, rather than as writers.
A bullet-point list of ways you screwed up communicates something entirely different than a long form email filled with flowery "fluff" (as the author puts it).
In fact, if the author feels confident in this theory, I suggest they replace the blog post with this AI-generated bullet-point summary I just made...
> One day we'll just send bullet points as emails. We'll reach business speak protocol version 2.0. That which was verbose becomes terse. No more time wasted translating thoughts and injecting platitudes.
I'll celebrate the day this happens and gets widespread. Conversing with Americans is painful compared to Germans, because Americans insist on being coddled all the time and the very second you don't they'll complain behind your back to your boss.
Fun fact - that cultural difference was also a huge part why Wal-Mart failed to gain traction here in Germany. German consumers really didn't like staff welcoming them with a forced smile, that and bad press from crossed labor laws was their downfall.
>staff welcoming them with a forced smile
Walmarts notorious "people greeters" are there to prevent theft, and as far as I can tell, the people greeter reason seems to be purely speculative, and less explanatory than more solid political reasons (established competitors conspired, just like Walmart does to incumbents in the US).
> Walmarts notorious "people greeters" are there to prevent theft
How is some old pensioner sitting (or worse: standing) at the entrance going to deter theft? Thieves don't care unless there are actual security guards on patrol.
Smile isn’t that bad, smiling with a pistol to the back of their head is :)
I agree that LLMs turn short prompts into long code blocks, but I don't agree that it's fluff in the same way that email pleasantries are fluff.
The short prompt leaves a lot of room for interpretation. The code itself leaves zero room for interpretation (assuming the behavior of the coding language is well understood). I don't agree that AI will allow us to start relying on code that isn't fully defined just because it might allow our emails to remove fluff that didn't contribute to the meaning at all.
> How are professionals expected to communicate with each other? Usually with empty platitudes to kick things off like "Hey! How's it going? How's the family? How are you doing?" Messages are expected to be written across several coherent English sentences, neatly organized into paragraphs, finally with some sort of signature. In the programming world we refer to this as boilerplate. What is it that we are really trying to communicate? Usually a few short ideas that could be represented as bullet points, but that we need to fluff up with meaningless words so that we don't sound rude. Of course, this changes by culture and by language and is not applicable to many parts of the world, but it is definitely a thing in American English.
Do the rest of you really do this? I can’t recall receiving any slack or emails with this sort of thing over the 25+ years I’ve been working in a business environment, even though all of them have been in American English. I certainly don’t use that sort of “boilerplate”. I just jump straight into what the topic is.
It is like https://marketoonist.com/2023/03/ai-written-ai-read.html being written down at length.
I do agree with the fact, that this is an annoying phenomenon. It took me a while to understand that there are people who are not just using LLM to write these style of emails, but those people are the source of the training data for LLMs.
The solution is "simple", to move aways from such people and stick to genuine communicators.
If a picture is worth a thousand words, the blog author seems to have chosen "a thousand words."
And if that's not an option, another solution is to use AI to summarize those long emails. In the end both parties will be writing and reading terse little emails, with the verbose text in the middle as the world's most ridiculous communication protocol.
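As a sketch of what that round trip might look like (assuming the official openai Python client; the model name, prompts, and bullet points are all invented for the example):

```python
# A sketch of the bullet -> prose -> bullet "protocol": the sender inflates
# bullet points into a polite email, the recipient deflates it right back.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(instruction: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

bullets = "- deploy slipped to Friday\n- need sign-off on the budget"

email = ask("Expand these bullet points into a warm, professional email.", bullets)
received = ask("Summarize this email as terse bullet points.", email)

print(received)  # with luck, roughly the bullets we started from
```

Two model calls and a few paragraphs of pleasantries on the wire, just to move two bullet points from one inbox to another.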
This is why I am starting a new aI company, the a is for automatic and the I is for identity. We will automate the application of the identity matrix multiplication/solve operation to produce an extremely performant implementation of the bullet point/prose/bullet point protocol.
Wow, you have the tech not only to apply the identity, but to solve for it? That is something!
A major flaw in your approach is the identity is potentially too large, you should instead project something onto a very small space, and then extrapolate in some more or less random (but pretty) manner back to the full dimensional space.
We’re still optimizing. We tried to use a low rank approximation but for some reason it didn’t work. We’ll probably use a banded or sparse representation.
Progress would be faster, but for some reason we can’t retain a mathematician after showing them our experiments.
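For anyone who wants the joke spelled out, here's a minimal sketch of that "aI" pipeline in NumPy (every name and number is, of course, made up):

```python
# The bullet point -> prose -> bullet point protocol, reduced to what it
# actually computes: multiplying by the identity, then solving it back out.
import numpy as np

bullet_points = np.array([1.0, 2.0, 3.0])    # the message
I = np.eye(3)                                # the "aI" engine

prose = I @ bullet_points                    # expand (a no-op)
received = np.linalg.solve(I, prose)         # summarize (also a no-op)

assert np.allclose(received, bullet_points)  # nothing gained, nothing lost
```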
It’s something so ridiculous it’s difficult to even parody. I guess the real-world equivalent is ordering a single microSD card and getting it in a plastic case, in a blister pack, in an envelope, in a mailer, inside a Jiffy bag that doesn’t fit through your letter box because none of the middlemen cared enough to think about it.
I consider myself a “genuine communicator” insofar as I write messages thoughtfully and thoroughly.
Though I suspect many of my emails go unread, and people have confessed that they ran my personal messages through an LLM to generate something to report upward in some spreadsheet or similar tool.
Or in the case of some managers for whom the critical technical detail goes over their head, they just re-ask their questions in a call and try to get to “is it done yet” and “how many engineers can I add to make it go faster”.
I think I might be on the chopping block if a move were made to get rid of overly thorough, verbose communicators, genuine or not.
But it sounds like you might be filling your emails with actual facts, rather than the sort of fluff that an LLM could do for you.
I have a new rule: if someone sends me an "AI" message, I will probably denylist them.
Not only hasn't the person thought about it or invested in it, but now they're jamming my ability to read into exactly what that one person said, and how they said it.
Many people have made the observation/joke that meaning->LLM->LLM->meaning is silly, but I don't recall anyone pointing out that information that a skilled reader/listener can discern from direct expression is lost.
> One day we'll just send bullet points as emails. We'll reach business speak protocol version 2.0. That which was verbose becomes terse. No more time wasted translating thoughts and injecting platitudes.
I'm not sure about you all, but as a non-manager at [big tech company] I can count on one hand the number of actual emails I've sent in the past year.
Everything is IM chats. It's actually pretty nice.
I don't think the final point about programming languages makes much sense.
In the overall software development process, lots of people contribute different things to create the product.
The job of the software developer is to bring the amount of ambiguity in the specification to zero, because computers can only run a program with zero ambiguity.
There have been lots of high level programming languages that abstract away certain things or give the programmer control over those things. The real thing that you want to do is pick a programming language that allows you control over the things you care about. Do you care about when memory is allocated and deallocated? Do you care about how hardware is used (especially GPUs and ML accelerators) or do you want the hardware completely abstracted away? Do you care more about runtime or dev iteration time? Does your program need to exist in a certain tech context?
There's no single programming language that lets people who care about different things each choose which of those things to deal with and which to ignore.
Let's discuss this spot on the wall while the building we are in is being excavated from beneath.
Yes, AI is going to change the world, though not in the way anyone seems to be discussing. We've created interactive logical assistance for everything and anything intellectual anyone does. The ramification is that ambitions for intellectually difficult work are going to skyrocket, thanks to the magical thinking that is already rampant in all aspects of society. The net result will be significantly higher demands on anyone with a knowledge-based career; after all, "you have AI assistance now, why are you not 10X?"
We will all be forced to become adept at using AI, and not just casually. We will be required to operate at our intellectual edge, or we will find ourselves unemployable.
I was expecting some analysis of the economic impact of losing a whole class of jobs and decimating a load of others, which drives wages down.
That's the way it'll change the world; think of the manufacturing job losses in Europe/America in the late 90s/00s.
I actually got a very similar automatic response to a take-home (that I spent 12 hours of my time on) in an interview process. Some feedback was good, but other feedback was not (one example: the feedback mentioned not enforcing a Node version, while such enforcement was in the package.json file). That, combined with the formatting, made me realize it was essentially 100% copy-pasted output from some prompts, with maybe two or three words tuned, and no one checking it for factual accuracy.
A very distressing experience that prompted me to change how I approach take home assignments.
What changes did you implement, if I can ask?
These are not set in stone, and may be overruled if a dream company comes along, but:
At least one or more of the following MUST be true:
- Take home takes 2 hours or less
- Take home includes a follow-up stage with a human being, where I can defend/discuss/expand on the take home. Of course, this assumes I deliver something that is not just the word "farts" in a txt.
- Take home is monetarily compensated.
These are heuristics that do not imply I am an impressive IC or something like that. It's just self defense.
This was a very common meme two years ago when ChatGPT was released.
It's fine to see Thomas catching up with the times, but two pages of writing seems a bit overkill, imo.
Edit: Found it, https://marketoonist.com/2023/03/ai-written-ai-read.html ... and yeah, two years ago to the date. Right as always :).
I mostly agree with what the author is saying with regard to "the current state of things." I do not feel, however, that it is a particularly large concern, especially long-term, given the second step (summarizing) is probably already integrated into everyone's e-mail client, and if not -- it will be at some point. The "smoothing out of difficult communications", however, may end up being worth the whole "having to read a summarized response."
The reason it won't matter "long term" is that e-mail clients are solving/will solve[0] the "give me 'the point' of this e-mail" problem. If my couple of decades of experience across multiple employers is any indication[1], the vast majority of people in software development fall into one of two camps: (a) they don't have the basics of written communication down. It's not a matter of misspellings or an improper semicolon or em dash; it's all lower case[2] with no punctuation, or with "..." (no spaces around it, either) in place of every other form of punctuation. Or (b) they are generally grumpy people who write in a manner that fits their personality.
Conveying tone correctly via written text is hard unless the tone you're trying to convey is "frustration/anger/impatience". And, of course, the same folks who can't figure out punctuation tend to respond tersely. Between co-workers who work closely together, that's preferred. When my boss has to tell me something minor about my performance and sends it in a five-word e-mail, it comes off like I need to start looking for new work. Prior to AI, "good managers who were writing-challenged" would find templates online and replace words; it never sounded genuine. AI brings us a lot closer to genuine, while not requiring an enormous amount of effort on the part of the writer. It's only a matter of time before a lot of that process happens within the client (if it doesn't already). I know tone detection is a common feature on communications tools I use[3].
[0] Not entirely sure; I use e-mail so infrequently, but thinking about the chat app we use at work, it provides AI summaries of the day's chats in each channel.
[1] Anecdata, I know, but it's all I've got
[2] Including the first letter of every meeting invite and subject; if I have OCD, that triggers it.
[3] Divorce communications ...
Would it also be true to say:
AI will change the world, but not in the way the OP (Thomas Hunter) thinks.
--
The first statement, AI will change the world, is low surprise and clearly true already.
The second statement, not in the way X thinks, is also low surprise, because most technologies have very unpredictable impacts, especially if it is "close to singularity" or the singularity.
There was a moment when Google introduced autocomplete, and it was a game changer.
LLMs are still waiting for their autocomplete moment: when they become an extension of the keyboard and complete our thoughts so fast that I could write this article in two minutes. That will feel magical.
The speed is currently missing
For me the speed is already there. LLMs can write boilerplate code at least an order of magnitude faster than I can.
Just today I generated U-Net code for a certain scenario. I had to tweak some parameters, but in the end I got it working in under an hour.
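For scale, this is roughly the kind of boilerplate being generated: a minimal sketch of a toy U-Net in PyTorch, with the channel sizes, depth, and use case invented for illustration rather than taken from the commenter's actual scenario.

```python
# A minimal toy U-Net: two encoder levels, a bottleneck, two decoder levels
# with skip connections, and a 1x1 segmentation head.
import torch
import torch.nn as nn

def block(c_in, c_out):
    # Two 3x3 convs with ReLU, the standard U-Net building block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.mid = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```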
> LLMs can write boilerplate code at least an order of magnitude faster than I can.
This is my biggest fear with everyone adopting LLMs without considering the consequences.
In the past, I used "Do I have to write a lot of boilerplate here?" as a sort of litmus test for figuring out when to refactor. If I spend the entire day just writing boilerplate, I'm 99% sure I'm doing the wrong thing, at least most of the time.
But now, junior developers won't even develop the intuition that spending an entire day typing boilerplate means something is wrong; instead they'll just get the LLM to do it, with no careful thought or reflection about the design and architecture.
Of course, reflection is still possible, but I'm afraid it won't be as natural and "in your face", the kind that forces you to learn it; instead it'll only be a thing for people who think to do it in the first place.
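As a toy illustration of that litmus test (the validation task and field names are made up for the example), here's the repetitive version next to the refactor the repetition should prompt:

```python
# Before: the same validate-and-extract dance, copy-pasted per field.
def parse_user_v1(payload: dict) -> dict:
    if "name" not in payload or not isinstance(payload["name"], str):
        raise ValueError("bad name")
    if "email" not in payload or not isinstance(payload["email"], str):
        raise ValueError("bad email")
    if "age" not in payload or not isinstance(payload["age"], int):
        raise ValueError("bad age")
    return {"name": payload["name"], "email": payload["email"], "age": payload["age"]}

# After: the repetition is named once, and adding a field is one line.
FIELDS = {"name": str, "email": str, "age": int}

def parse_user_v2(payload: dict) -> dict:
    for field, typ in FIELDS.items():
        if field not in payload or not isinstance(payload[field], typ):
            raise ValueError(f"bad {field}")
    return {field: payload[field] for field in FIELDS}

# Both parse the same input the same way; only the boilerplate differs.
sample = {"name": "a", "email": "b", "age": 3}
assert parse_user_v1(sample) == parse_user_v2(sample)
```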
The question to ask yourself is what value one gets from that refactor. Is the software faster? Does it have more functionality? Is it cheaper to operate? Does it reduce time to market? Those would be benefits to the user, and I'd venture to say the refactor does not impact the user.
The refactor will impact the developer. Maybe the code is now more maintainable, or easier to integrate, or easier to test. But this is where I expect LLMs will make a lot of progress: they will not need clean, well-structured code. So the refactor, in the long run, is not useful to a developer with an LLM sidekick.
I agree with you but as a thought exercise: does it matter if there is a lot of boilerplate if ultimately the code works and is performant enough?
Fwiw, most of the time I like writing code and I don't enjoy wading through LLM-generated code to see if it got it right. So the idea of using LLMs as reviewers resonates. I don't like writing tests though so I would happily have it write all of those.
But I do wonder if eventually it won't make sense to ever write code and it will turn into a pastime.
> I agree with you but as a thought exercise: does it matter if there is a lot of boilerplate if ultimately the code works and is performant enough
Yeah, it matters, because it's almost guaranteed that eventually a human will have to interact with the code directly, so it should still be good-quality code.
> But I do wonder if eventually it won't make sense to ever write code and it will turn into a pastime
Even the fictional super-AI of Star Trek wasn't so good that the engineers didn't have to deeply understand the underlying work that it produced.
Tons of Trek episodes deal with the question of "if the technology fails, how do the humans who rely on it adapt?"
In the fictional stories we see people who are absolute masters of their domain solve the problems and win the day
In reality we have glorified chatbots, nowhere near the abilities of the fictional super-AI, and we already have people asking "do people even need to master their domains anymore?"
I dunno about you but I find it pretty discouraging
> I dunno about you but I find it pretty discouraging
same :)
Or: An important thing that was taught to me on the first day of my C++ class was ‘no project was ever late because the typing took too long’.
> There was a moment when Google introduced autocomplete, and it was a game changer.
I remember.... I turned it off immediately.
No, that is not the only thing missing. Right now AI continues to make mistakes a child learns not to make by age 10: don't make shit up. And by the time you're an adult, you've figured out how to manage (by whatever means necessary) not to forget the first thing your boss told you to do after he adds a new ask.
That's not entirely true. AI can solve math puzzles better than 99.9% of the population.
Yes, AI makes mistakes; so do humans, very often.
Humans make mistakes, sure, but if a human starts hallucinating we immediately lose trust in them.
> AI can solve math puzzles better than 99.9% of population
So can a calculator.
> AI can solve math puzzles better than 99.9% of population
I studied electronic engineering and then switched to software engineering as a career, and I can say the only time I've been exposed to math puzzles was in academic settings. The knowledge is nice and helps with certain kinds of problem solving, but you can be pretty sure I will reach for a textbook and a calculator before trying to brute-force such a puzzle.
The most important thing in my daily life is to understand the given task, do it correctly, and report on what I've done.
Yep this exactly. If I ever feel that I'm solving a puzzle at work, I stop and ask for more information. Once my task is clear, I start working on it. Learned long ago that puzzle solving is almost always solving the wrong problem and wasting time.
Maybe that's where the disconnect comes from. For me, understanding comes before doing, and coding, for me, is doing. There may be cases where I assumed my understanding was complete and the feedback (errors during compiling and testing) told me I was wrong, but that just triggers another round of research to seek understanding.
Puzzle solving is only for when information is not available (reverse engineering, closed systems, ...), but there's plenty of information out there for the majority of tasks. I'm amazed when people spend hours trying to vibe-code something when they could spend just a few minutes reading about the system and come up with a working solution (or find something that already works).
> so do humans very often.
While I do see this argument made quite frequently, doesn't any professional effort center on procedures employed specifically to avoid mistakes? Isn't that really the point of professional work (including professional liability)?
But why should I put my time into reading and thinking about your article if you didn't think it worth your time to actually think about and write it?
Hope the Next Big Thing (TM) is the Electric Monk.
> The speed is currently missing
I feel like the opposite, or something else, is missing. I can write lots of text quickly if I just write down my unfiltered stream of thoughts, with or without an LLM, but what's the point?
What takes a really long time is making a large text contain just the important parts. Saying less takes longer, at least for me, but hopefully it saves time and effort for the people reading. At least that's the idea.
The quote "If I had more time, I would have written a shorter letter" comes to mind.
Unfiltered stream of thoughts: I wrote a lengthy note that is quite incoherent and disorganized, and I intend to use the LLM to organize it. Ugh.
But who will actually read the “autocompleted” text?
At that point any other human being will likely also have one to scan incoming text.
You can do this in Cursor already. Write a README or a SPEC file and the LLM will try to "complete your thoughts", except that it's off often enough. I find this hugely distracting, as if someone keeps interrupting your train of thought with random stuff.
For me, it's the opposite: speed is there, but intelligence is still sometimes lacking.
I'm OK waiting 10 minutes with o1-pro, but I want deep insight into the issue I'm brainstorming. Hopefully GPT-5 will deliver.
Great idea! And scary.
Stop assuming how I think!
I hate titles that attempt to address the reader directly and personally when they can't possibly do so.
You can express the same idea without using this tactic. Just say "... but not in the way most might think"
I was expecting a "written from these bullet points by an LLM" note at the end.
By definition, if I knew how AI would change the world, I would invest in and build things to that end. The fact that we still don't have a great AI product outside of ChatGPT shows that no one knows what will happen.
I think the author is pointing to cultural changes in business communication, which doesn't give any clues as to where you should invest your money.
The author claims to be able to tell AI content. I am wondering: is there any test to help me train myself to distinguish AI content? Like: this paragraph was written by a human, this one by an AI, and see how well we all do?
Not really. AI is too good now; no test reliably tells the difference.
Although LLMs do have a tendency to use "—" in place of "--", which is the one I use.
"Why waste time say lot word when few word do trick?" - Malone, Kevin
> Messages are expected to be written across several coherent English sentences, neatly organized into paragraphs, finally with some sort of signature. In the programming world we refer to this as boilerplate.
"This HTTP call has no message body and therefore no content, and can thus be ignored" he said confidently, not noticing the status code. The verbiage is where you find out useful information, like whether the speaker is on your side and whether they understand the problem the same way as you and whether they're dumb. It's not as if the reason we didn't adopt bullet-points-only ten years ago is that it required better AI.
More generally, I submit that when faced with a long email thread, skim reading is superior to LLM summaries in all cases (except maybe the one where the reader is too inexperienced to do it well). It's faster, captures more detail, and (probably most important) avoids the problem of the people on the thread coming away with subtly different understandings of the conversation.
The primary commercial application for AI seems to be enshittification. I think that will continue.
Frankly I don't think the end users will notice much difference.
Excel '98 probably covers 90% of what the average Excel user needs, yet here we are with a grossly bloated SaaS Excel app in 2025. Constant SaaS feature-pack enshittification that people are either pushed or tricked into.
I think software is in for a big disruption when people realize they can prompt an LLM: "Make me a simple app for tracking employee payroll for my small business, just like Lotus 1-2-3 was more than capable of doing 40 years ago."
I distinctly remember a review in a 1996 computer magazine calling the new version of MS Word bloated, overcomplicated and slow. If only they knew how bad things were going to get.
Laughable, the author is really bullet-point-brained.
Taking wagers that he only knows English.
TL;DR: AI summaries and bullet points for everything will change human communication to that format.
The problem with this post is that, despite mentioning programming languages in the title, the examples are about writing emails. The author forgets to address programming at all, which is very much a use case and will remain one, since processors run compiled machine code, not bullet points.
Cory Doctorow just wrote about the bullet-point thing yesterday, in the context of referral letters:
https://pluralistic.net/2025/03/25/communicative-intent/
Also this meme: https://marketoonist.com/2023/03/ai-written-ai-read.html
AI diplomats talking to each other, while humans stick to communicating via bullet points.
While there is more talk about emails, programming is addressed toward the end: "[…] this will affect computer programming languages as well. Right now I can open a codebase, […]"
> bullet points for everything will change human communication to that format
I'd be okay with that! Verbosity has become too pervasive as of late.
I'm not sure whether that's a missing point or just the actual point.
I'm not convinced myself AI will have much impact on the developers landscape, beyond better autocompletion and doc generation.
All these fancy AI developers are cute, but we've had the cheaper-versus-lower-quality trade-off available for quite some time now with outsourcing to emerging countries. When I was a student, we heard all the predictions of the demise of software engineers because of outsourcing, the need to become an architect instead of a developer or be replaced, etc.
I've seen none of that happen over the years, except for very low skilled automation / CRUD jobs.
> I'm not convinced myself AI will have much impact on the developers landscape, beyond better autocompletion and doc generation.
I’ve been a developer for 20 years. Using an LLM to generate context-specific prototypes has completely changed my productivity. It has allowed me to start and finish projects that were previously relegated to “maybe someday” (read: never).
It’s not clear what the net result will be in the long run, but it’s already changing what and how I build things, and I’m not nearly as bearish as I was before a few successful projects.
I don’t think AI is going to replace good engineers. It’s going to make good engineers better engineers. This alone will lead to some interesting outcomes I think.
> It has allowed me to start and finish projects that were previously relegated to “maybe someday” (read: never).
I.e. stuff that probably never needed to get done, otherwise it would have been a higher priority than what you were otherwise working on.
I couldn't disagree more.
The world is full of unsolved problems and many of them are just time-expensive to fix. Significantly lowering the activation energy and time investment required totally changes the calculus.
I wrote about one concrete use case I built to help me solve a long-standing clutter problem in a comment on another thread [0]. Not only did this unblock a project I'd been trying to get going for a while (decluttering my place), it taught me about Linux Kernel Gadgets and kicked off a cascade of new project ideas.
It also makes trying harder things lower risk, i.e. I can fail quickly vs. having to spend 20 hours to see if it's worth pursuing.
Setting aside basic productivity, as a high functioning depressed person, activation energy is a huge deal. Having the ability to get an idea to a stage that I can actually see results and work on it is a game changer.
- [0] https://news.ycombinator.com/item?id=43314906
This kind of thinking is why we need cloud-connected apps to run our dishwashers now.
This seems like a very strange conclusion.
In what way is "I can solve real problems faster than before" analogous or related to companies building frivolous/unnecessary cloud-connected features?
That is not true; see "mental health disorders", which I suffer from. Me not doing something does not mean it never needed to be done. The opposite is true, too: me having done something does not mean it had to be done, if that makes sense.
This is my experience as well.
> When I was in my studies, we heard all the predictions of the destruction of software engineers because of outsourcing, the need to become an architect instead of a developer or you'll be replaced, etc.
A lot of low-level stuff has been outsourced to India over the last few decades, and more if you count second- or even third-level support (and that doesn't even include first-level support, which isn't required to be on-site).
C-levels only see the expense savings, but how their employees feel about internal support being utter dogshit can't be quantified in a language that beancounters speak...