"Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."
I've tolerated writing my own code for decades. Sometimes I'm pleased with it. Mostly it's the abstraction standing between me and my idea. I like to build things, the faster the better. As I have the ideas, I like to see them implemented as efficiently and cleanly as possible, to my specifications.
I've embraced working with LLMs. I don't know that it's made me lazier. If anything, it inspires me to start when I feel in a rut. I'll inevitably let the LLM do its thing, and then them being what they are, I will take over and finish the job my way. I seem to be producing more product than I ever have.
I've worked with people and am friends with a few of these types; they think their code and methodologies are sacrosanct, and that if the AI moves in there is no place for them. I got into the game for creativity, it's why I'm still here, and I see no reason to select myself for removal from the field. The tools, the syntax, it's all just a means to an end.
This is something that I struggle with for AI programming. I actually like writing the code myself. Like how someone might enjoy knitting or model building or painting or some other "tedious" activity. Using AI to generate my code just takes all the fun out of it for me.
This so much. I love coding. I might be the person that still paints stuff by hand long after image generation has made actual paintings superfluous, but it is what it is.
One analogy that works for me is to consider mural painting. Artists who create huge building-size murals are responsible for the design of the painting itself, but usually work with a team of artists who get up on the ladders and help apply the image to the building.
There were seamstresses who enjoyed sewing prior to the industrial revolution, and continued doing so afterwards. We still have people with those skills now, but often in very different contexts. But a completely new garment industry became possible because of the scale that was then achievable. Similarly for most artisanal crafts.
The industry will change drastically, but you can still enjoy your individual pleasures. And there will be value in unique, one-off and very different pieces that only an artisan can create (though there will now be a vast number of "unique" screen printed tees on the market as well)
I don’t enjoy writing unit tests but fortunately this is one task LLMs seem to be very good at and isn’t high stakes, they can exhaustively create test cases for all kinds of conditions, and can torture test your code without mercy. This is the only true improvement LLMs have made to my enjoyment.
except they are not good at it. the unit tests you'll end up with will be filled with (slow) mocks and tautological assertions, will create no reusable test fixtures, etc.
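for a concrete (made-up) sketch of what i mean, the typical generated test mocks the only dependency of the thing under test and then asserts the exact value it just stubbed in, so it can never fail:

    import unittest
    from unittest.mock import patch

    # hypothetical code under test: a thin wrapper around a slow database call
    def fetch_user(user_id):
        ...  # imagine a real database round-trip here

    def get_user(user_id):
        return fetch_user(user_id)

    class TestGetUser(unittest.TestCase):
        @patch(f"{__name__}.fetch_user")
        def test_get_user(self, mock_fetch):
            mock_fetch.return_value = {"id": 1, "name": "Ada"}
            # tautological: we assert the value we stubbed in one line above,
            # so this passes no matter what get_user actually does with it
            self.assertEqual(get_user(1), {"id": 1, "name": "Ada"})

    if __name__ == "__main__":
        unittest.main()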
of course — that’s what they’re trained on after all. most treat tests as a burden / afterthought, propagating the same issues from codebase to codebase, never improving. i wouldn’t consider those good either.
Saying that writing unit tests isn’t high stakes is a dubious statement. The very purpose of unit tests is to make sure that programming errors are caught that may very well be high stakes.
A bad test is just as much of an issue if it prevents a bug in production code from being caught before it reaches production; catching such bugs is the very purpose of unit tests.
The only reason I got sucked into this field was because I enjoyed writing code. What I "tolerated" (professionally) was having to work on other people's code. And LLM code is other people's code.
what's the largest (traffic, revenue) product you've built? quantity >>>> quality of code is a great trade-off for hacking things together but doesn't lend itself to maintainable systems, in my experience.
Sure, but the vast majority of the time in greenfield situations, it's entirely unclear if what is being built is useful, even when people think otherwise. So the question of "maintainable" or not is frequently not the right consideration.
To be fair, this person wasn’t claiming they’re making a trade off on quality, just that they prefer to build things quickly. If an AI let you keep quality constant and deliver faster, for example.
I don’t think that’s what LLMs offer, mind you (right now anyway), and I often find the trade offs to not be worth it in retrospect, but it’s hard to know which bucket you’re in ahead of time.
I've accepted this way of working too. There is some code that I enjoy writing. But what I've found is that I actually enjoy just seeing the thing in my head actually work in the real world. For me, the fun part was finding the right abstractions and putting all these building blocks together.
My general way of working now is: I'll write some of the code in the style I like. I won't trust an LLM to come up with the right design, so I still trust my knowledge and experience to come up with a design which is maintainable and scalable. But I might just stub out the detail. I'm focusing mostly on the higher level stuff.
Once I've designed the software at a high level, I can point the LLM at this using specific files as context. Maybe some of them have the data structures describing the business logic and a few stubbed out implementations. Then Claude usually does an excellent job at just filling in the blanks.
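To give a rough, made-up example of what I hand over (the names and domain here are hypothetical), a stub like this pins down the data structures and the intent, and the LLM fills in the body:

    from dataclasses import dataclass

    @dataclass
    class LineItem:
        sku: str
        quantity: int
        unit_price: float

    @dataclass
    class Invoice:
        customer_id: str
        items: list[LineItem]

    def invoice_total(invoice: Invoice, tax_rate: float) -> float:
        """Sum the line items and apply tax. Stubbed out; the LLM fills this in."""
        raise NotImplementedError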
I've still got to sanity check it. And I still find it doing things which look like they came right from a junior developer. But I can suggest a better way and it usually gets it right the second or third time. I find it a really productive way of programming.
I don't want to be writing the data layer of my application. It's not fun for me. LLMs handle that for me and let me focus on what makes my job interesting.
The other thing I've kinda accepted is to just use it or get left behind. You WILL get people who use this and become really productive. It's a tool which enables you to do more. So at some point you've got to suck it up. I just see it as a really impressive code generation tool. It won't replace me, but not using it might.
I don't think the author is saying it's a dichotomy. Like, you're either a disciple of doing things "ye olde way" or allowing the LLM to do it for you.
I find his point to be that there is still a lot of value in understanding what is actually going on.
Our business is one of details and I don't think you can code strictly having an LLM doing everything. It does weird and wrong stuff sometimes. It's still necessary to understand the code.
I like coding on private projects at home; that is fun and creative. The coding I get to do at work, in between waiting for CI, scouring logs, monitoring APM dashboards and reviewing PRs, in a style and at an abstraction level I find inappropriate, is not interesting at all. A type of change that might take 10 minutes at home might take 2 days at work.
I resonate so strongly with this. I’ve been a professional software engineer for almost twenty years now. I’ve worked on everything from my own solo indie hacker startups to now getting paid a half million per year to sling code for a tech company worth tens of billions. I enjoy writing code sometimes, but mostly I just want to build things. I’m having great fun using all these AI tools to build things faster than ever. They’re not perfect, and if you consider yourself to be a software engineer first, then I can understand how they’d be frustrating.
But I’m not a software engineer first, I’m a builder first. For me, using these tools to build things is much better than not using them, and that’s enough.
> "as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."
I find this statement problematic for a different reason: we live in a world where minimum wages (if they exist) are lower than living wages & mean wages are significantly lower than the point at which well-being indices plateau. In that context, calling people out for working in a field that "isn't for them" is inutile - if you can get by in the field then leaving it simply isn't logical.
THAT SAID, I do find the above comment incongruent with reality. If you're in a field that's "not for you" for economic reasons that's cool but making out that it is in fact for you, despite "tolerating" writing code, is a little different.
> I got into the game for creativity
Are you confusing creativity with productivity?
If you're productive that's great; economic imperative, etc. I'm not knocking that as a positive basis. But nothing you describe in your comment would fall under the umbrella of what I consider "creativity".
We've seen this happen over and over again, when a new leaky layer of abstraction is developed that makes it easier to develop working code without understanding the lower layer.
It's almost always a leaky abstraction, because sometimes you do need to know how the lower layer really works.
Every time this happens, developers who have invested a lot of time and emotional energy in understanding the lower level claim that those who rely on the abstraction are dumber (less curious, less effective, and they write "worse code") than those who have mastered the lower level.
Wouldn't we all be smarter if we stopped relying on third-party libraries and wrote the code ourselves?
Wouldn't we all be smarter if we managed memory manually?
Wouldn't we all be smarter if we wrote all of our code in assembly, and stopped relying on compilers?
Wouldn't we all be smarter if we were wiring our own transistors?
It is educational to learn about lower layers. Often it's required to squeeze out optimal performance. But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.
(My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)
LLMs don't create an abstraction. They generate code. If you are thinking about LLMs as a layer of abstraction, you are going to have all kinds of problems.
My C compiler has been generating assembly code for me for 30 years. And people were saying the same thing even earlier about how compilers and HLLs made developers dumb because they couldn't code in asm.
So we're not making up stuff, this perspective was ubiquitous among assembly programmers of the 1950s. In 1958 (as the first article I link to mentions), half of programs were written in Fortran. Which means half of people still thought writing assembly by hand was the way to go.
I've personally written assembly by hand for money on an obscure architecture, and I've also written a non-optimizing compiler for a subset of Rust to avoid the assembly. There is great joy in playing stack tetris, but changing code requires a lot of effort. Imagine if there weren't great alternatives, you'd just get good at it.
I imagine if there weren't compilers (or interpreters) I would never have learned how to code. My generation of programmers was taught with Java, and in my university course we did all our homework in the first year using BlueJ, a program that made it _even easier_ to get up and running with a bit of Java code.
(just to save some face: I learned Prolog in my second year).
Not quite what the OP claims but see for example the Story of Mel:
> I had been hired to write a FORTRAN compiler
> for this new marvel and Mel was my guide to its wonders.
> Mel didn't approve of compilers.
> "If a program can't rewrite its own code",
> he asked, "what good is it?"
The joke story is mocking the common arguments/beliefs at the time.
If you expect me to source a collection of comments about "real programmers" from over 30 years ago, that is too much of an ask, but I was there. I read it often, and I started fairly late on the scene, in the 90s.
if i can one day have a similar level of confidence in LLM output, as i do in a compiler, then i will also call it an abstraction. until then…i wait. :)
I like to call that “unary thinking”. Nothing is “perfect”, therefore everything is “imperfect”, so everything is the “same”. One category for everything, unary.
Every 5 years or so of my career something new has come out to make coding easier and every time a stack of people like yourself come out to argue it's making devs dumb or worse.
LLMs still generate terrible output a lot of the time, but they are improving. Early compilers generated terrible ASM a lot of the time, to the point that it was common to use inline assembly in your code or rewrite parts later. Tools can improve; the point is that neither makes the dev worse, they just add to productivity.
Writing code isn't my job; it's a task I do to make the systems I design functional.
They can also generate documentation of code you've written. So it is very useful, if leveraged correctly, for understanding what the code is doing. Eventually you learn all of the behaviors of that code and are able to write it yourself or improve on it.
I would consider it a tool to teach and learn code if used appropriately. However, LLMs are bullshit if you ask them to write something whole: pieces, yes, but whole code... yeah, good luck having them maintain consistency and comprehension of what the end goal is. The reason they work great for reading existing code is that the input becomes a context they can refer back to, but because LLMs are just weighted values, they have no way to visualize the final output without significant input.
The point is LLMs may allow developers to write code for problems they may not fully understand at the current level or under the hood.
In a similar way using a high level web framework may allow a developer to work on a problem they don’t fully understand at the current level or under the hood.
There will always be new tools to “make developers faster” usually at a trade off of the developer understanding less of what specifically they’re instructing the computer to do.
Sometimes it’s valuable to dig and better understand, but sometimes not. And always responding to new developer tooling (whether LLMs or Web Frameworks or anything else) by saying they make developers dumber, can be naive.
Nope, it's not disingenuous. It's a genuine critique that it's just stupid way to think about things. You don't check in a bunch of prompts, make changes to the prompts, run them through a model, compile/build the code.
It's simply not the same thing as a high level web framework.
If you have an intern, or a junior engineer - you give them work and check the work. You can give them work that you aren't an expert in, where you don't know all the required pieces in detail, and you won't get out of it the same as doing the work yourself. An intern is not a layer of abstraction. Not all divisions of labor are via layers of abstraction. If you treat them all that way it's dumb and you'll have problems.
Leaky abstractions is a really appropriate term for LLM-assisted coding.
The original "law of leaky abstractions" talked about how the challenge with abstractions is that when they break you now have to develop a mental model of what they were hiding from you in order to fix the problem.
Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.
> Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.
I'm finding that, if I don't have solid mastery of at least one aspect of the generated code, I won't know that I have problems until they touch a domain I understand.
Leaky abstractions doesn't imply abstractions are bad.
Using abstractions to trade power for simplicity is a perfectly fine trade-off... but you have to bear in mind that at some point you'll run into a problem that requires you to break through that abstraction.
I read that essay 22 years ago and ever since then I've always looked out for opportunities to learn little extra details about the abstractions I'm using. It pays off all the time.
Not all of those abstractions are equally leaky though. Automatic memory management, for example, is leaky only for a very narrow set of problems; in many situations the abstraction works extremely well. It remains to be seen whether AI can be made to leak so rarely (which does not mean that it's not useful even in its current leaky state).
If we just talk in analogies: a cup is also leaky because fluid is escaping via vapours.
It's not the same as a cup with a hole in it.
LLMs currently have tiny holes and we don't know if we can fix them. Established abstractions are more like cups that may leak, but only in certain conditions (when it's hot)
> (My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)
Agree, especially useful when you join a new company and have to navigate a large codebase (or a badly maintained codebase, which is even worse by several orders of magnitude). I had no luck asking the LLM to fix this or that, but it did mostly OK when I asked how things work and what the code is trying to do (it makes mistakes, but that's fine, I can see them, which is different from code that I just copy and paste).
No idea about that; those are amounts of money I can't even consider, because I couldn't tell the difference between T and B, or even a hundred million (as someone who never had more than 100k).
It's something I would consider paying for during the first months when joining a new company (especially with a good salary), but not more than that, to be honest.
This "it's the same as the past changes" analogy is lazy - everywhere it's reached for, not just AI. It's basically just "something something luddites".
Criticisms of each change are not somehow invalid just because the change is inevitable, like all the changes before it.
When a higher level of abstraction allows programmers to focus on the detail relevant to them they stop needing to know the low level stuff. Some programmers tend not to be a fan of these kinds of changes as we well know.
But do LLMs provide a higher level of abstraction? Is this really one of those transition points in computing history?
If they do, it's a different kind to compilers, third-party APIs or any other form of higher level abstraction we've seen so far. It allows programmers to focus on a different level of detail to some extent but they still need to be able to assemble the "right enough" pieces into a meaningful whole.
Personally, I don't see this as a higher level of abstraction. I can't offload the cognitive load of understanding, just the work of constructing and typing out the solution. I can't fully trust the output and I can't really assemble the input without some knowledge of what I'm putting together.
LLMs might speed up development and lower the bar for developing complex applications but I don't think they raise the problem-solving task to one focused solely on the problem domain. That would be the point where you no longer need to know about the lower layers.
I think it’s usually helpful if your knowledge extends a little deeper than the level you usually work at. You need to know a lot about the layer you work in, a good amount about the surrounding layers, and maybe just a little bit about more distant layers.
If you are writing SQL, it’s helpful to understand how database engines manage storage and optimize queries. If you write database engine code, it’s helpful to understand (among many other things of course) how memory is managed by the operating system. If you write OS code, it’s helpful to understand how memory hardware works. And so on. But you can write great SQL without knowing much of anything about memory hardware.
The reverse is also true in that it’s good to know what is going on one level above you as well.
Anyway my experience has been that knowledge of the adjacent stack layers is highly beneficial and I don’t think exaggerated.
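To make the SQL example concrete, here's the kind of one-layer-down peek I mean, a small illustrative sketch using SQLite from Python (table and query are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("CREATE INDEX idx_users_email ON users (email)")

    # Ask the engine how it plans to execute the query: index lookup or full scan?
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
        ("ada@example.com",),
    ).fetchall()
    print(plan)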
Last year I learned a new language and framework for the first time in a while. Until I became used to the new way of thinking, the discomfort I felt at each hurdle was both mental and physical! I imagine this is what many senior engineers feel when they first begin using an AI programming assistant, or an even more hands-off AI tool.
Oddly enough, using an AI assistant, despite it guessing incorrectly as often as it did, helped me learn and write code faster!
Let me alter this perspective: you can use it to learn what parts of the code do, and use it for commenting. It can help you read what might otherwise be unreadable. LLMs and programming are good, but not great. However, it can easily act as someone who teaches a developer about the parts of the code they are working on.
Those aren't the same thing at all, and you already mentioned why in your comment: leakiness. The higher up you go on the abstraction chain, the leakier the abstractions become, and the less viable it is to produce quality software without understanding the layer(s) below.
You can generally trust that transistors won't randomly malfunction and give you wrong results. You can generally trust that your compiler won't generate the wrong assembly, or that your interpreter won't interpret your code incorrectly. You can generally trust that your language's automatic memory management won't corrupt memory. It might be useful to understand how those layers work anyway, but it's usually not a hard requirement.
But once you reach a certain level of abstraction (usually 1 level above programming language), you'll start running into more and more issues resulting from abstraction leaks that require understanding the layer below to properly fix. Probably the most blatant example of this nowadays are "React developers" who don't know JS/CSS/HTML and WILL constantly be running into issues that they can't properly solve as a result, and are forced to either give up or write the most deranged workarounds imaginable that consist of hundreds of lines of unintelligible spaghetti.
AI is the highest level of abstraction so far, and as a result, it's also the leakiest abstraction so far. You CANNOT write proper functional and maintainable code using an LLM without having at least a decent understanding of what it's outputting, unless you're writing baby's first todo app or something.
> But once you reach a certain level of abstraction (usually 1 level above programming language), you'll start running into more and more issues resulting from abstraction leaks that require understanding the layer below to properly fix. Probably the most blatant example of this nowadays are "React developers" who don't know JS/CSS/HTML and WILL constantly be running into issues that they can't properly solve as a result, and are forced to either give up or write the most deranged workarounds imaginable that consist of hundreds of lines of unintelligible spaghetti.
I want to frame this. I am sick to death of every other website and application needing a gig of RAM and making my damn phone hot in my hand.
> But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.
This is the weakest point which breaks your whole argument.
I see it happening ALL the time: newer web developers enter the field from an angle of high abstraction, and whenever these abstractions don't work well, they are completely unable to proceed. They wouldn't be in that place if they knew the low level, and it DOES prevent them from delivering "value" to their customers.
What is even worse than that: since these developers don't understand exactly why some problem manifests, and they don't even understand exactly what their abstraction truly solves, they wrongly proceed to solve the problem using the wrong (high-level) tools.
That has some amount to do with the level of abstraction but almost everything to do with inexperience. The lower level you get, the harder the impact of inexperience.
New web developers are still sorting themselves out and they are at a stage where they’ll suck no matter what the level of abstraction.
I get what you say but I must insist. Abstractions don't properly surface the underlying causes when they break.
This creates a barrier to gaining experience on the topic that matters in order to solve your problem. Let me try to give an example (please don't nitpick, it is just an example):
A front-end dev who has never bundled an app like we did in the old days (manual scripts or manual gulp, grunt, webpack pipelines) but uses an off-the-shelf webpack config (let's say CRA) has great trouble understanding why some webpack plugin doesn't work as expected, or might not even understand how importing an svg into their jsx actually works. Yes, this is due to lack of experience, but their current level of working isn't exposing those details directly, so they can't gain experience; the waters are way too deep.
> Abstractions don't properly surface the underlying causes when they break.
"Abstractions" in the abstract neither do nor do not do this, as it is orthogonal to what an abstraction is; concrete implementations of abstractions can either swallow or wrap information about the underlying cause when they break, both are valid approaches.
> but their current level of working isn't exposing those details directly so they can't gain experience
This isn't really true. They absolutely can gain experience; they just don't pay a (very high) extra effort, with the side benefit of maybe gaining some experience, on routine tasks where nothing breaks, and they tend to gain experience only when they expend extra effort in situations where things fail or there are unusual challenges to resolve.
Not sure where you are going with this. Yes, literally speaking they can, but what matters is whether they actually do, in general, in the real world. My admittedly biased opinion, based on personally observing other colleagues, is that in general they don't learn, and they give up.
> Not sure where you are getting with this. Yes, literally speaking they can, but what matters is if they actually do,
And, despite the fairly consistent bias that people used to working at lower levels of abstraction have against people who usually work at higher levels, we all work at some level of abstraction. Most people gain experience with the lower levels by dealing with problems as they emerge. Those who don't aren't failing to do so because they can't, but because they choose not to bother: either they have other people they can rely on for lower-level problems, or they aren't interested and are doing work where they can afford to simply change what they are doing in response to those problems.
Abstraction is fine, it allows you to work faster, or easier. Reliance that becomes dependency is the problem - when abstraction supersedes fundamentals you're no longer able to reason about the leaks and they become blindspots.
Don't confuse low-level tedium with CS basics. If you're arguing that knowing how computers work is not relevant to working as a SWE, then sure, but why would a company want a software dev who doesn't seem to know software? Usually your irreplaceable value as a developer is knowing and mitigating the leaks so they don't wind up threatening the business.
This is where the industry most suffers from not having a standardized-ish hierarchy. You're right that most shops don't need a trauma surgeon on call for treating headaches, but there are still many medical options before resorting to a random grifter who simply "watched some Grey's Anatomy" because "med school was a barrier to providing value to customers".
And every time the commentariat dismisses it with the trope that it’s the same as the other times.
It’s not the same as the other times. The naysayers might be the same elitists as the last time. But that’s irrelevant because the moment is different.
It’s not even an abstraction. An abstraction of what? It’s English/Farsi/etc. text input which gets translated into something that no one can vouch for. What does that abstract?
You say that they can learn about the lower layers. But what’s the skill transfer from the prompt engineering to the programming?
People who program in memory-managed languages are programming. There’s no paradigm shift when they start doing manual memory management. It’s more things to manage. That’s it.
People who write spreadsheet logic are programming.
But what are prompt engineers doing? ... I guess they are hoping for the best. Optimism is what they have in common with programming.
When a lower layer fails, your ability to remedy a situation depends on your ability to understand that layer of abstraction. Now: if an LLM produces wrong code, how do you know why it did that?
Sounds like you’re coping cause you have some type of investment in LLM “coding”, whether that is financial or emotional.
I won’t waste my time too much reacting to this nonsensical comment, but I’ll just give this example: LLMs can hallucinate, generating code that isn’t real. LLMs don’t work off fixed rules; they’re influenced by a seed. Normal abstraction layers aren’t.
I dearly hope you’re arguing in bad faith, otherwise you are really deluded with either programming terms or reality.
I've had a similar experience. I built out a feature using an LLM and then found the library it must have been "taking" the code from, so what I ended up was a much worse mangled version of what already existed, had I taken the time to properly research. I've now fully gone back to just getting it to prototype functions for me in-editor based off comments, and I do the rest. Setting up AI pipelines with rule files and stuff takes all the fun away and feels like extremely daunting work I can't bring myself to do. I would much rather just code than act as a PM for a junior that will mess up constantly.
When the LLM heinously gets it wrong 2, 3, 4 times in a row, I feel a genuine rage bubbling that I wouldn't get otherwise. It's exhausting. I expect within the next year or two this will get a lot easier and the UX better, but I'm not seeing how. Maybe I lack vision.
You’re exactly right on the rage part, and that’s not something I’ve seen discussed enough.
Maybe it’s the fact that you know you could do it better in less time that drives the frustration. For a junior dev, perhaps that frustration is worth it because there’s a perception that the AI is still more likely to be saving them time?
I’m only tolerating this because of the potential for long term improvement. If it just stayed like it is now, I wouldn’t touch it again. Or I’d find something else to do with my time, because it turns an enjoyable profession into a stressful agonizing experience.
It’s exponentially better for me to use AI for coding than it was two years ago. GPT-4 launched two years and two days ago. Claude 3.5 sonnet was still fifteen months away. There were no reasoning models. Costs were an order of magnitude or two higher. Cursor and Windsurf hadn’t been released.
The last two years have brought staggering progress.
LLMs also take away the motivation from students to properly concentrate and deeply understand a technical problem (including but not limited to coding problems); instead, they copy, paste and move on without understanding. The electronic calculator analogy might be appropriate: it's a tool appropriate once you have learned how to do the calculations by hand.
In an experiment (six months long, twice repeated, so a one-year study), we gave business students ChatGPT and a data science task to solve that they did not have the background for (develop a sentiment analysis classifier for German-language recommendations of medical practices). With their electronic "AI" helper, they could find a solution, but the scary thing is they did not acquire any knowledge on the way, as exit interviews clearly demonstrated.
As a friend commented, "these language models should never have been made available to the general public", only to researchers.
> As a friend commented, "these language models should never have been made available to the general public", only to researchers.
That feels to me like a dystopian timeline that we've only very narrowly avoided.
It wouldn't just have been researchers: it would have been researchers and the wealthy.
I'm so relieved that most human beings with access to an internet-connected device have the ability to try this stuff and work to understand what it can and cannot do themselves.
I'm giving a programming class and students use LLMs all the time. I see it as a big problem because:
- it puts the focus on syntax instead of the big picture. Instead of finding articles or posts on Stack explaining things beyond how to write them, the AI gives them the "how" so they don't think about the "why"
- students almost don't ask questions anymore. Why would they, when an AI gives them code?
- AI output contains notions, syntax and APIs not seen in class, adding to the confusion
Even the best students have a difficult time answering basic questions about what was covered in the last (3-hour) class.
The job market will vet those students, but the outcome may be disheartening for you, because those guys may actually succeed one way or another. Think punched cards: they are gone, along with the mindset of "need to implement it correctly on first try".
students pay for education such that at the end, they know something. if the job market filters them out because they suck, the school did a bad job teaching.
the teachers still need to figure out how to teach with LLMs around
I had this realization a couple weeks ago that AI and LLMs are the 2025 equivalent of what Wikipedia was in 2002. Everyone is worried about how all the kids are going to just use the “easy button” and get nonsense that’s unchecked and probably wrong, and a whole generation of kids is going to grow up not knowing how to research, and trusting unverified sources.
And then eventually overall we learned what the limits of Wikipedia are. We know that it’s generally a pretty good resource for high level information and it’s more accurate for some things than for others. It’s still definitely a problem that Wikipedia can confidently publish unverified information (IIRC wasn’t the Scottish translation famously hilariously wrong and mostly written by an editor with no experience with the language?)
And yet, I think if these days people were publishing think pieces about how Wikipedia is ruining the ability of students to learn, or advocating that people shouldn’t ever use Wikipedia to learn something, we’d largely consider them crackpots, or at the very least out of touch.
I think AI tools are going to follow the same trajectory. Eventually we’ll gain enough cultural knowledge of their strengths and weaknesses to apply them properly and in the end they’ll be another valuable asset in our ever growing lists of tools.
You can't ask an AI to do that either. I mean, you physically can, but it would be the same thing as copying and pasting a wikipedia article verbatim into your essay.
What I mean is, people do actually do this with LLMs, but most assignments do not map 1:1 to a Wikipedia article you can copy (certainly programming tasks don't). Or to put it differently, it's relatively trivial to formulate assignments for which a blind Wikipedia copy & paste wouldn't be applicable; in contrast to the LLM case.
it's particularly bad for students who should be trying to learn.
at the same time in my own life, there are tasks that I don't want to do, and certainly don't want to learn anything about, yet have to do.
For example, figuring out a weird edge case combination of flags for a badly designed LaTeX library that I will only ever have to use once. I could try to read the documentation and understand it, but this would take a long time. And, even if it would take no time at all, I literally would prefer not to have this knowledge wasting neurons in my brain.
Imagine a calculator that computes definite integrals, but gives non-sensical results on non-smooth functions for whatever reason (i.e., not an error, but an incorrect but otherwise well-formed answer).
If there were a large number of people who didn't quite understand what it meant for a function to be continuous, let alone smooth, who were using such a calculator, I think you'd see similar issues to the ones that are identified with LLM usage: a large number of students wouldn't learn how to compute definite or indefinite integrals, and likely wouldn't have an intuitive understanding of smoothness or continuity either.
I think we don't see these problems with calculators because the "entry-level" ones don't have support for calculus-related functionality, and because people aren't taught how to arrange the problems that you need calculus to solve until after they've given some amount of calculus-related intuition. These conditions obviously aren't the case for LLMs.
What do you think is the big difference between these tools and *outsourcing*?
AI is far more comparable to delegating work to *people*.
Calculators and compilers are deterministic. Using them doesn't change the nature of your work.
AI, depending on how you use it, gives you a different role. So take that as a clue: if you are less interested in building things and more interested into getting results, maybe a product management role would be a better fit.
Fundamentally nothing, but everybody already knows that you shouldn't teach young kids to rely on calculators during the basic "four-function" stage of their mathematics education.
Calculators for the most part don't solve novel problems. They automate repetitive basic operations which are well-defined and have very few special cases. Your calculator isn't going to do your algebra for you, it's going to give you more time to focus on the algebraic principles instead of material you should have retained from elementary school. Algebra and calculus classes are primarily concerned with symbolic manipulation, once the problem is solved symbolically coming to a numerical answer is time-consuming and uninteresting.
Of course, if you have access to the calculator throughout elementary school then you're never going to learn the basics and that's why schoolchildren don't get to use calculators until the tail-end of middle school. At least that's how it worked in the early 2000s when i was a kid; from what i understand kids today get to use their phones and even laptops in class so maybe i'm wrong here.
Previously I stated that calculators are allowed in later stages of education because they only automate the more basic tasks; Matlab can arguably be considered a calculator which does automate complicated tasks and even when i was growing up the higher-end TI-89 series was available which actually could solve algebra and even simple forms of calculus problems symbolically; we weren't allowed access to these when i was in high school because we wouldn't learn the material if there was a computer to do it for us.
So anyways, my point (which is halfway an agreement with the OP and halfway an agreement with you) is that AI and calculators are fundamentally the same. It needs to be a tool to enhance productivity, not a crutch to compensate for your own inadequacies[1]. This is already well-understood in the case of calculators, and it needs to be well-understood in the case of AI.
[1] actually now that i think of it, there is an interesting possibility of AI being able to give mentally-impaired people an opportunity to do jobs they might never be capable of unassisted, but anybody who doesn't have a significant intellectual disability needs to be wary of over-dependence on machines.
There's a reason we don't let kids use calculators to learn their times tables. In order to be effective at more advanced mathematics, you need to develop a deep intuition for what 9 * 7 means, not just what buttons you need to push to get the calculator to spit out 63.
A junior developer was tasked with writing a script that would produce a list of branches that haven't been touched for a while. I've got the review request. The big chunk of it was written in awk -- even though many awk scripts are one-liners, they don't have to be -- and that chunk was kinda impressive, making some clever use of associative arrays, auto-vivification, and more pretty advanced awk stuff. In fact, it was actually longer than any awk that I have ever written.
When I asked them, "where did you learn awk?", they were taken by surprise -- "where did I learn what?"
Turns out they just fed the task definition to some LLM and copied the answer to the pull request.
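(For what it's worth, the task itself is small enough that a plain, readable version fits in a few lines. A hypothetical Python sketch, shelling out to git, with "a while" picked arbitrarily:)

    import subprocess
    from datetime import datetime, timedelta, timezone

    CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)  # "a while" is arbitrary here

    # One line per branch: "<unix commit timestamp> <branch name>"
    out = subprocess.run(
        ["git", "for-each-ref", "--format=%(committerdate:unix) %(refname:short)", "refs/heads"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        ts, branch = line.split(" ", 1)
        if datetime.fromtimestamp(int(ts), tz=timezone.utc) < CUTOFF:
            print(branch)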
I wonder if it would work to introduce a company policy that says you should never commit code you aren't able to explain?
I've been using that as my own personal policy for AI-assisted code and I am finding it works well for me, but would it work as a company policy thing?
I call this the "house scrabble rule" because I used to play regularly with a group who imposed a rule that said you couldn't play a word without being able to define it.
I assume that would be seen as creating unnecessary burden, provided that the script works and does what's required. Is it better than the code written by people who have departed, and now no one can explain how it works?
The developer in question has been later promoted to a team lead, and (among other things) this explains why it's "my previous place" :)
One of the advantages of working with people who are not native English speakers is that, if their English suddenly becomes perfect and they can write concise technical explanations in tasks, you know it's some LLM.
Then if you ask for some detail on a call, it's all uhm, ehm, ehhh, "I will send example later".
Plato, in the Phaedrus, 370BC: "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."
Has it? Or do we instead have vast overfilled palaces of the sum of human knowledge, often stored in pointers and our limited working memory readily available for things recently accessed?
I'd argue that our ability to recall individual moments has gone down, but the sum of what we functionally know has gone up massively.
With a diminished ability to store, recall and thus manipulate information, our learning is arguably more shallow.
With AI trained on increasingly generic input and used by the casual, our production will increase in quantity but decrease in quality.
I am not arguing to abandon the written word or LLMs.
But the disadvantages--which will be overlooked by the young and those happy to have a time-saving tool, namely the majority--will do harm, harm that most will overlook, favouring the output and ignoring the atrophying user.
I think the question is whether Plato's fears were unfounded. I don't think the question is "is writing bad", although it is framed that way to justify a carefree adoption of LLMs in daily life.
It’s all about how you use written content, even before AI you could just copy-paste code from StackOverflow without any understanding. But you could also use it as an opportunity to do your own research, make your own experiences and create your own memory (which sticks a lot better). And it’s not just about coding, you can’t really grasp a subject by just reading a text book and not doing exercises or further reading.
Plato’s (or rather the Egyptian king’s - IIRC) fears were not unfounded, since a lot of people do not operate this way (sadly I see this with some peers), however overall the effect could still be positive.
Writing distributes knowledge to a lot of people, without it you have to rely on a kind of personal relationship to learn from someone more knowledgeable (which can be better for the individual mentee though). So maybe it increases chances of learning (breadth of audience) at the cost of the depth of understanding?
I may be old-fashioned but I remember a time when silent failure was considered to be one of the worst things a system can do.
LLMs are silent failure machines. They are useful in their place, but when I hear about bosses replacing human labor with “AI” I am fairly confident they are going to get what they deserve: catastrophe.
> I got into software engineering because I love building things and figuring out how stuff works. That means that I enjoy partaking in the laborious process of pressing buttons on my keyboard to form blocks of code.
I think this is a mistake. Building things and figuring out how stuff works is not related to pressing buttons on a keyboard to form blocks of code. Typing is just a side effect of the technology used. It's like saying that in order to be a mathematician, you have to enjoy writing equations on a whiteboard, or to be a doctor you must really love filling out EHR forms.
In engineering, coming up with a solution that fits the constraints and requirements is typically the end goal, and the best measure of skill I'm aware of. Certainly it's the one that really matters the most in practice. When it is valuable to type everything by hand, then a good engineer should type it by hand. On the other hand, if the best use of your time is to import a third-party library, do that. If the best solution is to create a code base so large no single human brain can understand it all, then you'd better do that. If the easiest path to the solution is to offload some of the coding to an LLM, that's what you should do.
> There is a concept called “Copilot Lag”. It refers to a state where after each action, an engineer pauses, waiting for something to prompt them what to do next.
I've been experiencing this for 10-15 years. I type something and then wait for IDE to complete function names, class methods etc. From this perspective, LLM won't hurt too much because I'm already dumb enough.
It's really interesting how minor changes in your workflow can completely wreck productivity. When I'm at work I spend at least 90% of my time in emacs, but there are some programs I'm forced to use that are only available via Win32 GUI apps, or cursed webapps. Being forced to abandon my keybinds and move the mouse around hunting for buttons to click and then moving my hand from the mouse to the keyboard then back to the mouse really fucks me up. My coworkers all use MSVC and they don't seem to mind it all because they're used to moving the mouse around all the time; conversely a few of them actually seem to hate command-driven programs the same way I hate GUI-driven programs.
As I get older, it feels like every time I have to use a GUI I get stuck in a sort of daze because my mind has become optimized for the specific work I usually do at the expense of the work I usually don't do. I feel like I'm smarter and faster than I've ever been at any prior point in my life, but only for a limited class of work and anything outside of that turns me into a senile old man. This often manifests in me getting distracted by youtube, windows solitaire, etc because it's almost painful to try to remember how to move the mouse around though all these stupid menus with a million poorly-documented buttons that all have misleading labels.
I feel your pain. I have my own struggles with switching tasks and what helps to some degree is understanding that that kind of switching and adapting is a skill which could be trained by doing exactly this. At least I feel less like a victim and more like a person who improves himself :)
But it appears I'm in a better position because I don't have to work with clearly stupid GUIs and have no strong emotions to them.
This is the reason I don’t use auto completing IDEs. Pretty much vanilla emacs. I do often use syntax highlighting for the language, but that’s the limit of the crutches I want to use.
I am at the point of abandoning coding copilots because I spend most of my time fighting the god damned things. Surely, some of this is on me, not tweaking settings or finding the right workflow to get the most of it. Some of it is problematic UX/implementation in VSCode or Cursor. But the remaining portion is an assortment of quirks that require me to hover over it like an overattentive parent trying to keep a toddler from constantly sticking its fingers in electrical sockets. All that plus the comparatively sluggish and inconsistent responsivity is fucking exhausting and I feel like I get _less_ done in copilot-heavy sessions. Up to a point they will improve over time, but right now it makes programming less enjoyable for me.
On the other hand, I am finding LLMs increasingly useful as a moderate expert on a large swath of subjects available 24/7, who will never get tired of repeated clarifications, tangents, and questions, and who can act as an assistant to go off and research or digest things for you. It’s a mostly decent rubber duck.
That being said, it’s so easy to land in the echo chamber bullshit zone, and hitting the wall where human intuition, curiosity, ingenuity, and personality would normally take hold for even a below average person is jarring, deflating, and sometimes counterproductive, especially when you hit the context window.
I’m fine with having it as another tool in the box, but I rather do the work myself and collaborate with actual people.
An LLM is a tool. It's your choice how you use it. I think there are at least two ways to use it that are helpful but don't replace your thinking. I sometimes have a problem I don't know how to solve that's too complex to ask google. I can write a paragraph in ChatGPT and it will "understand" what I'm asking and usually give me useful suggestions. Also I sometimes use it to do tedious and repetitive work I just don't want to do.
I don't generally ask it to write my code for me because that's the fun part of the job.
I think the issue is that a lot of orgs are systematically using the tool poorly.
I’m responsible for a couple legacy projects with medium sized codebases, and my experience with any kind of maintenance activities has been terrible. New code is great, but asking for fixes, refactoring, or understanding the code base has had an essentially 2% success rate for me.
Then you have to wonder: how the hell do orgs expect to maintain/scale ever more code from fewer devs, who don’t even understand how the original code worked?
LLMs are just a tool but overreliance on them is just as much of a code smell as - say - deciding your entire backend is going to be in Matlab; or all your variables are going to be global variables - you can do it, but I guarantee that it’s going to cause issues 2-3 years down the line.
It's also making the sleazy and lazy ones thrive a bit more, which is quite painful when passionate devs who are also great colleagues don't gain any real leverage from ChatGPT.
Humble craftsmen have long been getting replaced by automation and technology. Devs are resisting the same way as everyone else did before them but it's futile.
It's just especially poignant/painful because developers are being hoisted by their own petard, so to speak.
I use LLMs for generating small chunks of code (less than 150 lines), but I am of the opinion that you should always understand what generated code is doing. I take time to read through it and make sure it makes sense before I actually run it. I've found that for smaller chunks of code it's usually pretty accurate on the first try. Occasionally it can't figure it out at all, even with trying to massage the prompt to be more descriptive.
I use Claude Sonnet to generate large chunks of code, practically as a form of macro expansion. Such as when adapting SQL queries to a new migration, or adding straightforward UI. Even still, it sometimes isn’t great and I would never commit anything without carefully observing what it actually wrote. More importantly, I never ask it to do something I myself don’t know how to do, especially if I suspect a library or best practice exists.
In other words, I treat it exactly like stochastic autocomplete. It makes me lazier, I’m sure, but the first part of the article above is a rant against a tautology: any tool worth using ought to be missed by the user if they stopped using it!
If you use LLMs in lieu of searching Stack Overflow, you're going to go faster and be neither smarter nor dumber. If you're prompting for entire functions, I suspect it'll be a crutch you learn to rely on forever.
Personally I think there's a middle ground to be had there.
I use LLMs to write entire test functions, but I also have specs for it to work from and can go over what it wrote and verify it. I never blindly go "yeah this test is good" after it generates it.
I think that's the middle ground, knowing where, and when it can handle a full function / impl vs a single/multi(short) line auto-completion.
It ruined my friend's startup. Junior dev "wrote" WAY too much code with no ability to support it after the fact. Glitches in production would result in the kid disappearing for weeks at a time because he had no idea how anything actually worked under the hood. Friend was _so_ confident of his codebase before shit hit the fan - the junior dev misrepresented the state of the world, b/c he simply didn't know what he didn't know.
It’s always been very weird that little hobbyist open source projects produce much better software than billion-dollar companies. But I guess it will be even more notable now that the billion-dollar garbage shoveling companies are getting self-operating shovels.
I'm learning javascript as my first programming language and I'm somewhere around beginner/intermediate. I used Chatgpt for a while, but stopped after a time and just mostly use documentation now. I don't want code solutions, I want code learning and I want certainty behind that learning.
I do see a time where I could use copilot or some LLM solution but only for making stuff I understand, or to sandbox high level concepts of code approaches. Given that I'm a graphic designer by trade, I like 'productivity/automation' AI tools and I see my approach to code will be the same - I like that they're there but I'm not ready for them yet.
I've heard people say I'll get left behind if I don't use AI, and that's fine as I'll just use niche applications of code alongside my regular work as it's just not stimulating to have AI fill in knowledge blanks and outsource my reasoning.
Nope, also pretty shitty for Python, at least that's my experience from my rather limited usage. I might be using it wrong though.
The problem is that the LLM won't find design mistakes. E.g. trying to get the value of a label in Textual: you can technically do it, but you're not really supposed to. The variable starts with an underscore, so that's an indication that you shouldn't really touch it. The LLMs will happily help you attempt to use a non-existent .text attribute, then start running in circles, because what you're doing is a design mistake.
LLMs are probably fairly helpful for situations where the documentation is lacking, but simple auto-complete also works well enough.
I also love building things. LLM-assisted workflows have definitely not taken this away. If anything, it has only amplified my love for coding. I can finally focus on the creative parts only.
That said, the author is probably right that it has made me dumber or at least less prolific at writing boilerplate.
Gen Z kind of already have a reputation for being 'dumb', supposedly being unable to do basic tasks expected of an entry-level office worker, or questioning basic things like why tasks get delegated down the chain. Maybe being bad at coding, especially if they are using AI, is just part of that?
I heard about the term 'vibe coding' recently, which really just means copying and pasting code from an AI without checking it. It's interesting that that's a thing, I wonder how widespread it is.
I told my colleagues that if they’re just going to send me LLM code, I can’t review it and will assume they’ve already double-checked the work themselves. This gives them instant approval, and if they want to spend time submitting follow-up PRs because they’re not double-checking or understanding their code, then they can do that. I honestly did this for two reasons:
1. The problem domain is a marketing site (low risk)
2. I got tired of fixing bad LLM code
I have noticed the people who do this are caught up in the politics at work and not really interested in writing code.
Wonder if we'll have this discussion in 20 years. Or will traditional programmers be some niche "artisanal" group of workers, akin to what bootmakers and bespoke tailors are today.
If you want AI to make you less dumb, instead of using it like Stack Overflow, you can go on a road trip and have a deep conversation about a topic or field you want to learn more about. You can have it quiz you, do mock interviews, ask questions, have a chat; it's incredible at that. As long as it's not something where the documentation is less than a year or two old.
> As they’re notorious for making crap up because, well, that’s how LLMs work by design, it means that they’re probably making up nonsense half the time.
I found this to be such a silly statement. I find arguments generated by AI to be significantly more solid than this.
I think "AI makes developers dumb" makes as much sense as "becoming a manager makes developers dumb."
I was an engineer before moving to more product and strategy oriented roles, and I work on side projects with assistance from Copilot and Roo Code. I find that the skills that I developed as a manager (like writing clear reqs, reviewing code, helping balance tool selection tradeoffs, researching prior art, intuiting when to dive deep into a component and when to keep it abstract, designing system architectures, identifying long-term-bad ideas that initially seem like good ideas, and pushing toward a unified vision of the future) are sometimes more useful for interacting with AI devtools than my engineering skillset.
I think giving someone an AI coding assistant is pretty bad for having them develop coding skills, but pretty good for having them develop "working with an AI assistant" skills. Ultimately, if the result is that AI-assisted programmers can ship products faster without sacrificing sustainability (i.e. you can't have your codebase collapse under the weight of AI-generated code that nobody understands), then I think there will be space in the future for both AI-power users who can go fast as well as conventional engineers who can go deep.
What modern LLMs are good at is reducing boilerplate for workflows that are annoying and tedious, but a) genuinely save time, b) are less likely for an LLM to screw up, and c) are easy to spot-check and identify issues in, in the event the LLM does mess up.
For example, in one of my recent blog posts I wanted to use Python's Pillow to composite five images: one consisting of the left half of the image, the other four in quadrants (https://github.com/minimaxir/mtg-embeddings/blob/main/mtg_re...). I know how to do that in PIL (have to manually specify the coordinates and resize images) but it is annoying and prone to human error and I can never remember what corner is the origin in PIL-land.
Meanwhile I asked Claude 3.5 Sonnet this:
Write Python code using the Pillow library to compose 5 images into a single image:
1. The left half consists of one image.
2. The right half consists of the remaining 4 images, equally sized with one quadrant each
And it got the PIL code mostly correct, except it tried to load the images from a file path which wasn't desired, but it is both an easy fix and my fault since I didn't specify that.
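For reference, here is a rough sketch of the kind of Pillow code being described (my own reconstruction, not the exact code Claude produced; it assumes the five images are already-loaded PIL Image objects and that the canvas size is arbitrary):

    from PIL import Image

    def compose_five(left: Image.Image, quads: list[Image.Image],
                     width: int = 800, height: int = 800) -> Image.Image:
        """Left half: one image. Right half: the remaining four, one quadrant each.
        PIL's origin is the top-left corner; paste coordinates are (x, y)."""
        canvas = Image.new("RGB", (width, height), "white")

        # The left half spans the full height and half the width.
        canvas.paste(left.resize((width // 2, height)), (0, 0))

        # The right half is a 2x2 grid; each cell is a quarter of the width, half the height.
        cell_w, cell_h = width // 4, height // 2
        for i, img in enumerate(quads[:4]):
            x = width // 2 + (i % 2) * cell_w
            y = (i // 2) * cell_h
            canvas.paste(img.resize((cell_w, cell_h)), (x, y))

        return canvas

Something like compose_five(main_img, [a, b, c, d]).save("composite.png") would then write the result out.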
Point (c) above is also why I despise the "vibe coding" meme because I believe it's intentionally misleading, since identifying code and functional requirement issues is an implicit requisite skill that is intentionally ignored in hype as it goes against the novelty of "an AI actually did all of this without much human intervention."
There's a difference in quality, though. IntelliSense was never meant to be more than autocomplete or suggestions (function names, variables names, etc.), i.e. the typing out and memorizing API calls and function signatures part. LLMs in the context of programming are tools that aim to replace the thinking part. Big difference.
I don't need to remember all functions and their signatures for APIs I rarely use - it's fine if a tool like IntelliSense (or an ol' LSP really) acts like a handy cheat sheet for these. Having a machine auto-implement entire programs or significant parts of them, is on another level entirely.
Here is a disturbing look at what the absolute knobs at Y Combinator (and elsewhere) are preaching/pushing, with commentary from Primeagen: https://www.youtube.com/watch?v=riyh_CIshTs
Watch the whole thing, it's hilarious. Eventually these venture capitalists are forced to acknowledge that LLM-dependent developers do not develop an understanding and hit a ceiling. They call it "good enough".
The use of LLMs for constructive activities (writing, coding, etc.) rapidly produces a profound dependence. Try turning it off for a day or two, you're hobbled, incapacitated. Competition in the workplace forces us down this road to being utterly dependent. Human intellect atrophies through disuse. More discussion of this effect, empirical observations: https://www.youtube.com/watch?v=cQNyYx2fZXw
To understand the reality of LLM code generators in practice, Primeagen and Casey Muratori carefully review the output of a state-of-the-art LLM code generator. They provide a task well-represented in the LLM's training data, so development should be easy. The task is presented as a cumulative series of modifications to a codebase: https://www.youtube.com/watch?v=NW6PhVdq9R8
This is the reality of what's happening: iterative development converging on subtly or grossly incorrect, overcomplicated, unmaintainable code, with the LLM increasingly unable to make progress. And the human, where does he end up?
But this is exactly what the higher-ups want, according to Braverman: they will insist that "know-how" is non-existent, and always push to tell workers what they, of course, already know about the work, as if we peons would otherwise ignore it.
Wrong quote - "calculators are making people bad at doing maths" was the fear. Turns out they didn't make people worse at maths, but they didn't help either [1]
> "Spell checkers are making people dumb"
Well, at this point I assume you use "dumb" as a general stand-in for "worse at the skill in question". Here, however, research shows that indeed, spell checkers and auto-correct seem to have a negative influence on learning proper spelling and grammar [2]. The main takeaway seems to be that handwriting in particular is a major contributor to learning and practicing written language skills. [2]
> "Wikipedia is making people dumb"
Honestly, haven't heard that one before. Did you just make that up? Apart from people like Plato, thousands of years ago, owning and using books, encyclopaedias, and dictionaries has generally been viewed as a sign of a cultured and knowledgeable individual in many cultures... I don't see how an online source is any different in that regard.
The decline of problem-solving and analytical skills, short attention spans, lack of foundational knowledge and the subsequent loss of valuable training material for our beloved stochastic parrots, though, might prove to become a problem in the future.
There's a qualitative difference between relying on spell checkers while still knowing the words and slowly losing the ability to formulate, express, and solve problems in an analytical fashion. Worst case we're moving towards E.M. Forster's dystopian "The Machine Stops"-scenario.
People who know how to write use spell checkers as assistants. People who don't know how to write use spell checkers to do everything for them, effectively replacing one set of errors with another.
I don't think it is making developers dumb; you still need to audit and review the code. As long as you augment your writing by relying on base templating, finding material to read, or having it explain code, it is really good.
AI makes developers smarter when used in smart ways. How amazing to have code generated for you, freeing you to consider the next task (ie. “sit there waiting for the next task to come to mind”) .. oh, by the way, if you don’t understand the code, highlight it and ask for an explanation. Repeat ad infinitum until you understand what you’re reading.
The dumb developers are those resisting this amazing tool and trend.
Frankly, I don't think this is true at all. If anything I notice, for me, that I make better and more informed decisions, in many aspects of life. I think this criticism comes from a position of someone having invested a lot of time in something AI can do quite well.
For me, the main question in this context would be whether the decisions are better informed or they just feel better informed. I regularly get LLMs to lie to me in my areas of expertise, but there I have the benefit that I can usually sniff out the lie. In topics I'm not that familiar with, I can't tell whether the LLM is confidently correct or confidently incorrect.
Well, AI does make errors, and never says "I don't know". That is also true of Wikipedia though. I've seen much improvement in accuracy from 3.5 to 4.5. Hallucinations can often be hashed out by a dialogue.
Wikipedia has multiple ways it tells you it doesn't know or it doesn't know for certain. Tags such as clarify, explain, confusing (all of which expand into phrases such as clarification needed etc) are abundant, and if an article doesn't meet the bar for the standard, it's either clearly annotated at the top of the article or the article is removed altogether.
I live on a farm and there are a lot of things that machines can do faster and cheaper. And for a lot of tasks, it makes more sense from a time / money tradeoff.
But I still like to do certain things by hand. Both because it's more enjoyable that way, and because it's good to stay in shape.
Coding is similar to me. 80% of coding is pretty brain dead — boilerplate, repetitive. Then there's that 20% that really matters. Either because it requires real creativity, or intentionality.
Look for the 80/20 rule and find those spots where you can keep yourself sharp.
My guess is that AI will make programming even more miserable for those who entered the field for the wrong reasons. Now is the time to double down on learning the basics, the low level, the under-the-hood stuff.
I'm in full agreement with this, and it's part of the reason I'm considering leaving the software engineering field for good.
I've been programming for over 25 years, and the joy I get from it is the artistry of it; I see beauty in systems constructed in the abstract realm. But LLM-based development removes much of that. I haven't used, nor do I desire to use, LLMs for this, but I don't want to compete with people who do, because I won't win in the short-term nature of corporate performance-based culture. And so I'm now searching for careers that will be more resistant to LLM-based workflows. Unfortunately, in my opinion, this pretty much rules out any knowledge-based economy.
Books made orators dumb. I'm not sure this argument has ever had any credence, not now and not when Socrates came up with his version for his time.
Any technology that renders a mental skill obsolete will undergo this treatment. We should be smart enough to recognize the rhetoric it is rather than pretend it's a valid argument for Luddism.
"There’s a reason behind why I say this. Over time, you develop a reliance on [search engines]. This is to the point where is [sic!] starts to become hard for you to work without one."
AI tools are great. They don’t absolve you from understanding what you’re doing and why.
One of the jobs of a software engineer is to be the point person for some pieces of technology. The responsible person in the chain. If you let AI do all of your job, it’s the same as letting a junior employee do all of your job: Eventually the higher-ups will notice and wonder why they need you.
I experimented with vibe coding [0] yesterday to build a Pomodoro timer app [1] and had a mixed experience.
The process - instead of typing code, I mostly just talked (voice commands) to an AI coding assistant - in this case, Claude Sonnet 3.7 with GitHub Copilot in Visual Studio Code and the macOS built-in Dictation app. After each change, I’d check if it was implemented correctly and if it looked good in the app. I’d review the code to see if there are any mistakes. If I want any changes, I will ask AI to fix it and again review the code. The code is open source and available in GitHub [2].
On one hand, it was amazing to see how quickly the ideas in my head were turning into real code. Yes, reviewing the code takes time, but it is far less than if I were to write all that code myself. On the other hand, it was eye-opening to realize that I need to be diligent about reviewing the code written by AI and ensuring that my code is secure, performant and architecturally stable. There were a few occasions when the AI wouldn't realize there was a mistake (at one time, a compile error) and I had to tell it to fix it.
No doubt that AI-assisted programming is changing how we build software. It gives you a pretty good starting point; it will take you almost 70-80% of the way there. But a production-grade application at scale requires a lot more work on architecture, system design, database, observability and end-to-end integration.
So I believe we developers need to adapt and understand these concepts deeply. We’ll need to be good at:
- Reading code - Understanding, verifying and correcting the code written by AI
- Systems thinking - understand the big picture and how different components interact with each other
- Guiding the AI system - giving clear instructions about what you want it to do
- Architecture and optimization - Ensuring the underlying structure is solid and performance is good
- Understanding the programming language - without this, we wouldn't know when AI makes a mistake
- Designing good experiences - As coding gets easier, it becomes more important and easier to build user-friendly experiences
Without this knowledge, apps built purely through AI prompting will likely be sub-optimal, slow, and hard to maintain. This is an opportunity for us to sharpen the skills and a call to action to adapt to the new reality.
This entire line of reasoning is worker propaganda. Like the boss is some buffoon and the employees constantly have to skirt his nonsensical requirements to create a reasonable product.
It's a cartoon mentality. Real products have more requirements than any human can fathom, correctness is just one of the uncountable tradeoffs you can make. Understanding, or some kind of scientific value is another.
If anything but a single minded focus on your pet requirement is dumb, then call me dumb idc. Why YOU got into software development is not why anyone else did.
> Over time, I started to forget basic foundational elements of the languages I worked with. I started to forget parts of the syntax, how basic statements are used
It's a good thing tbh. Language syntax is ultimately entirely arbitrary and is the most pointless thing to have to keep in mind. Why bother focusing on that when you can use the mental effort on the actual logic instead?
This has been a problem for me for years before LLMs, constantly switching languages and forgetting what exact specifics I need to use because everyone thinks their super special way of writing the same exact thing is best and standards are avoided like the plague. Why do we need two hundred ways of writing a fuckin for loop?
> Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them
Conversely: Some people want to insist that writing code 10x slower is the right way to do things, that horses were always better and more dependable than cars, and that nobody would want to step into one of those flying monstrosities. And they may also find that they are no longer in the right field.
This is the "new technology is always better" argument, which invokes the imagery of all the times it was true and ignores all the products that have been disposed of.
The truth is it depends on every detail. What technology. For who. When.
Wait, let’s give it a couple years, the way Boeing is going the horse people might have had a point. I’m not 100% sold on the idea that our society will long-term be capable of maintaining the infrastructure required to do stuff like airplanes.
AI lowers the bar. You can say Python makes developers dumb too. Or that canned food makes cooks dumb. That’s not really the point though. When something is easier more people can do it. That expansion is biased downward.
Honestly I just don’t remember the names of methods and without my IDE I’d be a lot less productive than I am now. Are IDEs a problem?
The bit about “people don’t really know how things work anymore”: my friend I grew up programming in assembly, I’ve modified the kernel on games consoles. Nobody around me knocking out their C# and their typescript has any idea how these things work. Like I can name the people in the campus that do.
LLMs are a useful tool. Learn to use them to increase your productivity or be left behind.
"Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."
I've tolerated writing my own code for decades. Sometimes I'm pleased with it. Mostly it's the abstraction standing between me and my idea. I like to build things, the faster the better. As I have the ideas, I like to see them implemented as efficiently and cleanly as possible, to my specifications.
I've embraced working with LLMs. I don't know that it's made me lazier. If anything, it inspires me to start when I feel in a rut. I'll inevitably let the LLM do its thing, and then them being what they are, I will take over and finish the job my way. I seem to be producing more product than I ever have.
I've worked with people and am friends with a few of these types; they think their code and methodologies are sacrosanct. That if the AI moves in there is no place for them. I got into the game for creativity, it's why I'm still here, and I see no reason to select myself for removal from the field. The tools, the syntax, its all just a means to an end.
This is something that I struggle with for AI programming. I actually like writing the code myself. Like how someone might enjoy knitting or model building or painting or some other "tedious" activity. Using AI to generate my code just takes all the fun out of it for me.
This so much. I love coding. I might be the person that still paints stuff by hand long after image generation has made actual paintings superfluous, but it is what it is.
One analogy that works for me is to consider mural painting. Artists who create huge building-size murals are responsible for the design of the painting itself, but usually work with a team of artist to get up on the ladders and help apply the image to the building.
The way I use LLMs feels like that to me: I'm designing the software to quite a fine level, then having the LLMs help out with some of the typing of the code: https://simonwillison.net/2025/Mar/11/using-llms-for-code/#t...
There were seamstresses who enjoyed sewing prior to the industrial revolution, and continued doing so afterwards. We still have people with those skills now, but it's often in very different contexts. But the ability to create a completely new garment industry was possible because of the scale that was then possible. Similarly for most artesanal crafts.
The industry will change drastically, but you can still enjoy your individual pleasures. And there will be value in unique, one-off and very different pieces that only an artesan can create (though there will now be a vast number of "unique" screen printed tees on the market as well)
I don’t enjoy writing unit tests but fortunately this is one task LLMs seem to be very good at and isn’t high stakes, they can exhaustively create test cases for all kinds of conditions, and can torture test your code without mercy. This is the only true improvement LLMs have made to my enjoyment.
except they are not good at it. the unit tests you'll have written will be filled with (slow) mocks with tautological assertions, create no reusable test fixtures, etc.
Sounds just like human-written test suites lol
of course — that’s what they’re trained on after all. most treat tests as a burden / afterthought, propagating the same issues from codebase to codebase, never improving. i wouldn’t consider those good either.
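As a made-up illustration of a tautological, mock-based test (a hypothetical fetch_user function; runnable as a single script):

    from unittest.mock import patch

    def fetch_user(user_id):
        """Stand-in production code; imagine a real implementation with a bug."""
        raise RuntimeError("bug the test will never see")

    # A tautological test: it patches out the very function it claims to verify,
    # so the assertion can only ever check the mock's own canned return value.
    @patch(f"{__name__}.fetch_user", return_value={"id": 1, "name": "Ada"})
    def test_fetch_user(mock_fetch):
        assert fetch_user(1) == {"id": 1, "name": "Ada"}

    test_fetch_user()
    print("test passed; the bug was never exercised")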
Saying that writing unit tests isn’t high stakes is a dubious statement. The very purpose of unit tests is to make sure that programming errors are caught that may very well be high stakes.
However high the stakes are, a bug in test code is not as much of an issue as a bug in the production code.
It is as much of an issue if it prevents a bug in production code from being detected before it occurs in production. Which is the very purpose of unit tests.
What? Bugs in the test code are what lead to bugs in production code.
That is exactly it. A bug in test code may or may not lead to a bug in production code, while a bug in production code IS a bug in production code
> I've tolerated writing my own code for decades.
The only reason I got suckd into this field was because I enjoyed writing code. What I "tolerated" (professionally) was having to work on other people's code. And LLM code is other people's code.
It's probably worse: the 'other' is faceless with no accountability.
> I like to build things, the faster the better.
what's the largest (traffic, revenue) product you've built? quantity >>>> quality of code is a great trade-off for hacking things together but doesn't lend itself to maintainable systems, in my experience.
Have you seen it work out in the long term?
Sure, but the vast majority of the time in greenfield applications situations, it's entirely unclear if what is being built is useful, even when people think otherwise. So the question of "maintainable" or not is frequently not the right consideration.
right, which is why I asked about the largest project.
If they've never worked on something post-PMF, I get it. They might be mostly right.
> in greenfield applications situations
which isn't most of the software industry
I suppose that's where the use case for LLMs starts to diminish rapidly.
To be fair, this person wasn’t claiming they’re making a trade off on quality, just that they prefer to build things quickly. If an AI let you keep quality constant and deliver faster, for example.
I don’t think that’s what LLMs offer, mind you (right now anyway), and I often find the trade offs to not be worth it in retrospect, but it’s hard to know which bucket you’re in ahead of time.
I've accepted this way of working too. There is some code that I enjoy writing. But what I've found is that I actually enjoy just seeing the thing in my head actually work in the real world. For me, the fun part was finding the right abstractions and putting all these building blocks together.
My general way of working now is: I'll write some of the code in the style I like. I won't trust an LLM to come up with the right design, so I still trust my knowledge and experience to come up with a design which is maintainable and scalable. But I might just stub out the details. I'm focusing mostly on the higher-level stuff.
Once I've designed the software at a high level, I can point the LLM at this using specific files as context. Maybe some of them have the data structures describing the business logic and a few stubbed out implementations. Then Claude usually does an excellent job at just filling in the blanks.
I've still got to sanity-check it. And I still find it doing things which look like they came right from a junior developer. But I can suggest a better way and it usually gets it right the second or third time. I find it a really productive way of programming.
I don't want to be writing the data layer of my application. It's not fun for me. LLMs handle that for me and let me focus on what makes my job interesting.
The other thing I've kinda accepted is to just use it or get left behind. You WILL get people who use this and become really productive. It's a tool which enables you to do more. So at some point you've got to suck it up. I just see it as a really impressive code generation tool. It won't replace me, but not using it might.
I don't think the author is saying it's a dichotomy. Like, you're either a disciple of doing things "ye olde way" or allowing the LLM to do it for you.
I find his point to be that there is still a lot of value in understanding what is actually going on.
Our business is one of details and I don't think you can code strictly having an LLM doing everything. It does weird and wrong stuff sometimes. It's still necessary to understand the code.
I like coding on private projects at home; that is fun and creative. The coding I get to do at work, in between waiting for CI, scouring logs, monitoring APM dashboards and reviewing PRs, in a style and at an abstraction level I find inappropriate, is not interesting at all. A type of change that might take 10 minutes at home might take 2 days at work.
I resonate so strongly with this. I’ve been a professional software engineer for almost twenty years now. I’ve worked on everything from my own solo indie hacker startups to now getting paid a half million per year to sling code for a tech company worth tens of billions. I enjoy writing code sometimes, but mostly I just want to build things. I’m having great fun using all these AI tools to build things faster than ever. They’re not perfect, and if you consider yourself to be a software engineer first, then I can understand how they’d be frustrating.
But I’m not a software engineer first, I’m a builder first. For me, using these tools to build things is much better than not using them, and that’s enough.
There's two sides to this:
> "as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."
I find this statement problematic for a different reason: we live in a world where minimum wages (if they exist) are lower than living wages & mean wages are significantly lower than the point at which well-being indices plateau. In that context, calling people out for working in a field that "isn't for them" is inutile - if you can get by in the field then leaving it simply isn't logical.
THAT SAID, I do find the above comment incongruent with reality. If you're in a field that's "not for you" for economic reasons that's cool but making out that it is in fact for you, despite "tolerating" writing code, is a little different.
> I got into the game for creativity
Are you confusing creativity with productivity?
If you're productive that's great; economic imperative, etc. I'm not knocking that as a positive basis. But nothing you describe in your comment would fall under the umbrella of what I consider "creativity".
We've seen this happen over and over again, when a new leaky layer of abstraction is developed that makes it easier to develop working code without understanding the lower layer.
It's almost always a leaky abstraction, because sometimes you do need to know how the lower layer really works.
Every time this happens, developers who have invested a lot of time and emotional energy in understanding the lower level claim that those who rely on the abstraction are dumber (less curious, less effective, and they write "worse code") than those who have mastered the lower level.
Wouldn't we all be smarter if we stopped relying on third-party libraries and wrote the code ourselves?
Wouldn't we all be smarter if we managed memory manually?
Wouldn't we all be smarter if we wrote all of our code in assembly, and stopped relying on compilers?
Wouldn't we all be smarter if we were wiring our own transistors?
It is educational to learn about lower layers. Often it's required to squeeze out optimal performance. But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.
(My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)
LLMs don't create an abstraction. They generate code. If you are thinking about LLMs as a layer of abstraction, you are going to have all kinds of problems.
My C compiler has been generating assembly code for me for 30 years. And people were saying the same thing even earlier about how compilers and HLLs made developers dumb because they couldn't code in asm.
Presumably you don't throw out the c code and just check in the assembly.
>And people were saying
Source? A quote? Or are we just making up historical strawmen to win arguments against?
I've heard from people older than me that this is how people felt about compilers.
https://vivekhaldar.com/articles/when-compilers-were-the--ai...
He provides some sources (7) at the bottom of the article.
To pick one, the following has a video interview with one of the founders of Fortran:
https://www.ibm.com/history/fortran#:~:text=Fortran%20was%20...
So we're not making up stuff, this perspective was ubiquitous among assembly programmers of the 1950s. In 1958 (as the first article I link to mentions), half of programs were written in Fortran. Which means half of people still thought writing assembly by hand was the way to go.
I've personally written assembly by hand for money on an obscure architecture, and I've also written a non-optimizing compiler for a subset of Rust to avoid the assembly. There is great joy in playing stack tetris, but changing code requires a lot of effort. Imagine if there weren't great alternatives, you'd just get good at it.
I imagine if there weren't compilers (or interpreter) I would never have learned how to code. My generation of programmers was taught with Java and in my university course we did all our homework in the first year using BlueJay, a program that made it _even easier_ to get up and running with a bit of Java code.
(just to save some face: I learned Prolog in my second year).
Yeah, before high-level languages, programming was mostly done by mathematicians and electrical engineers who felt adventurous.
I also had BlueJay on my first semester. But fortunately I had machine architecture, compilers and operating systems later.
No quote or source, but I can corroborate. I've even heard the same argument when C++ was getting popular and C was still the "standard."
Not quite what the OP claims but see for example the Story of Mel:
https://www.gutenberg.org/cache/epub/3008/pg3008-images.html... Though to be fair, The Story of Mel supports rather than refutes the argument against compilers.
https://en.m.wikipedia.org/wiki/Real_Programmers_Don%27t_Use...
The joke story is mocking the common arguments/beliefs at the time.
If you expect me to source you a collection of comments about "real programmers" from over 30 years ago though that is too much of an ask but I was there, I read it often and I started fairly late on the scene in the 90s.
except we can guarantee (with tests) that the generated instructions from the compiler are bug free 99% of the time. Pretty big difference there.
The tool currently being bad doesn't matter to the argument of whether or not they make devs dumb.
if i can one day have a similar level of confidence in LLM output, as i do in a compiler, then i will also call it an abstraction. until then…i wait. :)
But i agree, it’s unrelated to devs being dumb.
C compilers are deterministic. You don't have to inspect the assembly they produce to know that it did the right thing.
And C compilers have bugs too; earlier ones had a lot more, though thankfully we are now several decades into their development.
Very whataboutism. Do you understand why the person is making this argument? Or is everything just postmodern deconstructionism?
Everything has some imperfection, so LLMs are just fine... Completely missing the totally valid criticisms people have of the systems.
I like to call that “unary thinking”. Nothing is “perfect”, therefore everything is “imperfect”, so everything is the “same”. One category for everything, unary.
I'm not arguing LLMs currently generate great code. My argument is that it doesn't matter to the assertion that they make devs dumb.
Tools being bad currently doesn't matter just like compilers being bad in the past doesn't matter.
You're trying too hard to argue here.
Every 5 years or so of my career something new has come out to make coding easier and every time a stack of people like yourself come out to argue it's making devs dumb or worse.
LLMs still generate terrible output a lot of the time but they are improving. Early compilers generated terrible ASM a lot of the time to the point that it was common to use inline assembly in your code or rewrite parts later. Tools can improve, the point is that neither make the dev worse they just add to productivity.
Writing code isn't my job it's a task I do to make the systems I design functional.
critical thinking is hard i guess
They can also generate documentation for code you've written. So it is very useful, if leveraged correctly, for understanding what the code is doing. Eventually you learn all of the behaviors of that code and are able to write it yourself or improve on it.
I would consider it a tool to teach and learn code if used appropriately. However, LLMs are bullshit if you ask them to write something: pieces, yes; whole programs... yeah, good luck having it maintain consistency and comprehension of what the end goal is. The reason it works great for reading existing code is that the input results in a context it can refer back to, but because LLMs are weighted values, it has no way to visualize the final output without significant input.
This is a disingenuous critique of what was said.
The point is LLMs may allow developers to write code for problems they may not fully understand at the current level or under the hood.
In a similar way using a high level web framework may allow a developer to work on a problem they don’t fully understand at the current level or under the hood.
There will always be new tools to “make developers faster” usually at a trade off of the developer understanding less of what specifically they’re instructing the computer to do.
Sometimes it’s valuable to dig and better understand, but sometimes not. And always responding to new developer tooling (whether LLMs or Web Frameworks or anything else) by saying they make developers dumber, can be naive.
Nope, it's not disingenuous. It's a genuine critique that this is just a stupid way to think about things. You don't check in a bunch of prompts, make changes to the prompts, run them through a model, and compile/build the code.
It's simply not the same thing as a high level web framework.
If you have an intern, or a junior engineer - you give them work and check the work. You can give them work that you aren't an expert in, where you don't know all the required pieces in detail, and you won't get out of it the same as doing the work yourself. An intern is not a layer of abstraction. Not all divisions of labor are via layers of abstraction. If you treat them all that way it's dumb and you'll have problems.
Leaky abstractions is a really appropriate term for LLM-assisted coding.
The original "law of leaky abstractions" talked about how the challenge with abstractions is that when they break you now have to develop a mental model of what they were hiding from you in order to fix the problem.
(Absolutely classic Joel Spolsky essay from 22 years ago which still feels relevant today: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a... )
Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.
> Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.
I'm finding that, If I don't have solid mastery of at least one aspect of generated code, I won't know that I have problems until they touch a domain I understand.
Leaky abstractions doesn't imply abstractions are bad.
Using abstractions to trade power for simplicity is a perfectly fine trade-off... but you have to bear in mind that at some point you'll run into a problem that requires you to break through that abstraction.
I read that essay 22 years ago and ever since then I've always looked out for opportunities to learn little extra details about the abstractions I'm using. It pays off all the time.
> Leaky abstractions doesn't imply abstractions are bad.
No, of course not. I’m saying that the article is bad. Because he’s not saying anything that the term itself isn’t already describing or implying.
No adjectives needed. And certainly no laws.
Not all of those abstractions are equally leaky though. Automatic memory management for example is leaky only for a very narrow set of problems, in many situations the abstraction works extremely well. It remains to be seen whether AI can be made to leak so rarely (which does not meant that it's not useful even in its current leaky state).
If we just talk in analogies: a cup is also leaky because fluid is escaping via vapours. It's not the same as a cup with a hole in it.
LLMs currently have tiny holes and we don't know if we can fix them. Established abstractions are more like cups that may leak, but only in certain conditions (when it's hot).
> (My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)
Agree, especially useful when you join a new company and you have to navigate a large codebase (or a badly maintained codebase, which is even worse by several orders of magnitude). I had no luck asking the LLM to fix this or that, but it did mostly OK when I asked how it works and what the code is trying to do (it makes mistakes, but that's fine, I can see them, which is different from if it were just code that I copy and paste).
I see this as one of the major selling points.
…but I haven’t joined a new company since LLMs were a thing. How often is this use case necessary to justify $Ts in investment?
No idea about that; those are amounts of money I can't even consider, because I couldn't tell the difference between a trillion and a billion, or even a hundred million (as someone who has never had more than 100k).
It's something I would consider paying for during the first months after joining a new company (especially with a good salary), but not more than that, to be honest.
This "it's the same as the past changes" analogy is lazy - everywhere it's reached for, not just AI. It's basically just "something something luddites".
Criticisms of each change are not somehow invalid just because the change is inevitable, like all the changes before it.
When a higher level of abstraction allows programmers to focus on the detail relevant to them they stop needing to know the low level stuff. Some programmers tend not to be a fan of these kinds of changes as we well know.
But do LLMs provide a higher level of abstraction? Is this really one of those transition points in computing history?
If they do, it's a different kind to compilers, third-party APIs or any other form of higher level abstraction we've seen so far. It allows programmers to focus on a different level of detail to some extent but they still need to be able to assemble the "right enough" pieces into a meaningful whole.
Personally, I don't see this as a higher level of abstraction. I can't offload the cognitive load of understanding, just the work of constructing and typing out the solution. I can't fully trust the output and I can't really assemble the input without some knowledge of what I'm putting together.
LLMs might speed up development and lower the bar for developing complex applications but I don't think they raise the problem-solving task to one focused solely on the problem domain. That would be the point where you no longer need to know about the lower layers.
I think it’s usually helpful if your knowledge extends a little deeper than the level you usually work at. You need to know a lot about the layer you work in, a good amount about the surrounding layers, and maybe just a little bit about more distant layers.
If you are writing SQL, it’s helpful to understand how database engines manage storage and optimize queries. If you write database engine code, it’s helpful to understand (among many other things of course) how memory is managed by the operating system. If you write OS code, it’s helpful to understand how memory hardware works. And so on. But you can write great SQL without knowing much of anything about memory hardware.
The reverse is also true in that it’s good to know what is going on one level above you as well.
Anyway my experience has been that knowledge of the adjacent stack layers is highly beneficial and I don’t think exaggerated.
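To make the SQL example concrete, here is a small sketch (my own, using Python's built-in sqlite3 and a hypothetical users table) of how knowing that the engine keeps a B-tree index changes which predicates you'd reach for:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE INDEX idx_users_name ON users(name)")

    # Equality on the indexed column lets the engine search the B-tree index...
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT id FROM users WHERE name = 'ada'").fetchall())

    # ...while a leading-wildcard LIKE defeats it and falls back to a full table scan.
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT id FROM users WHERE name LIKE '%ada'").fetchall())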
Last year I learned a new language and framework for the first time in a while. Until I became used to the new way of thinking, the discomfort I felt at each hurdle was both mental and physical! I imagine this is what many senior engineers feel when they first begin using an AI programming assistant, or an even more hands-off AI tool.
Oddly enough, using an AI assistant, despite it guessing incorrectly as often as it did, helped me learn and write code faster!
Understanding assembler (or even just the ABI) is useful.
Let me alter this perspective: you can use it to learn what parts of the code do, and use it for commenting. It can help you read what might otherwise be unreadable. LLMs and programming are a good, but not great, combination. However, it can easily act as someone who teaches a developer about the parts of the code they are working on.
The difference between a mechanic and an engineer, illustrated ..
Those aren't the same thing at all, and you already mentioned why in your comment: leakiness. The higher up you go on the abstraction chain, the leakier the abstractions become, and the less viable it is to produce quality software without understanding the layer(s) below.
You can generally trust that transistors won't randomly malfunction and give you wrong results. You can generally trust that your compiler won't generate the wrong assembly, or that your interpreter won't interpret your code incorrectly. You can generally trust that your language's automatic memory management won't corrupt memory. It might be useful to understand how those layers work anyway, but it's usually not a hard requirement.
But once you reach a certain level of abstraction (usually 1 level above programming language), you'll start running into more and more issues resulting from abstraction leaks that require understanding the layer below to properly fix. Probably the most blatant example of this nowadays are "React developers" who don't know JS/CSS/HTML and WILL constantly be running into issues that they can't properly solve as a result, and are forced to either give up or write the most deranged workarounds imaginable that consist of hundreds of lines of unintelligible spaghetti.
AI is the highest level of abstraction so far, and as a result, it's also the leakiest abstraction so far. You CANNOT write proper functional and maintainable code using an LLM without having at least a decent understanding of what it's outputting, unless you're writing baby's first todo app or something.
> But once you reach a certain level of abstraction (usually 1 level above programming language), you'll start running into more and more issues resulting from abstraction leaks that require understanding the layer below to properly fix. Probably the most blatant example of this nowadays are "React developers" who don't know JS/CSS/HTML and WILL constantly be running into issues that they can't properly solve as a result, and are forced to either give up or write the most deranged workarounds imaginable that consist of hundreds of lines of unintelligible spaghetti.
I want to frame this. I am sick to death of every other website and application needing a gig of RAM and making my damn phone hot in my hand.
> But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.
This is the weakest point which breaks your whole argument.
I see it happening ALL the time: newer web developers enter the field from an angle of high abstraction, and whenever these abstractions don't work well, they are completely unable to proceed. They wouldn't be in that place if they knew the low level, and it DOES prevent them from delivering "value" to their customers.
What is even worse: since these developers don't understand exactly why some problem manifests, and don't even understand exactly what their abstraction truly solves, they wrongly proceed to solve the problem using the wrong (high-level) tools.
Yeah … but
That has some amount to do with the level of abstraction, but almost everything to do with inexperience. The lower level you get, the harder the impact of inexperience.
New web developers are still sorting themselves out and they are at a stage where they’ll suck no matter what the level of abstraction.
I get what you say but I must insist. Abstractions don't properly surface the underlying causes when they break.
This creates a barrier to gaining experience on the topic that matters in order to solve your problem. Let me try to give an example (please don't nitpick, it is just an example):
A front-end dev who has never bundled an app like we did in the old days (manual scripts or manual gulp, grunt, or webpack pipelines) but uses an off-the-shelf webpack config (let's say CRA) has great trouble understanding why some webpack plugin doesn't work as expected, or might not even understand how importing an SVG into their JSX actually works. Yes, this is due to lack of experience, but their current level of working isn't exposing those details directly, so they can't gain experience; the waters are way too deep.
> Abstractions don't properly surface the underlying causes when they break.
"Abstractions" in the abstract neither do nor do not do this, as it is orthogonal to what an abstraction is; concrete implementations of abstractions can either swallow or wrap information about the underlying cause when they break, both are valid approaches.
> but their current level of working isn't exposing those details directly so they can't gain experience
This isn't really true. They absolutely can gain experience; they just don't pay the (very high) extra effort, with its side benefit of maybe gaining some experience, on routine tasks where nothing breaks, and they tend to gain experience only if they expend extra effort in situations where things fail or there are unusual challenges to resolve.
> They absolutely can gain experience
Not sure where you are going with this. Yes, literally speaking they can, but what matters is whether they actually do, in general, in the real world. My admittedly biased opinion, based on personally observing other colleagues, is that in general they don't learn and they give up.
Have you noticed a different trend in your circles?
> Not sure where you are going with this. Yes, literally speaking they can, but what matters is whether they actually do,
And, despite the fairly consistent bias that people used to working at lower levels of abstraction have against people who usually work at higher levels, we all work with some level of abstraction. Most people gain experience at lower levels by dealing with problems that emerge, and those who don't gain it fail to do so not because they can't, but because they choose not to bother, either because they have other people they can rely on for lower-level problems, or because they aren't interested and are doing things where they can afford to simply change what they are doing in response to those problems.
Abstraction is fine, it allows you to work faster, or easier. Reliance that becomes dependency is the problem - when abstraction supersedes fundamentals you're no longer able to reason about the leaks and they become blindspots.
Don't confuse low-level tedium with CS basics, if you're arguing that knowing how computers work is not relevant to working as a SWE then sure, but why would a company want a software dev that doesn't seem to know software? Usually your irreplaceable value as a developer is knowing and mitigating the leaks so they don't wind up threatening the business.
This is where the industry most suffers from not having a standardized-ish hierarchy, you're right that most shops don't need a trauma surgeon on-call for treating headaches but there's still many medical options before resorting to random grifter who simply "watched some Grey's Anatomy" as "medschool was a barrier for providing value to customers".
And every time the commentariat dismisses it with the trope that it’s the same as the other times.
It’s not the same as the other times. The naysayers might be the same elitists as the last time. But that’s irrelevant because the moment is different.
It’s not even an abstraction. An abstraction of what? It’s English/Farsi/etc. text input which gets translated into something that no one can vouch for. What does that abstract?
You say that they can learn about the lower layers. But what’s the skill transfer from the prompt engineering to the programming?
People who program in memory-managed languages are programming. There’s no paradigm shift when they start doing manual memory management. It’s more things to manage. That’s it.
People who write spreadsheet logic are programming.
But what are prompt engineers doing? ... I guess they are hoping for the best. Optimism is what they have in common with programming.
When a lower layer fails, your ability to remedy a situation depends on your ability to understand that layer of abstraction. Now: if an LLM produces wrong code, how do you know why it did that?
Sounds like you’re coping cause you have some type of investment in LLM “coding”, whether that is financial or emotional.
I won’t waste my time too much reacting to this nonsensical comment, but I’ll just give this example: LLMs can hallucinate, generating code that’s not real. LLMs don’t work off straight rules; they’re influenced by a seed. Normal abstraction layers aren’t.
I dearly hope you’re arguing in bad faith, otherwise you are really deluded with either programming terms or reality.
I've had a similar experience. I built out a feature using an LLM and then found the library it must have been "taking" the code from, so what I ended up with was a much worse, mangled version of what already existed and would have found had I taken the time to properly research. I've now fully gone back to just getting it to prototype functions for me in-editor based off comments, and I do the rest. Setting up AI pipelines with rule files and stuff takes all the fun away and feels like extremely daunting work I can't bring myself to do. I would much rather just code than act as a PM for a junior that will mess up constantly.
When the LLM heinously gets it wrong 2, 3, 4 times in a row, I feel a genuine rage bubbling that I wouldn't get otherwise. It's exhausting. I expect within the next year or two this will get a lot easier and the UX better, but I'm not seeing how. Maybe I lack vision.
You’re exactly right on the rage part, and that’s not something I’ve seen discussed enough.
Maybe it’s the fact that you know you could do it better in less time that drives the frustration. For a junior dev, perhaps that frustration is worth it because there’s a perception that the AI is still more likely to be saving them time?
I’m only tolerating this because of the potential for long term improvement. If it just stayed like it is now, I wouldn’t touch it again. Or I’d find something else to do with my time, because it turns an enjoyable profession into a stressful agonizing experience.
Is it just me or has this been a year or two off for at least a year or two now?
It’s exponentially better for me to use AI for coding than it was two years ago. GPT-4 launched two years and two days ago. Claude 3.5 sonnet was still fifteen months away. There were no reasoning models. Costs were an order of magnitude or two higher. Cursor and Windsurf hadn’t been released.
The last two years have brought staggering progress.
LLMs also take away the motivation from students to properly concentrate and deeply understand a technical problem (including but not limited to coding problems); instead, they copy, paste and move on without understanding. The electronic calculator analogy might be appropriate: it's a tool appropriate once you have learned how to do the calculations by hand.
In an experiment (six months long, twice repeated, so a one-year study), we gave business students ChatGPT and a data science task to solve that they did not have the background for (develop a sentiment analysis classifier for German-language recommendations of medical practices). With their electronic "AI" helper, they could find a solution, but the scary thing is they did not acquire any knowledge on the way, as exit interviews clearly demonstrated.
As a friend commented, "these language models should never have been made available to the general public", only to researchers.
> As a friend commented, "these language models should never have been made available to the general public", only to researchers.
That feels to me like a dystopian timeline that we've only very narrowly avoided.
It wouldn't just have been researchers: it would have been researchers and the wealthy.
I'm so relieved that most human beings with access to an internet-connected device have the ability to try this stuff and work to understand what it can and cannot do themselves.
I'm giving a programming class and students use LLMs all the time. I see it as a big problem because:
- it puts the focus on syntax instead of the big picture. Instead of finding articles or posts on Stack Overflow explaining things beyond how to write them, the AI gives them the "how" so they don't think of the "why"
- students almost don't ask questions anymore. Why would they when an AI gives them code?
- AI output contains notions, syntax and API not seen in class, adding to the confusion
Even the best students have a difficult time answering basic questions about what was covered in the last (3-hour) class.
Job market will verify those students, but the outcome may be potentially disheartening for you, because those guys may actually succeed one way or another. Think punched cards: they are gone along with the mindset of "need to implement it correctly on first try".
> but the outcome may be potentially disheartening for you, because those guys may actually succeed one way or another
Your sentence is very contradictory to say the least! I'll be very glad for each of them to succeed in any way.
students pay for education such that at the end, they know something. if the job market filters them out because they suck, the school did a bad job teaching.
the teachers still need to figure out how to teach with LLMs around
I wish I had an LLM as a student because I couldn’t afford a tutor and googling for information was tedious.
It’s the college’s responsibility now to teach students how to harness the power of LLMs effectively. They can’t keep their heads in the sand forever.
I had this realization a couple weeks ago that AI and LLMs are the 2025 equivalent of what Wikipedia was in 2002. Everyone is worried about how all the kids are going to just use the “easy button” and get nonsense that’s unchecked and probably wrong, and a whole generation of kids are going to grow up not knowing how to research, and trusting unverified sources.
And then eventually overall we learned what the limits of Wikipedia are. We know that it’s generally a pretty good resource for high level information and it’s more accurate for some things than for others. It’s still definitely a problem that Wikipedia can confidently publish unverified information (IIRC wasn’t the Scottish translation famously hilariously wrong and mostly written by an editor with no experience with the language?)
And yet, I think if these days people were publishing think pieces about how Wikipedia is ruining the ability of students to learn, or advocating that people shouldn’t ever use Wikipedia to learn something, we’d largely consider them crackpots, or at the very least out of touch.
I think AI tools are going to follow the same trajectory. Eventually we’ll gain enough cultural knowledge of their strengths and weaknesses to apply them properly and in the end they’ll be another valuable asset in our ever growing lists of tools.
It’s not the same because you can’t ask Wikipedia to do your homework or programming task without even reading the result.
You can't ask an AI to do that either. I mean, you physically can, but it would be the same thing as copying and pasting a wikipedia article verbatim into your essay.
What I mean is, people do actually do this with LLMs, but most assignments do not map 1:1 to a Wikipedia article you can copy (certainly programming tasks don't). Or to put it differently, it's relatively trivial to formulate assignments for which a blind Wikipedia copy & paste wouldn't be applicable; in contrast to the LLM case.
it's particularly bad for students who should be trying to learn.
at the same time in my own life, there are tasks that I don't want to do, and certainly don't want to learn anything about, yet have to do.
For example, figuring out a weird edge case combination of flags for a badly designed LaTeX library that I will only ever have to use once. I could try to read the documentation and understand it, but this would take a long time. And, even if it would take no time at all, I literally would prefer not to have this knowledge wasting neurons in my brain.
What do you think is the big difference between these tools and calculators?
Imagine a calculator that computes definite integrals, but gives non-sensical results on non-smooth functions for whatever reason (i.e., not an error, but an incorrect but otherwise well-formed answer).
If there were a large number of people who didn't quite understand what it meant for a function to be continuous, let alone smooth, who were using such a calculator, I think you'd see similar issues to the ones that are identified with LLM usage: a large number of students wouldn't learn how to compute definite or indefinite integrals, and likely wouldn't have an intuitive understanding of smoothness or continuity either.
I think we don't see these problems with calculators because the "entry-level" ones don't have support for calculus-related functionality, and because people aren't taught how to arrange the problems that you need calculus to solve until after they've been given some amount of calculus-related intuition. These conditions obviously aren't the case for LLMs.
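To make the hypothetical concrete, here is a tiny self-contained sketch (not any real calculator's algorithm, just an assumed single-interval Simpson's rule) that is exact on a smooth polynomial but returns a well-formed, wrong answer on a step function:

    # Toy "calculator": Simpson's rule over a single interval [a, b].
    # Exact for polynomials up to degree 3, i.e. for "nice" smooth inputs.
    def simpson(f, a, b):
        mid = (a + b) / 2
        return (b - a) / 6 * (f(a) + 4 * f(mid) + f(b))

    # Smooth input: the integral of x^2 over [0, 1] is exactly 1/3.
    print(simpson(lambda x: x * x, 0, 1))  # 0.333...

    # Non-smooth input: a step that jumps from 0 to 1 at x = 0.4.
    # The true integral over [0, 1] is 0.6, but the "calculator"
    # confidently reports 5/6 = 0.833... -- well-formed and wrong.
    step = lambda x: 0.0 if x < 0.4 else 1.0
    print(simpson(step, 0, 1))

Someone who has the underlying intuition spots the bad answer immediately; someone who never built it has no way to tell.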
I think we don't see these problems with calculators because we have figured out how to teach people how to use them.
We are still very early in the process of figuring out how to teach people to use LLMs.
I will bite. The better comparison would be this:
AI is far more comparable to delegating work to *people*. Calculators and compilers are deterministic; using them doesn't change the nature of your work. AI, depending on how you use it, gives you a different role. So take that as a clue: if you are less interested in building things and more interested in getting results, maybe a product management role would be a better fit.
Calculators do not accept ambiguous instructions and they work 100% of the time.
>> Calculators do not accept ambiguous instructions and they work 100% of the time.
That is stated with a lot of confidence :)
https://news.ycombinator.com/item?id=43066953 https://apcentral.collegeboard.org/courses/resources/example... https://matheducators.stackexchange.com/questions/27702/what...
Fair point!
Love me some edge cases :D
Let's call it 100% of the time in 99%+ of scenarios.
If you divide by 0 you’ll get an “E” - LLM will just make something up
I would say "E" is the correct answer.
So would I
Fundamentally nothing, but everybody already knows that you shouldn't teach young kids to rely on calculators during the basic "four-function" stage of their mathematics education.
Calculators for the most part don't solve novel problems. They automate repetitive basic operations which are well-defined and have very few special cases. Your calculator isn't going to do your algebra for you, it's going to give you more time to focus on the algebraic principles instead of material you should have retained from elementary school. Algebra and calculus classes are primarily concerned with symbolic manipulation, once the problem is solved symbolically coming to a numerical answer is time-consuming and uninteresting.
Of course, if you have access to the calculator throughout elementary school then you're never going to learn the basics, and that's why schoolchildren don't get to use calculators until the tail end of middle school. At least that's how it worked in the early 2000s when I was a kid; from what I understand kids today get to use their phones and even laptops in class, so maybe I'm wrong here.
Previously I stated that calculators are allowed in later stages of education because they only automate the more basic tasks. Matlab can arguably be considered a calculator which does automate complicated tasks, and even when I was growing up the higher-end TI-89 series was available, which actually could solve algebra and even simple forms of calculus problems symbolically; we weren't allowed access to these in high school because we wouldn't learn the material if there was a computer to do it for us.
So anyways, my point (which is halfway an agreement with the OP and halfway an agreement with you) is that AI and calculators are fundamentally the same. It needs to be a tool to enhance productivity, not a crutch to compensate for your own inadequacies[1]. This is already well-understood in the case of calculators, and it needs to be well-understood in the case of AI.
[1] Actually, now that I think of it, there is an interesting possibility of AI giving mentally impaired people an opportunity to do jobs they might never be capable of unassisted, but anybody who doesn't have a significant intellectual disability needs to be wary of over-dependence on machines.
Calculators either get you through math you won't use in the real world or can aid in calculating when you know the right formula already.
Calculators don't pretend to think or solve a class of problems. They are pure execution. The comparison in tech is probably compilers, not code.
There's a reason we don't let kids use calculators to learn their times tables. In order to be effective at more advanced mathematics, you need to develop a deep intuition for what 9 * 7 means, not just what buttons you need to push to get the calculator to spit out 63.
A personal anecdote from my previous place:
A junior developer was tasked with writing a script that would produce a list of branches that haven't been touched for a while. I got the review request. The big chunk of it was written in awk -- even though many awk scripts are one-liners, they don't have to be -- and that chunk was kinda impressive, making some clever use of associative arrays, auto-vivification, and more pretty advanced awk stuff. In fact, it was actually longer than any awk that I have ever written.
When I asked them, "where did you learn awk?", they were taken by surprise -- "where did I learn what?"
Turns out they just fed the task definition to some LLM and copied the answer to the pull request.
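For scale, the whole task fits in a few lines. A minimal sketch of how I'd expect it to look, using `git for-each-ref` from Python rather than awk; the 90-day cutoff and the `refs/heads/` scope are my assumptions, not the original task's:

    import subprocess
    import time

    # Branches whose last commit is older than the cutoff (assumed: 90 days).
    CUTOFF_SECONDS = 90 * 24 * 60 * 60

    out = subprocess.run(
        ["git", "for-each-ref", "refs/heads/",
         "--format=%(committerdate:unix) %(refname:short)"],
        capture_output=True, text=True, check=True,
    ).stdout

    now = time.time()
    for line in out.splitlines():
        timestamp, branch = line.split(maxsplit=1)
        if now - int(timestamp) > CUTOFF_SECONDS:
            print(branch)

The point isn't that awk is wrong for this; it's that the author of a script this small should be able to explain every line of it.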
I wonder if it would work to introduce a company policy that says you should never commit code if you aren't able to explain how it works?
I've been using that as my own personal policy for AI-assisted code and I am finding it works well for me, but would it work as a company-wide policy?
I call this the "house scrabble rule" because I used to play regularly with a group who imposed a rule that said you couldn't play a word without being able to define it.
I assume that would be seen as creating an unnecessary burden, provided that the script works and does what's required. Is it better than the code written by people who have departed, and now no one can explain how it works?
The developer in question has been later promoted to a team lead, and (among other things) this explains why it's "my previous place" :)
This should be (the major) part of the code review.
One of the advantages of working with people who are not native English speakers is that if their English suddenly becomes perfect and they can write concise technical explanations in tasks, you know it's some LLM.
Then if you ask for some detail on a call, it's all uhm, ehm, ehhh, "I will send example later".
Plato, in the Phaedrus, 370BC: "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."
And our memory has declined. He was right.
Has it? Or do we instead have vast overfilled palaces of the sum of human knowledge, often stored in pointers and our limited working memory readily available for things recently accessed?
I'd argue that our ability to recall individual moments has gone down, but the sum of what we functionally know has gone up massively.
For sure. But through writing we've been able to learn more, and through AI we'll produce more.
With a diminished ability to store, recall and thus manipulate information, our learning is arguably more shallow.
With AI trained on increasingly generic input and used casually, our production will increase in quantity but decrease in quality.
I am not arguing to abandon the written word or LLMs.
But the disadvantages--which will be overlooked by the young and those happy to have a time-saving tool, namely the majority--will do harm, harm most will overlook favouring the output and ignoring the atrophying user.
That's not really the question though. The question is, would we be better off if we didn't have "[the] means of external marks"?
I'm sure somebody out there would argue that the answer is yes, but personally I have my doubts.
I think the question is whether Plato's fears were unfounded. I don't think the question is "is writing bad", although it is framed as that to justify a carefree adoption of LLMs in daily life.
It’s all about how you use written content, even before AI you could just copy-paste code from StackOverflow without any understanding. But you could also use it as an opportunity to do your own research, make your own experiences and create your own memory (which sticks a lot better). And it’s not just about coding, you can’t really grasp a subject by just reading a text book and not doing exercises or further reading.
Plato’s (or rather the Egyptian king’s - IIRC) fears were not unfounded, since a lot of people do not operate this way (sadly I see this with some peers), however overall the effect could still be positive.
Writing distributes knowledge to a lot of people, without it you have to rely on a kind of personal relationship to learn from someone more knowledgeable (which can be better for the individual mentee though). So maybe it increases chances of learning (breadth of audience) at the cost of the depth of understanding?
Good job he wrote that down.
Good one
Plato deliberately did not put some of his teachings into writing and only taught them orally, because he found written text unfit for the purpose.
https://en.wikipedia.org/wiki/Plato%27s_unwritten_doctrines
I mean, he wasn't wrong, but he couldn't foresee the positives that would come of the new tech.
But sometimes the new tech is a hot x-ray foot measuring machine.
I may be old-fashioned but I remember a time when silent failure was considered to be one of the worst things a system can do.
LLMs are silent failure machines. They are useful in their place, but when I hear about bosses replacing human labor with “AI” I am fairly confident they are going to get what they deserve: catastrophe.
> I got into software engineering because I love building things and figuring out how stuff works. That means that I enjoy partaking in the laborious process of pressing buttons on my keyboard to form blocks of code.
I think this is a mistake. Building things and figuring out how stuff works is not related to pressing buttons on a keyboard to form blocks of code. Typing is just a side effect of the technology used. It's like saying that in order to be a mathematician, you have to enjoy writing equations on a whiteboard, or to be a doctor you must really love filling out EHR forms.
In engineering, coming up with a solution that fits the constraints and requirements is typically the end goal, and the best measure of skill I'm aware of. Certainly it's the one that really matters the most in practice. When it is valuable to type everything by hand, then a good engineer should type it by hand. On the other hand, if the best use of your time is to import a third-party library, do that. If the best solution is to create a code base so large no single human brain can understand it all, then you'd better do that. If the easiest path to the solution is to offload some of the coding to an LLM, that's what you should do.
> There is a concept called “Copilot Lag”. It refers to a state where after each action, an engineer pauses, waiting for something to prompt them what to do next.
I've been experiencing this for 10-15 years. I type something and then wait for the IDE to complete function names, class methods, etc. From this perspective, LLMs won't hurt too much because I'm already dumb enough.
It's really interesting how minor changes in your workflow can completely wreck productivity. When I'm at work I spend at least 90% of my time in emacs, but there are some programs I'm forced to use that are only available via Win32 GUI apps, or cursed webapps. Being forced to abandon my keybinds and move the mouse around hunting for buttons to click and then moving my hand from the mouse to the keyboard then back to the mouse really fucks me up. My coworkers all use MSVC and they don't seem to mind it all because they're used to moving the mouse around all the time; conversely a few of them actually seem to hate command-driven programs the same way I hate GUI-driven programs.
As I get older, it feels like every time I have to use a GUI I get stuck in a sort of daze because my mind has become optimized for the specific work I usually do at the expense of the work I usually don't do. I feel like I'm smarter and faster than I've ever been at any prior point in my life, but only for a limited class of work and anything outside of that turns me into a senile old man. This often manifests in me getting distracted by youtube, windows solitaire, etc because it's almost painful to try to remember how to move the mouse around though all these stupid menus with a million poorly-documented buttons that all have misleading labels.
I feel your pain. I have my own struggles with switching tasks and what helps to some degree is understanding that that kind of switching and adapting is a skill which could be trained by doing exactly this. At least I feel less like a victim and more like a person who improves himself :)
But it appears I'm in a better position because I don't have to work with clearly stupid GUIs and have no strong emotions to them.
This is the reason I don’t use auto completing IDEs. Pretty much vanilla emacs. I do often use syntax highlighting for the language, but that’s the limit of the crutches I want to use.
I am at the point of abandoning coding copilots because I spend most of my time fighting the god damned things. Surely, some of this is on me, not tweaking settings or finding the right workflow to get the most of it. Some of it is problematic UX/implementation in VSCode or Cursor. But the remaining portion is an assortment of quirks that require me to hover over it like an overattentive parent trying to keep a toddler from constantly sticking its fingers in electrical sockets. All that plus the comparatively sluggish and inconsistent responsivity is fucking exhausting and I feel like I get _less_ done in copilot-heavy sessions. Up to a point they will improve over time, but right now it makes programming less enjoyable for me.
On the other hand, I am finding LLMs increasingly useful as a moderate expert on a large swath of subjects available 24/7, who will never get tired of repeated clarifications, tangents, and questions, and who can act as an assistant to go off and research or digest things for you. It’s mostly a decent rubber duck.
That being said, it’s so easy to land in the echo chamber bullshit zone, and hitting the wall where human intuition, curiosity, ingenuity, and personality would normally take hold for even a below average person is jarring, deflating, and sometimes counterproductive, especially when you hit the context window.
I’m fine with having it as another tool in the box, but I rather do the work myself and collaborate with actual people.
An LLM is a tool. It's your choice how you use it. I think there are at least two ways to use it that are helpful but don't replace your thinking. I sometimes have a problem I don't know how to solve that's too complex to ask google. I can write a paragraph in ChatGPT and it will "understand" what I'm asking and usually give me useful suggestions. Also I sometimes use it to do tedious and repetitive work I just don't want to do.
I don't generally ask it to write my code for me because that's the fun part of the job.
I think the issue is that a lot of orgs are systematically using the tool poorly.
I’m responsible for a couple legacy projects with medium sized codebases, and my experience with any kind of maintenance activities has been terrible. New code is great, but asking for fixes, refactoring, or understanding the code base has had an essentially 2% success rate for me.
Then you have to wonder how the hell orgs expect to maintain and scale even more code with fewer devs, who don’t even understand how the original code worked?
LLMs are just a tool but overreliance on them is just as much of a code smell as - say - deciding your entire backend is going to be in Matlab; or all your variables are going to be global variables - you can do it, but I guarantee that it’s going to cause issues 2-3 years down the line.
"understanding the code base has had an essentially 2% success rate for me"
How have you been using LLMs to help understand existing code?
I have been finding that to work extremely well, but mainly with the longer context models like Google Gemini.
It's also making the sleazy and lazy ones thrive a bit more, which is quite painful when passionate devs who are also great colleagues don't gain any real leverage from ChatGPT.
Humble craftsmen have long been getting replaced by automation and technology. Devs are resisting the same way as everyone else did before them but it's futile.
It's just especially poignant/painful because developers are being hoisted by their own petard, so to speak.
Yeah, there's a cynical paradox.
I'm leaning more towards engineering, to maybe pivot.
I use LLMs for generating small chunks of code (less than 150 lines), but I am of the opinion that you should always understand what generated code is doing. I take time to read through it and make sure it makes sense before I actually run it. I've found that for smaller chunks of code it's usually pretty accurate on the first try. Occasionally it can't figure it out at all, even when trying to massage the prompt to be more descriptive.
I use Claude Sonnet to generate large chunks of code, practically as a form of macro expansion. Such as when adapting SQL queries to a new migration, or adding straightforward UI. Even still, it sometimes isn’t great and I would never commit anything without carefully observing what it actually wrote. More importantly, I never ask it to do something I myself don’t know how to do, especially if I suspect a library or best practice exists.
In other words, I treat it exactly like stochastic autocomplete. It makes me lazier, I’m sure, but the first part of the article above is a rant against a tautology: any tool worth using ought to be missed by the user if they stopped using it!
If you use LLMs in lieu of searching Stack Overflow, you're going to go faster and be neither smarter nor dumber. If you're prompting for entire functions, I suspect it'll be a crutch you learn to rely on forever.
Personally I think there's a middle ground to be had there.
I use LLMs to write entire test functions, but I also have specs for it to work from and can go over what it wrote and verify it. I never blindly go "yeah this test is good" after it generates it.
I think that's the middle ground, knowing where, and when it can handle a full function / impl vs a single/multi(short) line auto-completion.
It ruined my friend's startup. Junior dev "wrote" WAY too much code with no ability to support it after the fact. Glitches in production would result in the kid disappearing for weeks at a time because he had no idea how anything actually worked under the hood. Friend was _so_ confident of his codebase before shit hit the fan - the junior dev misrepresented the state of the world, b/c he simply didn't know what he didn't know.
Sounds like your friend ruined his own startup by employing only a junior dev?
That's probably the mechanism by which AI will take over many jobs:
1. Skilled people do a good job, AI does a not-so-good job.
2. AI users get dumbed down so they can't do any better. Mediocrity normalized.
3. Replace the AI users with AI.
It’s always been very weird that little hobbyist open source projects produce much better software than billion-dollar companies. But I guess it will be even more notable now that the billion-dollar garbage-shoveling companies are getting self-operating shovels.
In this scenario, if AI does a not so good job, there will still be good developers left to code.
Sure, the problem then becomes finding the ideal ratio between good "proper" programmers and AI code monkeys.
Also, it'll be interesting to see how LLM prompt writing develops as a skill unto itself.
> it'll be interesting to see how LLM prompt writing develops as a skill unto itself.
MIT already offers an online prompt engineering course as of a year ago so I'm sure it already has.
Maybe. It might be that the “industry” continues on without good code, slowly dying.
If bad code could kill the software industry, it would be long dead by now.
Definitely a fair point. But I observe that dS/dt may not be positive already.
I'm learning javascript as my first programming language and I'm somewhere around beginner/intermediate. I used Chatgpt for a while, but stopped after a time and just mostly use documentation now. I don't want code solutions, I want code learning and I want certainty behind that learning.
I do see a time where I could use copilot or some LLM solution but only for making stuff I understand, or to sandbox high level concepts of code approaches. Given that I'm a graphic designer by trade, I like 'productivity/automation' AI tools and I see my approach to code will be the same - I like that they're there but I'm not ready for them yet.
I've heard people say I'll get left behind if I don't use AI, and that's fine as I'll just use niche applications of code alongside my regular work as it's just not stimulating to have AI fill in knowledge blanks and outsource my reasoning.
Tried several times for C++, almost always got nonsense results. Maybe they only work for weakly typed languages.
Nope, also pretty shitty for Python, at least that's my experience from my rather limited usage. I might be using it wrong though.
The problem is that the LLM won't find design mistakes. E.g. trying to get the value of a label in Textual: you can technically do it, but you're not really supposed to. The variable starts with an underscore, so that's an indication that you shouldn't really touch it. The LLMs will happily help you attempt to use a non-existing .text attribute, then start running in circles, because what you're doing is a design mistake.
LLMs are probably fairly helpful in situations where the documentation is lacking, but simple auto-complete is also working well enough.
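To illustrate the convention (with a hypothetical widget class of my own, not Textual's actual API): the leading underscore marks internal state, and reaching for it, or for an attribute that doesn't exist at all, is exactly the kind of design mistake an LLM will cheerfully help with.

    class Label:
        """Hypothetical widget, for illustration only (not Textual's API)."""

        def __init__(self, content: str) -> None:
            self._renderable = content  # leading underscore: internal state

        def update(self, content: str) -> None:
            """The supported way to change what the label shows."""
            self._renderable = content

    label = Label("hello")
    label.update("world")        # the supported API
    print(label._renderable)     # works, but depends on internals that may change

    try:
        print(label.text)        # the attribute an LLM might invent
    except AttributeError as err:
        print(f"no such attribute: {err}")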
I also love building things. LLM-assisted workflows have definitely not taken this away. If anything, it has only amplified my love for coding. I can finally focus on the creative parts only.
That said, the author is probably right that it has made me dumber or at least less prolific at writing boilerplate.
Gen Z kind of already have a reputation for being 'dumb': supposedly being unable to do basic tasks expected of an entry-level office worker, or questioning basic things like why tasks get delegated down the chain. Maybe being bad at coding, especially if they are using AI, is just part of that?
I heard about the term 'vibe coding' recently, which really just means copying and pasting code from an AI without checking it. It's interesting that that's a thing, I wonder how widespread it is.
I told my colleagues that if they’re just going to send me LLM code I cannot review it and assume they already double checked the work themselves. This gives them instant approval and if they want to spend time submitting follow up PRs because they’re not double checking their code and not understanding then they can do that. I honestly did this for two reasons:
1. The problem domain is a marketing site (low risk)
2. I got tired of fixing bad LLM code
I have noticed the people who do this are caught up in the politics at work and not really interested in writing code.
I have no desire to be a code janitor.
Wonder if we'll have this discussion in 20 years. Or will traditional programmers be some niche "artisanal" group of workers, akin to what bootmakers and bespoke tailors are today.
I'm pretty sure I saw the word "programming" being used as a synonym for webdev.
Webdev is a large, impactful and challenging field within programming as a whole.
If you want AI to make you less dumb, instead of using it like Stack Overflow, you can go on a road trip and have a deep conversation about a topic or field you want to learn more about; you can have it quiz you, do mock interviews, ask questions, have a chat. It's incredible at that, as long as it's not something where the documentation is less than a year or two old.
> As they’re notorious for making crap up because, well, that’s how LLMs work by design, it means that they’re probably making up nonsense half the time.
I found this to be such a silly statement. I find arguments generated by AI to be significantly more solid than this.
I think "AI makes developers dumb" makes as much sense as "becoming a manager makes developers dumb."
I was an engineer before moving to more product and strategy oriented roles, and I work on side projects with assistance from Copilot and Roo Code. I find that the skills that I developed as a manager (like writing clear reqs, reviewing code, helping balance tool selection tradeoffs, researching prior art, intuiting when to dive deep into a component and when to keep it abstract, designing system architectures, identifying long-term-bad ideas that initially seem like good ideas, and pushing toward a unified vision of the future) are sometimes more useful for interacting with AI devtools than my engineering skillset.
I think giving someone an AI coding assistant is pretty bad for having them develop coding skills, but pretty good for having them develop "working with an AI assistant" skills. Ultimately, if the result is that AI-assisted programmers can ship products faster without sacrificing sustainability (i.e. you can't have your codebase collapse under the weight of AI-generated code that nobody understands), then I think there will be space in the future for both AI-power users who can go fast as well as conventional engineers who can go deep.
What modern LLMs are good at is reducing boilerplate for workflows that are annoying and tedious, but a) genuinely save time, b) are less likely for an LLM to screw up, and c) are easy to spot-check to identify issues in the event the LLM does mess up.
For example, in one of my recent blog posts I wanted to use Python's Pillow to composite five images: one consisting of the left half of the image, the other four in quadrants (https://github.com/minimaxir/mtg-embeddings/blob/main/mtg_re...). I know how to do that in PIL (have to manually specify the coordinates and resize images) but it is annoying and prone to human error and I can never remember what corner is the origin in PIL-land.
Meanwhile I asked Claude 3.5 Sonnet this:
And it got the PIL code mostly correct, except it tried to load the images from a file path, which wasn't desired, but it is both an easy fix and my fault since I didn't specify that.
Point (c) above is also why I despise the "vibe coding" meme because I believe it's intentionally misleading, since identifying code and functional requirement issues is an implicit requisite skill that is intentionally ignored in hype as it goes against the novelty of "an AI actually did all of this without much human intervention."
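For reference, the compositing itself is only a handful of Pillow calls. A rough sketch of the layout described, where the canvas size and file names are placeholders of mine, not the post's actual data; in Pillow, (0, 0) is the top-left corner:

    from PIL import Image

    # Assumed layout: the first image fills the left half of an 800x800
    # canvas, the other four fill the quadrants of the right half.
    paths = ["left.png", "q1.png", "q2.png", "q3.png", "q4.png"]
    left, q1, q2, q3, q4 = (Image.open(p) for p in paths)

    W = H = 800
    canvas = Image.new("RGB", (W, H), "white")

    # Left half: full height, half width.
    canvas.paste(left.resize((W // 2, H)), (0, 0))

    # Right half: four quadrants, each half the height and a quarter the width.
    quad = (W // 4, H // 2)
    canvas.paste(q1.resize(quad), (W // 2, 0))                # top left
    canvas.paste(q2.resize(quad), (W // 2 + W // 4, 0))       # top right
    canvas.paste(q3.resize(quad), (W // 2, H // 2))           # bottom left
    canvas.paste(q4.resize(quad), (W // 2 + W // 4, H // 2))  # bottom right

    canvas.save("composite.png")

Easy to get right, easy to verify by eye, and exactly the kind of coordinate bookkeeping I'd rather not redo from memory.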
Just a generic rant. How many people can sew; fell a tree; or skin an animal? Yeah, I thought so.
And no data or link to data either. Just a *waves hand* "I think it happened to me".
People said the same thing about IntelliSense a long time ago.
There's a difference in quality, though. IntelliSense was never meant to be more than autocomplete or suggestions (function names, variables names, etc.), i.e. the typing out and memorizing API calls and function signatures part. LLMs in the context of programming are tools that aim to replace the thinking part. Big difference.
I don't need to remember all functions and their signatures for APIs I rarely use - it's fine if a tool like IntelliSense (or an ol' LSP really) acts like a handy cheat sheet for these. Having a machine auto-implement entire programs or significant parts of them, is on another level entirely.
Or maybe AI is enabling dumb people to program?
> [...] This is to the point where is starts to become hard for you to work without one.
Why would one work without one?
Or it makes dumb people become developers ;)
Here is a disturbing look at what the absolute knobs at Y Combinator (and elsewhere) are preaching/pushing, with commentary from Primeagen: https://www.youtube.com/watch?v=riyh_CIshTs
Watch the whole thing, it's hilarious. Eventually these venture capitalists are forced to acknowledge that LLM-dependent developers do not develop an understanding and hit a ceiling. They call it "good enough".
The use of LLMs for constructive activities (writing, coding, etc.) rapidly produces a profound dependence. Try turning it off for a day or two, you're hobbled, incapacitated. Competition in the workplace forces us down this road to being utterly dependent. Human intellect atrophies through disuse. More discussion of this effect, empirical observations: https://www.youtube.com/watch?v=cQNyYx2fZXw
To understand the reality of LLM code generators in practice, Primeagen and Casey Muratori carefully review the output of a state-of-the-art LLM code generator. They provide a task well-represented in the LLM's training data, so development should be easy. The task is presented as a cumulative series of modifications to a codebase: https://www.youtube.com/watch?v=NW6PhVdq9R8
This is the reality of what's happening: iterative development converging on subtly or grossly incorrect, overcomplicated, unmaintainable code, with the LLM increasingly unable to make progress. And the human, where does he end up?
But this is exactly what the higher-ups want, according to Braverman: they will insist that "know-how" is non-existent, and always push to tell workers what they - of course - know about the work, as if we peons were ignorant of it.
"Calculators are making people dumb"
"Spell checkers are making people dumb"
"Wikipedia is making people dumb"
Nothing to see here.
> "Calculators are making people dumb"
Wrong quote - "calculators are making people bad at doing maths" was the fear. Turns out they didn't, but they didn't help either [1]
> "Spell checkers are making people dumb"
Well, at this point I assume you use "dumb" as a general stand-in for "worse at the skill in question". here, however, research shows that indeed, spell checkers and auto-correct seem to have a negative influence on learning proper spelling and grammar [2]. The main takeaway here seems to be the fact that handwriting in particular is a major contributor in learning and practicing written language skills. [2]
> "Wikipedia is making people dumb"
Honestly, haven't heard that one before. Did you just make that up? Apart from people like Plato, thousands of years ago, owning and using books, encyclopaedias, and dictionaries has generally been viewed as a sign of a cultured and knowledgeable individual in many cultures... I don't see how an online source is any different in that regard.
The decline of problem-solving and analytical skills, short attention spans, lack of foundational knowledge, and the subsequent loss of valuable training material for our beloved stochastic parrots, though, might prove to become a problem in the future.
There's a qualitative difference between relying on spell checkers while still knowing the words and slowly losing the ability to formulate, express, and solve problems in an analytical fashion. Worst case we're moving towards E.M. Forster's dystopian "The Machine Stops"-scenario.
[1] https://www.jstor.org/stable/42802150?seq=1#page_scan_tab_co...
[2] https://www.researchgate.net/publication/362696154_The_Effec...
People who know how to write use spell checkers as assistants. People who don't know how to write use spell checkers to do everything for them, effectively replacing one set of errors with another.
Do human servants make you lazier or more productive? (A sincere thought experiment)
This is one of the many many experiences in the tapestry of people figuring out how to use this new tool.
There will be many such cases of engineers losing their edge.
There will be many cases of engineers skillfully wielding LLMs and growing as a result.
There will be many cases of hobbyists becoming empowered to build new things.
There will be many cases of SWEs getting lazy and building up huge, messy, intractable code bases.
I enjoy reading from all these perspectives. I am tired of sweeping statements like "AI is Making Developers Dumb."
*Inexperienced devs using tools to think for them instead of problem solving.
I don't think it is making developers dumb; you still need to audit and review the code. As long as you use it to augment your work - basic templating, finding material to read, or having it explain code - it is really good.
AI makes developers smarter when used in smart ways. How amazing to have code generated for you, freeing you to consider the next task (ie. “sit there waiting for the next task to come to mind”) .. oh, by the way, if you don’t understand the code, highlight it and ask for an explanation. Repeat ad infinitum until you understand what you’re reading.
The dumb developers are those resisting this amazing tool and trend.
Frankly, I don't think this is true at all. If anything I notice, for myself, that I make better and more informed decisions in many aspects of life. I think this criticism comes from the position of someone having invested a lot of time in something AI can do quite well.
For me, the main question in this context would be whether the decisions are better informed or they just feel better informed. I regularly get LLMs to lie to me in my areas of expertise, but there I have the benefit that I can usually sniff out the lie. In topics I'm not that familiar with, I can't tell whether the LLM is confidently correct or confidently incorrect.
Well, AI does make errors, and never says "I don't know". That is also true of Wikipedia though. I've seen much improvement in accuracy from 3.5 to 4.5. Hallucinations can often be hashed out by a dialogue.
Wikipedia has multiple ways it tells you it doesn't know or it doesn't know for certain. Tags such as clarify, explain, confusing (all of which expand into phrases such as clarification needed etc) are abundant, and if an article doesn't meet the bar for the standard, it's either clearly annotated at the top of the article or the article is removed altogether.
I live on a farm and there are a lot of things that machines can do faster and cheaper. And for a lot of tasks, it makes more sense from a time / money tradeoff.
But I still like to do certain things by hand. Both because it's more enjoyable that way, and because it's good to stay in shape.
Coding is similar to me. 80% of coding is pretty brain dead — boilerplate, repetitive. Then there's that 20% that really matters. Either because it requires real creativity, or intentionality.
Look for the 80/20 rule and find those spots where you can keep yourself sharp.
My guess is that AI will make programming even more miserable for those who entered the field for the wrong reasons. Now is the time to double down on learning the basics, the low level, the under-the-hood stuff.
I'm in full agreement with this, and it's part of the reason I'm considering leaving the software engineering field for good.
I've been programming for over 25 years, and the joy I get from it is the artistry of it; I see beauty in systems constructed in the abstract realm. But LLM-based development removes much of that. I haven't used, nor do I desire to use, LLMs for this, but I don't want to compete with people who do, because I won't win in the short-term nature of corporate performance-based culture. And so I'm now searching for careers that will be more resistant to LLM-based workflows. Unfortunately, in my opinion, this pretty much rules out any knowledge-based economy.
Most code is unoriginal boilerplate that serves a business need. LLMs are very good at generating output that is 60-90% of the way there.
And I wish you all the best with doubling down on that by generating boilerplate at scale rather than removing it.
Books made orators dumb. I'm not sure this argument has ever had any credence, not now and not when Socrates came up with his version for his time.
Any technology that renders a mental skill obsolete will undergo this treatment. We should be smart enough to recognize the rhetoric it is rather than pretend it's a valid argument for Luddism.
"There’s a reason behind why I say this. Over time, you develop a reliance on [search engines]. This is to the point where is [sic!] starts to become hard for you to work without one."
AI tools are great. They don’t absolve you from understanding what you’re doing and why.
One of the jobs of a software engineer is to be the point person for some pieces of technology. The responsible person in the chain. If you let AI do all of your job, it’s the same as letting a junior employee do all of your job: Eventually the higher-ups will notice and wonder why they need you.
I experimented with vibe coding [0] yesterday to build a Pomodoro timer app [1] and had a mixed experience.
The process - instead of typing code, I mostly just talked (voice commands) to an AI coding assistant - in this case, Claude Sonnet 3.7 with GitHub Copilot in Visual Studio Code and the macOS built-in Dictation app. After each change, I’d check if it was implemented correctly and if it looked good in the app. I’d review the code to see if there are any mistakes. If I want any changes, I will ask AI to fix it and again review the code. The code is open source and available in GitHub [2].
On one hand, it was amazing to see how quickly the ideas in my head were turning into real code. Yes, reviewing the code takes time, but it is far less than if I were to write all that code myself. On the other hand, it was eye-opening to realize that I need to be diligent about reviewing the code written by AI and ensuring that my code is secure, performant and architecturally stable. There were a few occasions when the AI wouldn't realize there was a mistake (at one point, a compile error) and I had to tell it to fix it.
No doubt that AI assisted programming is changing how we build software. It gives you a pretty good starting point, it will take you almost 70-80% there. But a production grade application at scale requires a lot more work on architecture, system design, database, observability and end to end integration.
So I believe we developers need to adapt and understand these concepts deeply. We'll need to be good at architecture, system design, databases, observability and end-to-end integration.
Without this knowledge, apps built purely through AI prompting will likely be sub-optimal, slow, and hard to maintain. This is an opportunity for us to sharpen our skills and a call to action to adapt to the new reality.
[0] https://en.wikipedia.org/wiki/Vibe_coding
[1] https://my-pomodoro-flow.netlify.app/
[2] https://github.com/annjose/pomodoro-flow
This entire line of reasoning is worker propaganda. Like the boss is some buffoon and the employees constantly have to skirt his nonsensical requirements to create a reasonable product.
It's a cartoon mentality. Real products have more requirements than any human can fathom, correctness is just one of the uncountable tradeoffs you can make. Understanding, or some kind of scientific value is another.
If anything but a single minded focus on your pet requirement is dumb, then call me dumb idc. Why YOU got into software development is not why anyone else did.
> Over time, I started to forget basic foundational elements of the languages I worked with. I started to forget parts of the syntax, how basic statements are used
It's a good thing tbh. Language syntax is ultimately entirely arbitrary and is the most pointless thing to have to keep in mind. Why bother focusing on that when you can use the mental effort on the actual logic instead?
This has been a problem for me for years before LLMs, constantly switching languages and forgetting what exact specifics I need to use because everyone thinks their super special way of writing the same exact thing is best and standards are avoided like the plague. Why do we need two hundred ways of writing a fuckin for loop?
"Pay a lot, cry once." --Chinese Proverb
> Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them
Conversely: Some people want to insist that writing code 10x slower is the right way to do things, that horses were always better and more dependable than cars, and that nobody would want to step into one of those flying monstrosities. And they may also find that they are no longer in the right field.
This is the "new technology is always better" argument, which invokes the imagery of all the times it was true and ignores all the products that have been discarded.
The truth is it depends on every detail. What technology. For who. When.
> Some people want to insist that writing code 10x slower is the right way to do things
If the code has to be correct, then this is right
Wait, let’s give it a couple years, the way Boeing is going the horse people might have had a point. I’m not 100% sold on the idea that our society will long-term be capable of maintaining the infrastructure required to do stuff like airplanes.
I don't need AI to be dumb.
I'd say PHP and JS made developers dumb. And this is the kind of "developers" that AI is currently replacing.
It's crazy to me how people talk like it was aeons ago, when these tools came out like two years ago.
AI lowers the bar. You can say Python makes developers dumb too. Or that canned food makes cooks dumb. That’s not really the point though. When something is easier more people can do it. That expansion is biased downward.
Honestly I just don’t remember the names of methods and without my IDE I’d be a lot less productive than I am now. Are IDEs a problem?
The bit about “people don’t really know how things work anymore”: my friend I grew up programming in assembly, I’ve modified the kernel on games consoles. Nobody around me knocking out their C# and their typescript has any idea how these things work. Like I can name the people in the campus that do.
LLMs are a useful tool. Learn to use them to increase your productivity or be left behind.