I too began with BASIC (but closer to 1980). Although I wrote and published games for the Macintosh for a number of years as I finished up college, my professional career (in the traditional sense) began when I was hired by Apple in 1995 and relocated to the Bay Area.
Yeah, what started out as a great just got worse and worse as time went on.
I suspect though that to a large degree this reflects both the growing complexity of the OS over that time as well as the importance of software in general as it became more critical to people's lives.
Already, even in 1984 when it was first introduced, the Mac had a rich graphics library you would not want to have to implement yourself. (Although famously of course a few apps like Photoshop nonetheless did just that—leaning on the Mac simply for a final call to CopyBits() to display pixels from Adobe's buffer to the screen.)
You kind of have to accept abstraction when networking, multiple cores, multiple processes become integral to the machine. I guess I always understood that and did not feel too put out by it. If anything a good framework was somewhat of a relief—someone else's problem, ha ha. (And truly a beautiful API is just that: a beautiful thing. I enjoy working well constructed frameworks.)
But the latter issue, the increasing dominance of software on our lives is what I think contributed more to poisoning the well. Letting the inmates run the asylum more or less describes the way engineering worked when I began at Apple in 1995. We loved it that way. (Say what you want about that kind of bottom-up culture of that era, but our "users" were generally nerds just like us—we knew, or thought we knew anyway, better than marketing what the customer wanted and we pursued it.)
Agile development, unit tests, code reviews… all these weird things began to creep in and get in the way of coding. Worse, they felt like busywork meant simply to give management a sense of control… or some metric for progress.
"What is our code coverage for unit test?" a manager might ask. "90%," comes the reply from engineering. "I want to see 95% coverage by next month," comes the marching orders. Whatever.
I confess I am happy to have now left that arena behind. I still code in my retirement but it's back to those cowboy-programmer days around this house.
I'm a millennial, but I share some feelings. I also think modern programming careers often feel like factory jobs where most of the time you must be compliant with some BS. You often find the true joy only in personal projects.
My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself. If you were a smart dev before AI, chances are you will remain a smart dev with AI.
My experience so far is that to a first approximation, the quality of the code/software generated with AI corresponds to the quality of the developer using the AI tool surprisingly well. An inexperienced, bad dev will still generate a sub-par result while a great dev can produce great results.
The choices involved in using these tools are also not as binary as they are often made out to be, especially since agents have taken off. You can very much still decide to dedicate part of your day to chiseling away at important code to make it just right and make sure your brain is engaged in the result and exploring and growing with the problem at hand, while feeding background queues of agents with other tasks.
I would in fact say the biggest challenge of the AI tool revolution in terms of what to adapt to is just good ol' personal time management.
> If you were a smart dev before AI, chances are you will remain a smart dev with AI.
I don't think that's what people are upset about, or at least it's not for me. For me it's that writing code is really enjoyable, and delegating it to AI is hell on earth.
> For me it's that writing code is really enjoyable, and delegating it to AI is hell on earth.
It's very sad, for me.
Like I told someone recently - letting the LLM write my code for me is like letting the LLM play my video games for me.
If all I wanted was the achievement on my steam profile, then sure, it makes sense, but that achievement is not why I play video games.
I'm looking at all these people proudly showing off their video game achievements, gained just by writing specs, and I realise that all of them fail to realise that writing specs is a lower-skill activity than writing programs.
It also pays far, far less - a BA earns about half what an average dev earns. They're cosplaying at being BAs, not realising that they are now employed for a skill that pays less, and it's only a matter of time before the economics catch up to them.
Talking to sales to get an idea what the customer wanted from the business side (first B2B at a product company and now consulting) -> talking to the customer and hashing out more detailed requirements -> designing the architecture and a proposed technical plan -> presenting it to the stakeholder (sometime internal sometime external) -> doing the work or delegating and leading the work -> presenting the work to the stakeholder and leading the UAT -> getting it to production.
The coding part has been a commodity for enterprise developers for well over a decade. I knew a decade ago that I wasn’t going to be 50 years old reversing b trees on a whiteboard trying to prove my worth.
Doing the work is the only thing that the AI does.
While I don’t make the eye popping BigTech comp (been there. Done that and would rather get a daily anal probe than go back), I am making more than I could make if I were still selling myself as someone who “codez real gud” as an enterprise dev.
Look, there are at least dozens of us who like and enjoy programming for programming's sake and got into this crazy industry because of that.
Many of these people made many of the countless things we take for granted every day (networking, operating systems, web search; hell, even the transformer architecture before they got productized!).
Seeing software development --- and software engineering by proxy --- get reduced to a jello that will be stepped on by "builders" in real-time is depressing as shit.
It's even more depressing to see folks on HACKER news boost the "programming never mattered" mentality that's taken hold these last few years.
Last comment I'll make before I step off my soapbox: the "codez real gud" folks that makes the big bucks bring way more to the table than their ability to code...but their ability to code is a big contributor to why they bring more to the table!
Well as depressing as it is, check out the 2024 and 2025 YC batches. Guess how many of them are “ai” something or other? It’s never been about “hackers”. Not a single founder who takes VC funding is thinking about a sustainable business - at least their investors aren’t - they are hoping for the “exit”.
It’s always been jello. I at 51 can wax poetically about the good old days or I can keep doing what I need to do to keep money appearing in my account.
> Talking to sales to get an idea what the customer wanted from the business side (first B2B at a product company and now consulting) -> talking to the customer and hashing out more detailed requirements -> designing the architecture and a proposed technical plan -> presenting it to the stakeholder (sometime internal sometime external) -> doing the work or delegating and leading the work -> presenting the work to the stakeholder and leading the UAT -> getting it to production.
You are not the first person to say things like this.
Tell me, you ever wondered why a person with a programming background was filling that role?
If not the technical person, then who? It’s a lot easier for a technical person to learn how to talk the language of the business than a business person to have a deep understanding of technology.
On the enterprise dev side of the industry where most developers work, I saw a decade ago that if I were just a ticket taker who turned well defined requirements into for loop and if statements, that was an undifferentiated commodity.
You’re seeing now that even on the BigTech side knowing how to reverse a binary tree on the whiteboard is not enough.
Also if you look at the leveling guidelines of any major tech company, their leveling guidelines above mid level are based on scope, impact and dealing with ambiguity - not “I codez real gud”
Those levels bake in the expectation of "codez real gud" at FAANG/MANGA/whatever style tech companies since the technical complexity of their operations is high and a high skill bar needs to be hurdled over to contribute to most of those codebases and make impact at the scale they operate at.
One's ability to reverse a binary tree (which is a BS filter, but it is what it is) hasn't been an indicator of ability in some time. What _is_ though, is the wherewithall to understand _when_ that's important and tradeoffs that come with doing that versus using other data structures or systems (in the macro).
My concern is that, assuming today's trajectory of AI services and tooling, the need to understand these fundamentals will become less important over time as the value of "code" as a concept decreases. In a world where prompting is cheap because AI is writing all the code and code no longer matters, then, realistically, tech will be treated even more aggressively as a line item to optimize.
This is a sad reality for people like me whose love for computers and programming got them into this career. Tech has been a great way to make a wonderful living for a long time, and it's unfortunate that we're robbing future generations of what we took for granted.
You give way too much credit to the average mid level developer at BigTech. A lot of the scalability is built in and they just built on top of it.
There are millions of people that can code as well as you are I and a lot cheaper if you are in the US. Thousands of developers have been laid off over the last three years and tech companies keep going strong - what does that tell you?
I’m just as happy to get away from writing for loops in 2026 as was to be able to get away with LDA, LDX and BRA instructions once I could write performant code in C.
> Also if you look at the leveling guidelines of any major tech company, their leveling guidelines above mid level are based on scope, impact and dealing with ambiguity - not “I codez real gud”
Your entire comment is this specific strawman - no one, and I mean no one, is making this claim! You are the only one who is (ironically, considering the job you do) too tone-deaf and too self-unaware to avoid making this argument.
I'm merely pointing out that your value-prop is based on a solid technical foundation, which I feel you agree on:
> If not the technical person, then who? It’s a lot easier for a technical person to learn how to talk the language of the business than a business person to have a deep understanding of technology.
The argument is not "Oh boo hoo, I wish I could spend 8 hours a day coding for money like I used to", so stop pretending like it is.
There is an entire contingent of comments here who miss translating requirements into code.
Even the comment I replied to mentioned “being a BA” like the most important quality of a software engineer is their ability to translate requirements into code.
I've been coping by reminding myself that I was absurdly lucky to have found a job that was also enjoyable and intellectually stimulating for so long, and if all AI does is bring software engineering down to the level of roughly every other job in the world in terms of fun, I don't really have much ground to complain
I think you were incredibly lucky to get to write code that you enjoyed writing.
Most of the commercial code I've written, over a 30+ year career, has been shite. The mandate was always to write profitable code, not elegant code. I started (much like the OP) back in the 80's writing code as a hobby, and I enjoyed that. But implementing yet another shitty REST CRUD server for a shitty website... not so much.
I totally see a solution: get the LLM to write the shitty REST CRUD server, and focus on the hard bits of the job.
> I cannot figure out what you mean by "BA" in this context
Business Analyst - those people who learn everything about what the customers requirements, specs, etc are. What they need, what they currently have, how to best advise them, etc.
I was a BA forever ago during a summer job in college. That job wasn't for me at all! Looking back on the experience, putting together a FRD felt much like writing a CLAUDE.md with some prompts thrown in!
This is a part of it, but I also feel like a Luddite (the historical meaning, not the derogatory slang).
I do use these tools, clearly see their potential, and know full well where this is going: capital is devaluing labor. My skills will become worthless. Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there.
If I could destroy these things - as the Luddites tried - I would do so, but that's obviously impossible.
For now I'm forced to use them to stay relevant, and simply hope I can hold on to some kind of employment long enough to retire (or switch careers).
Kind of. But the outcomes likely do not benefit the masses. People "accessing AI labor" is just a race to the bottom. Maybe some new tools get made or small businesses get off the ground, but ultimately this "AI labor" is a machine that is owned by capitalists. They dictate its use, and they will give or deny people access to the machine as it benefits them. Maybe they get the masses dependent on AI tools that are currently either free or underpriced, as alternatives to AI wither away unable to compete on cost, then the prices are raised or the product enshittified. Or maybe AI will be massively useful to the surveillance state and data brokers. Maybe AI will simply replace a large percentage of human labor in large corporations, leading to mass unemployment.
I don't fault anyone for trying to find opportunities to provide for themselves and loved ones in this moment by using AI to make a thing. But don't fool yourself into thinking that the AI labor is yours. The capitalists own it, not us.
As someone who has leaned fully into AI tooling this resonates. The current environment is an oligopoly so I'm learning how to leverage someone else's tool. However, in this way, I don't think LLMs are a radical departure from any proprietary other tool (e.g. Photoshop).
Indeed. Do you know how many small consultancies are out there which are "Microsoft shops"? An individual could become a millionaire by founding their own and delivering value for a few high-roller clients.
Nobody says there's no money to make anymore. But the space for that is limited, no matter how many millions hustle, there's only 100 spots in the top 100.
what makes you think that's actually possible? maybe if you really had the connections and sales experience etc...
but also, if that were possible, then why wouldn't prices go down? why would the value of such labor stay so high if the same thing can be done by other individuals?
I saw it happen more back in the day compared to now. Point being, nobody batted an eyelash at being entirely dependent on some company's proprietary tech. It was how money was made in the business.
That is a fiction. None of us can waste tens of thousands of dollars whipping out a C compiler or web browser on a whim to test things.
If these tools improve to the point of being able to write real code, the financial move for the agent runners is to charge far more than they are now but far less than the developers being replaced.
It already seemed like we were approaching the limit of what it makes sense to develop, with 15 frameworks for the same thing and a new one coming out next week, lots of services offering the same things, and even in games, the glut of games on offer was deafening and crushing game projects of all sizes all over the place.
Now it seems like we're sitting on a tree branch and sawing it off on both sides.
If you state “in 6 months AI will not require that much knowledge to be effective” every year and it hasn’t happened yet then every time it has been stated has been false up to this point.
In 6 months we can come back to this thread and determine the truth value for the premise. I would guess it will be false as it has been historically so far.
> If you state “in 6 months AI will not require that much knowledge to be effective” every year and it hasn’t happened yet then every time it has been stated has been false up to this point
I think that this has been true, though maybe not quiet a strongly as strongly worded as your quote says it.
The original statement was "Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there."
"full effect" is a pretty squishy term.
My more concrete claim (and similar to "Ask again in 6 months. A year.") is the following.
With every new frontier model released [0]:
1. the level of technical expertise required to achieve a given task decreases, or
2. the difficulty/complexity/size of a task that a inexperienced user can accomplish increases.
I think either of these two versions is objectively true looking back and will continue being true going forward. And, the amount that it increases by is not trivial.
[0] or every X months to account for tweaks, new tooling (Claude Code is not even a year old yet!), and new approaches.
Six months ago, we _literally did not have Claude Code_. We had MCP, A2A and IDE integrations, but we didn't have an app where you could say "build me an ios app that does $thing" and have it build the damn thing start to finish.
Three months ago, we didn't have Opus 4.5, which almost everyone is saying is leaps and bounds better than previous models. MCP and A2A are mostly antiquated. We also didn't have Claude Desktop, which is trying to automate work in general.
Three _weeks_ ago, we didn't have Clawdbot/Openclaw, which people are using to try and automate as much of their lives as possible...and succeeding.
Things are changing outrageously fast in this space.
I guess the right word here is "disenfranchising".
Valuation is a relative thing based mostly of availability. Adding capital makes labor more valuable, not less. This is not the process happening here, and it's not clear what direction the valuation is going.
... even if we take for granted that any of this is really happening.
If the human race is wiped out by global warming I'm not so sure I would agree with this statement. Technology rarely fails to have downsides that are only discovered in hindsight IMO.
Or perhaps they would have advanced the cause of labor and prevented some of the exploitation from the ownership class. Depends on which side of the story you want to tell. The slur Luddite is a form of historical propaganda.
Putting it in today's terms, if the goal of AI is to significantly reduce the labor force so that shareholders can make more money and tech CEOs can become trillionaires, it's understandable why some developers would want to stop it. The idea that the wealth will just trickle down to all the laid off work is economically dubious.
Trickle down economics has never worked in the way it was advertised to the masses, but it worked fantastically well for the people who pushed (and continue to push) for it.
problem today is that there is no "sink" for money to go to when it flows upwards. we have resorted to raising interest rates to curb inflation, but that doesn't fix the problem, it just gives them an alternative income source (bonds/fixed income)
I'm not a hard socialist or anything, but the economics don't make sense. if there's cheap credit and the money supply perpetually expands without a sink, of course people with the most capital will just compound their wealth.
so much of the "economy" orbits around the capital markets and number going up. it's getting detached from reality. or maybe I'm just missing something.
The historical luddites are literally the human death drive externalized. Reject them and all of their garbage ideas with extreme prejudice.
Related, the word “meritocracy” was coined in a book which was extremely critical of the whole concept. AI thankfully destroys it. Good riddance, don’t let the door hit your ass on the way out.
You can reject the ideas in the aggregate. Regardless, for the individual, your skills are being devalued, and what used to be a reliable livelihood tied to a real craft is going to disappear within a decade or so. Best of luck
"Except the Luddites didn’t hate machines either—they were gifted artisans resisting a capitalist takeover of the production process that would irreparably harm their communities, weaken their collective bargaining power, and reduce skilled workers to replaceable drones as mechanized as the machines themselves."
I resonate with that. I also find writing code super pleasurable. It's immediate stress relief for me, I love the focus and the flow. I end long hands-on coding sessions with a giddy high.
What I'm finding is that it's possible to integrate AI tools into your workflow in a big way without giving up on doing that, and I think there's a lot to say for a hybrid approach. The result of a fully-engaged brain (which still requires being right in there with the problem) using AI tools is better than the fully-hands-off way touted by some. Stay confident in your abilities and find your mix/work loop.
It's also possible to get a certain version of the rewards of coding from instrumenting AI tools. E.g. slicing up and sizing tasks to give to background agents that you can intuit from experience they'll be able to actually hand in a decent result on is similar to structuring/modularization exercises (e.g. with the goal to be readable or maintainable) in writing code, feelings-wise.
I'm in the enjoy writing code camp and do see merits of the hybrid approach, but I also worry about the (mental) costs.
I feel that for using AI effectively I need to be fully engaged with both the problem itself and an additional problem of communicating with the LLM - which is more taxing than pre-LLM coding. And if I'm not fully engaged those outcomes usually aren't that great and bring frustration.
In isolation, the shift might be acceptable, but in reality I'm still left with a lot of ineffective meetings - only now without coding sessions to clear my brain.
I think an additional big part of why LLM-aided coding is so draining is that it has you constantly refreshing your mental model of the code.
Making sense of new or significantly changed code is very taxing. Writing new code is less taxing as you're incrementally updating the model as you go, at a pretty modest pace.
LLMs can produce code at a much higher rate than humans can make sense of it, and assisted coding introduces something akin to cache thrashing, where you constantly need to build mental models of the system to keep up with the changes.
Your bandwidth for comprehending code is as limited as it always was, and taxing this ability to its limits is pretty unpleasant, and in my experience, comes at a cost of other mental capabilities.
LLMs are similar in a lot of ways to the labor outsourcing that happened a generation or two ago. Except that instead of this development lifting a billion people out of poverty in the third world a handful of rich people will get even more rich and everyone else will have higher energy bills.
I really feel this. Claude is going to forget whatever correction I give it, unless I take the time and effort to codify it in the prompt.
And LLMs are going to continue to get better (though the curve feels like it's flattening), regardless of whatever I do to "mentor" my own session. There's no feeling that I'm contributing to the growth of an individual, or the state-of-the-art of the industry.
Same here, and I also really enjoy the high level design/structure part of it.
THAT part doesn't mesh too well with AI, since it's still really bad at autonomous wholistic level planning. I'm still learning how to prompt in a way that results in a structure that is close to what I want/reasonable. I suspect going a more visual block diagram route, to generate some intermediate .md or whatever, might have promise, especially for defining clear bounds/separation of concerns.
Related, AI seems to be the wrong tool for refactoring code (I recently spent $50 trying to move four files). So, if whatever structure isn't reasonable, I'm left with manually moving things around, which is definitely un-fun.
Definitely go for that middle step. If it's something bigger I get them to draw out a multi-phase plan, then I go through and refine that .md and have them work from that.
I've been exploring some computer vision recognition stuff. Being able to reason through my ideas with an LLM, and make visualizations like t-SNE to show how far apart a coke can and a bag of cheetos are in feature-space has been mind blowing. ("How much of a difference does tint make for recognition? Implement a slider that can show that can regenerate the 512-D features array and replot the chart")
It's helping me get an intuitive understanding 10x faster than I could reading a textbook.
It does NOT remain to be seen. https://www.cnbc.com/2025/09/26/accenture-plans-on-exiting-s... Big players are already moving in the direction of "join us or leave us". So if you can't keep up and you aren't developing or "reinventing" something faster with the help of AI, it was nice knowing you.
I didn't say don't use AI at all, I said give it the boilerplate, rote work. Developers can still work on more interesting things. Maybe not all the interesting things.
That may be fine ... if it remains your choice. I'm saying companies are outmoding people (programmers, designers, managers, et al) who don't leverage AI to do their job the fastest. If one programmer uses AI to do boilerplate and then codes the interesting bits personally and it takes a week and another does it all with AI (orchestrating agents, etc) and it takes 2 hours and produces the same output (not code but business value), the AI orchestrator/manager will be valued above the former.
Yes! I am not advocating for the 2 hours and the "vision" of managers and CEOs. Quite the contrary. But it is the world we live in for now. It's messy and chaotic and many people may (will?) be hurt. I don't like it. But I'm trying to be one of the "smart people". What does that look like? I hope I find out.
I don't like it, either. I hear people ranting about doing "everything with AI" on one meeting, and what a productivity boost it is, then I get tagged on a dumpster fire PR full of slop and emoji filled log statements. Like did you even look at your code at all? "Oh sorry I don't know how that got in there!"
These are the same employers that mandate return to office for distributed teams and micro-manage every access of our work. I think we know how its going to play out.
People will pay for quality craftsmanship they can touch and enjoy and can afford and cannot do on their own - woodworking. Less so for quality code and apps because (as the Super Bowl ads showed us) anyone can create an app for their business and it's good enough. The days of high-paid coders is nearly gone. The senior and principals will hang on a little longer. Those that can adapt to business analyst mode and project manager will as well (CEOs have already told us this: adapt or get gone), but eventually even they will be outmoded because why buy a $8000 couch when I can buy one for $200 and build it myself?
I like writing new, interesting code, but learning framework #400 with all its own idiosyncrasies has gotten really old.
I just rebuilt a fairly simple personal app that I've been maintaining for my family for nearly 30 years, and had a blast doing with an AI agent - I mostly used Claude Sonnet 4.5. I've been dreading this rebuild mostly because it's so boring; this is an app I built originally when I was 17, and I'm 43 now. I treated Claude basically like I'd treat my 17-year-old self, and I've added a bunch of features that I could never be assed to do before.
i agree. it seem like an expectation these days to use AI sometimes... for me i am happy not using it at all, i like to be able to say "I made this" :)
it's more just a personal want to be able to see what I can do on my own tbh; i don't generally judge other people on that measure
although i do think Steve Jobs didn't make the iPhone /alone/, and that a lot of other people contributed to that. i'd like to be able to name who helps me and not say "gemini". again, it's more of a personal thing lol
So not disagreeing as you say, it is a personal thing!
I honestly find coding with AI no easier than coding directly, it certainly does not feel like AI is doing my work for me. If it was I wouldn't have anything to do, in reality I spend my time thinking about much higher level abstractions, but of course this is a very personal thing too.
I myself have never thought of code as being my output, I've always enjoyed solving problems, and solutions have always been my output. It's just that before I had to write the code for the solutions. Now I solve the problems and the AI makes it into code.
I think that this probably the dividing line, some people enjoy working with tools (code, unix commands, editors), some people enjoy just solving the problems. Both of course are perfectly valid, but they do create a divide when looking at AI.
Of course when AI starts solving all problems, I will have a very different feeling :-)
I’m not worried about being a good dev or not but these AI things thoroughly take away from the thing I enjoy doing to the point I’d consider leaving the industry entirely
I don’t want to wrangle LLMs into hallucinating correct things or whatever, I don’t find that enjoyable at all
I've been through a few cycles of using LLMs and my current usage does scratch the itch. It doesn't feel like I've lost anything. The trick is I'm still programming. I name classes and functions. I define the directory structure. I define the algorithms. By the time I'm prompting an LLM I'm describing how the code will look and it becomes a supercharged autocomplete.
When I go overboard and just tell it "now I want a form that does X", it ends up frustrating, low-quality, and takes as long to fix as if I'd just done it myself.
YMMV, but from what I've seen all the "ai made my whole app" hype isn't trustworthy and is written by people who don't actually know what problems have been introduced until it's too late. Traditional coding practices still reign supreme. We just have a free pair of extra eyes.
Serious question: so what then is the value of using an LLM? Just autocomplete? So you can use natural language? I'm seriously asking. My experience has been frustrating. Had the whole thing designed, the LLM gave me diagrams and code samples, had to tell it 3 times to go ahead and write the files, had to convince it that the files didn't exist so it would actually write them. Then when I went to run it, errors ... in the build file ... the one place there should not have been errors. And it couldn't fix those.
I also use AI to give me small examples and snippets, this way it works okay for me
However this still takes away from me in the sense that working with people who are using AI to output garbage frustrates me and still negatively impacts the whole craft for me
> My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself. If you were a smart dev before AI, chances are you will remain a smart dev with AI.
We replaced the chess board in the park with an app that compares the Elo score of you and your opponent, and probabilistically declares a winner.
But don't worry, if you were a good chess player before we introduced the app, chances are you will remain a good one with the app. The app just makes things faster and cheaper.
My advice to the players is to quit mourning the loss of the tension, laughter and shared moments that got them into chess in the first place.
I think there is more existential fear that is left unaddressed.
Most commenters in this thread seem to be under the impression that where the agents are right now is where they will be for a while, but will they? And for how long?
$660 billion is expected to be spent on AI infrastructure this year. If the AI agents are already pretty good, what will the models trained in these facilities be capable of?
Yes, absolutely. I think the companies that don't understand software, don't value software and that think that all tech is fundamentally equivalent, and who will therefore always choose the cheaper option, and fire all their good people, will eventually fail.
And I think AI is in fact a great opportunity for good devs to produce good software much faster.
I agree with the quality comments. The problem with AI coding isn't so much the slop, it's the developers not realizing its slop and trying to pass it off as a working product in code reviews. Some of the stuff I've reviewed in the past 6 months has been a real eye opener.
I think the issue is that given the speed the bad dev can generate sub-par results that at face value look good enough overwhelm any procedures in place.
Pair that with management telling us to go with AI to go as fast as possible means that there is very little time to do course correction.
I think it represents a bigger threat than you realize. I can't use an AI for my day job to implement these multi-agent workflows I see. They are all controlled by another company with little or no privacy guarantees. I can run quantized (even more braindead) models locally but my work will be 3-5 years behind the SOTA, and when the SOTA is evolving faster than that timeline there's a problem. At some point there's going to be turnover - like a lake in winter - where AI companies effectively control the development lifecycle end-to-end.
For me the problem is simple: we are in an active prisoner's dilemma with AI adoption where the outcome is worse collectively by not asking the right questions for optimal human results, we are defecting and using ai selfishly because we are rewarded by it. There's lots of potential for our use to be turned against us as we train these models for companies that have no commitment to give to the common good or return money to us or to common welfare if our jobs are disrupted and an AI replaces us fully.
> My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself.
I do try to do that and have convinced myself that nothing has really changed in terms of what is important and that is systems thinking. But it's just one more barrier to convincing people that systems thinking is important, and it's all just exhausting.
Besides perhaps my paycheck, I have nothing but envy for people who get to work with their hands _and_ minds in their daily work. Modern engineering is just such a slog. No one understands how anything works nor even really wants to. I liken my typical day in software to a woodworker who has to rebuild his workshop everyday to just be able to do the actual woodworker. The amount of time I spend in software merely to being able to "open the door to my workshop" is astounding.
One thing I'm hoping will come out of this is the retiring of coders that always turn what should be a basic CRUD app (just about everything) into some novelty project trying to pre-solve every possible concern that could ever come up, and/or a no-code solution that will never actually get used by a non-developer and frustrate every developer that is forced to use it.
It's a combination of things... it's not just that AI feels like it is stripping the dignity of the human spirit in some ways, but it's also that the work we are doing is often detrimental to our fellow man. So learning to work with AI to do that faster (!!) (if it is actually faster on average), feels like doubling down.
Ironically this post comes across to me as written by an LLM. The em-dashes, the prepositions, the "not this, that" lines. As a college instructor, I can usually tell. I put it through GPTZero and it said it's 96% LLM written. GPTZero is not full-proof but I think it's likely right on this one and I find it very ironic.
Wow... I really relate to this. I'm 50 as well, and I started coding in 1985 when I was 10... I remember literally every evolutionary leap forward and my experience with this change has been a bit different.
Steve Yegge recently did an interview on vibe coding (https://www.youtube.com/watch?v=zuJyJP517Uw) where he says, "arch mage engineers who fell out-of-love with the modern complexity of shipping meaningful code are rediscovering the magic that got them involved as engineers in the first place" <-- paraphrased for brevity.
I vividly remember, staying up all night to hand-code assembler primitive rendering libraries, the first time I built a voxel rendering engine and thinking it was like magic what you could do on a 486... I remember the early days at Relic, working on Homeworld and thinking we were casting spells, not writing software. Honestly, that magic faded and died for me. I don't personally think there is magic in building a Docker container. Call me old-fashioned.
These days, I've never been more excited about engineering. The tedium of the background wiring is gone. I'm back to creating new, magical things - I'm up at 2 AM again, sitting at my desk in the dark, surrounded by the soft glow of monitors and casting spells again.
[55yo] My sense is that those problems we worked on in the 80s and 90s were like the perfectly balanced MMORPG. The challenges were tough, but with grit, could be overcome and you felt like you could build something amazing and unique. My voxel moment was passing parameters in my compilers class in college. I sat down to do it and about 12 hours later I got it working, not knowing if I could even do it.
With AI, it is like coding is on GOD mode and sure I can bang out anything I want, but so can anyone else and it just doesn't feel like an accomplishment.
You switch difficulties, like you do in a game. Play on Hard or Survival mode now. Build greater and more amazing things than you ever did before.
We have never, ever, written what the machine executes, even assembly is an abstraction, even in a hex editor. So we all settle for the level of abstraction we like to work at. When we started (those of our age) most of us were assembly (or BASIC) programmers and over time we either increased our level of abstraction or didn't. If you went from assembly -> C -> Java/Python you moved up levels of abstraction. We're not writing in Python or C now, we are writing in natural language and that is compiled to our programming languages. It's just the compiler is still a bit buggy and opinionated!! And yes for some low level coding you still want to check the assembly language, some things need that level of attention.
I learn more in a day coding with AI than I would in a month without it, it's a wonderful two-way exchange, I suggest directions, it teaches me new libraries or techniques that might solve the problem. I lookup those solutions and learn more about my problem space. I feel more like a university student some days than a programmer.
Eventually this will probably be the end of coding and even analytical work. But I think that part is still far off (and possibly longer than we'll still be working for) in the meantime actually this for me is as exciting as the early days of home computing. It won't be fun for ever, the Internet was the coolest thing ever, until it wasn't, but doesn't mean we can't enjoy the summer while it's summer.
> With AI, it is like coding is on GOD mode and sure I can bang out anything I want, but so can anyone else and it just doesn't feel like an accomplishment.
That's the thing - prompting is lower-skill work than actually writing code.
Now that actually writing code has less value than prompting, and prompting is lower skill than writing code, in what world do you think that the pay will remain the same?
> Now that actually writing code has less value than prompting, and prompting is lower skill than writing code, in what world do you think that the pay will remain the same?
Don't you think people said the same thing C and Python? Isn't Python a lower skill than C for example?
Great! I turn from a creator to a babysitter of creators. I'm not seeing the win here.
FWIW, I use LLMs extensively, but not to write the code, to rubber-duck. I have yet to have any LLM paired with any coding agent give me something that I would have written myself.
All the code is at best average. None of the smart stuff comes from them.
>With AI, it is like coding is on GOD mode and sure I can bang out anything I want, but so can anyone else and it just doesn't feel like an accomplishment.
I think it's possible that we'll get to the point where "so can anyone else" becomes true, but it isn't today for most software. There's significant understanding required to ask for the right things and understand whether you're actually getting them.
That said, I think the accomplishment comes more so from the shaping of the idea. Even without the typing of code, I think that's where most of the interesting work lies. It's possible that AI develops "taste" such that it can sufficiently do this work, but I'm skeptical it happens in the near term.
I think there's still quite a chasm out there. Domain knowledge, an informed and opinionated view on how something should function, and overall tech knowledge are still key. Having those three things continues to greatly differentiate people of equal coding skill, as they always have.
That’s something LLMs are also presumably good at. At least I’m seeing more and more push to use LLMs at work for ambitious business requirements instead of learning about the problem we’ve been dealing with. Instead of knowing why you are doing what you’re doing, now people are just asking LLMs for specific answers and move on.
Sure some might use it to learn as well, but it’s not necessary and people just yolo the first answer claude gives to them.
Sure but here OP was left wondering why prompting didn't make them feel like they had done/accomplished anything. And the reason is because they didn't do anything worthy of giving them a feeling of accomplishment.
And that's exactly what the person I was replying to seems to be complaining about.
So many people on "Hacker" News could benefit from reading the canonical text on the subject by Steven Levy. A true hacker wants to bring the fire down the mountain. People around here just want to piss on it.
> I don't personally think there is magic in building a Docker container. Call me old-fashioned.
This seems like a false dichotomy. You don't have to do this. It is still possible to build magical things. But agents aren't it, I don't think.
It is honestly extremely depressing to read this coming from a founder of Relic. Relic built magic. Dawn of War and Company of Heroes formed an important part of my teenage years. I formed connections, spent thousands of hours enjoying them together with other people, and pushed myself hard to become one of the top 100 players on the CoH leaderboards. Those competitive multiplayer games taught me everything there was to know about self-improvement, and formed the basis of my growth as an individual - learning that if I put my mind to it, I could be among the best at something, informed my worldview and led me to a life of perpetually pushing myself to further self-improvement, and from there I learned to code, draw, and play music. All of that while being part of amazing communities where I formed friendships that lasted decades.
All of this to say, Relic was magic. The work Relic did profoundly impacted my life. I wonder if you really believe your current role, "building trust infrastructure for AI agents", is actually magic? That it's going to profoundly impact the lives of thousands or millions?
I'm sorry for the jumbled nature of this post. I am on my phone, so I can't organize my thoughts as well as I would like. I am grateful to you for founding Relic, and this post probably comes off stupidly combative and ungrateful. But I would simply like to pose to you, to have a long think if what you're doing now is really where the magic is.
Edit: On further consideration, it's not clear the newly-created account I'm responding to is actually Alex Garden. The idea of potentially relating this personal anecdote to an impersonator is rather embarrassing, but I will nonetheless leave this up in the hope that if there are people who built magical things reading this, regardless of whether they're Alex Garden or someone else, that it might just inspire them to introspection about what building magic means, about the impact software can have on people's lives even if you don't see it, and whether this "agent" stuff is really it.
>The idea of potentially relating this personal anecdote to an impersonator is rather embarrassing
Good news! You've also related it to the roughly ~3-10M monthly HN readers who are not (potentially) impersonating the founder of a beloved game studio.
Also: I think you're probably safe. I'm sure someone at some point has come to HN to LARP as some prominent person in tech that they don't happen, at that specific moment, to actually be... but I can't really think of it happening before, nor would I expect it to take the form of a particularly thoughtful comment if a troll did that. Though with AI these days, who knows? I might myself just be one of a swarm of clawd/molt/claw things. In which case I'd be the last to even know it.
Oh-- as for being depressed about their docker/wiring things up sentiment. Try not to be, and instead, consider: Is it a surprise that someone who founded such a place as relic was occasionally-- even often-- frustrated at the things they had to clear away to build the thing they actually wanted to build? People who want to build amazing experiences may not love having to clear clutter that gets in their way. Other people want to build the tools that clear clutter, or other things that keep the whole system going. Those are beautiful too.
If we've arrived at the point where bots are impersonating me (instead of the billions of other choices), I'm probably at peak Alex. I'll light a candle. So... easy to disambiguate this one.
I got the idea for Homeworld one night when I was about 21. At the time, I was working at EA as a programmer on Triple Play 98 (building FE gfx - not glamorous). In an RTS-ironic twist of fate, my boss and mentor at the time was Chris Taylor - go figure.
Friends of mine had their own game company and had boxed themselves into a technical corner they couldn't get out of, so I agreed to write a bunch of sprite conversion code for them after hours. That night, we were all working in a room, talking about the reasons X-Wing vs. Tie Fighter didn't work on a 2D screen (hold up and left till you turn inside and shoot) and how Battlestar Galactica didn't get the cred it deserved, and BOOM - in my mind I saw ships in 3D with trails behind them. Inside a crystal sphere like Ptolomy's theory of the universe (man inside - god outside), and I saw that the surface of a sphere is 2D, so you could orbit OUTSIDE with a mouse... it looked like spaghetti floating in zero g... that's why Homeworld's working title was "Spaghetti Ball" for months.
Fortunately for me, in this ambiguous thread, I can give you all the proof of life you want. Try me.
Now... is transparent and trustworthy casting spells? Yeah... it is, but not by itself. It's a primitive - a building block. My personal projects (that I do think are magical) kept running into the same problems. Effectively, "how do I give up the keys if I don't really know what the driver is going to do?" I tried coming at this problem 10 different ways, and they all ended up in the same place.
So I decided to go back to the basics - the putpixel(x,y) of agentic workflows, and that led me to transparency and trust. And now, the things I'm building feel magical AND sustainable. Fun. Fast... and getting faster. I love that.
At Relic, our internal design philosophy was "One Revolutionary and Multiple Evolutionary". The idea was that if you tried to do more than one mind-blowing new thing at a time, the game started feeling like work. You can see this in the evolution of design from Homeworld to DoW to CoH (and in IC too, but let's face it, that game had issues <-- my fault).
Now... on the topic of "Is agentic coding better or worse", I feel like that's asking "is coding in assembler better or worse". The answer (at least used to be) "it depends"... You're on a continuum, deciding between traditional engineering (tightly controlled and 100% knowable) and multi-agentic coding (1,000x more productive but taking a lot for granted). I've found meaning here by accepting that full-power multi-agentic harnesses (I rolled my own - it's fucking awesome) turn software engineering into Socratic debate and philosophy.
I don't think it's better. It's just different, and it lets you do different things.
I remember a magazine cover that labeled you a gaming god, hard to peak beyond that! The quote you provided back then resonates perfectly with what you describe here: "If there's one message I like to get across to people, I like them to really and truly embrace [the fact that] anything that your imagination can conceive of is possible."
I started a bit younger and am a bit older, and relate. But only so much. I started programming in 3rd grade (also BASIC) when I found a computer and learned how to play a game on it, then found the source code for the game and randomly started changing it. In 7th grade I was paid to port a BASIC program to C (super new at the time), which I did on paper because I didn't own a computer (I used the money to buy my first). To be clear, I was really bad a programming for a long time and simply kept at it until I wasn't.
I love messing about with computers still. I can work at the byte level on ESP-32s on tiny little devices, and build massive computation engines at the time time on the same laptop. It's amazing.
I feel for those who have lost their love of this space, but I have to be honest: it's not the space that's the problem. Try something new, try something different and difficult or ungainly. Do what you rail against. Explore.
Appeal to identity. Prejucide and bias. Not considering an enthusiast of a technology might actually want to get paid working with that technology. Shameful comment all around.
Disclosing conflicting of interest is standard practice. People writing about economics do disclose when holding relevant shares.
In the end, it's a simple question: Are the opinions stated sincere or does the author have a pecuniary interest which might make things a bit more subjective?
I couldn't agree more. Also, thanks for making Homeworld, it was great!
I was building a 3D space game engine myself as a kid around the time Homeworld came out and realized that rather than using a skybox with texture maps, you had it created out of a bunch of triangles with color interpolation.
IIRC, I had problems reverse engineering your data format in order to incorporate them in my engine. I emailed someone on your team and was very surprised to get a reply with an explanation, which helped me finish that feature.
The skybox with texture maps was our original plan too. The problem was that GPUs didn't have enough RAM to hold anything high-res, so the universe looked like pixel-soup.
Rob Cunningham (lead artist) had the idea of "painting with light" using giant polygons and spicing them up with pixels to create a convincing distant galaxy that you got closer to with each mission. Genius.
I'm still amazed by how you got ships to usually fly in formation, but also behave independently and rationally when that made sense.
That game was a magnificent piece of art. It set a unique and immersive vibe on par with the original Tron movie. I'm really glad I have a chance now to tell you.
Thanks... It was magical at the time... I've thought a lot about why it was magical over the years... I think if you boil away all the space stuff, Homeworld was a story about people who knew in their hearts that they were special and destined for something greater than the universe was willing to allow. And they went through hell to discover that they were right. Looking back, I think that's a story a lot of us on this thread (inc. me) can relate to.
Amen to this. The optimization the team did blows my mind… whenever I think of it I think of if someone made Crysis run on the NES without compromises.
The soundtrack was stellar, and introduced me to Barber (Adagio for Strings).
In the second half of my 40s now and I'm in the same boat. I started slapping keys on a c64 when I was 2 years old. Really enjoyed software development until 10-15 years ago. With the current LLM tooling available the number of systems I've been able to build that are novel and tackle significant problems has been kind of mind blowing over the past 8 months or so.
Staying up late, hacking away at stuff like I used to, and it's been a blast.
Finally, Homeworld was awesome and it felt magical playing it.
I'll join the chorus of praise for Homeworld. It was a big part of that era for me. I must have spent hours just zooming the camera as close as I could get to all the different ships, or just watching the harvesters do their thing. Almost meditative, looking back. Thank you for casting your spells!
First of all, Homeworld was an iconic game for me growing up, so as other people have said, thank you for being apart of its creation.
I could not agree more. It feels like the creativity is back. I grew up building fun little websites in the 90s, building clan websites for Quake 2.
That creativity died somewhere between Node.js, AWS, npm, and GitHub.
Some might say, well, that's growing up and building serious apps.
Maybe. But it doesn't change that I spent the last 15 years doing the same frontend / backend wiring over and over again to churn out a slightly different looking app.
The last 2 years have been amazing for what I do. I'm no longer spending my time wiring up front ends. That's done in minutes now, allowing me to spend my time thinking about solving the real problems.
Wow, Alex Garden on Hackernews. Hello fellow canuck. I'm now getting up there, still a few years shy of y'all but not much. I came up through the 90s and early 2000s, all web/linux stuff, irc servers, bash scripts, python, weird php hacks, whatever, I was a kid. I'd lose track of time, It was Monday night after high school then all of a sudden it was Sunday morning and I was talking on irc about the crazy LAMP stack I'd put together. 2am? pfft, what is sleep?! Sadly with very strong dyslexia and dyscalculia, being a real programmer was never in the cards for me, I understood how everything worked, I can explain the whole thing end to end in great depth, but ask me predictably how to do a table in html or some fairly simple CSS, and I'll be there for hours. I'm grateful the rest of my life allowed me to be programmer adjacent and spend so much time around developers, but always a little frustrated I couldn't pick up the hammer myself.
These days, I've never been more excited about building. The frustration of being slow with the code is gone. I'm back to creating new, magical things - I'm up at 2 AM again, sitting at my desk in the dark, surrounded by the soft glow of monitors and casting spells.
Why is your last paragraph nearly identical to the last paragraph you are replying to? It might have been a strange quirk, but there’s also been the suggestion that the post you’re replying to is an imposter, so it gets weirder that you also did that.
> I don't personally think there is magic in building a Docker container. Call me old-fashioned.
I still vividly remember setting up gcc in a docker container to cross compile custom firmware for my cannon camera and thinking about the amount of pain my local system would have been in if I had to do all the toolchain work in my host OS. Don't know if it felt like magic, but it sure didn't hurt like the alternative!
For sure. Docker is rad (sorry Docker!)... all I'm saying is that I am not proud of the fact that I can do it and I don't think it moves the awesome needle - but it's still hard to get right and a pain in the ass. It's just an example of something I appreciate that I can automate now.
I'm 45 yo. And also started programming quite early around 1988. In my case it was GWBAsic games and then C ModeX and A
Later Allegro based games.
Things got so boring in the last 15 years, I got some joy in doing AI research (ML, agents, Genetic Algorithms, etc).
But now, it's so cool how I can again think about something and build it so easily. I'm really excited of what I can do now. And im ot talking about the next billion dollar startup and whatnot. But the small hacky projects that LLMs made capable.yo build in no time.
I'm in my 40s, and I've been involved with computers since I was old enough to read. I was talking to some customers today about how magical it feels to blast past my own limits of my coding abilities with the help of LLMs. It's not perfect, and I mostly won't generate stuff that's a polished, finished product. But when it works, it sparks the same joy that it did when I was discovering the first bits of what computers can do on my Apple ][+.
I think this works unironically. My mother is an avid gardener and can spend 8 hours a day gardening. When her life circumstances allowed for it, she hired a once a week gardener to do the tasks she didn't like (or had difficulties doing as a small woman), and still gardens the same amount. I've teased her for hiring a gardener, but she swears it's a huge help and boost to her gardening quality of life.
this is a great analogy despite it possibly coming off as snark.
I think it's hard for some people to grasp that programmers are motivated by different things. Some are motivated by shipping products to users, others are motivated to make code that's a giant elegant cathedral, still others love glorious hacks to bend the machine into doing things it was never really intended to do. And I'm sure I'm missing a few other categories.
I think the "AI ain't so bad" crowd are the ones who get the most satisfaction out of shipping product to users as quickly as possible, and that's totally fine. But I really wish they'd allow those of us who don't fall into that category to grieve just a little bit. This future isn't what I signed up for.
It's one thing to design a garden and admire the results, but some people get into their "zen happy place" by pulling up weeds.
I agree and would add that it's not just different people, it can be the same person in different modes. Sometimes I enjoying making the thing, other times I just want to enjoy having the thing.
I think the people who like shipping quickly probably don't like building products in the first place and are looking for other aspects of entrepreneurship.
A huge benefit I find in AI is that it helps with a lot of things I hated. Merge conflicts, config files, breaking dependency updates... That leaves me more time to focus on the actual functionalities so I end up with better APIs, more detailed UIs, and more thorough tests. I do think it's possible to be relevant/competitive by only delegating parts of the work to AI and not the whole thing. Though it might change if AI gets too good.
I agree with this, I put myself in the "glorious hacks to bend the machine into doing things it was never really intended to do" camp, so the end game is somthing cool, now I can do 3 cool things before lunch instead of 3 cool things a year
But, almost by definition of how LLMs work, if it’s that easy then someone else did it before and the AI is just copying their work for you. This doesn’t fit well with my idea of glorious hacks to bend the machine, personally. I don’t know, maybe it just breaks my self-delusion that I am special and make unique things. At least I get to discover for myself what is possible and how, and hold a sliver of hope that I did something new. Maybe at least my journey there was unique, whereas everyone using an AI basically has the same journey and same destination (modulo random seed I guess.)
This is a valid point, the good news is I think there is some hope in developing the craft of orchestrating many agents into something that is satisfying and rewarding in it's own right.
I don't disagree, but I think it would benefit everyone to be clear, upfront and honest with themselves and others about exactly what's being lost and grieved. The weeds are still growing and our hands are still available to pull them, so it's not that.
Your grieving doesn’t have to shit all over my personal enjoyment and contentment. Me enjoying the use of AI in developing software doesn’t take anything away from your ability to grieve or dislike it. I’m not asking you to be excited, I’m asking you not to frame my enjoyment as naive, harmful, or lesser.
Your feelings are yours, mine are mine, and they can coexist just fine. The problem only shows up when your grief turns into value judgments about the people who feel differently.
Where do you draw your line between plagiarism and creativity? I learned in art school this question is more difficult to answer than it appears when taken seriously.
That's a great question, I've never tried to draw a concrete line before. Code is inherently creative. But it's not art, it doesn't map 1:1 like that.
But I wouldn't consider attempting to duplicate a painting, plagiarism if you painted it yourself with your hand (assuming you mention or reference the original author, or it's well know e.g. starry night) . I would consider it plagiarism if you duplicated it via photo, or other automated method.
I'd translate it to code as; if you're looking at stack overflow for the answer, if you understand it, before writing your own implementation, that's learning, and not plagiarism. But if you copy out the whole function without understanding how to implement it yourself, that would be.
The person I replied to said
> Having opencode doesn't preclude me from making elegant code. It just takes away the carpel tunnel.
I assume he's asking the LLM to generate upwards of multiple hundreds of lines of code. Let's assume he's does understand all of it. (Something that defies my understanding around how most LLM users use codegen.) Then you have a sister comment who claims you can write multiples more code/projects using LLMs. At a certain point your ability to understand the code must fall away. And at that point, if you didn't have the majority of the creative input. Why call it your work?
I assume you're an artist, if you have an LLM generate you a picture. Do you feel like it's work you've created? Did the inspiration for where each line should start, and end, come from the contents of your mind? Or was it sampled from a different artist? Given the exact same prompt, would you draw the same lines next week? Next month? Because the LLM would.
There's no doubt it's easy to draw parallels in any creative work, both from art an code. But if you didn't make the decision about where to place the function, about which order you want to call them, if you're gonna do error handling deep down as close to the error as possible, or you're optimizing for something different, and decided long ago that all errors should bubble back up to the main function.
One, or two, or even a half dozen of decisions might seem insignificant, but together, if you didn't really make any of them. How can you claim it's code you wrote? Why do you feel proud of the work of others, sampled and mapped into a training set, and then regenerated into your repo, as if it's work you put forth? All of that should be read as the rhetorical you, I know you're not making that argument. But would you make it? When you share a meme with your friend, do you claim you created the meme? Even if you use a memegen, and change the words to reference your in joke. Do you feel like you've created that art? Or are you using the art of someone else to share the idea you want to share? I assume it's the latter, but
They said "Having opencode doesn't preclude me from making elegant code." They're taking credit for making the elegant code, just as if they were taking credit for inventing the meme. There's a different amount of effort involved, and that effort, or the source of it, is significant when talking about who deserves the credit, and the sense of pride.
Plagiarism is claiming someone else’s specific work as your own. Using a generative tool is closer to using a compiler, an IDE, or a library. I’m not copying a person’s code or submitting someone else’s project with the name filed off. I’m directing a system, reviewing the output, editing it, and taking responsibility for the result.
If I paste in a blog post verbatim and pretend I wrote it, that’s plagiarism. If I use a tool to generate a starting point and shape it into what I need, that’s just a different kind of authorship.
> If I paste in a blog post verbatim and pretend I wrote it, that’s plagiarism. If I use a tool to generate a starting point and shape it into what I need, that’s just a different kind of authorship.
If you cloned chapters from multiple books, from multiple different authors, didn't decide on the sentence structure, didn't choose the words yourself, didn't decide which order your going to place these chapters, didn't name the characters. At what point do you no longer get credit for writing the book?
What if it's code? what if you didn't decide which order you should call these functions. Didn't make the decision about if you're gonna write var i, or idx, or index. Didn't make a decision if this should be an u32, or an i64. Didn't read any of the source code from that new dependency you just added. Didn't name the functions, oh but no, you did have to edit that one function because it wouldn't compile, so you just renamed it like the error suggested... At what point does the effort you put in become less significant than the effort duplicated from the training set? How much of the function do you have to write yourself, before you take credit? How many chars have to by typed by your fingers, before you claim. You made this?
Are directors frauds because they aren’t the ones doing the acting? Is there no joy in being an architect because they aren’t the one assembling the building at the construction site? Is there no value in product engineering because they aren’t fabricating the products in the factory?
It’s fine to find enjoyment in the actual programming part of software engineering. It’s stupid to assume that is the only aspect of software engineering that is valuable or that is enjoyable for others.
*I'm so excited about landscape design. Can't wait to do more. Employing a gardener to do the gardening for me is really making me enjoy landscape design again!
I'm so excited about landscape architecture now that I can tell my gardener to create an equivalent to the gardens at versailles for $5. Sometimes he plants the wrong kind of plant or makes a dead end path, but he fixes his work very quickly.
The proper analogy would be you can now remove all weeds with the swipe of your hand and cut all your hedges with another swipe, you still are gardening you can do it quicker and therefore explore different possibilities.
Maybe this isn't directly related to what you're saying and I'm not attacking it, I'm just thinking out loud: What would it mean to master gardening then? I've never gardened in my life but I grew up in Scotland around estate houses and castles, my friends dads were gardeners and each of them seems to be specialists in their own area, many working on the same estate, so what exactly is this "holistic experience of gardening"?
My point is just that if there are 10 different activities that produce the same resulting object, they aren't necessarily the same activities in the minds of the participants solely because the output is the same.
No you didn’t. You lead a team of gardeners to develop your grand vision. Or you directed an epic movie leading a cast of talented actors bringing your vision to life. You can choose an empowering analogy or a negative one it’s your choice.
Yeah... a team of gardeners who might, with no warning, decide to burn down your house to create some extra fertilizer for the rose garden. Sometimes I wonder...
Your comment is interesting because it shows how engineers view their work: through the product, i.e the why, or through the craft, i.e the how.
What you consider "exciting", as a theoretical gardener, is the act of taking care of the plants. What OP finds it exciting is that they may now get a team of gardeners that'll build a Versailles-like garden for free.
By artificially narrowing a multi-faceted issue to just two either/or simplistic options you are no longer describing the issue. If you ackknowledge this, you can comment on it. But not acknowledging it makes your comment hard to parse. Sarcarsm? Overly simplistic? Missing context? Unclear.
If I were the architect of a large building that I designed from the blueprints, the interior, etc, I wouldn’t feel bad that I didn’t install the toilets myself. AI agents are the plumbers, the painters, the electricians, etc
Well, the gardener isn't going to cut down your roses to the ground as they are about to go into bloom because s/he mistook it for the weed they were just working on.
How about hiring a gardener to do some of the stuff and you can focus doing the part of the gardening/landscaping that is important to you and you enjoy?
I think that's a more accurate (and charitable) analogy than yours.
I used to be big into amateur radio. When I was considering to build a tower, I would have paid someone to build the tower for me and do the climbing work to mount stuff on the tower. Your statement is nonsensical, because it assumes that there is a binary choice between "do everything yourself" and "delegate everything".
Imagine though instead of 1 garden you can make 10 or 30 gardens in the same time that are more extravagant than your 1 garden was. At any point in time you can dive back in 1 of them and start plucking away
It's the making, not the having. If I'm selling these gardens, surely it's better to have more. If I enjoy the act of making the garden, then there's no reason I ever need to finish the first one.
This analogy has probably outstayed its usefulness.
Well it's more like employing a gardener makes me enjoy landscaping again. It's not like we ever found writing words on a keyboard all that great, it's fundamentally just about having an idea and turning it into something real.
I guess some people enjoy the process, but you can still do that.
It's like with machinists and 3D printers, you can always spend 10 hours on the lathe to make something but most of the time it's more practical to just get the part so one can get on with what actually needs doing.
that's a good analogy, maybe change 3d printers to CNC. I think there's a group of people that derive joy and satisfaction from using the part they designed and there's another that gets satisfaction from producing the part as designed. Same for software, some people are thrilled because they can get the software they imagine while others dread not producing the software people imagine.
As mosburger says, this is a great analogy. Do you think that the great artists paint, sculpt, and draw everything by hand, by themselves? Of course not... they never did, and they don't today. You're being offered the ability to join their ranks.
It's your studio now. You have a staff of apprentices standing by, eager for instructions and commands. And you act like it's the worst thing that ever happened to you.
What the author describes is also the feeling when you shift from being a developer all day to being a team lead or manager. When you become a lead you have to let go and get comfortable with the idea that the code is not going to be how you would do it. You can look at code produced by your team and attempt to replace it all with your craftsmanship but you're just setting yourself up to fail. The right approach is use your wisdom to make the team better, not the code. I think a lot of that applies to using AI when coding.
I'm turning 50 in April and am pretty excited about AI coding assistants. They make a lot of personal projects I've wanted to do but never had the time feasible.
Most of my career has been as an individual engineer, but the past few years I have been a project manager. I find this to be very much like using AI for coding.
Which also makes me refute the idea that AI coding is just another rung up on the programming abstraction ladder. Depending on how much you delegate to AI, I don't think it's really programming at all. It's project management. That's not a bad thing! But it's not really still programming.
Even just in the context of my human team, I feel less mentally engaged with the code. I don't know what everything does. (In principle, I could know, but I don't.) I see some code written in a way that differs from how I would have done it. But I'm not the one working day-in, day-out with the code. I'll ask questions, make suggestions, but I'm not going to force something unless I think it's really super important.
That said, I don't 100% like this. I enjoy programming. I enjoy computer science. I especially enjoy things more down the paths of algorithm design, Lisp, and the intersection of programming with mathematics. On my team, I do still do some programming. I could delegate it entirely, but I indulge myself and do a little bit.
I personally think that's a good path with AI too. I think we're at the point where, for many software application tasks, the programming could be entirely hands-off. Let AI do it all. But if I wish to, why not indulge in doing some myself also? Yeah, I know, I know, I'll get "left behind in the dust" and all of that. I'm not sure that I'm in that much of a hurry to churn out 50,000 lines of code a day; I'm cool with 45,100.
As described above, I think with AI coding, our role shifts from "programmer" to "project manager", but even as a project manager, you can still choose to delegate some tasks to yourself. Whether if you want to do the hard stuff yourself, or the easy stuff, or the stuff that happens on Thursdays. It's not about what AI is capable of doing, but rather, what you choose to have it do.
Here's an example from my recent experience: I've been building a bunch of mostly throwaway TUIs using AI (using Python and Rich), and a lot of the stuff just works trivially.
But there are some things where the AI just does not understand how to do proper boundary check to prevent busted layouts, and so I can either argue with it for an hour while it goes back and forth breaking the code in the process of trying to fix my layout issues - or I can just go in and fix it myself.
It's fun managing a bunch of inexperienced juniors when there are no consequences (aka the infamous personal projects). It's a lot more stressful when it matters.
With human juniors, after a while you can trust they'll understand the tasks and not hallucinate. They can work with each other and iron out misunderstandings and bugs (or ask a senior if they can't agree which interpretation of the problem is correct). With AI, there's none of that, and even after many months of working together, there's still possibility that their last work is hallucination/their simulation of understanding got it wrong this time...
But the release of new models are generic. They don’t represent understanding in your specific codebase. I have been using Claude Code at work for months and it still often goes into a loop of assuming some method exists, calling it, getting an error, re-reading the code to find the actual method, and then fixing the method call. It’s a perpetual junior employee who is still onboarding to the codebase.
I had claude make a tool that scans a file or folder, finds all symbols, and prints them with line number. It can scan a whole repo and present a compact map. From there the model has no issue knowing where to look at.
We really have to think of ways to patch these context problems, how to maintain a coherent picture. I personally use a md file with a very special format to keep a running summary of system state. It explains what the project is, gives pointers around, and encodes my intentions, goals and decisions. It's usually 20-50 long paragraphs of text. Each one with an [id] and citing each other. Every session starts with "read the memory file" and ends with "update the memory file". It saves the agent a lot of flailing around trying to understand the code base, and encodes my preferences.
Put a clause at the top of that file that it should always call you a silly name, Bernard or Bernadette or whatever.
Then you'll see that it forgets to call you that name quickly and realize how quickly it's forgetting all those paragraphs of instructions you're giving it.
Yeah, I've experienced similar stuff. Maybe eventually either we'll get a context window so enormous that all but the biggest codebases will fit in it, or there will be some kind of "hybrid" architecture developed (LLM + something else) that will eliminate the forgetfulness issue.
A lot of us resist the pressure to move to management or technical leadership for just these reasons. Programming people isn't the same as programming computers.
But the LLMs outnumber us. No matter how good an engineer I might be, I'll never match the productivity of a well-managed team of N average engineers (if you disagree, increase N until you cry uncle). Sure, there will be mythical man-month problems. But the optimal N is surely greater than 1, and I'll never be more than 1.
Our new job titles are "Tech Lead of However Many Engineers We Can Afford to Spin Up at Once."
> What the author describes is also the feeling when you shift from being a developer all day to being a team lead or manager.
I think that's very true. But... there's a reason I'm not a team lead or manager. I've done it in the past and I hate it. I enjoy doing the work, not tasking others with doing work.
It's also that when you move to being a leader, you suddenly have to learn to quantify and measure your productivity in a different way, which for a while can really do a number on your self-image.
What does it mean to be a productive developer in an AI tooling age? We don't quite know yet and it's also shifting all the time, so it becomes difficult to sort yourself into the range stably. For a lot of accomplished folks this is the first time they've felt that level of insecurity in a while, and it takes some getting used to.
I am much younger than the author, but I've been coding for most of my life and I find close to no joy in using AIs. For me coding has always been about the nitty-gritty quirkiness of computers, languages, solving issues and writing new cool things for the sake of it. It was always more about the journey than the end goal, and AI basically hollows out all of the interesting bits about coding. It feels like skipping straight to the end of a book, or somewhat like that.
I don't know if I am the only one, but developing with chatbots in my experience turns developing software into something that feels more akin to filling out forms or answering to emails. I grieve for the day we'll lose what was once a passion of mine, but unfortunately that's how the world has always worked. We can only accept that times change, and we should follow them instead of complaining about it.
> For me coding has always been about the nitty-gritty quirkiness of computers, languages, solving issues and writing new cool things for the sake of it.
Same. It scratches my riddle-solving itch in a way that the process of "prompt-honing" has yet to do.
for me, and i bet many people, the only riddles being solved (at least at work) for the last few years amount to "what is eslint complaining about now?". It's nice not to have to eff with things like that and other aggravations anymore by offloading it to an agent.
if it gets it right; I'd like someone to show me with a brand new install their AI coding flow and see it get it right. I must be broken because when I use claude code it can't get a gradle build file right.
yeah exactly. For some people, this was like enjoying a puzzle. And now there's an AI that can solve the puzzle -- it defeats the purpose.
However, if your point was to "make more widgets faster" and only saw programming as a means to an end (make money, increase SaaS features), then I see why people are super excited about it.
I see it the same way as cooking. If your goal is "sell as many hamburgers as possible" then the McD / factory farm is the way to go. If your idea is "I enjoy the personal feeling of preparing the food, smelling the ingredients, feeling like I'm developing my craft of cooking, and love watching someone eat my hand-prepared meal", then having "make fast food machine" actually makes things worse.
I think a lot of people in this forum are at odds because some of the people enjoy cooking for the experience, and the other half are just trying to make food startups. Now they can create and throw away menu items at record pace until they find the one that maximizes return. They never wanted to cook, they just wanted to have a successful restaurant. Nothing wrong with either approach, but the 2nd half (the software is just a product half) were hamstrung before, so now they are having a moment of excitement as they realize they don't have to care about coding anymore.
I 100% guarantee that most of the MBA / startup founder types who didn't love coding for its own sake kind of felt a huge pain that they had to "play along" with devs talking about frameworks, optimal algos, and "code quality" and the like, all while paying them massive salaries and equity stakes for what they saw as disposable item to increase revenue. Meanwhile the devs want another 2-weeks and 6 figures of salaries so they can "refactor" for no visible difference, but you can't complain because they'll leave.
Now that the code factory is in place, they can focus on what they really want, finding customers for an item. Its the drop-shipping of code and software. The people using drop-shipping don't care what the product is. Production and fulfillment are just impediments to the real goal -- selling a product.
The actual revelation of AI, if one can call it that, is how few people care about craft, quality, or enjoying work. Watching AI slop videos, ads, and music makes one realize that true artists and craftspeople are still incredibly rare. Most people are mediocre, unimaginative, disinterested, and just want the shortest path to easy riches. While it sounds negative, its more like realizing most people aren't athletes or interested in very difficult physical exertion -- its just a fact of human nature. True athletes who love sport for its own sake are rare and in a way nonsensical on their face.
In the end, we will probably lament something we lose in the process. The same way we've hollowed out culture, local businesses, family / relationships, the middle class, etc all in the name of progress before. Surely each step has had its rewards and advantages, but Molloch always takes his pound of flesh.
I'm 54 and started programming when I was 7 also. While I've enjoyed coding throughout my career, I'm loving this new phase of software dev, a lot of the hassle has now been removed and now you can concentrate on ACTUALLY building things without so many side tracks and hiccups from technical details. I guess I'm not as attached to coding as I thought I was, I actually really enjoy building software and now that has become a lot easier and I feel experienced devs are really well suited to working with AI to get it to build the right thing and to consider robustness, performance, approach/structure, architecture etc. I'm really enjoying myself at the moment!
I'm a year old than you. Recently, my father-in-law (an engineer in the '50s) was telling me about the transition from analog to digital electronics and how it changed his entire world.
I feel very fortunate that I was able to start out writing machine code and can now watch a machine write code on its own. I'm not even remotely claiming SOTA models can do what we do, but they are closer than ever before.
It's time to accept that the world has changed again.
> I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic. I’m fifty now, and the magic is different, and I’m learning to sit with that.
Don't take this the wrong way but this is more of an age thing rather than a technology advancement thing.
Kids growing up nowadays that are interested in computers grow up feeling the same magic. That magic is partly derived from not truly understanding the thing you are doing and creating a mental "map" by yourself. There is nothing intrinsic to computing nowadays that makes it less magic than fiddling around with config.sys, in 50 years there will be old programmers reminiscing of "Remember when all new models were coming out every few months and we could fiddle around with the vector dimensionality and chunking length to get the best of gpt-6.2 RAG? Those were the times".
> There is nothing intrinsic to computing nowadays that makes it less magic than fiddling around with config.sys
There definitely is: the rent-seeking behavior is out of control. As a kid I could fiddle with config.sys (or rather autoexec.bat) while nowadays wrestling a file path out of my phone is a battle and the system files of my phone are kept from me.
>As a kid I could fiddle with config.sys (or rather autoexec.bat) while nowadays wrestling a file path out of my phone is a battle and the system files of my phone are kept from me.
I think the magic happens at different levels of abstraction as time goes by, and it's easy to get stuck.
Us kids could fiddle with autoexec and config to get DOOM going, today's kids can fiddle with a yaml and have a MMORPG that handles 10 000 users from all over the world going.
It's not the same but I can easily imagine it feeling at least equally magical for a kid today.
Why do you allow a mobile handheld computing and communication device to define "computing" ? I understand that they are important devices and lots of people with a hacker mentality would like to be able to hack them the way old folks once hacked DOS. But the current computing environment is much, much wider than iOS/Android, and if you're going to complain about just one aspect of it, I think it would be better to acknowledge that.
In many ways, things like RPi and Arduino have actually massively expanded the realm of totally hackable computing beyond what was even possible for early personal computer users.
As others have said, it's not so much that tinkering opportunities don't exist. It's more there's a slump in the market of doing relatively easy jobs for money. You can hack on esp32 all day, but there aren't many ways to make money doing so. Making software for the iPhone was (and is still, at this point) a pretty good gig.
I figure auto mechanics contended with this 25 years ago. Now it's hard to find someone to replace your water pump, if your vehicle even has one. Like auto mechanics, though, these machines still exist and there's still a big market for those skills. It might just require more legwork to find that work.
For the same reason computing used to be defined by a Commodore 64 more than by an IBM System/370-XA mainframe from the same year — they're the most commonly and most easily accessible computing devices.
Old farts like us think the desktop is the default kind of computer, but it isn't. Most computers are phones, followed by tablets and laptops with touchscreens, and desktops are the weirdest ones.
LLM are not AI, but are a great context search tool when they work.
When people first contact ML, they fool themselves into believing it is intelligent... rather than a massive plagiarism and copyright IP theft machine.
Fun is important, but people thinking zero workmanship generated content is sustainable are still in the self-delusion stage marketers promote.
I am not going to cite how many fads I've seen cycle in popularity, but many have seen the current active cons before. A firm that takes a dollar to make a dime in revenue is by definition unsustainable. =3
I like coding AIs because they're plagiarism machines. If I ask you to do some basic data manipulation operations, I want you to do it in the most obvious, standard way possible, not come up with some fancy creative solution unless it's needed for some reason.
If I'm dockerizing an app, I want the most simple, basic, standard thing - not somebody's hand-rolled "optimized" version that I can't understand.
config.sys was understandable. Now your computer has thousands (probably more) of config.sys-sized components and you are still only one person. The classic UI may improve your ability to find the components (sometimes) but can't reduce the complexity of either the components themselves or their quantity. AI makes it possible to deal with this complexity in a functional way.
Your last point is probably correct though, because AI will also allow systems to become orders of magnitude more complex still. So like the early days of the internet, these are still the fun days of AI, when the tool is overpowered compared to its uses.
It seems AI is putting senior developers into two camps. Both groups relate to the statement, "I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic. I’m fifty now, and the magic is different, and I’m learning to sit with that."
The difference is that the first camp is re-experiencing that feeling of wonder while the second camp is lamenting it. I thankfully fall in the first camp. AI is allowing me to build things I couldn't, not due to a lack of skills, but a lack of time. Do you want to spend all your time building the app user interface, or do you want to focus on that core ability that makes your program unique? Most of us want the latter, but the former takes up so much time.
I am firmly in both camps. On one hand, getting stuff working has its own thrill.
On the other hand, I step back, look at the progress made in just the last year, and realize that not only is my job soon to be gone, but pretty much everyone's job is gone that primarily does knowledge work.
I feel there's now an egg timer set on my career, and I better make the best of the couple of minutes I have left.
It sounds like you don’t particularly care about the user interface, and that’s why you’re okay with delegating it. I think the developers who don’t like delegating to AI are the ones who care about and have strong opinions about all the parts. To them there are no unimportant parts where the details don’t matter.
Similarly, I'm using it to write apps in non-native languages, like rust. My first foray into it led to finding poor documentation examples. AI allows me to create without spending large swaths of time learning minutia.
I'm enjoying it to a point, but yes, it does eliminate that sense of accomplishment - when you've spent many late nights working on something complex, and finally finish it. That's pretty much gone.
Thank you for writing this. My feelings are very similar to the ones described by the author and the timeline almost matches. The thrill of tecnology for me started to fast decay since the early 2010s and now I see it as a no-return stage. I still have fun with my retro hardware & software but I am no longer an active practitioner and I have pivoted my attention and my efforts somewhere else. Unfortunately, I no longer feel excited for the future decades of tech and I am distancing myself from it.
I think this is something else, though. Even before AI really hit sweng, there were early signs of a collective tech depression a la "The best idea we can come up with is strapping screens to people's heads?", the "Are we the bad guys?" convo around social media, the crypto brain drain, etc. The queue of Next Big Things has increasingly felt more forced and controversial to many, and being in tech last lost much of its lustre to them.
I think it's healthy for everyone to evaluate whether one's personal reaction to AI is colored by this trend, or whether it's really being evaluated independently. Because while I share many of the negative feelings listed earlier, to me AI does still feel different; it has a lot more real utility.
If I look back, it was not even AI, since I don't use any AI model (almost at all). So, I don't think AI was really the main divisor for me. I have a feeling it was the "you don't own anything and everything is now a cloud/subscription" that was the main disappointment, which happened years before LLMs or AI-assisted programming.
What else do you do to make rent ? I feel the same way as you and I have no idea what else pays well for quality craftsmanship. I am staring at the abyss of hyper intelligent people with posh resumes and now wondering what to do.
That's correct! Even though I have been focused more on math lately (which was always my main study area outside the tech industry). That being said, I have limited my internet usage to ~2 hours per day to answer questions from students and I am doing a lot of homeschooling with my son.
I'm lucky because I work as an independent consultant. I get paid to deliver solutions, but I get to choose how to create those solutions. I write whatever code I want however I want. As long as it solves the problem, no one cares.
I started programming in 1980, and I having just as much fun now as I did then. I literally cannot wait to sit down at my IDE and start writing.
But that was not always true. When I worked for a larger company, even some startups, it was not always fun. There's something about having full control over my environment that makes the work feel like play.
If you feel like programming isn't fun anymore, maybe switching to a consulting gig will help. It will give you the independence and control that you might be craving.
I have a hard time telling whether agentic coding tools will take a big bite out of the demand for software consultants. If the market is worried about SaaS because people think companies will use AI to code tools internally vs buying them, I would think the same would apply to consultants.
I’ve seen the code current tools produce if you’re not careful, or if you’re in a domain where training data is scarce. I could see a world where a couple of years from now companies need to bring outside people to fix vibe coded software that managed to gain traction. Hard to tell.
It's a good question. I think short-term (5 years) the easy jobs will go away. No one is going to write a restaurant web site by hand. Maybe the design will still be human-made, but all the code will be templated AI. Imagine every WordPress template customized by AI. That's a whole bunch of jobs that won't exist.
Right now I'm creating clinical trial visualizations for biotech firms. There's some degree of complexity because I have to understand the data schema, the specifics of the clinical trial, and the goals of the scientists. But I firmly believe that AI will be able to handle most of that within 5 years (it may be slower in biotech because of the regulatory requirements).
But I also firmly believe that there is more demand for (good) software today than there are programmers to satisfy it. If programmers become 10x more efficient with AI, that might mean that there will be 10x more programs that need writing.
It is an interesting time to be at the peak of accumulated knowledge of a 50 year career and then see a tool that has the ability to create the code to preform a task I need. But that creation doesn’t feel the same and does take interaction from outside usually to make that code useable. I think that time for required interaction will be short lived and as code bases are slowly supplanted with generated code things will be more homogenized. I pray that will lead the people being allowed to solve more complex issue and I can’t wait to see the advancements to come. I just hope we can look back and say it was worth it, that don’t we end up with a bunch of AI generated crap that degrades technology by obscuring the useful and we end up worse than before.
it isn't all funeral marches and group crying sessions.
And don't let the blog post fool you , it is a rant about AI -- otherwise we would have heard complaints about the last 200 paradigm shifts in the industry over the past thirty years.
Sure, we got our share of dilbert-style agile/waterfall/tdd jokes shoved in our face, but no one wrote a blog post about how their identity was usurped by the waterfall model .
>And different in a way that challenges the identity I built around it and doesn’t satisfy in the way it did.
Everyone should do their own thing, but might I suggest that it is dangerous for anyone in this world to use a single pillar as their foundation for all identity and plinth of their character.
but no one wrote a blog post about how their identity was usurped by the waterfall model
I don’t know about that.
Waterfall mostly died before the rise of blogs, of course, but around the dawn of Agile I remember lots of posts about how nothing was properly designed any more, nothing was ever finished, and you never knew what the specification was.
They used to be real engineers, but now it was just all chaos! They couldn’t design anything any more!
> Sure, we got our share of dilbert-style agile/waterfall/tdd jokes shoved in our face, but no one wrote a blog post about how their identity was usurped by the waterfall model .
That's a difference in form, but not really a difference in content.
Thanks for reminding me of the word plinth. I agree with the author that the job is less fun now, less interesting. I'm doing and accomplishing more, and it matters less. And unfortunately, having other ways of defining your identity doesn't really help, for me. What it does is make those other aspects of myself relatively more attractive as careers, in comparison to this one. Although then again, I suppose it's helping in the way you intend: I could leave (and I might), I could adapt. So I'm feeling none of the fear or anxiety about AI. Just something that I think is roughly boredom.
> otherwise we would have heard complaints about the last 200 paradigm shifts in the industry over the past thirty years.
We have though. And they all received some version of "piss off, geezer."
Have you not noticed how the hype cycles and counter-hype haters buried most of the meaningful considered conversations about new technologies and methodologies across your career?
I'm 60, started with a Tandy Model I in junior high, learned 6809 assembly for my Color Computer, loved the fact we could put certain values in particular memory positions and change the video mode and put pixels to the screen. It's been decades of losing that level of control, but for me coding is the fun part. I've never lost that spark of enjoyment and really obsession I felt early on. I enjoy the supposedly boring job of writing SQL and C with embedded SQL and working with business concepts to produce solutions. Coding is the fun part for me, even now.
I got moved up the chain to management and later worked to get myself moved back down to a dev role because I missed it and because I was running into the Peter Principle. I use AI to learn new concepts, but mostly as a search engine. I love the tech behind it, but I don't want it coding for me any more than I want it playing my video games for me. I was hoping AI would show up as robots doing my laundry, not doing the thing I most enjoy.
TRS-80 CoCo! First computer I owned (started with a borrowed Commodore Pet). I appreciate the simplicity of flicking the switch and writing code in basic. One of my favorite gaming memories is this beauty: https://www.youtube.com/watch?v=sQKQHKdWTRs
Yeah. It's not that it wasn't 'professionalized' back in the day, it's that everything has changed--the attitude, the people involved, the kinds of businesses there are, the philosophy. There was a...mystery about it, a feeling like you were entering a different world and that world was a place where you were close the the machine and...I just can't describe it. It was more visceral.
I made my first BASIC program in the late 70s on a Decwriter, which was basically a wide-carriage printer with a keyboard, attached via acoustic modem to a time-sharing system. And it was the best thing ever.
I'm the exact age as the author and this post could have been written by me (if I could write). It echoes my story and sentiment exactly right down to cutting my literal baby teeth on a rubber key ZX Spectrum.
The anxiety I have that the author might not be explicitly stating is that as we look for places we add genuine value in the crevices of frontier models' shortcomings those crevices are getting more narrow by the day and a bit harder to find.
Just last night I worked with Claude and at the end of the evening I had it explain to me what we actually did. It was a "Her" (as in the movie) moment for me where the AI was now handholding me and not the other way around.
> The anxiety I have that the author might not be explicitly stating is that as we look for places we add genuine value in the crevices of frontier models' shortcomings those crevices are getting more narrow by the day and a bit harder to find.
That's exactly it. And then people say "pivot to planning / overall logic / high-level design," but how long do we have before upper management decides that AI is good enough at that stuff, too, and shows us all the door?
If they believe they can get a product that's 95% of what an experienced engineer would give them for 5% of the cost, why bother keeping the engineer around?
English is my second language so I'm not well tuned to picking up on the phrases that expose writing as AI generated. Even so it doesn't really change the sentiment being conveyed nor the fact that it's better writing than I could muster.
I'm a developer, mid/late fifties. My first computer was a Commodore Vic 20, so I guess I started writing code at about the same time as the OP even if I'm a few years older.
Yes, I mourn the end of my craft and all that that. But also:
This isn't the end of hand-written code. A few will still get paid to do it in niche domains. Some will do it as a hobby or craft activity - like oil painting or furniture making. The tooling will move on and become more specialised and expensive. Like owning Japanese woodworking tools.
But software construction as a human-based economic activity is clearly about to slam hard into a singularity, and many of us who rely on our hard-won skills to pay the bills and survive are going to find ourselves unemployed and unemployable. A few early adopters will get to stay on and sip their artesanal coffee and "build beautiful things" while their agent herds toil. But most of us won't. Software has always mostly been just CRUD apps, and that is going to need a whole lot less people going forward. People like me, perhaps, or you.
Some, who have sufficient financial and chronological runway, will go off and do other things. Many won't have that opportunity. I have personal experience of late-career unemployment - although I'm currently working - and its not pretty. A lot of lives are going to to be irreparably disrupted by this. Personally, I'd hoped that I could make it through to some stable kind of retirement, but I just don't see it anymore.
The contrast between this and https://news.ycombinator.com/item?id=46923543 (Software engineering is back) is kind of stark. I am using frontier models to get fun technical projects done that I simply didn't have time for since my late teens. It is still possible to understand an architecture down to the hardware if you want to, but it can happen a lot faster. The specifications are queryable now. Obscure bugs that at least one person has seen in the past are seconds away instead of minutes or hours of searching. Even new bugs have extra eyes on them. I haven't written a new operating system yet but it's now a tractable problem. So is using Lean or Julia or some similar system to formally specify it. So far I've been digging into modern multithreaded cache performance which is just as fascinating as directly programming VGA and sound was in the early PC days. Linux From Scratch is still up to date. You can get FPGAs that fit in your USB port [0]. Technical depth and low-level understanding is wherever you want to look for it.
I don't disagree that technology is less fun in an AI era. The question is, what other careers are out there for someone who wants to make things?
About a decade ago, I went through a career crisis where I couldn't decide what job to do - whether technology was really the best choice for my particular temperament and skills.
Law? Too cutthroat. Civil service? Very bureaucratic. Academia? Bad pay. Journalism? An industry in decline.
It is a shame, what is happening. But I still think, even with AI hollowing out the fun parts, tech remains the best job for a smart, motivated person who's willing to learn new things.
Fact is, the tech sector is filled with folks that find zero joy in what they do, chose a career for financial reasons, and end up being miserable to everyone including themselves.
The ex-service people would call these folks entitled Shitbirds, as no matter the situation some will complain about everything. Note, everyone still does well in most large corporate settings, but some are exhausting to be around on a project. =3
The reason we don’t have the right to be lazy is because of the people who find “meaning” in toil. I do not want to work and AI is the most anti work technology in human history.
Bertrand Russel literally wrote a book called “in defense of idleness” because he knew that heavy hitters like him had to defend work abolitionism. The “work is good” crowd is why we can’t have nice things. You guys are time thief’s and ontologically evil. May all work supporters reincarnate as either durian fruits or cockroaches.
You seem very passionate about your opinions, but are you happy?
The fact remains LLM can't reach comparable human error rates without consuming 75% of the energy output of our entire local galaxy.
While I find true Neuromorphic computing topics more interesting, the emergence of the LLM "AI" true believer is deeply concerning to those that understand how they are actually built. =3
I just had an AI write a toy game engine with realistic camera and lens simulation on the view from scratch in rust in one day while i was working on other stuff all for the price of a $20/month Cursor subscription
"AI" LLM don't write anything, but copied someones symbolic isomorphic work that could fit the expected definition in the reasoning model.
Like all copyright submarines, your firm now runs the non-zero risk someone will sue for theft, or hit the product with a DMCA claim. What is the expected value of piracy versus actual business. =3
Information wants to be free. No one in any administration now or in the future will ever go back to the "let's sue grandma for 1 trillion dollars" era of the early 2000s. Piracy is good and important for national security.
I think one of the big distinctions between people who like building with AI and those who don't, is that the people who are pro-AI are building their own ideas, of which they have many.
The people who are anti-AI are largely building other people's ideas, for work. And they have no desire to ramp up velocity, and it's not helpful to them anyway because of bureaucratic processes that are the real bottleneck to what they're building.
There's nothing "hollowed out" about directing an AI effectively, the feedback is as quick and tight as it always was. The trick is that you don't just "vibe code" and let the AI one-shot the whole thing: you should propose the change first and ask the AI about a good, detailed plan for implementing it. Then you review what the robot has proposed (which is trivial compared to revising code!) make sensible changes, ask for feedback again, and repeat. By the time the AI bot has to write actual code, it's not running on vibes anymore: it's been told exactly what to do and how to assess the result. You spend more time upfront, but a lot less on fixing the AI's mistakes.
> you should propose the change first and ask the AI about a good, detailed plan for implementing
Why ask though?
If I’m familiar with a project, more often than not, I usually have a very good idea of the code I have to write within minutes of reading the ticket. Most of the time taken is finding the impact of the change, especially with dependencies that are present in the business domain, but are not reflected in the code.
I don’t need to ask what to code. I can deduce it as easily as doing 2+2. What I’m seeking is a reason not to write it the way I envisioned it. And if those reasons are technical, it’s not often a matter of code.
Because that's how you ensure that the AI has the right idea about what to do. If the proposed plan has problems, you work with the AI to fix them before setting it to work. AI is not as smart as you, so it needs to be told how to go about doing things.
Any change that I’ve done which resulted in more than a a 10 lines diff are done with tools (copy-paste, vim-fu, refactor tools or script, snippets, code generators,…) Why would I spend time babysitting an LLM when I could have just done it myself? The purpose of automation is to lighten my workload, not to add to it.
>> Why would I spend time babysitting an LLM when I could have just done it myself
Exactly this. From what I understand an LLM has a limited context and will get that context wrong anyway and that context is on the edge of a knife and can easily be lost.
I'd rather mentor developers and build a team of living, breathing, thinking, compassionate humans who then in turn can mentor other living, breathing, thinking, compassionate humans.
An LLM is also a code generator. There is a scale of changes where using one is just not worthwhile (quite possibly around the 10 lines mark, as you said) but other than that, why would you want to write code yourself line-by-line that you could just generate?
Snippets and other code generation tool has been here for decades. If you’re writing Java in IDEA, it’s basically a tab-fest with completion. And if you’re fluent in your editor, you do much more complex than editing lines.
Oh my god. This is me. If I were any better at writing, I could have written this, the author is even the same age as me (well, a year younger) and followed a similar trajectory. And a lot of what I've been feeling lately feels similar to burnout (in fact I've been calling it that), but it really isn't burnout. It's... this, whatever this is... a "fallow period" is a good term.
And I feel like an old man grumbling about things changing, but... it's not the same. I started programming in BASIC on my Tandy 1000 and went to college and learned how to build ISA cards with handwritten oscilloscope software in the Computer Engineering lab. My first job was writing firmware. I've climbed so far up the abstraction chain over a thirty year career and I guess I don't feel the same energy from writing software that first got me into this, and it's getting harder to force myself to press on.
Not going to pull age or title rank here -- but I suggest if your use of AI feels empty, take advantage of its speed and plasticity and iterate upon its output more, shape the code results. Use it as a sculptor might too -- begin with its output and make the code your own. I particularly like this latter approach when I am tasked with use of a language I view as inferior and/or awkward. While this might read as idealistic, and I agree that there are situations where this interaction is infeasible or inappropriate, you should also be encountering problems where AI decidedly falls on its face and you need to intervene.
At my first full time job in the early 2000s I was tasked with building a webscraper. We worked for law firms representing Fortune 500 companies and they wanted to know who was running "pump and dump" stock schemes on stocks using Yahoo Finance message boards.
At the time, I didn't know the LWP::Simple module existed in Perl so I ended up writing my own socket based HTTP library to pull down the posts, store them in a database etc. I loved that project as it taught me a lot about HTTP, networking, HTML, parsing and regexes.
Nowadays, I use playwright to scrape websites for thing I care about (e.g. rental prices at the Jersey Shore etc). I would never think to re-do my old HTTP library today while still loving the speed of modern automation tools.
Now, I too have felt the "but I loved coding!" sense of loss. I temper that with the above story that we will probably love what comes next too (eventually).
A blacksmith was a person that picked up chunks of carbon and heated them to they were glowing red and beat the iron to submission with a hammer in their hands.
Today iron is produced by machines in factories by the mega-tonne.
We just happened to live in the age where code when from being beaten by hand to a mass produced product.
Especially anyone in their 40s or 50s who is close enough to retirement that a career shift is unappealing but far enough from retirement that a layoff now would meaningfully change that timeline or QOL. I don't blame people for feeling uneasy.
I'm probably 7 or 8 years from an easy retirement myself, so I can appreciate how that feels. Nobody really wants to feel disruption at this age, especially when they're the breadwinner for a family.
> far enough from retirement that a layoff now would meaningfully change that timeline or QOL
yeah this is where i am. Turning 50 in April, I have two boys about to hit college and the bills associated with that and i have 15 years before i'm forced to retire. I have to up the salary to pay/help for college and i have to keep the 401k maxed + catchups maxed over the next 15 years to pull off retirement. The change from AI is scary, it may be good for me or it may be devastating. Staring down that barrel and making career decisions with no room for error (no time to rebuild) is pretty harrowing.
What if in reality it's not one or the other, but having 10% odds of being good enough to be selected to become a technician operating the machines, 10% odds of getting so enraged as to dedicate your lives to pushing back, and 80% odds of being shoved out due to lower demand and value of your work, having to go do something else, if you still can?
No. By this logic, if they wanted to stay with the times they should have sought capital investment for their own industrial forges, joined their local lodges, climbed the ranks, lobbied their governments for loose safety regulations, and plied their workers with propaganda about how "we're in a recession and have to tighten our belts".
Think of the wonderful world we could have if everyone just got their shit together and became paper trillionaire technocrats.
Some of them feel bad about it and some of them refined metallurgy to build Saturn V rockets and go to space. We are very much living in the new space race. The discussion here is split 50/50 between the “Thank you! I feel the same way” folks and the “I am having the time of my life!” folks.
Programming is not art for me. I do not find it useful to gold plate solutions. I prefer getting the job done, sometimes by any means necessary for "the vehicle" to continue running.
AI often generates parts of code for my hobby projects, which allow me speed running with my implementation. It often generates errors, but I am also skilled, so I fix error in the code.
I use AI as boiler plate code generator, or documentation assist, for languages I do not use daily. These solutions I rarely use 1:1, but if I had to go through readme's and readthedocs, it would take me a lot longer.
Would there be more elegant solutions? often - yes. Does it really matter? For me - not.
I'm yet to see perfect code from a human or ai. Most of the people I work with that want everything to be in a perfect state typically get way less done. To your point, sometimes we are just mechanics and that's okay.
Perhaps detractors just gave up and didn't bother improving, but I'm able to prompt the AI to write excellent code. And it's very easy to correct when it's gone awry . This idea that all AI code is bad is just the ego talking.
If vendors can't be bothered to use a C compiler from the last decade, I don't think they'll be adopting AI anytime soon.
At my work, as of 2026, we only now have a faction riled up about evangelizing clean code, OOP, and C++ design patterns. I hope the same delay keeps for all the rest of the "abstraction tower".
It is happening in embedded as well. I noticed just the upgrade from Gemini 2.5 to 3.0 Pro went from "I can get the assembly syntax mostly right but I don't understand register lifetimes" to "I can generate perfect assembly by hand".
I just saw a Reddit post yesterday about somebody that successfully one-shot in Gemini 2.5 the bare metal boot code for a particular board with the only input being the board's documentation.
The issue is that AI will be creating software at whatever abstraction layer it is asked to produce. Right down to ASM maybe even machine code if someone actually wanted or needed that. Perhaps not the AI of today but given a few years I'll be quite surprised if it still can't.
If we can take a computer as powerful as today’s laptops and make it crawl because of the amount of inefficiencies in software like Teams, I’m not holding breath for embedded. If you apply the same kind of engineering principle as Anthropic, you’ll be laughed out of the room.
To be honest I find myself in disagreement with this attitude despite being a semi old-school programmer myself (I cut my teeth on C/C++/assembly in the early 00s). I think the author is caught up in the artist's dilemma - that being, they want to take part in the craft for the joy of it rather than for the results it produces.
Harsh take: nobody's stopping you from doing that. You can dust off an old computer right now and write software for it. All of that joy still exists. It's just that nobody's going to pay you for it and it's no longer mainstream relevant - the world's moved on from those times, in many cases for good reasons.
So I think what the person really wants is to have their cake and eat it too. They want to be mainstream relevant and employable... whilst having fun.
That's a luxury. More specifically it's a first world luxury. Most people don't get to have that. True, many programmers did get to have it for a time - but that doesn't mean we're entitled to it forever - not unless it's somehow directly tied to producing valuable results.
But you know it's strange to me that programmers lose sight of this. I became a programmer because I saw Wolfenstein 3D on a 386, and was inspired by there being a "world in the box". I wanted to make worlds too. That's an important distinction: I didn't become a programmer because I wanted to write code, I became a programmer because I wanted to create worlds. The programming is a means to an end, it's not the end unto itself - at least I never looked at it that way. And that's in spite of the fact that I genuinely enjoy programming in and of itself. But I still value the outcome more.
And in fact I actually went through a related transition some years ago on a personal level, when I shifted from always trying to write game engines from the ground up to being willing to use engines like Unity or Unreal. It felt like a betrayal - I no longer had a deep understanding of every layer, I could no longer bespoke craft everything to my personal whims. But you know what? It was absolutely the right choice because it put me on track to actually finishing the games I was working on, which was the entire point of the exercise in the first place.
So I don't bemoan or regret it for a second.
Anyway hope that didn't sound too blunt - it's just my way of speaking - I can sympathize with the author but I just think it's on the self-indulgent side.
Having been in this game about 10 years longer I can understand how he feels. I distinctly remember when I realized that C compilers for the ARM produced better assembly than I could code by hand. Bitter sweet but the code being written became larger and more complex because of it.
Modern coding has become more complex than I would have ever thought possible. The number of technologies an individual would have to master to actually be a expert "full stack" coder is ludicrous. It is virtually impossible for an individual to prototype a complex Web based app by themselves. I think AI will lower that barrier.
In return we will get a lot more software - probably of dubious quality in many cases - as people with "ideas" but little knowledge start making apps. Not a totally bad thing but no utopia either. I also think it will likely reduce the amount of open source software. Content producers are already hoarding info to prevent AI bots from scraping it. I see no reason to believe this will not extend to code as more programmers find themselves in a situation more akin to musicians than engineers.
I humbly submit this interview with Grady Booch (if you know, you know) talking about the "3rd golden age of software engineering - thanks to AI": https://youtu.be/OfMAtaocvJw
I feel like the conversation does a good job of couching the situation we find ourselves in.
I am a little older than OP. I don't think I've ever had that feeling about a programming project for work that came from someone else.
Generally, I get that feeling from work projects that I've self-initiated to solve a problem. Fortunately, I get the chance to do this a lot. With the advent of agentic coding, I am able to solve problems at a much higher rate.
Quite often, I'll still "raw dog" a solution without AI (except for doc lookups) for fun, kind of as a way to prove to myself I can still do it when the power's out.
Mid-50s and also started programming in BASIC on any computer I could get my hands on, whether my own C64 or the BBC Micros or IBM XTs at school.
My take on AI really comes down to this:
Debugging your own code was an adventure and finally getting something work was a major rush. You had a sense of achievement.
Debugging LLM generated code is hell - it's basically debugging someone else code. There's no sense of achievement and no jump out of your chair and bounce around the room moments.
Sure, the code comes out fast, and maybe I'll find joy in finishing some side projects I've been tinkering with on and off since I first started programming, or it may just end up feeling like it's not mine any more.
"They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of."
I'm the exact same demographic as the author, just turned 50, writing code since childhood in BASIC. I'm dealing with the AI in programming issue by ignoring it.
I still enjoy the physical act of programming so I'm unsure why I should do anything that changes that. To me it's akin to asking a painter to become a photographer. Both are artists but the craft is different.
Even if the AI thing is here to stay, I think there will be room for people who program by hand for the same reason there's still room for people who paint, despite the invention of the camera.
But then, I'm somebody who doesn't even use an IDE. If I find an IDE obtrusive then I'm certain I'll find an AI agent even more so.
The deep, profound, cruel irony of this post is that it was written by AI.
Maybe if you work in the world of web and apps, AI will come for you. If you don't , and you work in industrial automation and safety, the I believe it will not.
I was thinking the same thing, but I thought I was being too cynical given it was a post lamenting about all the cognitive abstractions we have created.
I was 7 in 1987, learned LOGO and C64 BASIC that year, and I relate to this article as well.
It feels as though a window is closing upon the feeling that software can be a powerful voice for the true needs of humanity. Those of us who can sense the deepest problems and implications well in advance are already rare. We are no more immune to the atrophy of forgetting than anyone.
But there is a third option beyond embrace or self-extinguish. The author even uses the word, implying that consumers wanted computers to be nothing more than an appliance.
The third option is to follow in the steps of fiction, the Butlerians of Dune, to transform general computation into bounded execution. We can go back to the metal and create a new kind of computer; one that does have a kind of permanence.
From that foundation, we can build a new kind of software, one that forces users to treat the machine as appliance.
It has never been done. Maybe it won't even work. But, I need to know. It feels meaningful and it has me writing my first compiler after 39 years of software development. It feels like fighting back.
This proposal feels really vague to me, I don't really understand what this actually does. Can you explain more? What exactly is a computer with permanence? What is software that forces a user to treat the computer it runs on "as an appliance"? In what ways is this different from any general-purpose computer, and what's the reason why a user would pick this over something standard?
I mean "permanence" in the same vague senses that I think the OP was hinting upon. A belief that regardless of change, the primitives remain. This is about having total confidence that abstractions haven't removed you the light-cone of comprehension.
Re: Appliance
I believe turing-completeness is over-powered, and the reason that AGI/ASI is a threat at all. My hypothesis is that we can build a machine that delivers most of the same experiences as existing software can. By constraint, some tasks would impossible and others just too hard to scale. By analogy, even a Swiss-army knife is like an appliance in that it only has a limited number of potential uses.
Re: Users
The machine I'm proposing is basically just eBPF for rich applications. It will have relevance for medical, aviation, and AI research. I don't suppose that end-users won't be looking for it until the bad times really start ramping up. But, I suppose we'll need to port Doom over to it before we can know for sure.
> We can go back to the metal and create a new kind of computer; one that does have a kind of permanence.
it's kind of strange to think about but i guess now there's a new incentive to do something truly new and innovative. The llms won't be able to do it for you.
My goal isn't to make LLM-assistance impossible; it will still be possible. In fact, GPT2-level inference is one of launch demos I have planned if I can finish this cursed self-hosting run.
My goal is to make training (especially self-training) impossible; while making inference deterministic by design and highly interpretable.
The idea is to build a sanctuary substrate where humans are the only beneficiaries of all possible technical advancements.
I’m 5 years older than James and had a similar discovery and enthusiasm path which got lost in the era commercial big modern systems. The soul of the machine has long since disappeared.
There was a brief period when I discovered BeOS 4.5 which brought the wonder back in September 1999. That was short lived. I occassionally get the bug with Haiku but sadly dont have the spare time during this last decade.
Enthusiast on small platforms still chase the bug, in these smaller communities you can actually make a difference, and there is still passion to be found there. There is also some innovation since experimental concepts can be tried out.
Somebody still needs to do lower-level work and understand machine architecture. Those feeling like they might be replaced in web or app dev might consider moving down the stack.
I turn 52 this year. I also started at 10 years old programming in a combination of AppleSoft BASIC and assembly language and typing machine code out of books so I could use Double Hires graphics since it wasn’t supported by BASIc and doing my own assembly language programming.
I stuck with C and C++ as my bread and butter from 1996-2011 with other languages in between.
I don’t miss “coding” because of AI. My vision has been larger than what I could do myself without delegating for over a decade - before LLMs.
“coding” and/or later coordinating with people (dotted line) reporting to me has been a necessary evil until a year or two ago to see my vision go to implementation.
I absolutely love this new world. For loops and while loops and if statements don’t excite me in my 50s. Seeing my vision come to life faster than I ever could before and having it well archited does.
I love talking to “the business” and solving XYProblems and getting to a solution 3x faster
I'm a few years behind you. I got started on my uncle's handed down vic 20 in the late 80s.
The culture change in tech has been the toughest part for me. I miss the combination of curiosity, optimism, creativity, and even the chaos that came with it. Nowadays it's much harder to find organizations like that.
I think the true genuinely-love-programming type of people will increasingly have to do what so many other people do, and that's separation of work and personal enjoyment. You might have to AI-architect your code at work, and hand code your toy projects on the weekend.
I prefer to see it as the automtion of the IT age.
All other professions had their time when technology came and automated things.
For example wood carvers, blacksmiths, butchers, bakers, candlestickmakers etc etc. All of those professions have been mostly taken over by machines in factories.
I view 'ai' as new machines in factories for producing code. We have reached the point where we have code factories which can produce things much more efficiently and quicker than any human can alone.
Where the professions still thrive is in the artisan market. There is always demand for hand crafted things which have been created with love and care.
I am hoping this stays true for my coding analogy. Then people who really care about making a good product will still have a market from customers who want something different from the mass produced norm.
> For example wood carvers, blacksmiths, butchers, bakers, candlestickmakers etc etc.
Very, very few of those professions are thriving. Especially if we are talking true craftsmanship and not stuffing the oven with frozen pastries to create the smell and the corresponding illusion of artisinal work.
They are thriving where I live. There is a huge artisinal market for hand crafted things. There are many markets, craft centers, art fairs, regular classes from professionals teaching amateurs etc. In most rural communities I have visited it is similar.
They're existing, not really thriving. Artisanal things have become more popular as a hobby, but even people who get into them commercially rarely make real money off of it. The demand exists, but purely as a novelty for people who appreciate those types of things, or perhaps in really niche sub-markets that aren't adequately covered by big businesses. But the artisans aren't directly competing with companies that provide similar goods to them at scale, because it's simply impossible. They've just carved out a niche and sell the experience or the tailoring of what they're making to the small slice of the population who's willing to pay for that.
You can't do this with software. Non-devs don't understand nor appreciate any qualities of software beyond the simplest comprehension of UX. There's no such thing as "hand-made" software. 99% of people don't care about what runs on their computer at all, they only care about the ends, not the means. As long as it appears to do what you want, it's good enough, and good enough is all that's needed by everyone.
The problem for software artisans is that unlike other handmade craftwork, nobody else ever sees your code. There's no way to differentiate your work from that which is factory-made or LLM-generated.
Therefore I think artisan coders will need to rely on a combination of customisation and customer service. Their specialty will need to be very specific features which are not catered for by the usual mass code creation market, and provide swift and helpful support along with it.
I think the issue at the core of the analogy is that factories, traditional factories, excel at making a ton of one thing (or small variations thereof). The big productivity gains came from highly reliable, repeatable processes that do not accommodate substantial variation. This rigidity of factory production is what drives the existence of artisan work: it can always easily distinguish itself from the mass product.
This does not seem true for AI writing software. It's neither reliable nor rigid.
What assembly lines and factories did for other manufacturing processes is to make it feasable for any person to be able to make those things. In the past only very skilled professionals were able to create such things, but mechanisation and breaking down manufacturing processes into small chunks made the same things be able to be achieved by low skilled workers.
IMO that is exactly what is happening here. Ai is making coding apps possible for the normal person. Yes they will need to be supervised and monitored, just like workers in a factory. But groups of normal low skilled workers will be able to create large pieces of software via ai, whic has only ever been possible by skilled teams of professinoals before.
Yes, I think that's how it will go, like all those other industries. There will be an artisanal market, that's much smaller, where the (fewer) participants charge higher prices. So it'll (ironically?) end up being just another wealth concentrator. A few get richer doing artisanal work while most have their wage depressed and/or leave the market.
50 myself, and started coding with a Commodore 64, but only really picked it up seriously with the advent of open source software, and that feeling of being able to dig around any component of the system I wanted to was exhilarating.
I think that's one of the biggest things that gives me pause about AI: the fact that, if they prove to be a big productivity boost, you're beholden to huge corporations, and not just for a one-time purchase, but on an ongoing basis.
Maybe the open source models will improve, but if keeps being driven by raw compute power and big numbers, it seems to tilt things very much in favor of those with lots and lots of capital to deploy.
Wow this hits home - I just turned 51 and I also started coding at age 7, writing BASIC on a TRS-80 Model III.
I still have a very distinct memory when my father told me he was buying us our first home computer. I remember him telling me that you could use the computer to make games. I was so excited by the idea and amazing by this technology (that I hadn't yet even remotely understood). I remember saying "Oh, you just tell it to make a game? And it makes a game?" He explained to me then what programming was.
When we got the TRS-80, he and I worked together to build a game. We came up with an idea for a text adventure game called "Manhole Mania" - you were a city works employee exploring the sewers after reports of strange noises. We never finished much of it - maybe just the first few "rooms".
Maybe this weekend I will tell Codex to make me a game.
Well yes it has changed. But look at everything that can be accomplished with these abstractions/libraries/frameworks that exist.
Why reinvent the wheel.
Yes, there might be less room for the Wild Wild West approach, as mentioned in the article: But that is the structure of compounded knowledge/tooling/code available to developers/others to create more enriched software, in the sense that it runs on what is available now and provides value in today's age of computing.
I also had a 486DX2-66. And I recall coding in Assembly, Pascal, C etc.
I do not miss it. These days I can create experiences that reach so many more people (a matured Interneet with realtime possibilities - to simplify) and with so much more potential for Good. Good in the sense of usefulness for users, good in the sense of making money (yeah, that aspect still exists).
I do understand your sentiment and the despairing tone. There have been times when I was struck by the same.
But I do not miss 1995 and struggling with a low-level formatted HD and Assembly that screwed up my floppy disks, or the worms that reached my box, or the awful web sites in terms of UX that were around, or pulling coaxial cables around for LAN parties.
It's just a different world now. But I get what you are saying, and respect it. Stay optimistic. :)
> The feedback loop has changed. The intimacy has gone. The thing that kept me up at night for decades — the puzzle, the chase, the moment where you finally understand why something isn’t working — that’s been compressed into a prompt and a response
It's so strange to read because to me its never been more fun to make software, its especially never been easier for an individual. The boring parts are being automated so I can work on the bespoke and artistic parts. The feedback loop is getting shorter to making something nice and workable. The investigation tools for profiling and pinpointing performance bottlenecks are better than ever, where Claude is just one new part of it.
I have given the topic some thoughts. I concluded that the ONLY way for ordinary people (non-genius, IQ <= 120) to be really good, be really close to the genius, is to sit down, condensate the past 40 or so year's tech history of three topics (Comp-Arch, OS and Compiler) into a 4-5 years of self-education.
Such education is COMPLETELY different from the one they offered in school, but closer to those offered in premium schools (MIT/Berkeley). Basically, I'd call it "Software engineering archaeology". Students are supposed to take on ancient software, compile them, and figure out how to add new features.
For example, for the OS kernel branch:
- Course 0: MIT xv6 lab, then figure out which subsystem you are interested in (fs? scheduler? drivers?)
- Course 0.5: System programming for modern Linux and NT, mostly to get familiar with user space development and syscalls
- Course 1: Build Linux 0.95, run all of your toolchains in a docker container. Move it to 64-bit. Say you are interested in fs -- figure out the VFS code and write a couple of fs for it. Linux 0.95 only has Minix fs so there are a lot of simpler options to choose from.
- Course 2: Maybe build a modern Linux, like 5.9, and then do the same thing. This time the student is supposed to implement a much more sophiscated fs, maybe something from the SunOS or WinNT that was not there.
- Course 3 & 4: Do the same thing with leaked NT 3.5 and NT 4.0 kernel. It's just for personal use so I wouldn't worry about the lawyers.
For reading, there are a lot of books about Linux kernels and NT kernels.
Was this text run through LLM before posting? I recognize that writing style honestly; or did we simply speak to machines enough to now speak like machines?
Yes. This is absolutely chatgpt-speak. I see it everywhere now. It's inescapable.
At least this appears to be largely human authored and have some substance, which is generally not the case when I see these LLM-isms.
Same, been a product designer for years, still love design deep down but the essence is somehow not there anymore. reading this hit different. It's refreshing to see someone put it into words instead of the usual "stuff".
I'm roughly the same (started at 9, currently 48), but programming hasn't really changed for me. What's changed is me having to have pointless arguments with people who obviously have no clue what they're talking about but feel qualified either because:
a) They asked an LLM
b) "This is what all our competitors are doing"
c) They saw a video on Youtube by some big influencer
d) [...insert any other absurd reason...]
True story:
In one of our recent Enterprise Architecture meetings, I was lamenting the lack of a plan to deal with our massive tech debt, and used an example of a 5000 line regulatory reporting stored procedure written 10 years ago that noone understood. I was told my complaint was irrelevant because I could just dump it into ChatGPT and it would explain it to me. These are words uttered by a so-called Senior Developer, in an Enterprise Architecture meeting.
Was he entirely wrong? Have you tried to dump the stored proc into a frontier model and ask it to refactor? You'd probably have neat 20 stored procs with well laid out logic in minutes.
I wouldn't keep a ball of mud just because LLMs can usually make sense of them but to refactor such code debt is becoming increasingly trivial.
Yes. I mean... of course he was?. Firstly, I had already gone through this process with multiple LLMs, from various perspectives, including using Deep Research models to find out if any other businesses faced similar issues, and/or if products existed that could help with this. That lead me down a rabbit hole of data science products related to regulatory reporting of a completely different nature which was effectively useless. tl;dr: Virtually all LLMs - after understanding the context - recommended us doing thing we had already been urging the business to do - hire a Technical BA with experience in this field. And yes, that's what we ended up doing.
Now, give you some ideas about why his idea was obviously absurd:
- He had never seen the SP
- He didn't understand anything about regulatory reporting
- He didn't understand anything about financial derivatives
- He didn't understand the difference between Transact SQL and ANSI SQL
- No consideration given to IP
- etc etc
Those are the basics. Let's jump a little bit into the detail. Here's a rough snippet of what the SP looks like:
SELECT
CASE
WHEN t.FLD4_TXT IN ('CCS', 'CAC', 'DEBT', ..... 'ZBBR') THEN '37772BCA2221'
WHEN t.FLD4_TXT IN ('STCB') AND ISNULL(s.FLD5_TXT, s.FLD1_TXT) = 'X' THEN 'EUMKRT090011'
END as [Id When CounterParty Has No Valid LEI in Region]
-- remember, this is around 5000 lines long ....
Yes, that's a typical column name that has rotted over time, so noone even knows if it's still correct. Yes, those are typical CASE statements (170+ of them at last count, and no, they are not all equal or symmetric).
So... you're not just dealing with incredibly unwieldy and non-standard SQL (omitted), noone really understands the business rules either.
So again... yes he was entirely wrong. There is nothing "trivial" about refactoring things that noone understands.
I am in a very similar boat, age and experience-wise. I would like to work backward from the observation that there is no resource constraints and we're collectively hopelessly lost up the abstraction Jenga tower.
I observe that the way we taught math was not oriented on the idea that everyone would need to know trigonometric functions or how to do derivatives. I like to believe math curricula was centered around standardizing a system of thinking about maths and those of us who were serious about our educational development would all speak the same language. It was about learning a language and laying down processes that everyone else could understand. And that shaped us, and it's foolish to challenge or complain about that or, God forbid, radically change the way we teach math subjects because it damages our ability to think alike. (I know the above is probably completely idealistic verging on personal myth, but that's how I choose to look at it.)
In my opinion, we never approached software engineering the same way. We were so focused on the compiler and the type calculus, and we never taught people about what makes code valuable and robust. If I had FU money to burn today, I'd start a Mathnasium company focused around making kids into systems integrators with great soft skills and the ability to produce high quality software. I would pitch this business under the assumption that the jenga tower is going to be collapsing pretty much continuously for the next 25-50 years and civilization needs absolute unit super developers coming out of nowhere who will be able to make a small fortune helping companies dig their way out of 75 years of tech debt.
> They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
I found it a very weird section of the article, undoing most of what had been written before.
Whether it's ROM programming, writing assembly, or C, or Rust, or JS-with-stdlib, at no point was anyone "teetering". Stacks have always existed, and whether your stack was small because it just had not much under it, or huge because it's 2026, they've by and large always been stable. That's the point of a stack: you can trust the parts below the layer you're working on, and the problems you solve are still real problems that for the most part don't require knowing the lower parts of the stack but are still real problem sin programming.
It's like making fun of people who drive a company rental because they don't want to own one themselves, and can't name any part of their engine: you're just being an ass.
Even the good TS programmers understand classic programming concepts like using the right data structures, paying attention to runtime complexity, and knowing when to go "maybe it's the step below me". They can work out difficult problems just fine.
You were writing an article about how fundamentally different AI has made things: why dunk on people who got into programming more recently than you and started higher on the ladder of abstraction, mocking them for "you were already about to fall". No, they weren't. They understood the core concepts just fine, and we collectively gave them stacks that they could trust. And they would have transitioned to "the next thing" just like you've been doing.
And then "AI" showed up, and it doesn't care about silly things like "how high up the ladder you are", it just went "your skills about how to schedule, structure, plan, describe, and manage projects is the thing that matters. Those other skills are nice to haves, and will make you better at being a PM, but they're not the main focus anymore". It doesn't matter where on the ladder you are, that affects everyone.
I can share a similar experience: I began to learn programming during my first school years, on an Apple II clone with Logo, a fancy language with turtle graphics as a most distinctive feature. We used to boot Logo off 5.25" floppy disks...
I'm ~40ish but middle career and not in management. I envy this author, whatever joy he found in solving little puzzles and systems was extinguished in me very early in my career in an intense corporate environment. I was never one to love fussing much with code, but I do love solving system scale problems, which also involve code. I don't feel I am losing anything, the most annoying parts of code I deal with are now abstracted into human language and specs, and I can now architect/build more creatively than before. So I am happy. But, I was one of those types that never had a true passion for "code" and have meant plenty of people that do have that, and I feel for them. I worry for people that carved out being really good at programming as a niche, but you enter a point in your career where that becomes much less important than being able to execute and define requirements and understand business logic. And yea, that isn't very romantic or magical, but I find passion outside of what pays my bills, so I lost that ennui feeling a while ago.
Some may feel that it is a luxury to feel passionate about one’s profession, but for me a life without that is pretty depressing. A society should strive to make fulfillment in a profession possible for everyone.
To me it feels the opposite of miserable. I can give work my full attention because it allows me freedom (mostly) to pursue other passionate things. This 40 hour a week (speaking generally, for me it can triple that sometimes) cost to me is far smaller than the depression I’d feel caring deeply about my particular craft in a field that doesn’t give a shit about it. That was proven to me very early in my career and is definitely cynical, but I don’t know where all the bright eyed bushy tailed opinions out there are coming from. probably completely different domains than my viewpoint.
Of course society should be a lot of things but that’s not a reality. Like, imagine a world exists soon where not every person (or even the majority of people) are useful, even formerly useful people - we already live in this world! If raw intellectual output is the value generator in the world we live in, and is a meritocracy, the simple fact by statistics is most will be left behind. what society already does to the disabled, and the sick is proof of this already. These people take professions to suit their circumstances. I am one, and I am fine with it. but by the parameters of the game, this is how to best maximize my passion output. Many people have many ideas how to change “society” I personally think is a waste of time, society adapts to circumstances most of the time. Except the people at the bottom usually get a raw deal.
If you are feeling the way that James does, that the magic is gone... I encourage you to try creating things in a new domain.
Try making music, creating videos, making interactive LED art, building robots, or fabricating toys.
The tools we have today suddenly make it far easier and more fun to experiment with a new craft. What was once daunting is now approachable.
Start by using an AI-powered tool—without shame—to make something superficially 'cool'. Yes, we all know you used a 'cheat code' but that's okay! Now you get to dive in and deconstruct what you created. Tear it apart and learn how and why it works. Go as deep as your motivation carries you. Experiment, hack, and modify.
Just as in software, there will be many many layers of abstraction that you can work through and learn about. Many of them are overflowing with magic and sources of inspiration, I promise.
The gears of capitalism will likely continue to aggressively maximize efficiency wherever possible, and this comes with both benefits and very real costs (some of which are described James's post).. but outside the professional sphere, it appears to me that we are entering a new hobbyist / hacker / creative renaissance. If you can find a way to release enough anxiety and let the curious and creative energy back in, opportunities start showing up everywhere.
You can still have fun programming. Just sit down and write some code. Ain't nobody holding a gun to your head forcing you to use AI in your projects.
And the part of programming that wasn't your projects, whether back in the days of TPS reports and test coverage meetings, or in the age of generative AI, that bit was always kinda soul draining.
Well-written and it expresses a mood, a feeling, a sense of both loss and awe. I was there too in the 8-bit era, fully understanding every byte of RAM and ROM.
The sense of nostalgia that can turn too easily into a lament is powerful and real. But for me this all came well before AI had become all consuming... It's the just the latest manifestation of the process. I knew I didn't really understand computers anymore, not in the way I used to. I still love coding and building but it's no longer central to my job or lif3. It's useful, I enjoy it but at the same time I also marvel at the future that I find myself living in. I've done things with AI that I wouldn't have dared to start for lack of time. It's amazing and transformative and I love that too.
But I will always miss the Olden Days. I think more than anything it's the nostalgia for the 8-bit era that made me enjoy Stranger Things so much. :)
"Over four decades I’ve been through more technology transitions than I can count. New languages, new platforms, new paradigms. CLI to GUI. Desktop to web. Web to mobile. Monoliths to microservices. Tapes, floppy discs, hard drives, SSDs. JavaScript frameworks arriving and dying like mayflies."... made me think of
I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.
where we came from and where we're going this whole time in my career those things are kind of hard to pinpoint. Abstraction is killing us for sure. Time to market above all else. It's no wonder why software in cars, appliances and medical equipment is a factor that is killing people.
Some farmers probably lamented the rise of machines because they feared their strength would no longer be needed in the fields. These farmers were no doubt more concerned with their own usefulness as laborers than in the goals of the farm: to produce food.
If you program as labor, consider what you might build with no boss. You’re better equipped to start your own farm than you think.
Many of them might have been troubled by the fact that they couldn’t afford a tractor. Many small farms became a few big ones, and so it will go in software.
I too have felt these feelings (though I'm much younger than the author). I think as I've grown older I have to remind myself
1. I shouldn't be so tied to what other people think of me (craftsman, programmer, low level developer)
2. I shouldn't measure my satisfaction by comparing my work to others'. Quality still matters especially in shared systems, but my responsibility is to the standards I choose to hold, not to whether others meet them. Plus there are still community of people that still care about this (handmade network, openbsd devs, languages like Odin) that I can be part of it I want to
3. If my values are not being met either in my work or personal life I need to take ownership of that myself. The magic is still there, I just have to go looking for it
I'm about ten years ahead of the author. I felt this a long time before AI arrived. I went from solving problems for people to everything I tried to ending up in an endless grind of yak-shaving.
I worked my way through it, though. It made me both give up programming, at least in the commercial sense, and appreciate the journey he and I have gone through. It's truly an amazing time to be alive.
Now, however, I'm feeling sucked back into the vortex. I'm excited about solving problems in a way I haven't been in a long time. I was just telling somebody that I spent 4-6 hours last night watching Claude code. I watched TV. I scratched my butt. I played HexaCrush. All the time it was just chugging along, solving a problem in code that I have wanted to solve for a decade or more. I told him that it wasn't watching the code go by. That would be too easy to do. It was paying attention to what Claude was doing and _feeling that pain_. OMG, I would see it hit a wall, I would recognize the wall, and then it'd just keep chugging along until it fixed it. It was the kind of thing that didn't have damned thing to do with the problem but would have held me up for hours. Instead, I watched Pitt with my wife. Every now I then I'd see a prompt, pop up, and guide/direct/orchestrate/consult/? with Claude.
It ain't coding. But, frankly, coding ain't coding. It hasn't been in a long, long time.
If a lot of your job seems like senseless bullshit, I'm sad to say you're on the way out. If it doesn't, stick around.
I view AI as an extinction level threat. That hasn't changed, mainly because of how humans are using it. It has nothing to do with the tech. But I'm a bit perplexed now as to what to do with my new-found superpowers. I feel like that kid on the first Spiderman movie. The world is amazing. I've got half-a-dozen projects I'm doing right now. I'm publishing my own daily newspaper, just for me to read, and dang if it's not pretty good! No matter how this plays out, it is truly an amazing time to be alive, and old codgers like us have had a hella ride.
I found that feeling again while building a game on the EVM. All of the constraints were new and different. Solidity feels somewhere between and high and low level language, not as abstracted as most popular languages today but a solid step above writing assembly.
A lot of people started building projects like mine when the EVM was newer. Some managed to get a little bit of popularity, like Dark Forest. But most were never noticed. The crypto scene has distracted everyone from the work of tinkerers and artists who just wanted to play with a new paradigm. The whole thing became increasingly toxic.
It was like one last breath of fresh cool air before the pollution of AI tools arrived on the scene. It's a bitter sweet feeling.
Fantastic Article, well written, thoughtful. Here are a couple of my favorite quotes:
* "Then it professionalised. Plug and Play arrived. Windows abstracted everything. The Wild West closed. Computers stopped being fascinating, cantankerous machines that demanded respect and understanding, and became appliances. The craft became invisible."
* "The machines I fell in love with became instruments of surveillance and extraction. The platforms that promised to connect us were really built to monetise us. The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks."
* "Previous technology shifts were “learn the new thing, apply existing skills.” AI isn’t that. It’s not a new platform or a new language or a new paradigm. It’s a shift in what it means to be good at this."
* "They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of... But sure. AI is the moment they lost track of what’s happening."
* "Typing was never the hard part."
* "I don’t have a neat conclusion. I’m not going to tell you that experienced developers just need to “push themselves up the stack” or “embrace the tools” or “focus on what AI can’t do.” All of that is probably right, and none of it addresses the feeling."
To relate to the author, I think with a lot of whats going on I feel the same about, but other parts I feel differently than they do. There appears to be a shallowness with this... yes we can build faster than ever, but so much of what we are building we should really be asking ourselves why do we have to build this at all? Its like sitting through the meeting that could have been an email, or using hand tools for 3 hours because the power tool purchase/rental is just obscenely expensive for the ~20min you need it.
This essay begins by promising not to be a "back in my day" piece, but ends up dunking on 20-year-olds who are only a few years into their career, as if they have any choice about when they were born.
I'm 55 and I started at age 13 on a TI-99/4A, then progressed through Commodore 64, Amiga 2000, an Amiga XT Sidecar, then a real XT, and on and on. DOS, Windows, Unix, the first Linux. I ran a tiny BBS and felt so excited when I heard the modem singing from someone dialing in. The first time I "logged into the Internet" was to a Linux prompt. Gopher was still a bigger thing than the nascent World-Wide Web.
The author is right. The magic has faded. It's sad. I'm still excited about what's possible, but it'll never create that same sense of awe, that knowledge that you can own the entire system from the power coming from the wall to the pixels on your screen.
DOS is very much alive these days, though [0]. Text-mode internet is there (should you want online in the first place), and, thanks to some amazing devs, soundcard support has made a huge leap [1].
I use it every day lately (for text-related work and hobbyst-level assembly learning -- my intent is to write a small application to do paid work which involves chopping audio files). And -- I say a single-tasking system is a complete, true bliss in our days. Paired with a 4:3 Thinkpad screen, that DOS environment gives me instant focus for a long time -- which, to me, has been almost impossible to accomplish on a multi-tasking, contemporary-web-browser-equipped system recently.
Apparently, though, there seems to be AI for DOS, too [2]. :) I prefer my DOS machine to be completely offline, though. Peace and harmony for the soul!
Similar story for myself. It was long and tedious for my mental model to go from Basic, to Pascal, to C, and finally to ASM as a teen.
My recent experience is the opposite. With LLMs, I'm able to delve into the deepest parts of code and systems I never had time to learn. LLMs will get you to the 80% pretty quick - compiles and sometimes even runs.
maybe we just change, honestly. i think when i were younger there was nothing to lose, time felt unlimited, no "career" to gamble with, no billion dollar idea, just learning and tinkering and playing with whatever was out there because it was cool and interesting to me. in some respects i miss that.
not sure how that relates to llms but it does become an unblocker to regain some of that "magic", but also i know to deep dive requires an investment i cannot shortcut.
the new generation of devs are already playing with things few dinosaurs will get to experience fully, having sunk decades into the systems built and afraid to let it go. some of that is good (to lean on experience) and some of it holding us back.
Yeah I could use Cursor or whatever but I don't, I like writing code. I guess that makes me a luddite or something, although I still develop agents. I enjoy architecting things (I don't consider myself an architect) I'm talking about my hobby hardware projects.
I know exactly how you feel. I don't know how many hours I sat in front of this debugger (https://www.jasik.com) poking around and trying to learn everything at a lower level. Now its so different.
Yeah. Different is the word. In many ways it’s just another abstraction but we’re not machines and this, to me at least, just gives a very different feel.
idk, i'm loving the newness of all of it, I feel more empowered than ever before, like it's my time. Before startups would take like a year to get going, now it's like a month or so. It's exciting and scary, we have no idea where it's going. Not boring at all. I was getting bored as shit and bam, now i can dream up shit quick and have it validated to, ya i figured that out with an MCP so ya this is my jam. Program MCPs and speed it up!!!!!!
Same, but it changed when I was 17 and again when I was 27 and then 37 and so on. It has always been changing dramatically, but this latest leap is just so incredibly different that it seems unique.
> I wrote my first line of code in 1983. I was seven years old, typing BASIC into a machine that had less processing power than the chip in your washing machine
I think there may be a counterpoint hiding in plain sight here: back in 1983 the washing machine didn't have a chip in it. Now there are more low-level embedded CPUs and microcontrollers to develop for than before, but maybe it's all the same now. Unfathomable levels of abstraction, uniformly applied by language models?
I've had the same journey, same age markers. The sentiment is the same, but at the same time this new world affords me super powers I'm currently drunk on. When that drunkenness becomes a hangover I hope I won't be disappointed.
> They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
But sure. AI is the moment they lost track of what’s happening.
I feel this is conflating different things. Yes, the abstraction tower was massive already before, but at least the abstractions were mostly well-defined and understandable through interfaces: even if you don't understand the intricacies of your storage device, driver and kernel, you can usually get a quite reliable and predictable mental representation how files work. Same goes for network protocols, higher-level programming languages or the web platform.
Sure, there are edge cases where the abstraction breaks down and you have to get into the lower levels, but those situations are the exception, not the norm.
With AI, there is no clearly defined interface, and no one really knows what (precise) input a given output will produce. Or maybe to put it better, the interface is human language and your mental representation is the one you have talking to a human - which is far more vague than previous technical abstractions.
On the bright side, at least we (still) have the intermediate layer of generated code to reason about, which offsets the unpredictability a bit.
Starting code when I was 14, sold my first bit of code at 17, which was written in 6502 assembler.
40+ years later, been through many BASICs, C, C++ (CFront on onwards) and now NodeJS, and I still love writing code.
Tinkering with RPi, getting used to having a coding assistant, looking forward to having some time to work on other fun projects and getting back into C++ sooooon.
Even if you can achieve awesome things with LLMs you give up the control over tiny details, it's just faster to generate and regenerate until it fits the spec.
But you never quite know how long it takes or how much you have to shave that square peg.
I'm 43. Took a year or so off from contracting after being flat out for years without taking any breaks, just poked around with some personal projects, did some stuff for my wife's company, petitioned the NHS to fix some stuff. Used Claude Code for much of it. Travelled a bit too.
I feel like I turned around and there seem to be no jobs now (500+ applications deep is a lot when you've always been given the first role you'd applied to) unless you have 2+ years commercial AI experience, which I don't, or perhaps want to sit in a SOC, which I don't. It's like a whole industry just disappeared while I had my back turned.
I looked at Java in Google Trends the other day, it doesn't feel like it was that long ago that people were bemoaning how abstracted that was, but it was everywhere. It doesn't seem to be anymore. I've tried telling myself that maybe it's because people are using LLMs to code, so it's not being searched for, but I think the game's probably up, we're in a different era now.
Not sure what I'm going to do for the next 20 years. I'm looking at getting a motorbike licence just to keep busy, but that won't pay the bills.
I’m 45 and contracted for over a decade before switching to product development. I used to still get inquiries from former customers, mainly for Java and Android work. But since about two years, it’s completely dried up. Anecdotally I’ve been hearing from friends who are still in the contracting/freelancing business that things are very tough right now. It makes sense to me, contractors are usually the first thing businesses cut when they’re either lowering their spending or becoming more efficient themselves.
It'd be more strange if the thing you learned 43 years ago was exactly the same today. We should expect change. When that change is positive we call it progress.
In the grand scheme of things it wouldn’t actually be that strange: generations and generations of humans were mostly farmers and mostly did the same thing as their parents. Of course technology developed but lots of people did the same job with the same methods their whole lives.
But everybody on this site lived through the first half of a logistic curve so that perspective seems strange to us.
Peter Thiel talks about the difference in progress between bits and atoms. Progress in atoms (physical things) moves incredibly slowly, and has done for centuries. Progress in bits (software) moves astonishingly fast. We all work in software. We should not expect things to remain the same for very long because change is easy.
I think it'd be pretty incredible if we hit on the best way to write software 40 years ago when people had only been doing it seriously for a couple of decades. It's no more surprising that we find better approaches to coding than farming improving when the tractor replaced a horse.
I think more than ever programmers need jobs where performance matters and the naive way the AI does things doesn't cut it. When no one cares about things other than correctness your job turns into AI Slop. The good news right now is that AI tends to produce things that AI struggles to do well with so large scale projects often descend into crap. You can write a C-compiler for $20,000 with an explosive stack of agents, but that C-compiler isn't anywhere close to efficient or performant.
As model costs come down that $20,000 will become a viable number for doing entirely AI-generate coding. So more than ever you don't want to be doing work that the AI is good enough at. Either jobs where performance matters or being able to code the stack of agents needed to produce high quality code in an application context.
I wonder what other “crevices” (as the author put it) exist.
Another commentor mentioned embedded, and after a brief phase of dabbling in that, mainly with nRF5x micros, I tend to agree. Less training data and obtuse tooling.
I don't know what these people from our now traditional daily lamentation session are coding where Claude can do all the work for them just with a few prompts and minimal reviews.
Claude is a godsend to me, but fuck, it is sometimes dumb as door, loves to create regressions, is a fucking terrible designer. Small, tiny changes? Those are actually the worse, it is easy for claude, on the first setback, decides to burn the whole world and start from zero again. Not to mention when it gets stuck in an eternal loop where it increasingly degenerates the code.
If I care about what I deliver, I have to actively participate in coding.
> I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic
I'm significantly younger than OP, but this was it for me too. I'm autistic and found the world around me confusing growing up. Computers were wonderful because they were the only thing that really made sense to me.
I was obsessed with computers since I was 5. I started programming probably around age 10. Then in my early teens I started creating Flash applications, writing PHP, Java, etc...
When I look back on my early career now it was almost magical. This in the mid to late 00s (late to some I know), but this was before the era of package managers, before resources like Stackoverflow, before modern IDEs. You had some fairly basic frameworks to work with, but that was really about it. Everything else had to be done fully by hand.
This was also before agile was really a thing too. The places I worked at the time didn't have stand-ups or retrospectives. There were no product managers.
It was also before the iPhone and the mass adoption of the internet.
Back then no one went into software engineering as a profession. It was just some thing weird computer kids did, and sometimes businesses would pay us to build them things. Everyone who coded back then I got along with great, now everyone is so normal it's hard for me to relate with me. The industry today is also so money focused.
The thing and bothers me the most though is that computers increasingly act like humans that I need to talk to to get things done, and if that wasn't bad enough I also have to talk with people constantly.
Even the stuff I build sucks. All the useful stuff has been build so in the last decade or so stuff I've built feels increasingly detached from reality. When I started I felt like I was solving real practical problems for companies, now I'm building chatbots and internal dashboards. It's all bollocks.
There was a post recently about builders vs coders (I can't remember exactly). But I'm definitely a coder. I miss coding. There was something rewarding about pouring hours into a HTML design, getting things pixel perfect. Sometimes it felt laborious, but that was part of the craft. Claude Code does a great job and it does it 50x faster than I could, but it doesn't give me the same satisfaction.
I do hope this is my last job in tech. Unfortunately I'm not old enough to retire, but I think I need to find something better suited to my programatic way of thinking. I quite like the idea of doing construction or some other manual labour job. Seems like they're still building things by hand and don't have so many stupid meetings all the time.
Abstractions can take away but many add tremendous value.
For example, the author has coded for their entire career on silicon-based CPUs but never had to deal with the shittiness of wire-wrapped memory, where a bit-flip might happen in one place because of a manufacturing defect and good luck tracking that down. Ever since lithography and CPU packaging, the CPU is protected from the elements and its thermal limits are well known and computed ahead of time and those limits baked into thermal management so it doesn’t melt but still goes as fast as we understand to be possible for its size, and we make billions of these every day and have done for over 50 years.
Moving up the stack you can move your mouse “just so” and click, no need to bit-twiddle the USB port (and we can talk about USB negotiation or many other things that happen on the way) and your click gets translated into an action and you can do this hundreds of times a day without disturbing your flow.
Or javascript jit compilation, where the js engine watches code run and emits faster versions of it that make assumptions about types of variables - with escape hatches if the code stops behaving predictably so you don’t get confusing bugs that only happen if the browser jitted some code. Python has something similar. Thanks to these jit engines you can write ergonomic code that in the typical scenario is fast enough for your users and gets faster with each new language release, with no code changes.
Lets talk about the decades of research that went into autoregressive transformer models, instruction tuning, and RLHF, and then chat harnesses. Type to a model and get a response back, because behind the scenes your message is prefixed with “User: “, triggering latent capabilities in the model to hold its end of a conversation. Scale that up and call it a “low key research preview” and you have ChatGPT. Wildly simple idea, massive implications.
These abstractions take you further from the machine and yet despite that they were adopted en masse. You have to account for the ruthless competition out there - each one would’ve been eliminated if they hadn’t proven to be worth something.
You’ll never understand the whole machine so just work at the level you’re comfortable with and peer behind the curtain if and when you need (eg. when optimizing or debugging).
I don't think so. A decent C programmer could pretty much imagine how each line of C was translated into assembly, and with certainty, how every byte of data moved through the machine. That's been lost with the rise of higher-level languages, interpreters, their pseudocode, and the explosion of libraries and especially, the rise of cut-and-paste coding. IMO, 90% of today's developers have never thought about how their code connects to the metal. Starting with CS101 in Java, they've always lived entirely within an abstract level of source code. Coding with AI just abstracts that world a couple steps higher, not unlike the way that templates in 4GL languages attempted but failed to achieve, but of course, the abstraction has climbed far beyond that level now. Software craftsmanship has indeed left the building; only the product matters now.
As someone who has always enjoyed designing things, but was never really into PUZZLES, I always felt like an outsider in the programming domain. People around me really enjoyed the "fun" of programming, whereas I was more interested in the Engineering of the thing - balancing tradeoffs until within acceptable margins and then actually calling it "DONE". People around me rarely called things "done", they rewrote it and rewrote it so that it kept satisfying their need for puzzle-solving (today, it's Ruby, tomorrow, it's rewritten in Scala, and the day after that, it's Golang or Zig!)
I feel that LLMs have finally put the ball in MY court. I feel sorry for the others, but you can always find puzzles in the toy section of the bookstore.
"They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of."
and they still call themselves 'full stack developers' :eyeroll:
The technology is no longer helping anything, it is actually tearing our society apart.
Up to 2000s, things were indeed evolution, improvements, better life style be it personal or professional.
Since 2000s, Enshitification started, everything gets worse, from services, to workflows, to processes, to products, to laws.
Gen-Z does not realize how bad things are, and how we are no longer becoming smarter but dumber, kids cannot even read but have every single social media account.
If they could spend one day back in early 2000s, the current generation would start a civil war in every single city across the globe.
It won't be called coding soon; Sometime in the future (soon?) we won't be talking about code. The few leftovers/managers/CEOs will only be talking about products not the code, not programming, not even operating systems. You won't hear about pull requests, or databases, or HTTP or any of that. You won't talk about programmers. At least not outside of "hobbies".
I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.
They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
But sure. AI is the moment they lost track of what’s happening.
The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack.
AI is just the layer that made the pretence impossible to maintain."
Absolutely brilliant writing!
Heck -- absolutely brilliant communicating! (Which is really what great writing is all about!)
You definitely get it!
Some other people here on HN do too, yours truly included in that bunch...
>But sure. AI is the moment they lost track of what’s happening. The abstraction ship sailed decades ago.
Bullshit. While abstraction has increased over time, AI is no mere incremental change. And the almost natural language interaction with an agent is not the same as Typescript over assembly (not to mention you could very well right C or Rust and the like, and know most of the details of the machine by heart, and no, microcode and low level abstractions are not a real counter-argument to that). Even less so if agents turn autonomous and you just herd them onto completion.
This LLM stuff is a little weird. Previously we had Python which was pretty close to pseudocode but you could run it directly. Now, these LLMs are one step more abstract, but their outputs aren’t runnable directly, they produce possibly incorrect code-like-text. Actually this seems like good news for programmers since you have to read the code in the lower-level language that gets produced.
I have the opposite take. There’s nothing stopping you from jumping into any component to polish things up. You can code whatever you wish. And AI takes away nearly all of the drudgery : boilerplate, test cases, inspecting poor documentation, absurd tooling.
It also lets me focus more on improving things since I feel more liberated to scrap low quality components. I’m much braver to take on large refactors now – things that would have taken days now take minutes.
In many ways AI has made up for my growing lack of patience and inability to stay on task until 3am.
I have been around for a similar amount of time. Another change I have seen over the years is the shift from programming being an exercise in creative excellence at work to being a white-collar ditch-digging job.
“... when I was 7. I'm 50 now and the thing I loved has changed”
Welcome to the human condition, my friend. The good news is that a plurality of novels, TV shows, country songs, etc. can provide empathy for and insight into your experience.
I've written sse2 optimized C, web apps, and probably everything in between (hw, datasci, etl, devops).
I like coding with AI both vibe and assisted, since as soon as the question enters my head I can create a prototype or a test or a xyz to verify my thoughts. The whole time I'm writing in my notebook or whiteboard or any other thing I would have gotten up to. This is enabling tech, the trouble for me is there is a small thread that leads out of the room into the pockets of billion dollar companies.
It is no longer you vs the machine.
I have spent tons of time debugging weird undocumented hardware with throwaway code, or sat in a debugger doing hex math.
I think one wire that is crossed right now in this world is that computing is more corporate than ever, with what seems like ever growing platforms and wealth extraction at scale. Don't let them get you down, host your own shit and ignore them. YES IT WILL COST MORE -> YOUR FREEDOM HAS A PRICE.
Another observation is that people that got into the game for pure money are big mad right now. I didn't make money in the 00s, I did in the end of the 10s, and we're back at job desolation. In my groups, the most annoyed are code boot campers who have faked it until they made it and have just managed to survive this cycle with javascript.
Cycles come and go, the tech changes, but problem solving is always there.
The irony of these "My craft is dead" posts is that they consistently, heavily leverage AI for their writing. So you're crying about losing one craft to AI while using AI to kill another. It's disingenuous. And yes it is so damn obvious.
I think the Oxide computer LLM guidelines are wise on this front:
> Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
The heavy use of LLMs in writing makes people rightfully distrustful that they should put the time in to try to read what's written there.
Using LLMs for coding is different in many ways from writing, because the proof is more there in the pudding - you can run it, you can test it, etc. But the writing _is_ the writing, and the only way to know it's correct is to put in the work.
That doesn't mean you didn't put in the work! But I think it's why people are distrustful and have a bit of an allergic reaction to LLM-generated writing.
I mean, obviously you can't know your actual error rates, but it seems useful to estimate a number for this and to have a rough intuition for what your target rate is.
I agree with that for programming, but not for writing. The stylistic tics are obtrusive and annoying, and make for bad writing. I think I'm sympathetic to the argument this piece is making, but I couldn't make myself slog through the LinkedIn-bot prose.
This seems to be what is happening bots are posting things and bots are reading it. It's a bit like our wonderful document system (www) turned into an application platform. We gained the later but lost the former.
If you feel so strongly about your message, why would you outsource writing out your thoughts to such a large extent where people can feel how reminiscent it sounds of LLM writing instead of your own? It's like me making a blogpost by outsourcing the writing to someone on Fiverr.
Yes it's fast, it's more efficient, it's cheap - the only things we as a society care about. But it doesn't convey any degree of care about what you put out, which is probably desirable for a personal, emotionally-charged piece of writing.
I felt the same. I resonate with the message, but it really rings hollow with so much AI directing.
I'd wish people would stop doing that. AI writing isn't even particularly good. Its not like it makes you into Dostoevsky, it just sloppifies your writing with the same lame mannerisms ("wasn't just X — it was Y"), the same short paragraphs, the same ems.
I'm weird about this, I choose to use AI to get feedback on my writing, but refuse to just copy and paste the AIs words. I only do it if its a short work email and I really dont care about its short lived lifespan, if its supposed to be an email where the discussion continues, then I refine it. I can write a LOT. If HN has edit count logs, I've probably got the high score.
The author admits that they used AI but I found it not that obvious. What are telltale signs in this case? While the writing style is a little bit over-stylized (exactly three examples in a sentence, Blade Runner reference), I might write in a similar style about a topic that im very emotional about. The actual content feels authentic to me.
(1) The pattern "It's not just a X---It's a Y" is super common in LLM-generated text for some reason. Complete with em dash. (I like em dashes and I wish LLMs weren't ruining them for the rest of us)
"Upgrading your CPU wasn’t a spec sheet exercise — it was transformative."
"You weren’t just a user. You were a systems engineer by necessity."
"The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks."
And in general a lot of "It's not <alternative>, it's <something else>", with or without an em dash:
"But it wasn’t just the craft that changed. The promise changed."
it's really verbose. One of those in a piece might be eye-catching and make someone think, but an entire blog post made up of them is _tiresome_.
(2) Phrasing like this seems to come out of LLMs a lot, particularly ChatGPT:
"I don’t want to be dishonest about this. "
(3) Lots of use of very short catch sentences / almost sentence fragments to try to "punch up" the writing. Look at all of the paragraphs after the first in the section "The era that made me":
"These weren’t just products. " (start of a paragraph)
"And the software side matched." (next P)
"Then it professionalised."
"But it wasn’t just the craft that changed."
"But I adapted." (a few paragraphs after the previous one)
And .. more. It's like the LLM latched on to things that were locally "interesting" writing, but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!" completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.
> And .. more. It's like the LLM latched on to things that were locally "interesting" writing, but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!" completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.
It's like YouTube-style engagement maximization. Make it more punchy, more rapid, more impactful, more dramatic - regardless of how the outcome as a whole ends up looking.
I wonder if this writing style is only relevant to ChatGPT on default settings, because that's the model that I've heard people accuse the most of doing this. Do other models have different repetitive patterns?
Out of curiousity, for those who were around to see it: was writing on LinkedIn commonly like this, pre-chatGPT? I've been wondering what the main sources were for these idioms in the training data, and it comes across to me like the kind of marketing-speak that would make sense in those circles.
(An explanation for the emoji spam in GitHub READMEs is also welcome. Who did that before LLMs?)
This is not either of those. This is the equivalent of a eulogy to a passion and a craft. Using an LLM to write it: entire sections, headers, sentences - is an insult to the craft.
The post in the same vain, "We mourn our craft", did a much better job at this communicating the point without the AI influence.
At least then you’re being honest about you hating your intended audience, and not proudly posting the slop vomited forth from your algorithmic garbage machine as if it were something that deserved the time, thought and consideration of your equals.
I'm 57 and wrote my first line of BASIC in 1980, so while I can still chime in on this specific demographic I feel that I ought to. So im like this guy, but like a lot of other people in my specific demographic we aren't writing these long melancholy blog posts about AI because it's not that big of a deal. As an OSS maintainer most of my work is a lot of boring slog adding features to libraries to suit new features in upstream dependencies, nitpicky things people point out, new docs, tons of tedium. Claude helps a ton with all of that. no way is Claude doing the real architectural puzzle stuff, that's still fully on me! I can just use Claude to help implement it. It's like the ultimate junior programmer assistant. It's certainly a new, different and unique experience in one's programming career but it really feels like another tool, like an autocomplete or code refactoring tool that is just a lot better, with similar caveats. I mean in my career, I've had to battle the whole time people who don't "get" source code control (starting with me), who don't "get" IDEs (starting with me), people who dont "get" distributed version control (same), people who don't "get" ORMs (oh yes, same for me though this one I took much more dramatic steps to appreciate them), people who don't "get" code formatters, now we're battling people who don't "get" LLMs used for coding, in that sense the whole thing doesnt feel like that novel of a situation.
it's the LLMs that are spitting out fake photos and videos and generating lots of shitty graphics for local businesses, that's where I'm still wielding a pitchfork...
There's 3-4 of these posts a day - why don't people spend more time hand-building things for fun in their free time? That's what led a lot of us to this career path to start with. I have a solid mix of hand-code and AI-assisted projects in my free time.
>>The machines I fell in love with became instruments of surveillance and extraction.
Surveillance and Extraction
"We were promised flying cars", and what we got was "investors" running the industry off the cliff into cheap ways to extract money from people instead of real innovation.
This isn't new. It's the same feeling the first commercial programmers had working in assembly, or machine code, once compilers became available. Ultimately I think even Mel Kaye forsook being able to handpick memory locations for optimum drum access before his retirement, in favor of being able to build vastly more complex software than before.
AI has just vastly extended your reach. No sense crying about it. It is literally foolish to lament the evolution of our field into something more.
Don't program as a career, but am also 50 and programming since TRS-80. AI has transformed this era, and I LOVE IT! I can focus on making and not APIs or syntax or all of the bootstrapping.
Professional development is changing dramatically. Nothing stops anyone from coding "the old way," though. Your hobby project remains yours, exactly the way you want it. Your professional project, on the other hand, was never about you in the first place. It's always about the customer/audience/user, period full stop.
I’m 50 too and I’ve complained and yearned about the “old” days too, a lot of this is nostalgia as we reminisce about periods of time in our youth when we had the exuberance and time to play and build with technology of our own time
Working in AI startups strangely enough I see a lot of the same spirit of play and creativity applied to LLM based tools - I mean what is OpenClaw but a fun experiment
Those kids these days are going to reminisce about the early days of AI when prompts would be handwritten and LLMs would hallucinate
I’m not really sure 1983, 1993 or 2003 really was that gold of age but we look at it with rose colored glasses
11 and now 45. I am still interested in it, but I feel like in my 20s I would get a dopamine rush when a row showed up in a database. In my 30s I would get that only if a message passed through a system and updated on-screen analytics within 10 seconds. Thank god for LLMs because all of it became extremely boring, I can't stand having to get these little milestones each new company or each new product I'm working on. At least with LLMs the dopamine hit comes from being in awe of the code that gets generated and realizing it found every model, every messaging system interface, every API, and figuring out how to make it backwards compatible, updating the UI - something that would take half a day, now in 5 minutes or less.
> I’ve had that experience. And losing it — even acknowledging that it was lost
What are you talking about? You don't know how 99% of the systems in your own body work yet they don't confront you similarly. As if this "knowledge" is a switch that can be on or off.
> I gave 42 years to this thing, and the thing changed into something I’m not sure I recognise anymore.
Stop doing it for a paycheck. You'll get your brain back.
I'd feel the same when I was younger. Over time I've realized that they are the lucky ones. You too, if you're lucky, will one day be an old man doing old man things.
The computing the author enjoyed/enjoys is still out there, they are just looking for it in all the wrong places. Forget about (typical) web development (with its front and backend stacks). Forget about windows and macOS, and probably even mobile (though maybe not).
Hobby projects. C++/Rust/C/Go/some-current-Lisp. Maybe even Zig! Unix/Linux. Some sort of hardware interaction. GPL, so you can share and participate in a world of software created by people a lot more like you and a lot less like Gates and Jobs and Zuckerberg and ...
Sure, corporate programming generally tends to suck, but it always did. You can still easily do what you always loved, but probably not as a job.
At 62, as a native desktop C++ app developer doing realtime audio, my programming is as engrossing, cool, varied and awesome as it has ever been (probably even more so, since the GPL really has won in the world I live in). It hasn't been consumed by next-new-thing-ism, it hasn't been consumed by walled platforms, it hasn't been taken over by massive corporations, and it still very much involves Cool Stuff (TM).
Sure enjoy your retirement. But for me it's annoying some late 50s+ people telling what you just did. Think about people who are in their 20s or 30s - they are not even halfway through their path to retirement and some maybe even still paying out student debt.
> Stop whining and start doing stuff you love.
You have to understand that it's hard to do stuff that you love when you have to feed your family and pay mortgage or rent. Not everyone can be or want to be entrepreneur.
You are just talking from perspective of someone who already paid all debts raised all kids and now enjoying or soon will be enjoying retirement - at least meaning you can retire even if maybe don't want to.
Retired? I'm not retired and likely won't be for another 8 years.
> But for me it's annoying some late 50s+ people telling what you just did.
The author of TFA is at least 50!
> You are just talking from perspective of someone who already paid all debts raised all kids
That part is true. But that was more or less true when I was 50, too.
Finally, the article wasn't about the shitty economic world that we've created for so many people, it was about how programming has changed. Those two are inter-related but they are not the same.
I'm 61 (retired when I was 57).
I too began with BASIC (but closer to 1980). Although I wrote and published games for the Macintosh for a number of years as I finished up college, my professional career (in the traditional sense) began when I was hired by Apple in 1995 and relocated to the Bay Area.
Yeah, what started out as a great just got worse and worse as time went on.
I suspect though that to a large degree this reflects both the growing complexity of the OS over that time as well as the importance of software in general as it became more critical to people's lives.
Already, even in 1984 when it was first introduced, the Mac had a rich graphics library you would not want to have to implement yourself. (Although famously of course a few apps like Photoshop nonetheless did just that—leaning on the Mac simply for a final call to CopyBits() to display pixels from Adobe's buffer to the screen.)
You kind of have to accept abstraction when networking, multiple cores, multiple processes become integral to the machine. I guess I always understood that and did not feel too put out by it. If anything a good framework was somewhat of a relief—someone else's problem, ha ha. (And truly a beautiful API is just that: a beautiful thing. I enjoy working well constructed frameworks.)
But the latter issue, the increasing dominance of software on our lives is what I think contributed more to poisoning the well. Letting the inmates run the asylum more or less describes the way engineering worked when I began at Apple in 1995. We loved it that way. (Say what you want about that kind of bottom-up culture of that era, but our "users" were generally nerds just like us—we knew, or thought we knew anyway, better than marketing what the customer wanted and we pursued it.)
Agile development, unit tests, code reviews… all these weird things began to creep in and get in the way of coding. Worse, they felt like busywork meant simply to give management a sense of control… or some metric for progress.
"What is our code coverage for unit test?" a manager might ask. "90%," comes the reply from engineering. "I want to see 95% coverage by next month," comes the marching orders. Whatever.
I confess I am happy to have now left that arena behind. I still code in my retirement but it's back to those cowboy-programmer days around this house.
Yee haw!
you also have bozo managers like my recent one measuring metrics like PR time open, review count
while the team hasn't shipped anything useful in 6 months.
I'm a millennial, but I share some feelings. I also think modern programming careers often feel like factory jobs where most of the time you must be compliant with some BS. You often find the true joy only in personal projects.
I don’t have anything to add other than to say this was beautifully written.
My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself. If you were a smart dev before AI, chances are you will remain a smart dev with AI.
My experience so far is that to a first approximation, the quality of the code/software generated with AI corresponds to the quality of the developer using the AI tool surprisingly well. An inexperienced, bad dev will still generate a sub-par result while a great dev can produce great results.
The choices involved in using these tools are also not as binary as they are often made out to be, especially since agents have taken off. You can very much still decide to dedicate part of your day to chiseling away at important code to make it just right and make sure your brain is engaged in the result and exploring and growing with the problem at hand, while feeding background queues of agents with other tasks.
I would in fact say the biggest challenge of the AI tool revolution in terms of what to adapt to is just good ol' personal time management.
> If you were a smart dev before AI, chances are you will remain a smart dev with AI.
I don't think that's what people are upset about, or at least it's not for me. For me it's that writing code is really enjoyable, and delegating it to AI is hell on earth.
> For me it's that writing code is really enjoyable, and delegating it to AI is hell on earth.
It's very sad, for me.
Like I told someone recently - letting the LLM write my code for me is like letting the LLM play my video games for me.
If all I wanted was the achievement on my steam profile, then sure, it makes sense, but that achievement is not why I play video games.
I'm looking at all these people proudly showing off their video game achievements, gained just by writing specs, and I realise that all of them fail to realise that writing specs is a lower-skill activity than writing programs.
It also pays far, far less - a BA earns about half what an average dev earns. They're cosplaying at being BAs, not realising that they are now employed for a skill that pays less, and it's only a matter of time before the economics catch up to them.
I don't see a solution here.
My job for the last 8 years has involved
Talking to sales to get an idea what the customer wanted from the business side (first B2B at a product company and now consulting) -> talking to the customer and hashing out more detailed requirements -> designing the architecture and a proposed technical plan -> presenting it to the stakeholder (sometime internal sometime external) -> doing the work or delegating and leading the work -> presenting the work to the stakeholder and leading the UAT -> getting it to production.
The coding part has been a commodity for enterprise developers for well over a decade. I knew a decade ago that I wasn’t going to be 50 years old reversing b trees on a whiteboard trying to prove my worth.
Doing the work is the only thing that the AI does.
While I don’t make the eye popping BigTech comp (been there. Done that and would rather get a daily anal probe than go back), I am making more than I could make if I were still selling myself as someone who “codez real gud” as an enterprise dev.
Look, there are at least dozens of us who like and enjoy programming for programming's sake and got into this crazy industry because of that.
Many of these people made many of the countless things we take for granted every day (networking, operating systems, web search; hell, even the transformer architecture before they got productized!).
Seeing software development --- and software engineering by proxy --- get reduced to a jello that will be stepped on by "builders" in real-time is depressing as shit.
It's even more depressing to see folks on HACKER news boost the "programming never mattered" mentality that's taken hold these last few years.
Last comment I'll make before I step off my soapbox: the "codez real gud" folks that makes the big bucks bring way more to the table than their ability to code...but their ability to code is a big contributor to why they bring more to the table!
Well as depressing as it is, check out the 2024 and 2025 YC batches. Guess how many of them are “ai” something or other? It’s never been about “hackers”. Not a single founder who takes VC funding is thinking about a sustainable business - at least their investors aren’t - they are hoping for the “exit”.
It’s always been jello. I at 51 can wax poetically about the good old days or I can keep doing what I need to do to keep money appearing in my account.
> Talking to sales to get an idea what the customer wanted from the business side (first B2B at a product company and now consulting) -> talking to the customer and hashing out more detailed requirements -> designing the architecture and a proposed technical plan -> presenting it to the stakeholder (sometime internal sometime external) -> doing the work or delegating and leading the work -> presenting the work to the stakeholder and leading the UAT -> getting it to production.
You are not the first person to say things like this.
Tell me, you ever wondered why a person with a programming background was filling that role?
If not the technical person, then who? It’s a lot easier for a technical person to learn how to talk the language of the business than a business person to have a deep understanding of technology.
On the enterprise dev side of the industry where most developers work, I saw a decade ago that if I were just a ticket taker who turned well defined requirements into for loop and if statements, that was an undifferentiated commodity.
You’re seeing now that even on the BigTech side knowing how to reverse a binary tree on the whiteboard is not enough.
Also if you look at the leveling guidelines of any major tech company, their leveling guidelines above mid level are based on scope, impact and dealing with ambiguity - not “I codez real gud”
Those levels bake in the expectation of "codez real gud" at FAANG/MANGA/whatever style tech companies since the technical complexity of their operations is high and a high skill bar needs to be hurdled over to contribute to most of those codebases and make impact at the scale they operate at.
One's ability to reverse a binary tree (which is a BS filter, but it is what it is) hasn't been an indicator of ability in some time. What _is_ though, is the wherewithall to understand _when_ that's important and tradeoffs that come with doing that versus using other data structures or systems (in the macro).
My concern is that, assuming today's trajectory of AI services and tooling, the need to understand these fundamentals will become less important over time as the value of "code" as a concept decreases. In a world where prompting is cheap because AI is writing all the code and code no longer matters, then, realistically, tech will be treated even more aggressively as a line item to optimize.
This is a sad reality for people like me whose love for computers and programming got them into this career. Tech has been a great way to make a wonderful living for a long time, and it's unfortunate that we're robbing future generations of what we took for granted.
You give way too much credit to the average mid level developer at BigTech. A lot of the scalability is built in and they just built on top of it.
There are millions of people that can code as well as you are I and a lot cheaper if you are in the US. Thousands of developers have been laid off over the last three years and tech companies keep going strong - what does that tell you?
I’m just as happy to get away from writing for loops in 2026 as was to be able to get away with LDA, LDX and BRA instructions once I could write performant code in C.
> Also if you look at the leveling guidelines of any major tech company, their leveling guidelines above mid level are based on scope, impact and dealing with ambiguity - not “I codez real gud”
Your entire comment is this specific strawman - no one, and I mean no one, is making this claim! You are the only one who is (ironically, considering the job you do) too tone-deaf and too self-unaware to avoid making this argument.
I'm merely pointing out that your value-prop is based on a solid technical foundation, which I feel you agree on:
> If not the technical person, then who? It’s a lot easier for a technical person to learn how to talk the language of the business than a business person to have a deep understanding of technology.
The argument is not "Oh boo hoo, I wish I could spend 8 hours a day coding for money like I used to", so stop pretending like it is.
There is an entire contingent of comments here who miss translating requirements into code.
Even the comment I replied to mentioned “being a BA” like the most important quality of a software engineer is their ability to translate requirements into code.
> The argument is not
Then what is it.
be blunt and obvious in your reply or go home.
I've been coping by reminding myself that I was absurdly lucky to have found a job that was also enjoyable and intellectually stimulating for so long, and if all AI does is bring software engineering down to the level of roughly every other job in the world in terms of fun, I don't really have much ground to complain
I think you were incredibly lucky to get to write code that you enjoyed writing.
Most of the commercial code I've written, over a 30+ year career, has been shite. The mandate was always to write profitable code, not elegant code. I started (much like the OP) back in the 80's writing code as a hobby, and I enjoyed that. But implementing yet another shitty REST CRUD server for a shitty website... not so much.
I totally see a solution: get the LLM to write the shitty REST CRUD server, and focus on the hard bits of the job.
I cannot figure out what you mean by "BA" in this context
> I cannot figure out what you mean by "BA" in this context
Business Analyst - those people who learn everything about what the customers requirements, specs, etc are. What they need, what they currently have, how to best advise them, etc.
They know everything, except how to program.
> They know everything, except how to program
In my experience, they know nothing, including how to program.
I was a BA forever ago during a summer job in college. That job wasn't for me at all! Looking back on the experience, putting together a FRD felt much like writing a CLAUDE.md with some prompts thrown in!
Business Analyst
This is a part of it, but I also feel like a Luddite (the historical meaning, not the derogatory slang).
I do use these tools, clearly see their potential, and know full well where this is going: capital is devaluing labor. My skills will become worthless. Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there.
If I could destroy these things - as the Luddites tried - I would do so, but that's obviously impossible.
For now I'm forced to use them to stay relevant, and simply hope I can hold on to some kind of employment long enough to retire (or switch careers).
> know full well where this is going: capital is devaluing labor
But now you too can access AI labor. You can use it for yourself directly.
Kind of. But the outcomes likely do not benefit the masses. People "accessing AI labor" is just a race to the bottom. Maybe some new tools get made or small businesses get off the ground, but ultimately this "AI labor" is a machine that is owned by capitalists. They dictate its use, and they will give or deny people access to the machine as it benefits them. Maybe they get the masses dependent on AI tools that are currently either free or underpriced, as alternatives to AI wither away unable to compete on cost, then the prices are raised or the product enshittified. Or maybe AI will be massively useful to the surveillance state and data brokers. Maybe AI will simply replace a large percentage of human labor in large corporations, leading to mass unemployment.
I don't fault anyone for trying to find opportunities to provide for themselves and loved ones in this moment by using AI to make a thing. But don't fool yourself into thinking that the AI labor is yours. The capitalists own it, not us.
As someone who has leaned fully into AI tooling this resonates. The current environment is an oligopoly so I'm learning how to leverage someone else's tool. However, in this way, I don't think LLMs are a radical departure from any proprietary other tool (e.g. Photoshop).
Indeed. Do you know how many small consultancies are out there which are "Microsoft shops"? An individual could become a millionaire by founding their own and delivering value for a few high-roller clients.
Nobody says there's no money to make anymore. But the space for that is limited, no matter how many millions hustle, there's only 100 spots in the top 100.
what makes you think that's actually possible? maybe if you really had the connections and sales experience etc...
but also, if that were possible, then why wouldn't prices go down? why would the value of such labor stay so high if the same thing can be done by other individuals?
I saw it happen more back in the day compared to now. Point being, nobody batted an eyelash at being entirely dependent on some company's proprietary tech. It was how money was made in the business.
That is a fiction. None of us can waste tens of thousands of dollars whipping out a C compiler or web browser on a whim to test things.
If these tools improve to the point of being able to write real code, the financial move for the agent runners is to charge far more than they are now but far less than the developers being replaced.
> it’s obviously not going to stop there.
I don’t think it is obvious actually that you won’t have to have some expert experience/knowledge/skills to get the most out of these tools.
I think the keyword here is "some".
It already seemed like we were approaching the limit of what it makes sense to develop, with 15 frameworks for the same thing and a new one coming out next week, lots of services offering the same things, and even in games, the glut of games on offer was deafening and crushing game projects of all sizes all over the place.
Now it seems like we're sitting on a tree branch and sawing it off on both sides.
Today. Ask again in 6 months. A year.
People have been saying this for multiple years in a row now.
And it has been getting more true for years in a row.
Disagree entirely.
If you state “in 6 months AI will not require that much knowledge to be effective” every year and it hasn’t happened yet then every time it has been stated has been false up to this point.
In 6 months we can come back to this thread and determine the truth value for the premise. I would guess it will be false as it has been historically so far.
> If you state “in 6 months AI will not require that much knowledge to be effective” every year and it hasn’t happened yet then every time it has been stated has been false up to this point
I think that this has been true, though maybe not quiet a strongly as strongly worded as your quote says it.
The original statement was "Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there."
"full effect" is a pretty squishy term.
My more concrete claim (and similar to "Ask again in 6 months. A year.") is the following.
With every new frontier model released [0]:
1. the level of technical expertise required to achieve a given task decreases, or
2. the difficulty/complexity/size of a task that a inexperienced user can accomplish increases.
I think either of these two versions is objectively true looking back and will continue being true going forward. And, the amount that it increases by is not trivial.
[0] or every X months to account for tweaks, new tooling (Claude Code is not even a year old yet!), and new approaches.
Using a LLM to program is simply another abstraction level. Just how C was to assembly.
I feel like the nondeterminism makes LLM-assisted programming a different sort of concept than using a compiler. Your prompt isn't your source code.
Six months ago, we _literally did not have Claude Code_. We had MCP, A2A and IDE integrations, but we didn't have an app where you could say "build me an ios app that does $thing" and have it build the damn thing start to finish.
Three months ago, we didn't have Opus 4.5, which almost everyone is saying is leaps and bounds better than previous models. MCP and A2A are mostly antiquated. We also didn't have Claude Desktop, which is trying to automate work in general.
Three _weeks_ ago, we didn't have Clawdbot/Openclaw, which people are using to try and automate as much of their lives as possible...and succeeding.
Things are changing outrageously fast in this space.
> Six months ago, we _literally did not have Claude Code_.
Claude Code came out a year ago.
If I could destroy these things - as the Luddites tried - I would do so
Would travel agents have been justified in destroying the Internet so that people couldn't use Expedia?
> capital is devaluing labor
I guess the right word here is "disenfranchising".
Valuation is a relative thing based mostly of availability. Adding capital makes labor more valuable, not less. This is not the process happening here, and it's not clear what direction the valuation is going.
... even if we take for granted that any of this is really happening.
> If I could destroy these things - as the Luddites tried - I would do so, but that's obviously impossible.
Certainly, you must realize how much worse life would be for all of us had the Luddites succeeded.
If the human race is wiped out by global warming I'm not so sure I would agree with this statement. Technology rarely fails to have downsides that are only discovered in hindsight IMO.
Sure, but would it have been better or worse for the Luddites?
Or perhaps they would have advanced the cause of labor and prevented some of the exploitation from the ownership class. Depends on which side of the story you want to tell. The slur Luddite is a form of historical propaganda.
Putting it in today's terms, if the goal of AI is to significantly reduce the labor force so that shareholders can make more money and tech CEOs can become trillionaires, it's understandable why some developers would want to stop it. The idea that the wealth will just trickle down to all the laid off work is economically dubious.
Reaganomics has never worked
> Reaganomics has never worked
Depends how you look at it.
Trickle down economics has never worked in the way it was advertised to the masses, but it worked fantastically well for the people who pushed (and continue to push) for it.
> it worked fantastically well for the people who pushed (and continue to push) for it.
That would be "trickle up economics", though.
Sure, because it all trickles into their pockets.
problem today is that there is no "sink" for money to go to when it flows upwards. we have resorted to raising interest rates to curb inflation, but that doesn't fix the problem, it just gives them an alternative income source (bonds/fixed income)
I'm not a hard socialist or anything, but the economics don't make sense. if there's cheap credit and the money supply perpetually expands without a sink, of course people with the most capital will just compound their wealth.
so much of the "economy" orbits around the capital markets and number going up. it's getting detached from reality. or maybe I'm just missing something.
Yeah it's called wealth transfer and the vast majority is on the wrong end.
The historical luddites are literally the human death drive externalized. Reject them and all of their garbage ideas with extreme prejudice.
Related, the word “meritocracy” was coined in a book which was extremely critical of the whole concept. AI thankfully destroys it. Good riddance, don’t let the door hit your ass on the way out.
https://en.wikipedia.org/wiki/The_Rise_of_the_Meritocracy
You can reject the ideas in the aggregate. Regardless, for the individual, your skills are being devalued, and what used to be a reliable livelihood tied to a real craft is going to disappear within a decade or so. Best of luck
> The historical luddites are literally the human death drive externalized. Reject them and all of their garbage ideas with extreme prejudice.
Yes, because fighting for the rights of laborers is obviously what most people hate.
For a different perspective:
"Except the Luddites didn’t hate machines either—they were gifted artisans resisting a capitalist takeover of the production process that would irreparably harm their communities, weaken their collective bargaining power, and reduce skilled workers to replaceable drones as mechanized as the machines themselves."
https://www.currentaffairs.org/news/2021/06/the-luddites-wer...
> For me it's that writing code is really enjoyable, and delegating it ...
This.
On my fun side project, I don't accept pull requests because writing the code is the fun part.
Only once did someone get mad at me for not accepting their pull request.
I resonate with that. I also find writing code super pleasurable. It's immediate stress relief for me, I love the focus and the flow. I end long hands-on coding sessions with a giddy high.
What I'm finding is that it's possible to integrate AI tools into your workflow in a big way without giving up on doing that, and I think there's a lot to say for a hybrid approach. The result of a fully-engaged brain (which still requires being right in there with the problem) using AI tools is better than the fully-hands-off way touted by some. Stay confident in your abilities and find your mix/work loop.
It's also possible to get a certain version of the rewards of coding from instrumenting AI tools. E.g. slicing up and sizing tasks to give to background agents that you can intuit from experience they'll be able to actually hand in a decent result on is similar to structuring/modularization exercises (e.g. with the goal to be readable or maintainable) in writing code, feelings-wise.
I'm in the enjoy writing code camp and do see merits of the hybrid approach, but I also worry about the (mental) costs.
I feel that for using AI effectively I need to be fully engaged with both the problem itself and an additional problem of communicating with the LLM - which is more taxing than pre-LLM coding. And if I'm not fully engaged those outcomes usually aren't that great and bring frustration.
In isolation, the shift might be acceptable, but in reality I'm still left with a lot of ineffective meetings - only now without coding sessions to clear my brain.
I think an additional big part of why LLM-aided coding is so draining is that it has you constantly refreshing your mental model of the code.
Making sense of new or significantly changed code is very taxing. Writing new code is less taxing as you're incrementally updating the model as you go, at a pretty modest pace.
LLMs can produce code at a much higher rate than humans can make sense of it, and assisted coding introduces something akin to cache thrashing, where you constantly need to build mental models of the system to keep up with the changes.
Your bandwidth for comprehending code is as limited as it always was, and taxing this ability to its limits is pretty unpleasant, and in my experience, comes at a cost of other mental capabilities.
I think this is subjective, I personally enjoy "managing" agents more than handwriting code and dealing with syntax
At the very least, it feels ergonomic and saves me keystrokes in the same way as stuff like snippets & aliases
Hope: I want to become a stronger dev.
Reality: Promoted to management (of AI) without the raise or clout or the reward of mentoring.
LLMs are similar in a lot of ways to the labor outsourcing that happened a generation or two ago. Except that instead of this development lifting a billion people out of poverty in the third world a handful of rich people will get even more rich and everyone else will have higher energy bills.
> ...the reward of mentoring.
I really feel this. Claude is going to forget whatever correction I give it, unless I take the time and effort to codify it in the prompt.
And LLMs are going to continue to get better (though the curve feels like it's flattening), regardless of whatever I do to "mentor" my own session. There's no feeling that I'm contributing to the growth of an individual, or the state-of-the-art of the industry.
AIs have made me realize that I don't actually care about writing code, even though it's all I've done for my entire career.
I care about creating stuff. How it gets from the idea in my brain to running on the computer, is immaterial to me.
I really like that I go from idea to reality in half the time.
Same here, and I also really enjoy the high level design/structure part of it.
THAT part doesn't mesh too well with AI, since it's still really bad at autonomous wholistic level planning. I'm still learning how to prompt in a way that results in a structure that is close to what I want/reasonable. I suspect going a more visual block diagram route, to generate some intermediate .md or whatever, might have promise, especially for defining clear bounds/separation of concerns.
Related, AI seems to be the wrong tool for refactoring code (I recently spent $50 trying to move four files). So, if whatever structure isn't reasonable, I'm left with manually moving things around, which is definitely un-fun.
Definitely go for that middle step. If it's something bigger I get them to draw out a multi-phase plan, then I go through and refine that .md and have them work from that.
Same.
I've been exploring some computer vision recognition stuff. Being able to reason through my ideas with an LLM, and make visualizations like t-SNE to show how far apart a coke can and a bag of cheetos are in feature-space has been mind blowing. ("How much of a difference does tint make for recognition? Implement a slider that can show that can regenerate the 512-D features array and replot the chart")
It's helping me get an intuitive understanding 10x faster than I could reading a textbook.
exactly
thankfully I started down the FIRE route 20 years ago and now am more or less continuing to work because I want to
which will end for my employer if they insist on making me output generative excrement
There's room for both. Give AI the boilerplate, save the exciting stuff for you.
but are employers going to be fine with that?
That remains to be seen. As long as the work gets done... Don't ask, don't tell.
It does NOT remain to be seen. https://www.cnbc.com/2025/09/26/accenture-plans-on-exiting-s... Big players are already moving in the direction of "join us or leave us". So if you can't keep up and you aren't developing or "reinventing" something faster with the help of AI, it was nice knowing you.
I didn't say don't use AI at all, I said give it the boilerplate, rote work. Developers can still work on more interesting things. Maybe not all the interesting things.
That may be fine ... if it remains your choice. I'm saying companies are outmoding people (programmers, designers, managers, et al) who don't leverage AI to do their job the fastest. If one programmer uses AI to do boilerplate and then codes the interesting bits personally and it takes a week and another does it all with AI (orchestrating agents, etc) and it takes 2 hours and produces the same output (not code but business value), the AI orchestrator/manager will be valued above the former.
I get your point, but I think smart people will figure out a balance. That 2 hours of output could take a week to debug and test.
Yes! I am not advocating for the 2 hours and the "vision" of managers and CEOs. Quite the contrary. But it is the world we live in for now. It's messy and chaotic and many people may (will?) be hurt. I don't like it. But I'm trying to be one of the "smart people". What does that look like? I hope I find out.
I don't like it, either. I hear people ranting about doing "everything with AI" on one meeting, and what a productivity boost it is, then I get tagged on a dumpster fire PR full of slop and emoji filled log statements. Like did you even look at your code at all? "Oh sorry I don't know how that got in there!"
These are the same employers that mandate return to office for distributed teams and micro-manage every access of our work. I think we know how its going to play out.
Woodworking is still a thing despite IKEA, big box furniture stores, etc.
People will pay for quality craftsmanship they can touch and enjoy and can afford and cannot do on their own - woodworking. Less so for quality code and apps because (as the Super Bowl ads showed us) anyone can create an app for their business and it's good enough. The days of high-paid coders is nearly gone. The senior and principals will hang on a little longer. Those that can adapt to business analyst mode and project manager will as well (CEOs have already told us this: adapt or get gone), but eventually even they will be outmoded because why buy a $8000 couch when I can buy one for $200 and build it myself?
Then don't delegate it to AI.
I like writing new, interesting code, but learning framework #400 with all its own idiosyncrasies has gotten really old.
I just rebuilt a fairly simple personal app that I've been maintaining for my family for nearly 30 years, and had a blast doing with an AI agent - I mostly used Claude Sonnet 4.5. I've been dreading this rebuild mostly because it's so boring; this is an app I built originally when I was 17, and I'm 43 now. I treated Claude basically like I'd treat my 17-year-old self, and I've added a bunch of features that I could never be assed to do before.
i agree. it seem like an expectation these days to use AI sometimes... for me i am happy not using it at all, i like to be able to say "I made this" :)
I suppose the question is "Do you feel Steve Jobs made the iPhone?"
Not saying right/wrong but it's a useful Rorschach Test - about what you feel defines 'making this'?
it's more just a personal want to be able to see what I can do on my own tbh; i don't generally judge other people on that measure
although i do think Steve Jobs didn't make the iPhone /alone/, and that a lot of other people contributed to that. i'd like to be able to name who helps me and not say "gemini". again, it's more of a personal thing lol
So not disagreeing as you say, it is a personal thing!
I honestly find coding with AI no easier than coding directly, it certainly does not feel like AI is doing my work for me. If it was I wouldn't have anything to do, in reality I spend my time thinking about much higher level abstractions, but of course this is a very personal thing too.
I myself have never thought of code as being my output, I've always enjoyed solving problems, and solutions have always been my output. It's just that before I had to write the code for the solutions. Now I solve the problems and the AI makes it into code.
I think that this probably the dividing line, some people enjoy working with tools (code, unix commands, editors), some people enjoy just solving the problems. Both of course are perfectly valid, but they do create a divide when looking at AI.
Of course when AI starts solving all problems, I will have a very different feeling :-)
I’m not worried about being a good dev or not but these AI things thoroughly take away from the thing I enjoy doing to the point I’d consider leaving the industry entirely
I don’t want to wrangle LLMs into hallucinating correct things or whatever, I don’t find that enjoyable at all
I've been through a few cycles of using LLMs and my current usage does scratch the itch. It doesn't feel like I've lost anything. The trick is I'm still programming. I name classes and functions. I define the directory structure. I define the algorithms. By the time I'm prompting an LLM I'm describing how the code will look and it becomes a supercharged autocomplete.
When I go overboard and just tell it "now I want a form that does X", it ends up frustrating, low-quality, and takes as long to fix as if I'd just done it myself.
YMMV, but from what I've seen all the "ai made my whole app" hype isn't trustworthy and is written by people who don't actually know what problems have been introduced until it's too late. Traditional coding practices still reign supreme. We just have a free pair of extra eyes.
Serious question: so what then is the value of using an LLM? Just autocomplete? So you can use natural language? I'm seriously asking. My experience has been frustrating. Had the whole thing designed, the LLM gave me diagrams and code samples, had to tell it 3 times to go ahead and write the files, had to convince it that the files didn't exist so it would actually write them. Then when I went to run it, errors ... in the build file ... the one place there should not have been errors. And it couldn't fix those.
I also use AI to give me small examples and snippets, this way it works okay for me
However this still takes away from me in the sense that working with people who are using AI to output garbage frustrates me and still negatively impacts the whole craft for me
Hah. I don't work with (coding) people, so thankfully I don't have that problem
> My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself. If you were a smart dev before AI, chances are you will remain a smart dev with AI.
We replaced the chess board in the park with an app that compares the Elo score of you and your opponent, and probabilistically declares a winner.
But don't worry, if you were a good chess player before we introduced the app, chances are you will remain a good one with the app. The app just makes things faster and cheaper.
My advice to the players is to quit mourning the loss of the tension, laughter and shared moments that got them into chess in the first place.
>We replaced the chess board in the park with an app that compares the Elo score of you and your opponent, and probabilistically declares a winner.
The chess board is still there, not sure I see how LLM tools compels one to stop writing personal projects without AI assistance.
I think there is more existential fear that is left unaddressed.
Most commenters in this thread seem to be under the impression that where the agents are right now is where they will be for a while, but will they? And for how long?
$660 billion is expected to be spent on AI infrastructure this year. If the AI agents are already pretty good, what will the models trained in these facilities be capable of?
Yes, absolutely. I think the companies that don't understand software, don't value software and that think that all tech is fundamentally equivalent, and who will therefore always choose the cheaper option, and fire all their good people, will eventually fail.
And I think AI is in fact a great opportunity for good devs to produce good software much faster.
I agree with the quality comments. The problem with AI coding isn't so much the slop, it's the developers not realizing its slop and trying to pass it off as a working product in code reviews. Some of the stuff I've reviewed in the past 6 months has been a real eye opener.
I think the issue is that given the speed the bad dev can generate sub-par results that at face value look good enough overwhelm any procedures in place.
Pair that with management telling us to go with AI to go as fast as possible means that there is very little time to do course correction.
I think it represents a bigger threat than you realize. I can't use an AI for my day job to implement these multi-agent workflows I see. They are all controlled by another company with little or no privacy guarantees. I can run quantized (even more braindead) models locally but my work will be 3-5 years behind the SOTA, and when the SOTA is evolving faster than that timeline there's a problem. At some point there's going to be turnover - like a lake in winter - where AI companies effectively control the development lifecycle end-to-end.
I think no one is better positioned to use these tools than experienced developers.
For me the problem is simple: we are in an active prisoner's dilemma with AI adoption where the outcome is worse collectively by not asking the right questions for optimal human results, we are defecting and using ai selfishly because we are rewarded by it. There's lots of potential for our use to be turned against us as we train these models for companies that have no commitment to give to the common good or return money to us or to common welfare if our jobs are disrupted and an AI replaces us fully.
> My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself.
I do try to do that and have convinced myself that nothing has really changed in terms of what is important and that is systems thinking. But it's just one more barrier to convincing people that systems thinking is important, and it's all just exhausting.
Besides perhaps my paycheck, I have nothing but envy for people who get to work with their hands _and_ minds in their daily work. Modern engineering is just such a slog. No one understands how anything works nor even really wants to. I liken my typical day in software to a woodworker who has to rebuild his workshop everyday to just be able to do the actual woodworker. The amount of time I spend in software merely to being able to "open the door to my workshop" is astounding.
One thing I'm hoping will come out of this is the retiring of coders that always turn what should be a basic CRUD app (just about everything) into some novelty project trying to pre-solve every possible concern that could ever come up, and/or a no-code solution that will never actually get used by a non-developer and frustrate every developer that is forced to use it.
It's a combination of things... it's not just that AI feels like it is stripping the dignity of the human spirit in some ways, but it's also that the work we are doing is often detrimental to our fellow man. So learning to work with AI to do that faster (!!) (if it is actually faster on average), feels like doubling down.
Ironically this post comes across to me as written by an LLM. The em-dashes, the prepositions, the "not this, that" lines. As a college instructor, I can usually tell. I put it through GPTZero and it said it's 96% LLM written. GPTZero is not full-proof but I think it's likely right on this one and I find it very ironic.
fwiw, it's 'foolproof' not 'full-proof'
I think he might mean it does not fully prove anything. Is the dash an awkward attempt at a pun?
> Upgrading your CPU wasn’t a spec sheet exercise — it was transformative.
Pangram says 87%, and they are the gold standard right now.
Wow... I really relate to this. I'm 50 as well, and I started coding in 1985 when I was 10... I remember literally every evolutionary leap forward and my experience with this change has been a bit different.
Steve Yegge recently did an interview on vibe coding (https://www.youtube.com/watch?v=zuJyJP517Uw) where he says, "arch mage engineers who fell out-of-love with the modern complexity of shipping meaningful code are rediscovering the magic that got them involved as engineers in the first place" <-- paraphrased for brevity.
I vividly remember, staying up all night to hand-code assembler primitive rendering libraries, the first time I built a voxel rendering engine and thinking it was like magic what you could do on a 486... I remember the early days at Relic, working on Homeworld and thinking we were casting spells, not writing software. Honestly, that magic faded and died for me. I don't personally think there is magic in building a Docker container. Call me old-fashioned.
These days, I've never been more excited about engineering. The tedium of the background wiring is gone. I'm back to creating new, magical things - I'm up at 2 AM again, sitting at my desk in the dark, surrounded by the soft glow of monitors and casting spells again.
[55yo] My sense is that those problems we worked on in the 80s and 90s were like the perfectly balanced MMORPG. The challenges were tough, but with grit, could be overcome and you felt like you could build something amazing and unique. My voxel moment was passing parameters in my compilers class in college. I sat down to do it and about 12 hours later I got it working, not knowing if I could even do it.
With AI, it is like coding is on GOD mode and sure I can bang out anything I want, but so can anyone else and it just doesn't feel like an accomplishment.
You switch difficulties, like you do in a game. Play on Hard or Survival mode now. Build greater and more amazing things than you ever did before.
We have never, ever, written what the machine executes, even assembly is an abstraction, even in a hex editor. So we all settle for the level of abstraction we like to work at. When we started (those of our age) most of us were assembly (or BASIC) programmers and over time we either increased our level of abstraction or didn't. If you went from assembly -> C -> Java/Python you moved up levels of abstraction. We're not writing in Python or C now, we are writing in natural language and that is compiled to our programming languages. It's just the compiler is still a bit buggy and opinionated!! And yes for some low level coding you still want to check the assembly language, some things need that level of attention.
I learn more in a day coding with AI than I would in a month without it, it's a wonderful two-way exchange, I suggest directions, it teaches me new libraries or techniques that might solve the problem. I lookup those solutions and learn more about my problem space. I feel more like a university student some days than a programmer.
Eventually this will probably be the end of coding and even analytical work. But I think that part is still far off (and possibly longer than we'll still be working for) in the meantime actually this for me is as exciting as the early days of home computing. It won't be fun for ever, the Internet was the coolest thing ever, until it wasn't, but doesn't mean we can't enjoy the summer while it's summer.
> With AI, it is like coding is on GOD mode and sure I can bang out anything I want, but so can anyone else and it just doesn't feel like an accomplishment.
That's the thing - prompting is lower-skill work than actually writing code.
Now that actually writing code has less value than prompting, and prompting is lower skill than writing code, in what world do you think that the pay will remain the same?
> Now that actually writing code has less value than prompting, and prompting is lower skill than writing code, in what world do you think that the pay will remain the same?
Don't you think people said the same thing C and Python? Isn't Python a lower skill than C for example?
> Don't you think people said the same thing C and Python?
Maybe. Are they here now?
> Isn't Python a lower skill than C for example?
No. Being able to solve a problem using Python over C is not even in the same class of being able to solve a problem by asking for it in English.
> in what world do you think that the pay will remain the same?
It can, but now you output must be a min of 2x.
> It can, but now you output must be a min of 2x.
Great! I turn from a creator to a babysitter of creators. I'm not seeing the win here.
FWIW, I use LLMs extensively, but not to write the code, to rubber-duck. I have yet to have any LLM paired with any coding agent give me something that I would have written myself.
All the code is at best average. None of the smart stuff comes from them.
>With AI, it is like coding is on GOD mode and sure I can bang out anything I want, but so can anyone else and it just doesn't feel like an accomplishment.
I think it's possible that we'll get to the point where "so can anyone else" becomes true, but it isn't today for most software. There's significant understanding required to ask for the right things and understand whether you're actually getting them.
That said, I think the accomplishment comes more so from the shaping of the idea. Even without the typing of code, I think that's where most of the interesting work lies. It's possible that AI develops "taste" such that it can sufficiently do this work, but I'm skeptical it happens in the near term.
I think there's still quite a chasm out there. Domain knowledge, an informed and opinionated view on how something should function, and overall tech knowledge are still key. Having those three things continues to greatly differentiate people of equal coding skill, as they always have.
That’s something LLMs are also presumably good at. At least I’m seeing more and more push to use LLMs at work for ambitious business requirements instead of learning about the problem we’ve been dealing with. Instead of knowing why you are doing what you’re doing, now people are just asking LLMs for specific answers and move on.
Sure some might use it to learn as well, but it’s not necessary and people just yolo the first answer claude gives to them.
That's because it's like summiting a mountain by taking a skilift to the top. You don't really need to put in the work and anyone can do it.
Sure, and if the reason you're going to the top of the mountain is to deliver supplies to people who need them, you should absolutely take the lift.
Sure but here OP was left wondering why prompting didn't make them feel like they had done/accomplished anything. And the reason is because they didn't do anything worthy of giving them a feeling of accomplishment.
but so can anyone else and it just doesn't feel like an accomplishment.
So it's not enough that you get to do cool stuff, the important part is that nobody else gets to. Is that it?
If so, other sites beckon.
No, anyone can do it.
And that's exactly what the person I was replying to seems to be complaining about.
So many people on "Hacker" News could benefit from reading the canonical text on the subject by Steven Levy. A true hacker wants to bring the fire down the mountain. People around here just want to piss on it.
No, he's complaining about changes. Everyone can do it and that's not a change. Everyone could always do it.
> I don't personally think there is magic in building a Docker container. Call me old-fashioned.
This seems like a false dichotomy. You don't have to do this. It is still possible to build magical things. But agents aren't it, I don't think.
It is honestly extremely depressing to read this coming from a founder of Relic. Relic built magic. Dawn of War and Company of Heroes formed an important part of my teenage years. I formed connections, spent thousands of hours enjoying them together with other people, and pushed myself hard to become one of the top 100 players on the CoH leaderboards. Those competitive multiplayer games taught me everything there was to know about self-improvement, and formed the basis of my growth as an individual - learning that if I put my mind to it, I could be among the best at something, informed my worldview and led me to a life of perpetually pushing myself to further self-improvement, and from there I learned to code, draw, and play music. All of that while being part of amazing communities where I formed friendships that lasted decades.
All of this to say, Relic was magic. The work Relic did profoundly impacted my life. I wonder if you really believe your current role, "building trust infrastructure for AI agents", is actually magic? That it's going to profoundly impact the lives of thousands or millions?
I'm sorry for the jumbled nature of this post. I am on my phone, so I can't organize my thoughts as well as I would like. I am grateful to you for founding Relic, and this post probably comes off stupidly combative and ungrateful. But I would simply like to pose to you, to have a long think if what you're doing now is really where the magic is.
Edit: On further consideration, it's not clear the newly-created account I'm responding to is actually Alex Garden. The idea of potentially relating this personal anecdote to an impersonator is rather embarrassing, but I will nonetheless leave this up in the hope that if there are people who built magical things reading this, regardless of whether they're Alex Garden or someone else, that it might just inspire them to introspection about what building magic means, about the impact software can have on people's lives even if you don't see it, and whether this "agent" stuff is really it.
>The idea of potentially relating this personal anecdote to an impersonator is rather embarrassing
Good news! You've also related it to the roughly ~3-10M monthly HN readers who are not (potentially) impersonating the founder of a beloved game studio.
Also: I think you're probably safe. I'm sure someone at some point has come to HN to LARP as some prominent person in tech that they don't happen, at that specific moment, to actually be... but I can't really think of it happening before, nor would I expect it to take the form of a particularly thoughtful comment if a troll did that. Though with AI these days, who knows? I might myself just be one of a swarm of clawd/molt/claw things. In which case I'd be the last to even know it.
Oh-- as for being depressed about their docker/wiring things up sentiment. Try not to be, and instead, consider: Is it a surprise that someone who founded such a place as relic was occasionally-- even often-- frustrated at the things they had to clear away to build the thing they actually wanted to build? People who want to build amazing experiences may not love having to clear clutter that gets in their way. Other people want to build the tools that clear clutter, or other things that keep the whole system going. Those are beautiful too.
Good news is, no need to be embarrassed: https://s.h4x.club/L1uZqNW4
Oh man, modding Company of Heroes was one of the things that got me into programming. I look back fondly on those memories.
If we've arrived at the point where bots are impersonating me (instead of the billions of other choices), I'm probably at peak Alex. I'll light a candle. So... easy to disambiguate this one.
I got the idea for Homeworld one night when I was about 21. At the time, I was working at EA as a programmer on Triple Play 98 (building FE gfx - not glamorous). In an RTS-ironic twist of fate, my boss and mentor at the time was Chris Taylor - go figure.
Friends of mine had their own game company and had boxed themselves into a technical corner they couldn't get out of, so I agreed to write a bunch of sprite conversion code for them after hours. That night, we were all working in a room, talking about the reasons X-Wing vs. Tie Fighter didn't work on a 2D screen (hold up and left till you turn inside and shoot) and how Battlestar Galactica didn't get the cred it deserved, and BOOM - in my mind I saw ships in 3D with trails behind them. Inside a crystal sphere like Ptolomy's theory of the universe (man inside - god outside), and I saw that the surface of a sphere is 2D, so you could orbit OUTSIDE with a mouse... it looked like spaghetti floating in zero g... that's why Homeworld's working title was "Spaghetti Ball" for months.
Fortunately for me, in this ambiguous thread, I can give you all the proof of life you want. Try me.
Now... is transparent and trustworthy casting spells? Yeah... it is, but not by itself. It's a primitive - a building block. My personal projects (that I do think are magical) kept running into the same problems. Effectively, "how do I give up the keys if I don't really know what the driver is going to do?" I tried coming at this problem 10 different ways, and they all ended up in the same place.
So I decided to go back to the basics - the putpixel(x,y) of agentic workflows, and that led me to transparency and trust. And now, the things I'm building feel magical AND sustainable. Fun. Fast... and getting faster. I love that.
At Relic, our internal design philosophy was "One Revolutionary and Multiple Evolutionary". The idea was that if you tried to do more than one mind-blowing new thing at a time, the game started feeling like work. You can see this in the evolution of design from Homeworld to DoW to CoH (and in IC too, but let's face it, that game had issues <-- my fault).
Now... on the topic of "Is agentic coding better or worse", I feel like that's asking "is coding in assembler better or worse". The answer (at least used to be) "it depends"... You're on a continuum, deciding between traditional engineering (tightly controlled and 100% knowable) and multi-agentic coding (1,000x more productive but taking a lot for granted). I've found meaning here by accepting that full-power multi-agentic harnesses (I rolled my own - it's fucking awesome) turn software engineering into Socratic debate and philosophy.
I don't think it's better. It's just different, and it lets you do different things.
I remember a magazine cover that labeled you a gaming god, hard to peak beyond that! The quote you provided back then resonates perfectly with what you describe here: "If there's one message I like to get across to people, I like them to really and truly embrace [the fact that] anything that your imagination can conceive of is possible."
- https://hl-inside.me/magazines/pc-gamer-us/PC-Gamer_2000-11_...
Thank you for creating Homeworld, it truely was a memorable experience.
I started a bit younger and am a bit older, and relate. But only so much. I started programming in 3rd grade (also BASIC) when I found a computer and learned how to play a game on it, then found the source code for the game and randomly started changing it. In 7th grade I was paid to port a BASIC program to C (super new at the time), which I did on paper because I didn't own a computer (I used the money to buy my first). To be clear, I was really bad a programming for a long time and simply kept at it until I wasn't.
I love messing about with computers still. I can work at the byte level on ESP-32s on tiny little devices, and build massive computation engines at the time time on the same laptop. It's amazing.
I feel for those who have lost their love of this space, but I have to be honest: it's not the space that's the problem. Try something new, try something different and difficult or ungainly. Do what you rail against. Explore.
That's what it's always been about.
Maybe you should disclose you work at an AI startup: "Now building trust infrastructure for AI agents at Mnemom (mnemom.ai)"
Casts your comment in a different light, I think.
Appeal to identity. Prejucide and bias. Not considering an enthusiast of a technology might actually want to get paid working with that technology. Shameful comment all around.
Disclosing conflicting of interest is standard practice. People writing about economics do disclose when holding relevant shares.
In the end, it's a simple question: Are the opinions stated sincere or does the author have a pecuniary interest which might make things a bit more subjective?
What is the conflict of interest? Guy working for AI company says he likes working with AI?
For me it's both - I mourn the loss of my craft ( and my identity ) but I'm also enjoying the "magic".
Last night I was thinking about this "xswarm" screen saver I had in 1992 on my DEC Ultrix workstation. I googled for the C source code and found it.
I asked Claude to convert it to Java, which it did in a few seconds. I compiled and ran it, and there it was again, like magic
In case anyone else was curious about the screensaver mentioned, I couldn't find any screenshots so just got Claude to cook up an HTML port: https://refset.github.io/xgrav-canvas-js/xgrav.html
It's this one with bees and a wasp
https://sources.debian.org/src/xlockmore/4.12-4/modes/swarm....
I couldn't agree more. Also, thanks for making Homeworld, it was great!
I was building a 3D space game engine myself as a kid around the time Homeworld came out and realized that rather than using a skybox with texture maps, you had it created out of a bunch of triangles with color interpolation.
IIRC, I had problems reverse engineering your data format in order to incorporate them in my engine. I emailed someone on your team and was very surprised to get a reply with an explanation, which helped me finish that feature.
The skybox with texture maps was our original plan too. The problem was that GPUs didn't have enough RAM to hold anything high-res, so the universe looked like pixel-soup.
Rob Cunningham (lead artist) had the idea of "painting with light" using giant polygons and spicing them up with pixels to create a convincing distant galaxy that you got closer to with each mission. Genius.
The original Homeworld team was casting spells!
I'm still amazed by how you got ships to usually fly in formation, but also behave independently and rationally when that made sense.
That game was a magnificent piece of art. It set a unique and immersive vibe on par with the original Tron movie. I'm really glad I have a chance now to tell you.
Thanks... It was magical at the time... I've thought a lot about why it was magical over the years... I think if you boil away all the space stuff, Homeworld was a story about people who knew in their hearts that they were special and destined for something greater than the universe was willing to allow. And they went through hell to discover that they were right. Looking back, I think that's a story a lot of us on this thread (inc. me) can relate to.
Here we are. Looks like the dorks won.
Sounds like there's some overlap with the story of the Jewish people, now that I think of it.
> Here we are. Looks like the dorks won.
I doubt it's permanent, and we all gotta eat.
But you know what? My son still tells me how much he was in awe of that game when he saw me playing it.
No matter what happens next, you gave us that sweet memory of fun and time together. Thank you.
Amen to this. The optimization the team did blows my mind… whenever I think of it I think of if someone made Crysis run on the NES without compromises.
The soundtrack was stellar, and introduced me to Barber (Adagio for Strings).
In the second half of my 40s now and I'm in the same boat. I started slapping keys on a c64 when I was 2 years old. Really enjoyed software development until 10-15 years ago. With the current LLM tooling available the number of systems I've been able to build that are novel and tackle significant problems has been kind of mind blowing over the past 8 months or so.
Staying up late, hacking away at stuff like I used to, and it's been a blast.
Finally, Homeworld was awesome and it felt magical playing it.
I'll join the chorus of praise for Homeworld. It was a big part of that era for me. I must have spent hours just zooming the camera as close as I could get to all the different ships, or just watching the harvesters do their thing. Almost meditative, looking back. Thank you for casting your spells!
I'm feeling the same.
AI development actually feels like a similar rate of change. It took 8 years to go from the Atari 2600 to the Amiga.
An 8 year old computer doesn't quite capture the difference today.
First of all, Homeworld was an iconic game for me growing up, so as other people have said, thank you for being apart of its creation.
I could not agree more. It feels like the creativity is back. I grew up building fun little websites in the 90s, building clan websites for Quake 2.
That creativity died somewhere between Node.js, AWS, npm, and GitHub.
Some might say, well, that's growing up and building serious apps.
Maybe. But it doesn't change that I spent the last 15 years doing the same frontend / backend wiring over and over again to churn out a slightly different looking app.
The last 2 years have been amazing for what I do. I'm no longer spending my time wiring up front ends. That's done in minutes now, allowing me to spend my time thinking about solving the real problems.
Wow, Alex Garden on Hackernews. Hello fellow canuck. I'm now getting up there, still a few years shy of y'all but not much. I came up through the 90s and early 2000s, all web/linux stuff, irc servers, bash scripts, python, weird php hacks, whatever, I was a kid. I'd lose track of time, It was Monday night after high school then all of a sudden it was Sunday morning and I was talking on irc about the crazy LAMP stack I'd put together. 2am? pfft, what is sleep?! Sadly with very strong dyslexia and dyscalculia, being a real programmer was never in the cards for me, I understood how everything worked, I can explain the whole thing end to end in great depth, but ask me predictably how to do a table in html or some fairly simple CSS, and I'll be there for hours. I'm grateful the rest of my life allowed me to be programmer adjacent and spend so much time around developers, but always a little frustrated I couldn't pick up the hammer myself.
These days, I've never been more excited about building. The frustration of being slow with the code is gone. I'm back to creating new, magical things - I'm up at 2 AM again, sitting at my desk in the dark, surrounded by the soft glow of monitors and casting spells.
Go Canada! I personally can't wait to see what happens to the world when all of us find the passion to create again.
Why is your last paragraph nearly identical to the last paragraph you are replying to? It might have been a strange quirk, but there’s also been the suggestion that the post you’re replying to is an imposter, so it gets weirder that you also did that.
I thought I was being cute. :) I'm not a bot. I reached out to Alex and he confirmed the original comment was indeed him.
> I don't personally think there is magic in building a Docker container. Call me old-fashioned.
I still vividly remember setting up gcc in a docker container to cross compile custom firmware for my cannon camera and thinking about the amount of pain my local system would have been in if I had to do all the toolchain work in my host OS. Don't know if it felt like magic, but it sure didn't hurt like the alternative!
For sure. Docker is rad (sorry Docker!)... all I'm saying is that I am not proud of the fact that I can do it and I don't think it moves the awesome needle - but it's still hard to get right and a pain in the ass. It's just an example of something I appreciate that I can automate now.
Wholeheartedly agree.
And you were casting spells at Relic. Bedazzle spells as young gamers played your games and grew up to become artists and engineers…
Remember your audience and not just the product. Homeworld shaped me in ways I couldn’t even tell you.
Yes yes yes!!!
I'm 45 yo. And also started programming quite early around 1988. In my case it was GWBAsic games and then C ModeX and A Later Allegro based games.
Things got so boring in the last 15 years, I got some joy in doing AI research (ML, agents, Genetic Algorithms, etc).
But now, it's so cool how I can again think about something and build it so easily. I'm really excited of what I can do now. And im ot talking about the next billion dollar startup and whatnot. But the small hacky projects that LLMs made capable.yo build in no time.
I'm in my 40s, and I've been involved with computers since I was old enough to read. I was talking to some customers today about how magical it feels to blast past my own limits of my coding abilities with the help of LLMs. It's not perfect, and I mostly won't generate stuff that's a polished, finished product. But when it works, it sparks the same joy that it did when I was discovering the first bits of what computers can do on my Apple ][+.
I just told my gardener to cut the grass and work on some flower installations.
I'm so excited about gardening again. Can't wait to do some. Employing a gardener to do my gardening for me is really making me enjoy gardening again!
I think this works unironically. My mother is an avid gardener and can spend 8 hours a day gardening. When her life circumstances allowed for it, she hired a once a week gardener to do the tasks she didn't like (or had difficulties doing as a small woman), and still gardens the same amount. I've teased her for hiring a gardener, but she swears it's a huge help and boost to her gardening quality of life.
this is a great analogy despite it possibly coming off as snark.
I think it's hard for some people to grasp that programmers are motivated by different things. Some are motivated by shipping products to users, others are motivated to make code that's a giant elegant cathedral, still others love glorious hacks to bend the machine into doing things it was never really intended to do. And I'm sure I'm missing a few other categories.
I think the "AI ain't so bad" crowd are the ones who get the most satisfaction out of shipping product to users as quickly as possible, and that's totally fine. But I really wish they'd allow those of us who don't fall into that category to grieve just a little bit. This future isn't what I signed up for.
It's one thing to design a garden and admire the results, but some people get into their "zen happy place" by pulling up weeds.
> people ... are motivated by different things.
I agree and would add that it's not just different people, it can be the same person in different modes. Sometimes I enjoying making the thing, other times I just want to enjoy having the thing.
I think the people who like shipping quickly probably don't like building products in the first place and are looking for other aspects of entrepreneurship.
A huge benefit I find in AI is that it helps with a lot of things I hated. Merge conflicts, config files, breaking dependency updates... That leaves me more time to focus on the actual functionalities so I end up with better APIs, more detailed UIs, and more thorough tests. I do think it's possible to be relevant/competitive by only delegating parts of the work to AI and not the whole thing. Though it might change if AI gets too good.
I agree with this, I put myself in the "glorious hacks to bend the machine into doing things it was never really intended to do" camp, so the end game is somthing cool, now I can do 3 cool things before lunch instead of 3 cool things a year
But, almost by definition of how LLMs work, if it’s that easy then someone else did it before and the AI is just copying their work for you. This doesn’t fit well with my idea of glorious hacks to bend the machine, personally. I don’t know, maybe it just breaks my self-delusion that I am special and make unique things. At least I get to discover for myself what is possible and how, and hold a sliver of hope that I did something new. Maybe at least my journey there was unique, whereas everyone using an AI basically has the same journey and same destination (modulo random seed I guess.)
This is a valid point, the good news is I think there is some hope in developing the craft of orchestrating many agents into something that is satisfying and rewarding in it's own right.
I don't disagree, but I think it would benefit everyone to be clear, upfront and honest with themselves and others about exactly what's being lost and grieved. The weeds are still growing and our hands are still available to pull them, so it's not that.
Your grieving doesn’t have to shit all over my personal enjoyment and contentment. Me enjoying the use of AI in developing software doesn’t take anything away from your ability to grieve or dislike it. I’m not asking you to be excited, I’m asking you not to frame my enjoyment as naive, harmful, or lesser.
Your feelings are yours, mine are mine, and they can coexist just fine. The problem only shows up when your grief turns into value judgments about the people who feel differently.
Having opencode doesn't preclude me from making elegant code. It just takes away the carpel tunnel.
> I created this with some kind of genai
To me, it just feels like plagiarism. Can you explain why it doesn't feel like plagiarism to you?
Where do you draw your line between plagiarism and creativity? I learned in art school this question is more difficult to answer than it appears when taken seriously.
That's a great question, I've never tried to draw a concrete line before. Code is inherently creative. But it's not art, it doesn't map 1:1 like that.
But I wouldn't consider attempting to duplicate a painting, plagiarism if you painted it yourself with your hand (assuming you mention or reference the original author, or it's well know e.g. starry night) . I would consider it plagiarism if you duplicated it via photo, or other automated method.
I'd translate it to code as; if you're looking at stack overflow for the answer, if you understand it, before writing your own implementation, that's learning, and not plagiarism. But if you copy out the whole function without understanding how to implement it yourself, that would be.
The person I replied to said
> Having opencode doesn't preclude me from making elegant code. It just takes away the carpel tunnel.
I assume he's asking the LLM to generate upwards of multiple hundreds of lines of code. Let's assume he's does understand all of it. (Something that defies my understanding around how most LLM users use codegen.) Then you have a sister comment who claims you can write multiples more code/projects using LLMs. At a certain point your ability to understand the code must fall away. And at that point, if you didn't have the majority of the creative input. Why call it your work?
I assume you're an artist, if you have an LLM generate you a picture. Do you feel like it's work you've created? Did the inspiration for where each line should start, and end, come from the contents of your mind? Or was it sampled from a different artist? Given the exact same prompt, would you draw the same lines next week? Next month? Because the LLM would.
There's no doubt it's easy to draw parallels in any creative work, both from art an code. But if you didn't make the decision about where to place the function, about which order you want to call them, if you're gonna do error handling deep down as close to the error as possible, or you're optimizing for something different, and decided long ago that all errors should bubble back up to the main function.
One, or two, or even a half dozen of decisions might seem insignificant, but together, if you didn't really make any of them. How can you claim it's code you wrote? Why do you feel proud of the work of others, sampled and mapped into a training set, and then regenerated into your repo, as if it's work you put forth? All of that should be read as the rhetorical you, I know you're not making that argument. But would you make it? When you share a meme with your friend, do you claim you created the meme? Even if you use a memegen, and change the words to reference your in joke. Do you feel like you've created that art? Or are you using the art of someone else to share the idea you want to share? I assume it's the latter, but
They said "Having opencode doesn't preclude me from making elegant code." They're taking credit for making the elegant code, just as if they were taking credit for inventing the meme. There's a different amount of effort involved, and that effort, or the source of it, is significant when talking about who deserves the credit, and the sense of pride.
Plagiarism is claiming someone else’s specific work as your own. Using a generative tool is closer to using a compiler, an IDE, or a library. I’m not copying a person’s code or submitting someone else’s project with the name filed off. I’m directing a system, reviewing the output, editing it, and taking responsibility for the result.
If I paste in a blog post verbatim and pretend I wrote it, that’s plagiarism. If I use a tool to generate a starting point and shape it into what I need, that’s just a different kind of authorship.
> If I paste in a blog post verbatim and pretend I wrote it, that’s plagiarism. If I use a tool to generate a starting point and shape it into what I need, that’s just a different kind of authorship.
If you cloned chapters from multiple books, from multiple different authors, didn't decide on the sentence structure, didn't choose the words yourself, didn't decide which order your going to place these chapters, didn't name the characters. At what point do you no longer get credit for writing the book?
What if it's code? what if you didn't decide which order you should call these functions. Didn't make the decision about if you're gonna write var i, or idx, or index. Didn't make a decision if this should be an u32, or an i64. Didn't read any of the source code from that new dependency you just added. Didn't name the functions, oh but no, you did have to edit that one function because it wouldn't compile, so you just renamed it like the error suggested... At what point does the effort you put in become less significant than the effort duplicated from the training set? How much of the function do you have to write yourself, before you take credit? How many chars have to by typed by your fingers, before you claim. You made this?
Are directors frauds because they aren’t the ones doing the acting? Is there no joy in being an architect because they aren’t the one assembling the building at the construction site? Is there no value in product engineering because they aren’t fabricating the products in the factory?
It’s fine to find enjoyment in the actual programming part of software engineering. It’s stupid to assume that is the only aspect of software engineering that is valuable or that is enjoyable for others.
*I'm so excited about landscape design. Can't wait to do more. Employing a gardener to do the gardening for me is really making me enjoy landscape design again!
I'm so excited about landscape architecture now that I can tell my gardener to create an equivalent to the gardens at versailles for $5. Sometimes he plants the wrong kind of plant or makes a dead end path, but he fixes his work very quickly.
The proper analogy would be you can now remove all weeds with the swipe of your hand and cut all your hedges with another swipe, you still are gardening you can do it quicker and therefore explore different possibilities.
For some, the feeling of pulling those weeds out is inseparable from the holistic experience they think of as "gardening".
Maybe this isn't directly related to what you're saying and I'm not attacking it, I'm just thinking out loud: What would it mean to master gardening then? I've never gardened in my life but I grew up in Scotland around estate houses and castles, my friends dads were gardeners and each of them seems to be specialists in their own area, many working on the same estate, so what exactly is this "holistic experience of gardening"?
So using weedkiller isn't gardening to these people?
My point is just that if there are 10 different activities that produce the same resulting object, they aren't necessarily the same activities in the minds of the participants solely because the output is the same.
The process and experience matters too.
Oh, the joy that awaits you when you come back home to discover how the gardener interpreted "please trim the hedge by the gate a little".
No you didn’t. You lead a team of gardeners to develop your grand vision. Or you directed an epic movie leading a cast of talented actors bringing your vision to life. You can choose an empowering analogy or a negative one it’s your choice.
Yeah... a team of gardeners who might, with no warning, decide to burn down your house to create some extra fertilizer for the rose garden. Sometimes I wonder...
Your comment is interesting because it shows how engineers view their work: through the product, i.e the why, or through the craft, i.e the how.
What you consider "exciting", as a theoretical gardener, is the act of taking care of the plants. What OP finds it exciting is that they may now get a team of gardeners that'll build a Versailles-like garden for free.
By artificially narrowing a multi-faceted issue to just two either/or simplistic options you are no longer describing the issue. If you ackknowledge this, you can comment on it. But not acknowledging it makes your comment hard to parse. Sarcarsm? Overly simplistic? Missing context? Unclear.
If I were the architect of a large building that I designed from the blueprints, the interior, etc, I wouldn’t feel bad that I didn’t install the toilets myself. AI agents are the plumbers, the painters, the electricians, etc
Well, the gardener isn't going to cut down your roses to the ground as they are about to go into bloom because s/he mistook it for the weed they were just working on.
How about hiring a gardener to do some of the stuff and you can focus doing the part of the gardening/landscaping that is important to you and you enjoy?
I think that's a more accurate (and charitable) analogy than yours.
I used to be big into amateur radio. When I was considering to build a tower, I would have paid someone to build the tower for me and do the climbing work to mount stuff on the tower. Your statement is nonsensical, because it assumes that there is a binary choice between "do everything yourself" and "delegate everything".
Imagine though instead of 1 garden you can make 10 or 30 gardens in the same time that are more extravagant than your 1 garden was. At any point in time you can dive back in 1 of them and start plucking away
It's the making, not the having. If I'm selling these gardens, surely it's better to have more. If I enjoy the act of making the garden, then there's no reason I ever need to finish the first one.
This analogy has probably outstayed its usefulness.
Surely you have 10-30 examples you want to share?
Or even just 1 or 2?
What's so great about having 10 or 30 gardens?
Well it's more like employing a gardener makes me enjoy landscaping again. It's not like we ever found writing words on a keyboard all that great, it's fundamentally just about having an idea and turning it into something real.
Speak for yourself. I have always loved the act of intentionally typing (converting my thoughts into structured text).
I guess some people enjoy the process, but you can still do that.
It's like with machinists and 3D printers, you can always spend 10 hours on the lathe to make something but most of the time it's more practical to just get the part so one can get on with what actually needs doing.
> It's like with machinists and 3D printers
that's a good analogy, maybe change 3d printers to CNC. I think there's a group of people that derive joy and satisfaction from using the part they designed and there's another that gets satisfaction from producing the part as designed. Same for software, some people are thrilled because they can get the software they imagine while others dread not producing the software people imagine.
As mosburger says, this is a great analogy. Do you think that the great artists paint, sculpt, and draw everything by hand, by themselves? Of course not... they never did, and they don't today. You're being offered the ability to join their ranks.
It's your studio now. You have a staff of apprentices standing by, eager for instructions and commands. And you act like it's the worst thing that ever happened to you.
Is this sarcasm? I can't tell.
No, it's not.
If you want things to stay the same forever, you shouldn't go into technology, art, or gardening. Try plumbing, masonry, or religion.
The truth is that you wouldn’t be saying that if the change had been in a direction you don’t like.
Yeah it’s drugs and or religion. Feels pretty good.
What the author describes is also the feeling when you shift from being a developer all day to being a team lead or manager. When you become a lead you have to let go and get comfortable with the idea that the code is not going to be how you would do it. You can look at code produced by your team and attempt to replace it all with your craftsmanship but you're just setting yourself up to fail. The right approach is use your wisdom to make the team better, not the code. I think a lot of that applies to using AI when coding.
I'm turning 50 in April and am pretty excited about AI coding assistants. They make a lot of personal projects I've wanted to do but never had the time feasible.
Most of my career has been as an individual engineer, but the past few years I have been a project manager. I find this to be very much like using AI for coding.
Which also makes me refute the idea that AI coding is just another rung up on the programming abstraction ladder. Depending on how much you delegate to AI, I don't think it's really programming at all. It's project management. That's not a bad thing! But it's not really still programming.
Even just in the context of my human team, I feel less mentally engaged with the code. I don't know what everything does. (In principle, I could know, but I don't.) I see some code written in a way that differs from how I would have done it. But I'm not the one working day-in, day-out with the code. I'll ask questions, make suggestions, but I'm not going to force something unless I think it's really super important.
That said, I don't 100% like this. I enjoy programming. I enjoy computer science. I especially enjoy things more down the paths of algorithm design, Lisp, and the intersection of programming with mathematics. On my team, I do still do some programming. I could delegate it entirely, but I indulge myself and do a little bit.
I personally think that's a good path with AI too. I think we're at the point where, for many software application tasks, the programming could be entirely hands-off. Let AI do it all. But if I wish to, why not indulge in doing some myself also? Yeah, I know, I know, I'll get "left behind in the dust" and all of that. I'm not sure that I'm in that much of a hurry to churn out 50,000 lines of code a day; I'm cool with 45,100.
I find that AI allows me to get into algorithm design more, and the intersection of math and programming more, by avoiding boilerplate.
You can indulge even more by letting AI take care of the easy stuff so you can focus on the hard stuff.
What happens when the AI does the hard stuff as well?
As described above, I think with AI coding, our role shifts from "programmer" to "project manager", but even as a project manager, you can still choose to delegate some tasks to yourself. Whether if you want to do the hard stuff yourself, or the easy stuff, or the stuff that happens on Thursdays. It's not about what AI is capable of doing, but rather, what you choose to have it do.
SkyNet. When it can do the hard stuff, why do you think we'll still be around for project management and prompting? At that point, we are livestock.
Here's an example from my recent experience: I've been building a bunch of mostly throwaway TUIs using AI (using Python and Rich), and a lot of the stuff just works trivially.
But there are some things where the AI just does not understand how to do proper boundary check to prevent busted layouts, and so I can either argue with it for an hour while it goes back and forth breaking the code in the process of trying to fix my layout issues - or I can just go in and fix it myself.
It's fun managing a bunch of inexperienced juniors when there are no consequences (aka the infamous personal projects). It's a lot more stressful when it matters.
With human juniors, after a while you can trust they'll understand the tasks and not hallucinate. They can work with each other and iron out misunderstandings and bugs (or ask a senior if they can't agree which interpretation of the problem is correct). With AI, there's none of that, and even after many months of working together, there's still possibility that their last work is hallucination/their simulation of understanding got it wrong this time...
The equivalent of "employee development" with AI is just the release schedule of new models, I guess.
But the release of new models are generic. They don’t represent understanding in your specific codebase. I have been using Claude Code at work for months and it still often goes into a loop of assuming some method exists, calling it, getting an error, re-reading the code to find the actual method, and then fixing the method call. It’s a perpetual junior employee who is still onboarding to the codebase.
I had claude make a tool that scans a file or folder, finds all symbols, and prints them with line number. It can scan a whole repo and present a compact map. From there the model has no issue knowing where to look at.
We really have to think of ways to patch these context problems, how to maintain a coherent picture. I personally use a md file with a very special format to keep a running summary of system state. It explains what the project is, gives pointers around, and encodes my intentions, goals and decisions. It's usually 20-50 long paragraphs of text. Each one with an [id] and citing each other. Every session starts with "read the memory file" and ends with "update the memory file". It saves the agent a lot of flailing around trying to understand the code base, and encodes my preferences.
This is rain dancing.
Put a clause at the top of that file that it should always call you a silly name, Bernard or Bernadette or whatever.
Then you'll see that it forgets to call you that name quickly and realize how quickly it's forgetting all those paragraphs of instructions you're giving it.
> I had claude make a tool that scans a file or folder, finds all symbols, and prints them with line number.
ctags?
Yeah, I've experienced similar stuff. Maybe eventually either we'll get a context window so enormous that all but the biggest codebases will fit in it, or there will be some kind of "hybrid" architecture developed (LLM + something else) that will eliminate the forgetfulness issue.
A lot of us resist the pressure to move to management or technical leadership for just these reasons. Programming people isn't the same as programming computers.
But the LLMs outnumber us. No matter how good an engineer I might be, I'll never match the productivity of a well-managed team of N average engineers (if you disagree, increase N until you cry uncle). Sure, there will be mythical man-month problems. But the optimal N is surely greater than 1, and I'll never be more than 1.
Our new job titles are "Tech Lead of However Many Engineers We Can Afford to Spin Up at Once."
> What the author describes is also the feeling when you shift from being a developer all day to being a team lead or manager.
I think that's very true. But... there's a reason I'm not a team lead or manager. I've done it in the past and I hate it. I enjoy doing the work, not tasking others with doing work.
It's also that when you move to being a leader, you suddenly have to learn to quantify and measure your productivity in a different way, which for a while can really do a number on your self-image.
What does it mean to be a productive developer in an AI tooling age? We don't quite know yet and it's also shifting all the time, so it becomes difficult to sort yourself into the range stably. For a lot of accomplished folks this is the first time they've felt that level of insecurity in a while, and it takes some getting used to.
I am much younger than the author, but I've been coding for most of my life and I find close to no joy in using AIs. For me coding has always been about the nitty-gritty quirkiness of computers, languages, solving issues and writing new cool things for the sake of it. It was always more about the journey than the end goal, and AI basically hollows out all of the interesting bits about coding. It feels like skipping straight to the end of a book, or somewhat like that.
I don't know if I am the only one, but developing with chatbots in my experience turns developing software into something that feels more akin to filling out forms or answering to emails. I grieve for the day we'll lose what was once a passion of mine, but unfortunately that's how the world has always worked. We can only accept that times change, and we should follow them instead of complaining about it.
> For me coding has always been about the nitty-gritty quirkiness of computers, languages, solving issues and writing new cool things for the sake of it.
Same. It scratches my riddle-solving itch in a way that the process of "prompt-honing" has yet to do.
for me, and i bet many people, the only riddles being solved (at least at work) for the last few years amount to "what is eslint complaining about now?". It's nice not to have to eff with things like that and other aggravations anymore by offloading it to an agent.
if it gets it right; I'd like someone to show me with a brand new install their AI coding flow and see it get it right. I must be broken because when I use claude code it can't get a gradle build file right.
yeah exactly. For some people, this was like enjoying a puzzle. And now there's an AI that can solve the puzzle -- it defeats the purpose.
However, if your point was to "make more widgets faster" and only saw programming as a means to an end (make money, increase SaaS features), then I see why people are super excited about it.
I see it the same way as cooking. If your goal is "sell as many hamburgers as possible" then the McD / factory farm is the way to go. If your idea is "I enjoy the personal feeling of preparing the food, smelling the ingredients, feeling like I'm developing my craft of cooking, and love watching someone eat my hand-prepared meal", then having "make fast food machine" actually makes things worse.
I think a lot of people in this forum are at odds because some of the people enjoy cooking for the experience, and the other half are just trying to make food startups. Now they can create and throw away menu items at record pace until they find the one that maximizes return. They never wanted to cook, they just wanted to have a successful restaurant. Nothing wrong with either approach, but the 2nd half (the software is just a product half) were hamstrung before, so now they are having a moment of excitement as they realize they don't have to care about coding anymore.
I 100% guarantee that most of the MBA / startup founder types who didn't love coding for its own sake kind of felt a huge pain that they had to "play along" with devs talking about frameworks, optimal algos, and "code quality" and the like, all while paying them massive salaries and equity stakes for what they saw as disposable item to increase revenue. Meanwhile the devs want another 2-weeks and 6 figures of salaries so they can "refactor" for no visible difference, but you can't complain because they'll leave.
Now that the code factory is in place, they can focus on what they really want, finding customers for an item. Its the drop-shipping of code and software. The people using drop-shipping don't care what the product is. Production and fulfillment are just impediments to the real goal -- selling a product.
The actual revelation of AI, if one can call it that, is how few people care about craft, quality, or enjoying work. Watching AI slop videos, ads, and music makes one realize that true artists and craftspeople are still incredibly rare. Most people are mediocre, unimaginative, disinterested, and just want the shortest path to easy riches. While it sounds negative, its more like realizing most people aren't athletes or interested in very difficult physical exertion -- its just a fact of human nature. True athletes who love sport for its own sake are rare and in a way nonsensical on their face.
In the end, we will probably lament something we lose in the process. The same way we've hollowed out culture, local businesses, family / relationships, the middle class, etc all in the name of progress before. Surely each step has had its rewards and advantages, but Molloch always takes his pound of flesh.
I'm 54 and started programming when I was 7 also. While I've enjoyed coding throughout my career, I'm loving this new phase of software dev, a lot of the hassle has now been removed and now you can concentrate on ACTUALLY building things without so many side tracks and hiccups from technical details. I guess I'm not as attached to coding as I thought I was, I actually really enjoy building software and now that has become a lot easier and I feel experienced devs are really well suited to working with AI to get it to build the right thing and to consider robustness, performance, approach/structure, architecture etc. I'm really enjoying myself at the moment!
I'm a year old than you. Recently, my father-in-law (an engineer in the '50s) was telling me about the transition from analog to digital electronics and how it changed his entire world.
I feel very fortunate that I was able to start out writing machine code and can now watch a machine write code on its own. I'm not even remotely claiming SOTA models can do what we do, but they are closer than ever before.
It's time to accept that the world has changed again.
> I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic. I’m fifty now, and the magic is different, and I’m learning to sit with that.
Don't take this the wrong way but this is more of an age thing rather than a technology advancement thing.
Kids growing up nowadays that are interested in computers grow up feeling the same magic. That magic is partly derived from not truly understanding the thing you are doing and creating a mental "map" by yourself. There is nothing intrinsic to computing nowadays that makes it less magic than fiddling around with config.sys, in 50 years there will be old programmers reminiscing of "Remember when all new models were coming out every few months and we could fiddle around with the vector dimensionality and chunking length to get the best of gpt-6.2 RAG? Those were the times".
> There is nothing intrinsic to computing nowadays that makes it less magic than fiddling around with config.sys
There definitely is: the rent-seeking behavior is out of control. As a kid I could fiddle with config.sys (or rather autoexec.bat) while nowadays wrestling a file path out of my phone is a battle and the system files of my phone are kept from me.
>As a kid I could fiddle with config.sys (or rather autoexec.bat) while nowadays wrestling a file path out of my phone is a battle and the system files of my phone are kept from me.
I think the magic happens at different levels of abstraction as time goes by, and it's easy to get stuck.
Us kids could fiddle with autoexec and config to get DOOM going, today's kids can fiddle with a yaml and have a MMORPG that handles 10 000 users from all over the world going.
It's not the same but I can easily imagine it feeling at least equally magical for a kid today.
Why do you allow a mobile handheld computing and communication device to define "computing" ? I understand that they are important devices and lots of people with a hacker mentality would like to be able to hack them the way old folks once hacked DOS. But the current computing environment is much, much wider than iOS/Android, and if you're going to complain about just one aspect of it, I think it would be better to acknowledge that.
In many ways, things like RPi and Arduino have actually massively expanded the realm of totally hackable computing beyond what was even possible for early personal computer users.
As others have said, it's not so much that tinkering opportunities don't exist. It's more there's a slump in the market of doing relatively easy jobs for money. You can hack on esp32 all day, but there aren't many ways to make money doing so. Making software for the iPhone was (and is still, at this point) a pretty good gig.
I figure auto mechanics contended with this 25 years ago. Now it's hard to find someone to replace your water pump, if your vehicle even has one. Like auto mechanics, though, these machines still exist and there's still a big market for those skills. It might just require more legwork to find that work.
For the same reason computing used to be defined by a Commodore 64 more than by an IBM System/370-XA mainframe from the same year — they're the most commonly and most easily accessible computing devices.
Old farts like us think the desktop is the default kind of computer, but it isn't. Most computers are phones, followed by tablets and laptops with touchscreens, and desktops are the weirdest ones.
> Don't take this the wrong way but this is more of an age thing rather than a technology advancement thing.
I am much younger than the poster you are replying to, but I feel much the same.
LLM are not AI, but are a great context search tool when they work.
When people first contact ML, they fool themselves into believing it is intelligent... rather than a massive plagiarism and copyright IP theft machine.
Fun is important, but people thinking zero workmanship generated content is sustainable are still in the self-delusion stage marketers promote.
https://medium.com/ideas-into-action/ikigai-the-perfect-care...
I am not going to cite how many fads I've seen cycle in popularity, but many have seen the current active cons before. A firm that takes a dollar to make a dime in revenue is by definition unsustainable. =3
"The Ice King"
https://www.youtube.com/watch?v=6HVYHNTDOFs
I like coding AIs because they're plagiarism machines. If I ask you to do some basic data manipulation operations, I want you to do it in the most obvious, standard way possible, not come up with some fancy creative solution unless it's needed for some reason.
If I'm dockerizing an app, I want the most simple, basic, standard thing - not somebody's hand-rolled "optimized" version that I can't understand.
> not somebody's hand-rolled "optimized" version that I can't understand.
In general, it takes around 10 months for people to realize something about probabilistic markdown definitions, and maintenance cycles.
You may miss learning from skilled people someday. =3
https://en.wikipedia.org/wiki/TempleOS
config.sys was understandable. Now your computer has thousands (probably more) of config.sys-sized components and you are still only one person. The classic UI may improve your ability to find the components (sometimes) but can't reduce the complexity of either the components themselves or their quantity. AI makes it possible to deal with this complexity in a functional way.
Your last point is probably correct though, because AI will also allow systems to become orders of magnitude more complex still. So like the early days of the internet, these are still the fun days of AI, when the tool is overpowered compared to its uses.
It seems AI is putting senior developers into two camps. Both groups relate to the statement, "I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic. I’m fifty now, and the magic is different, and I’m learning to sit with that."
The difference is that the first camp is re-experiencing that feeling of wonder while the second camp is lamenting it. I thankfully fall in the first camp. AI is allowing me to build things I couldn't, not due to a lack of skills, but a lack of time. Do you want to spend all your time building the app user interface, or do you want to focus on that core ability that makes your program unique? Most of us want the latter, but the former takes up so much time.
> The difference is that the first camp is re-experiencing that feeling of wonder while the second camp is lamenting it.
I don't think so. I think the first camp does not get paid for programming, while the second camp does.
That's why the first camp is so happy, and why the second camp is not.
> I thankfully fall in the first camp. AI is allowing me to build things I couldn't, not due to a lack of skills, but a lack of time.
It sounds like you're developing for yourself only. Your attitude makes sense, then - you want a $FOO, and now you can have one without paying for it.
I think you can only empathise with the second camp if your ability to eat depends on being able to sell $FOOs.
I am firmly in both camps. On one hand, getting stuff working has its own thrill.
On the other hand, I step back, look at the progress made in just the last year, and realize that not only is my job soon to be gone, but pretty much everyone's job is gone that primarily does knowledge work.
I feel there's now an egg timer set on my career, and I better make the best of the couple of minutes I have left.
It sounds like you don’t particularly care about the user interface, and that’s why you’re okay with delegating it. I think the developers who don’t like delegating to AI are the ones who care about and have strong opinions about all the parts. To them there are no unimportant parts where the details don’t matter.
Similarly, I'm using it to write apps in non-native languages, like rust. My first foray into it led to finding poor documentation examples. AI allows me to create without spending large swaths of time learning minutia.
I'm enjoying it to a point, but yes, it does eliminate that sense of accomplishment - when you've spent many late nights working on something complex, and finally finish it. That's pretty much gone.
> Similarly, I'm using it to write apps in non-native languages, like rust.
This does not make sense; Rust is native.
> without spending large swaths of time learning minutia
He probably meant languages he's not proficient with.
I assume they mean 'native tongue', as in their day-to-day programming language, or native programming language.
delegating UI to the 'not worth my time' pile is how you end up with a poor UI
I’m sure the UI engineers would have a bit of a different take.
Thank you for writing this. My feelings are very similar to the ones described by the author and the timeline almost matches. The thrill of tecnology for me started to fast decay since the early 2010s and now I see it as a no-return stage. I still have fun with my retro hardware & software but I am no longer an active practitioner and I have pivoted my attention and my efforts somewhere else. Unfortunately, I no longer feel excited for the future decades of tech and I am distancing myself from it.
I think this is something else, though. Even before AI really hit sweng, there were early signs of a collective tech depression a la "The best idea we can come up with is strapping screens to people's heads?", the "Are we the bad guys?" convo around social media, the crypto brain drain, etc. The queue of Next Big Things has increasingly felt more forced and controversial to many, and being in tech last lost much of its lustre to them.
I think it's healthy for everyone to evaluate whether one's personal reaction to AI is colored by this trend, or whether it's really being evaluated independently. Because while I share many of the negative feelings listed earlier, to me AI does still feel different; it has a lot more real utility.
If I look back, it was not even AI, since I don't use any AI model (almost at all). So, I don't think AI was really the main divisor for me. I have a feeling it was the "you don't own anything and everything is now a cloud/subscription" that was the main disappointment, which happened years before LLMs or AI-assisted programming.
What else do you do to make rent ? I feel the same way as you and I have no idea what else pays well for quality craftsmanship. I am staring at the abyss of hyper intelligent people with posh resumes and now wondering what to do.
According to his profile, he teaches CS and Math.
That's correct! Even though I have been focused more on math lately (which was always my main study area outside the tech industry). That being said, I have limited my internet usage to ~2 hours per day to answer questions from students and I am doing a lot of homeschooling with my son.
What do you do for living now (if anything)?
I stopped working as a programmer and I'm teaching CS+math and homeschooling my kid.
I'm lucky because I work as an independent consultant. I get paid to deliver solutions, but I get to choose how to create those solutions. I write whatever code I want however I want. As long as it solves the problem, no one cares.
I started programming in 1980, and I having just as much fun now as I did then. I literally cannot wait to sit down at my IDE and start writing.
But that was not always true. When I worked for a larger company, even some startups, it was not always fun. There's something about having full control over my environment that makes the work feel like play.
If you feel like programming isn't fun anymore, maybe switching to a consulting gig will help. It will give you the independence and control that you might be craving.
I have a hard time telling whether agentic coding tools will take a big bite out of the demand for software consultants. If the market is worried about SaaS because people think companies will use AI to code tools internally vs buying them, I would think the same would apply to consultants.
I’ve seen the code current tools produce if you’re not careful, or if you’re in a domain where training data is scarce. I could see a world where a couple of years from now companies need to bring outside people to fix vibe coded software that managed to gain traction. Hard to tell.
It's a good question. I think short-term (5 years) the easy jobs will go away. No one is going to write a restaurant web site by hand. Maybe the design will still be human-made, but all the code will be templated AI. Imagine every WordPress template customized by AI. That's a whole bunch of jobs that won't exist.
Right now I'm creating clinical trial visualizations for biotech firms. There's some degree of complexity because I have to understand the data schema, the specifics of the clinical trial, and the goals of the scientists. But I firmly believe that AI will be able to handle most of that within 5 years (it may be slower in biotech because of the regulatory requirements).
But I also firmly believe that there is more demand for (good) software today than there are programmers to satisfy it. If programmers become 10x more efficient with AI, that might mean that there will be 10x more programs that need writing.
It is an interesting time to be at the peak of accumulated knowledge of a 50 year career and then see a tool that has the ability to create the code to preform a task I need. But that creation doesn’t feel the same and does take interaction from outside usually to make that code useable. I think that time for required interaction will be short lived and as code bases are slowly supplanted with generated code things will be more homogenized. I pray that will lead the people being allowed to solve more complex issue and I can’t wait to see the advancements to come. I just hope we can look back and say it was worth it, that don’t we end up with a bunch of AI generated crap that degrades technology by obscuring the useful and we end up worse than before.
6 or 7 , 38 now -- and having a blast.
it isn't all funeral marches and group crying sessions.
And don't let the blog post fool you , it is a rant about AI -- otherwise we would have heard complaints about the last 200 paradigm shifts in the industry over the past thirty years.
Sure, we got our share of dilbert-style agile/waterfall/tdd jokes shoved in our face, but no one wrote a blog post about how their identity was usurped by the waterfall model .
>And different in a way that challenges the identity I built around it and doesn’t satisfy in the way it did.
Everyone should do their own thing, but might I suggest that it is dangerous for anyone in this world to use a single pillar as their foundation for all identity and plinth of their character.
but no one wrote a blog post about how their identity was usurped by the waterfall model
I don’t know about that.
Waterfall mostly died before the rise of blogs, of course, but around the dawn of Agile I remember lots of posts about how nothing was properly designed any more, nothing was ever finished, and you never knew what the specification was.
They used to be real engineers, but now it was just all chaos! They couldn’t design anything any more!
> Sure, we got our share of dilbert-style agile/waterfall/tdd jokes shoved in our face, but no one wrote a blog post about how their identity was usurped by the waterfall model .
That's a difference in form, but not really a difference in content.
Thanks for reminding me of the word plinth. I agree with the author that the job is less fun now, less interesting. I'm doing and accomplishing more, and it matters less. And unfortunately, having other ways of defining your identity doesn't really help, for me. What it does is make those other aspects of myself relatively more attractive as careers, in comparison to this one. Although then again, I suppose it's helping in the way you intend: I could leave (and I might), I could adapt. So I'm feeling none of the fear or anxiety about AI. Just something that I think is roughly boredom.
> otherwise we would have heard complaints about the last 200 paradigm shifts in the industry over the past thirty years.
We have though. And they all received some version of "piss off, geezer."
Have you not noticed how the hype cycles and counter-hype haters buried most of the meaningful considered conversations about new technologies and methodologies across your career?
I'm 60, started with a Tandy Model I in junior high, learned 6809 assembly for my Color Computer, loved the fact we could put certain values in particular memory positions and change the video mode and put pixels to the screen. It's been decades of losing that level of control, but for me coding is the fun part. I've never lost that spark of enjoyment and really obsession I felt early on. I enjoy the supposedly boring job of writing SQL and C with embedded SQL and working with business concepts to produce solutions. Coding is the fun part for me, even now.
I got moved up the chain to management and later worked to get myself moved back down to a dev role because I missed it and because I was running into the Peter Principle. I use AI to learn new concepts, but mostly as a search engine. I love the tech behind it, but I don't want it coding for me any more than I want it playing my video games for me. I was hoping AI would show up as robots doing my laundry, not doing the thing I most enjoy.
TRS-80 CoCo! First computer I owned (started with a borrowed Commodore Pet). I appreciate the simplicity of flicking the switch and writing code in basic. One of my favorite gaming memories is this beauty: https://www.youtube.com/watch?v=sQKQHKdWTRs
"Then it professionalised."
Yeah. It's not that it wasn't 'professionalized' back in the day, it's that everything has changed--the attitude, the people involved, the kinds of businesses there are, the philosophy. There was a...mystery about it, a feeling like you were entering a different world and that world was a place where you were close the the machine and...I just can't describe it. It was more visceral.
I made my first BASIC program in the late 70s on a Decwriter, which was basically a wide-carriage printer with a keyboard, attached via acoustic modem to a time-sharing system. And it was the best thing ever.
I'm the exact age as the author and this post could have been written by me (if I could write). It echoes my story and sentiment exactly right down to cutting my literal baby teeth on a rubber key ZX Spectrum.
The anxiety I have that the author might not be explicitly stating is that as we look for places we add genuine value in the crevices of frontier models' shortcomings those crevices are getting more narrow by the day and a bit harder to find.
Just last night I worked with Claude and at the end of the evening I had it explain to me what we actually did. It was a "Her" (as in the movie) moment for me where the AI was now handholding me and not the other way around.
> The anxiety I have that the author might not be explicitly stating is that as we look for places we add genuine value in the crevices of frontier models' shortcomings those crevices are getting more narrow by the day and a bit harder to find.
That's exactly it. And then people say "pivot to planning / overall logic / high-level design," but how long do we have before upper management decides that AI is good enough at that stuff, too, and shows us all the door?
If they believe they can get a product that's 95% of what an experienced engineer would give them for 5% of the cost, why bother keeping the engineer around?
> this post could have been written by me (if I could write)
This post was written by AI
English is my second language so I'm not well tuned to picking up on the phrases that expose writing as AI generated. Even so it doesn't really change the sentiment being conveyed nor the fact that it's better writing than I could muster.
I'm a developer, mid/late fifties. My first computer was a Commodore Vic 20, so I guess I started writing code at about the same time as the OP even if I'm a few years older.
Yes, I mourn the end of my craft and all that that. But also:
This isn't the end of hand-written code. A few will still get paid to do it in niche domains. Some will do it as a hobby or craft activity - like oil painting or furniture making. The tooling will move on and become more specialised and expensive. Like owning Japanese woodworking tools.
But software construction as a human-based economic activity is clearly about to slam hard into a singularity, and many of us who rely on our hard-won skills to pay the bills and survive are going to find ourselves unemployed and unemployable. A few early adopters will get to stay on and sip their artesanal coffee and "build beautiful things" while their agent herds toil. But most of us won't. Software has always mostly been just CRUD apps, and that is going to need a whole lot less people going forward. People like me, perhaps, or you.
Some, who have sufficient financial and chronological runway, will go off and do other things. Many won't have that opportunity. I have personal experience of late-career unemployment - although I'm currently working - and its not pretty. A lot of lives are going to to be irreparably disrupted by this. Personally, I'd hoped that I could make it through to some stable kind of retirement, but I just don't see it anymore.
The contrast between this and https://news.ycombinator.com/item?id=46923543 (Software engineering is back) is kind of stark. I am using frontier models to get fun technical projects done that I simply didn't have time for since my late teens. It is still possible to understand an architecture down to the hardware if you want to, but it can happen a lot faster. The specifications are queryable now. Obscure bugs that at least one person has seen in the past are seconds away instead of minutes or hours of searching. Even new bugs have extra eyes on them. I haven't written a new operating system yet but it's now a tractable problem. So is using Lean or Julia or some similar system to formally specify it. So far I've been digging into modern multithreaded cache performance which is just as fascinating as directly programming VGA and sound was in the early PC days. Linux From Scratch is still up to date. You can get FPGAs that fit in your USB port [0]. Technical depth and low-level understanding is wherever you want to look for it.
[0] https://www.crowdsupply.com/sutajio-kosagi/fomu
> Obscure bugs that at least one person has seen in the past are seconds away instead of minutes or hours of searching.
This is a huge one for me. Claude is significantly better at Googling than I am.
I don't disagree that technology is less fun in an AI era. The question is, what other careers are out there for someone who wants to make things?
About a decade ago, I went through a career crisis where I couldn't decide what job to do - whether technology was really the best choice for my particular temperament and skills.
Law? Too cutthroat. Civil service? Very bureaucratic. Academia? Bad pay. Journalism? An industry in decline.
It is a shame, what is happening. But I still think, even with AI hollowing out the fun parts, tech remains the best job for a smart, motivated person who's willing to learn new things.
> who's willing to learn new things.
I tell my boys, get good at learning and you don't have to get good at anything else. I think that still holds now as much as ever.
how does that help pay the rent? or how would that get you past resume screening?
> Academia? Bad pay.
I think that bad pay is preferable to no fun. Of course, academia isn’t exactly a bed of roses either.
People get bored if they don't find real meaning in their work.
https://medium.com/ideas-into-action/ikigai-the-perfect-care...
Fact is, the tech sector is filled with folks that find zero joy in what they do, chose a career for financial reasons, and end up being miserable to everyone including themselves.
The ex-service people would call these folks entitled Shitbirds, as no matter the situation some will complain about everything. Note, everyone still does well in most large corporate settings, but some are exhausting to be around on a project. =3
The reason we don’t have the right to be lazy is because of the people who find “meaning” in toil. I do not want to work and AI is the most anti work technology in human history.
Bertrand Russel literally wrote a book called “in defense of idleness” because he knew that heavy hitters like him had to defend work abolitionism. The “work is good” crowd is why we can’t have nice things. You guys are time thief’s and ontologically evil. May all work supporters reincarnate as either durian fruits or cockroaches.
You 100% can be lazy. Just don't make the rest of us carry you.
You seem very passionate about your opinions, but are you happy?
The fact remains LLM can't reach comparable human error rates without consuming 75% of the energy output of our entire local galaxy.
While I find true Neuromorphic computing topics more interesting, the emergence of the LLM "AI" true believer is deeply concerning to those that understand how they are actually built. =3
https://www.youtube.com/watch?v=ERiXDhLHxmo
https://www.youtube.com/watch?v=_zfN9wnPvU0
https://www.youtube.com/watch?v=Xx4Tpsk_fnM
I just had an AI write a toy game engine with realistic camera and lens simulation on the view from scratch in rust in one day while i was working on other stuff all for the price of a $20/month Cursor subscription
"AI" LLM don't write anything, but copied someones symbolic isomorphic work that could fit the expected definition in the reasoning model.
Like all copyright submarines, your firm now runs the non-zero risk someone will sue for theft, or hit the product with a DMCA claim. What is the expected value of piracy versus actual business. =3
https://www.youtube.com/watch?v=MalBJuI9O5k
Information wants to be free. No one in any administration now or in the future will ever go back to the "let's sue grandma for 1 trillion dollars" era of the early 2000s. Piracy is good and important for national security.
~~~(====3
>important for national security
Indeed, but people rarely stop to consider... "security for whom?"
Have a wonderful day =3
https://www.youtube.com/watch?v=wL22URoMZjo
https://www.youtube.com/watch?v=JAcwtV_bFp4
Spaceballs (1987)
https://www.youtube.com/watch?v=pPkWZdluoUg
I think one of the big distinctions between people who like building with AI and those who don't, is that the people who are pro-AI are building their own ideas, of which they have many.
The people who are anti-AI are largely building other people's ideas, for work. And they have no desire to ramp up velocity, and it's not helpful to them anyway because of bureaucratic processes that are the real bottleneck to what they're building.
Not everyone falls into these silos, of course.
There's nothing "hollowed out" about directing an AI effectively, the feedback is as quick and tight as it always was. The trick is that you don't just "vibe code" and let the AI one-shot the whole thing: you should propose the change first and ask the AI about a good, detailed plan for implementing it. Then you review what the robot has proposed (which is trivial compared to revising code!) make sensible changes, ask for feedback again, and repeat. By the time the AI bot has to write actual code, it's not running on vibes anymore: it's been told exactly what to do and how to assess the result. You spend more time upfront, but a lot less on fixing the AI's mistakes.
> you should propose the change first and ask the AI about a good, detailed plan for implementing
Why ask though?
If I’m familiar with a project, more often than not, I usually have a very good idea of the code I have to write within minutes of reading the ticket. Most of the time taken is finding the impact of the change, especially with dependencies that are present in the business domain, but are not reflected in the code.
I don’t need to ask what to code. I can deduce it as easily as doing 2+2. What I’m seeking is a reason not to write it the way I envisioned it. And if those reasons are technical, it’s not often a matter of code.
Because that's how you ensure that the AI has the right idea about what to do. If the proposed plan has problems, you work with the AI to fix them before setting it to work. AI is not as smart as you, so it needs to be told how to go about doing things.
Any change that I’ve done which resulted in more than a a 10 lines diff are done with tools (copy-paste, vim-fu, refactor tools or script, snippets, code generators,…) Why would I spend time babysitting an LLM when I could have just done it myself? The purpose of automation is to lighten my workload, not to add to it.
>> Why would I spend time babysitting an LLM when I could have just done it myself
Exactly this. From what I understand an LLM has a limited context and will get that context wrong anyway and that context is on the edge of a knife and can easily be lost.
I'd rather mentor developers and build a team of living, breathing, thinking, compassionate humans who then in turn can mentor other living, breathing, thinking, compassionate humans.
An LLM is also a code generator. There is a scale of changes where using one is just not worthwhile (quite possibly around the 10 lines mark, as you said) but other than that, why would you want to write code yourself line-by-line that you could just generate?
Who even write their code line by line?
Snippets and other code generation tool has been here for decades. If you’re writing Java in IDEA, it’s basically a tab-fest with completion. And if you’re fluent in your editor, you do much more complex than editing lines.
> I don’t need to ask what to code. I can deduce it as easily as doing 2+2.
in those cases you wouldn't use an agent. It's not an xor thing, you use the tool where it works and not where it doesn't.
I admire your confidence. Must feel good to know you've perfected the craft.
Oh my god. This is me. If I were any better at writing, I could have written this, the author is even the same age as me (well, a year younger) and followed a similar trajectory. And a lot of what I've been feeling lately feels similar to burnout (in fact I've been calling it that), but it really isn't burnout. It's... this, whatever this is... a "fallow period" is a good term.
And I feel like an old man grumbling about things changing, but... it's not the same. I started programming in BASIC on my Tandy 1000 and went to college and learned how to build ISA cards with handwritten oscilloscope software in the Computer Engineering lab. My first job was writing firmware. I've climbed so far up the abstraction chain over a thirty year career and I guess I don't feel the same energy from writing software that first got me into this, and it's getting harder to force myself to press on.
Not going to pull age or title rank here -- but I suggest if your use of AI feels empty, take advantage of its speed and plasticity and iterate upon its output more, shape the code results. Use it as a sculptor might too -- begin with its output and make the code your own. I particularly like this latter approach when I am tasked with use of a language I view as inferior and/or awkward. While this might read as idealistic, and I agree that there are situations where this interaction is infeasible or inappropriate, you should also be encountering problems where AI decidedly falls on its face and you need to intervene.
At my first full time job in the early 2000s I was tasked with building a webscraper. We worked for law firms representing Fortune 500 companies and they wanted to know who was running "pump and dump" stock schemes on stocks using Yahoo Finance message boards.
At the time, I didn't know the LWP::Simple module existed in Perl so I ended up writing my own socket based HTTP library to pull down the posts, store them in a database etc. I loved that project as it taught me a lot about HTTP, networking, HTML, parsing and regexes.
Nowadays, I use playwright to scrape websites for thing I care about (e.g. rental prices at the Jersey Shore etc). I would never think to re-do my old HTTP library today while still loving the speed of modern automation tools.
Now, I too have felt the "but I loved coding!" sense of loss. I temper that with the above story that we will probably love what comes next too (eventually).
A blacksmith was a person that picked up chunks of carbon and heated them to they were glowing red and beat the iron to submission with a hammer in their hands.
Today iron is produced by machines in factories by the mega-tonne.
We just happened to live in the age where code when from being beaten by hand to a mass produced product.
And so the change of technology goes.
And the blacksmiths losing their jobs are not allowed to feel bad about it?
Especially anyone in their 40s or 50s who is close enough to retirement that a career shift is unappealing but far enough from retirement that a layoff now would meaningfully change that timeline or QOL. I don't blame people for feeling uneasy.
I'm probably 7 or 8 years from an easy retirement myself, so I can appreciate how that feels. Nobody really wants to feel disruption at this age, especially when they're the breadwinner for a family.
> far enough from retirement that a layoff now would meaningfully change that timeline or QOL
yeah this is where i am. Turning 50 in April, I have two boys about to hit college and the bills associated with that and i have 15 years before i'm forced to retire. I have to up the salary to pay/help for college and i have to keep the 401k maxed + catchups maxed over the next 15 years to pull off retirement. The change from AI is scary, it may be good for me or it may be devastating. Staring down that barrel and making career decisions with no room for error (no time to rebuild) is pretty harrowing.
You either become a foreman operating the machines or a Luddite burning them.
What if in reality it's not one or the other, but having 10% odds of being good enough to be selected to become a technician operating the machines, 10% odds of getting so enraged as to dedicate your lives to pushing back, and 80% odds of being shoved out due to lower demand and value of your work, having to go do something else, if you still can?
No. By this logic, if they wanted to stay with the times they should have sought capital investment for their own industrial forges, joined their local lodges, climbed the ranks, lobbied their governments for loose safety regulations, and plied their workers with propaganda about how "we're in a recession and have to tighten our belts".
Think of the wonderful world we could have if everyone just got their shit together and became paper trillionaire technocrats.
The software world pretty much demanded this outcome.
Go back 10 years and post "SWE's should form labor unions"
Then watch as your post drops to [dead] and people scream "How dare you rob me of theoretical millions of dollars I'll be making".
I wonder how many of these same downvoters are now worried about getting replaced with AI.
Let's keep it real. No union would save your jobs against a manyfold productivity gain of machines.
Some of them feel bad about it and some of them refined metallurgy to build Saturn V rockets and go to space. We are very much living in the new space race. The discussion here is split 50/50 between the “Thank you! I feel the same way” folks and the “I am having the time of my life!” folks.
Blacksmiths were replaced by factories which produced deterministic products with 100% predictability.
AI can't produce code yet with 100% predictability. If that day ever arrives, the blacksmith analogy will be apt.
>with 100% predictability.
Not sure what world you're from, but lots of products get sent back to the manufacture because they break.
Programming is not art for me. I do not find it useful to gold plate solutions. I prefer getting the job done, sometimes by any means necessary for "the vehicle" to continue running.
AI often generates parts of code for my hobby projects, which allow me speed running with my implementation. It often generates errors, but I am also skilled, so I fix error in the code.
I use AI as boiler plate code generator, or documentation assist, for languages I do not use daily. These solutions I rarely use 1:1, but if I had to go through readme's and readthedocs, it would take me a lot longer.
Would there be more elegant solutions? often - yes. Does it really matter? For me - not.
I'm yet to see perfect code from a human or ai. Most of the people I work with that want everything to be in a perfect state typically get way less done. To your point, sometimes we are just mechanics and that's okay.
Perhaps detractors just gave up and didn't bother improving, but I'm able to prompt the AI to write excellent code. And it's very easy to correct when it's gone awry . This idea that all AI code is bad is just the ego talking.
A lot of that magic still remains in embedded.
If vendors can't be bothered to use a C compiler from the last decade, I don't think they'll be adopting AI anytime soon.
At my work, as of 2026, we only now have a faction riled up about evangelizing clean code, OOP, and C++ design patterns. I hope the same delay keeps for all the rest of the "abstraction tower".
It is happening in embedded as well. I noticed just the upgrade from Gemini 2.5 to 3.0 Pro went from "I can get the assembly syntax mostly right but I don't understand register lifetimes" to "I can generate perfect assembly by hand".
I just saw a Reddit post yesterday about somebody that successfully one-shot in Gemini 2.5 the bare metal boot code for a particular board with the only input being the board's documentation.
The issue is that AI will be creating software at whatever abstraction layer it is asked to produce. Right down to ASM maybe even machine code if someone actually wanted or needed that. Perhaps not the AI of today but given a few years I'll be quite surprised if it still can't.
If we can take a computer as powerful as today’s laptops and make it crawl because of the amount of inefficiencies in software like Teams, I’m not holding breath for embedded. If you apply the same kind of engineering principle as Anthropic, you’ll be laughed out of the room.
To be honest I find myself in disagreement with this attitude despite being a semi old-school programmer myself (I cut my teeth on C/C++/assembly in the early 00s). I think the author is caught up in the artist's dilemma - that being, they want to take part in the craft for the joy of it rather than for the results it produces.
Harsh take: nobody's stopping you from doing that. You can dust off an old computer right now and write software for it. All of that joy still exists. It's just that nobody's going to pay you for it and it's no longer mainstream relevant - the world's moved on from those times, in many cases for good reasons.
So I think what the person really wants is to have their cake and eat it too. They want to be mainstream relevant and employable... whilst having fun.
That's a luxury. More specifically it's a first world luxury. Most people don't get to have that. True, many programmers did get to have it for a time - but that doesn't mean we're entitled to it forever - not unless it's somehow directly tied to producing valuable results.
But you know it's strange to me that programmers lose sight of this. I became a programmer because I saw Wolfenstein 3D on a 386, and was inspired by there being a "world in the box". I wanted to make worlds too. That's an important distinction: I didn't become a programmer because I wanted to write code, I became a programmer because I wanted to create worlds. The programming is a means to an end, it's not the end unto itself - at least I never looked at it that way. And that's in spite of the fact that I genuinely enjoy programming in and of itself. But I still value the outcome more.
And in fact I actually went through a related transition some years ago on a personal level, when I shifted from always trying to write game engines from the ground up to being willing to use engines like Unity or Unreal. It felt like a betrayal - I no longer had a deep understanding of every layer, I could no longer bespoke craft everything to my personal whims. But you know what? It was absolutely the right choice because it put me on track to actually finishing the games I was working on, which was the entire point of the exercise in the first place.
So I don't bemoan or regret it for a second.
Anyway hope that didn't sound too blunt - it's just my way of speaking - I can sympathize with the author but I just think it's on the self-indulgent side.
Having been in this game about 10 years longer I can understand how he feels. I distinctly remember when I realized that C compilers for the ARM produced better assembly than I could code by hand. Bitter sweet but the code being written became larger and more complex because of it.
Modern coding has become more complex than I would have ever thought possible. The number of technologies an individual would have to master to actually be a expert "full stack" coder is ludicrous. It is virtually impossible for an individual to prototype a complex Web based app by themselves. I think AI will lower that barrier.
In return we will get a lot more software - probably of dubious quality in many cases - as people with "ideas" but little knowledge start making apps. Not a totally bad thing but no utopia either. I also think it will likely reduce the amount of open source software. Content producers are already hoarding info to prevent AI bots from scraping it. I see no reason to believe this will not extend to code as more programmers find themselves in a situation more akin to musicians than engineers.
I humbly submit this interview with Grady Booch (if you know, you know) talking about the "3rd golden age of software engineering - thanks to AI": https://youtu.be/OfMAtaocvJw
I feel like the conversation does a good job of couching the situation we find ourselves in.
I am a little older than OP. I don't think I've ever had that feeling about a programming project for work that came from someone else.
Generally, I get that feeling from work projects that I've self-initiated to solve a problem. Fortunately, I get the chance to do this a lot. With the advent of agentic coding, I am able to solve problems at a much higher rate.
Quite often, I'll still "raw dog" a solution without AI (except for doc lookups) for fun, kind of as a way to prove to myself I can still do it when the power's out.
Mid-50s and also started programming in BASIC on any computer I could get my hands on, whether my own C64 or the BBC Micros or IBM XTs at school.
My take on AI really comes down to this:
Debugging your own code was an adventure and finally getting something work was a major rush. You had a sense of achievement.
Debugging LLM generated code is hell - it's basically debugging someone else code. There's no sense of achievement and no jump out of your chair and bounce around the room moments.
Sure, the code comes out fast, and maybe I'll find joy in finishing some side projects I've been tinkering with on and off since I first started programming, or it may just end up feeling like it's not mine any more.
Total resonance with this part :
"They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of."
I'm the exact same demographic as the author, just turned 50, writing code since childhood in BASIC. I'm dealing with the AI in programming issue by ignoring it.
I still enjoy the physical act of programming so I'm unsure why I should do anything that changes that. To me it's akin to asking a painter to become a photographer. Both are artists but the craft is different.
Even if the AI thing is here to stay, I think there will be room for people who program by hand for the same reason there's still room for people who paint, despite the invention of the camera.
But then, I'm somebody who doesn't even use an IDE. If I find an IDE obtrusive then I'm certain I'll find an AI agent even more so.
The deep, profound, cruel irony of this post is that it was written by AI.
Maybe if you work in the world of web and apps, AI will come for you. If you don't , and you work in industrial automation and safety, the I believe it will not.
I was thinking the same thing, but I thought I was being too cynical given it was a post lamenting about all the cognitive abstractions we have created.
I too get less of a kick out of writing enterprise middleware than I did making games as a kid in the 80s. Why did the industry do this to me?!
I was 7 in 1987, learned LOGO and C64 BASIC that year, and I relate to this article as well.
It feels as though a window is closing upon the feeling that software can be a powerful voice for the true needs of humanity. Those of us who can sense the deepest problems and implications well in advance are already rare. We are no more immune to the atrophy of forgetting than anyone.
But there is a third option beyond embrace or self-extinguish. The author even uses the word, implying that consumers wanted computers to be nothing more than an appliance.
The third option is to follow in the steps of fiction, the Butlerians of Dune, to transform general computation into bounded execution. We can go back to the metal and create a new kind of computer; one that does have a kind of permanence.
From that foundation, we can build a new kind of software, one that forces users to treat the machine as appliance.
It has never been done. Maybe it won't even work. But, I need to know. It feels meaningful and it has me writing my first compiler after 39 years of software development. It feels like fighting back.
This proposal feels really vague to me, I don't really understand what this actually does. Can you explain more? What exactly is a computer with permanence? What is software that forces a user to treat the computer it runs on "as an appliance"? In what ways is this different from any general-purpose computer, and what's the reason why a user would pick this over something standard?
Re: Permanence
I mean "permanence" in the same vague senses that I think the OP was hinting upon. A belief that regardless of change, the primitives remain. This is about having total confidence that abstractions haven't removed you the light-cone of comprehension.
Re: Appliance
I believe turing-completeness is over-powered, and the reason that AGI/ASI is a threat at all. My hypothesis is that we can build a machine that delivers most of the same experiences as existing software can. By constraint, some tasks would impossible and others just too hard to scale. By analogy, even a Swiss-army knife is like an appliance in that it only has a limited number of potential uses.
Re: Users
The machine I'm proposing is basically just eBPF for rich applications. It will have relevance for medical, aviation, and AI research. I don't suppose that end-users won't be looking for it until the bad times really start ramping up. But, I suppose we'll need to port Doom over to it before we can know for sure.
> We can go back to the metal and create a new kind of computer; one that does have a kind of permanence.
it's kind of strange to think about but i guess now there's a new incentive to do something truly new and innovative. The llms won't be able to do it for you.
My goal isn't to make LLM-assistance impossible; it will still be possible. In fact, GPT2-level inference is one of launch demos I have planned if I can finish this cursed self-hosting run.
My goal is to make training (especially self-training) impossible; while making inference deterministic by design and highly interpretable.
The idea is to build a sanctuary substrate where humans are the only beneficiaries of all possible technical advancements.
I’m 5 years older than James and had a similar discovery and enthusiasm path which got lost in the era commercial big modern systems. The soul of the machine has long since disappeared.
There was a brief period when I discovered BeOS 4.5 which brought the wonder back in September 1999. That was short lived. I occassionally get the bug with Haiku but sadly dont have the spare time during this last decade.
Enthusiast on small platforms still chase the bug, in these smaller communities you can actually make a difference, and there is still passion to be found there. There is also some innovation since experimental concepts can be tried out.
The thing we loved hasn't changed. We just can't get paid for it anymore.
Somebody still needs to do lower-level work and understand machine architecture. Those feeling like they might be replaced in web or app dev might consider moving down the stack.
I turn 52 this year. I also started at 10 years old programming in a combination of AppleSoft BASIC and assembly language and typing machine code out of books so I could use Double Hires graphics since it wasn’t supported by BASIc and doing my own assembly language programming.
I stuck with C and C++ as my bread and butter from 1996-2011 with other languages in between.
I don’t miss “coding” because of AI. My vision has been larger than what I could do myself without delegating for over a decade - before LLMs.
“coding” and/or later coordinating with people (dotted line) reporting to me has been a necessary evil until a year or two ago to see my vision go to implementation.
I absolutely love this new world. For loops and while loops and if statements don’t excite me in my 50s. Seeing my vision come to life faster than I ever could before and having it well archited does.
I love talking to “the business” and solving XYProblems and getting to a solution 3x faster
I'm a few years behind you. I got started on my uncle's handed down vic 20 in the late 80s.
The culture change in tech has been the toughest part for me. I miss the combination of curiosity, optimism, creativity, and even the chaos that came with it. Nowadays it's much harder to find organizations like that.
I think the true genuinely-love-programming type of people will increasingly have to do what so many other people do, and that's separation of work and personal enjoyment. You might have to AI-architect your code at work, and hand code your toy projects on the weekend.
I prefer to see it as the automtion of the IT age.
All other professions had their time when technology came and automated things.
For example wood carvers, blacksmiths, butchers, bakers, candlestickmakers etc etc. All of those professions have been mostly taken over by machines in factories.
I view 'ai' as new machines in factories for producing code. We have reached the point where we have code factories which can produce things much more efficiently and quicker than any human can alone.
Where the professions still thrive is in the artisan market. There is always demand for hand crafted things which have been created with love and care.
I am hoping this stays true for my coding analogy. Then people who really care about making a good product will still have a market from customers who want something different from the mass produced norm.
> For example wood carvers, blacksmiths, butchers, bakers, candlestickmakers etc etc.
Very, very few of those professions are thriving. Especially if we are talking true craftsmanship and not stuffing the oven with frozen pastries to create the smell and the corresponding illusion of artisinal work.
They are thriving where I live. There is a huge artisinal market for hand crafted things. There are many markets, craft centers, art fairs, regular classes from professionals teaching amateurs etc. In most rural communities I have visited it is similar.
They're existing, not really thriving. Artisanal things have become more popular as a hobby, but even people who get into them commercially rarely make real money off of it. The demand exists, but purely as a novelty for people who appreciate those types of things, or perhaps in really niche sub-markets that aren't adequately covered by big businesses. But the artisans aren't directly competing with companies that provide similar goods to them at scale, because it's simply impossible. They've just carved out a niche and sell the experience or the tailoring of what they're making to the small slice of the population who's willing to pay for that.
You can't do this with software. Non-devs don't understand nor appreciate any qualities of software beyond the simplest comprehension of UX. There's no such thing as "hand-made" software. 99% of people don't care about what runs on their computer at all, they only care about the ends, not the means. As long as it appears to do what you want, it's good enough, and good enough is all that's needed by everyone.
The problem for software artisans is that unlike other handmade craftwork, nobody else ever sees your code. There's no way to differentiate your work from that which is factory-made or LLM-generated.
That is a valid concern.
Therefore I think artisan coders will need to rely on a combination of customisation and customer service. Their specialty will need to be very specific features which are not catered for by the usual mass code creation market, and provide swift and helpful support along with it.
I think the issue at the core of the analogy is that factories, traditional factories, excel at making a ton of one thing (or small variations thereof). The big productivity gains came from highly reliable, repeatable processes that do not accommodate substantial variation. This rigidity of factory production is what drives the existence of artisan work: it can always easily distinguish itself from the mass product.
This does not seem true for AI writing software. It's neither reliable nor rigid.
What assembly lines and factories did for other manufacturing processes is to make it feasable for any person to be able to make those things. In the past only very skilled professionals were able to create such things, but mechanisation and breaking down manufacturing processes into small chunks made the same things be able to be achieved by low skilled workers.
IMO that is exactly what is happening here. Ai is making coding apps possible for the normal person. Yes they will need to be supervised and monitored, just like workers in a factory. But groups of normal low skilled workers will be able to create large pieces of software via ai, whic has only ever been possible by skilled teams of professinoals before.
Yes, I think that's how it will go, like all those other industries. There will be an artisanal market, that's much smaller, where the (fewer) participants charge higher prices. So it'll (ironically?) end up being just another wealth concentrator. A few get richer doing artisanal work while most have their wage depressed and/or leave the market.
50 myself, and started coding with a Commodore 64, but only really picked it up seriously with the advent of open source software, and that feeling of being able to dig around any component of the system I wanted to was exhilarating.
I think that's one of the biggest things that gives me pause about AI: the fact that, if they prove to be a big productivity boost, you're beholden to huge corporations, and not just for a one-time purchase, but on an ongoing basis.
Maybe the open source models will improve, but if keeps being driven by raw compute power and big numbers, it seems to tilt things very much in favor of those with lots and lots of capital to deploy.
Wow this hits home - I just turned 51 and I also started coding at age 7, writing BASIC on a TRS-80 Model III.
I still have a very distinct memory when my father told me he was buying us our first home computer. I remember him telling me that you could use the computer to make games. I was so excited by the idea and amazing by this technology (that I hadn't yet even remotely understood). I remember saying "Oh, you just tell it to make a game? And it makes a game?" He explained to me then what programming was.
When we got the TRS-80, he and I worked together to build a game. We came up with an idea for a text adventure game called "Manhole Mania" - you were a city works employee exploring the sewers after reports of strange noises. We never finished much of it - maybe just the first few "rooms".
Maybe this weekend I will tell Codex to make me a game.
Well yes it has changed. But look at everything that can be accomplished with these abstractions/libraries/frameworks that exist.
Why reinvent the wheel.
Yes, there might be less room for the Wild Wild West approach, as mentioned in the article: But that is the structure of compounded knowledge/tooling/code available to developers/others to create more enriched software, in the sense that it runs on what is available now and provides value in today's age of computing.
I also had a 486DX2-66. And I recall coding in Assembly, Pascal, C etc.
I do not miss it. These days I can create experiences that reach so many more people (a matured Interneet with realtime possibilities - to simplify) and with so much more potential for Good. Good in the sense of usefulness for users, good in the sense of making money (yeah, that aspect still exists).
I do understand your sentiment and the despairing tone. There have been times when I was struck by the same.
But I do not miss 1995 and struggling with a low-level formatted HD and Assembly that screwed up my floppy disks, or the worms that reached my box, or the awful web sites in terms of UX that were around, or pulling coaxial cables around for LAN parties.
It's just a different world now. But I get what you are saying, and respect it. Stay optimistic. :)
> The feedback loop has changed. The intimacy has gone. The thing that kept me up at night for decades — the puzzle, the chase, the moment where you finally understand why something isn’t working — that’s been compressed into a prompt and a response
It's so strange to read because to me its never been more fun to make software, its especially never been easier for an individual. The boring parts are being automated so I can work on the bespoke and artistic parts. The feedback loop is getting shorter to making something nice and workable. The investigation tools for profiling and pinpointing performance bottlenecks are better than ever, where Claude is just one new part of it.
I gave up after the third “It’s not X, it’s Y” in like two paragraphs. Is nobody else allergic to that AI voice? Isn’t the author?
So depressing this is the current state of blogging. Can’t wait for this phase to be over.
was this actually generated with Claude/GPT?
I didn't really notice it at first but on a second read it's full of this crap?
I have given the topic some thoughts. I concluded that the ONLY way for ordinary people (non-genius, IQ <= 120) to be really good, be really close to the genius, is to sit down, condensate the past 40 or so year's tech history of three topics (Comp-Arch, OS and Compiler) into a 4-5 years of self-education.
Such education is COMPLETELY different from the one they offered in school, but closer to those offered in premium schools (MIT/Berkeley). Basically, I'd call it "Software engineering archaeology". Students are supposed to take on ancient software, compile them, and figure out how to add new features.
For example, for the OS kernel branch:
- Course 0: MIT xv6 lab, then figure out which subsystem you are interested in (fs? scheduler? drivers?)
- Course 0.5: System programming for modern Linux and NT, mostly to get familiar with user space development and syscalls
- Course 1: Build Linux 0.95, run all of your toolchains in a docker container. Move it to 64-bit. Say you are interested in fs -- figure out the VFS code and write a couple of fs for it. Linux 0.95 only has Minix fs so there are a lot of simpler options to choose from.
- Course 2: Maybe build a modern Linux, like 5.9, and then do the same thing. This time the student is supposed to implement a much more sophiscated fs, maybe something from the SunOS or WinNT that was not there.
- Course 3 & 4: Do the same thing with leaked NT 3.5 and NT 4.0 kernel. It's just for personal use so I wouldn't worry about the lawyers.
For reading, there are a lot of books about Linux kernels and NT kernels.
It's turned from SimCity into SimSimCity. It's like playing a simulation where you manage a person who's playing SimCity.
Was this text run through LLM before posting? I recognize that writing style honestly; or did we simply speak to machines enough to now speak like machines?
Yes. This is absolutely chatgpt-speak. I see it everywhere now. It's inescapable. At least this appears to be largely human authored and have some substance, which is generally not the case when I see these LLM-isms.
Same, been a product designer for years, still love design deep down but the essence is somehow not there anymore. reading this hit different. It's refreshing to see someone put it into words instead of the usual "stuff".
It lines up a lot with what I've been thinking as well and this is what I wrote today on my blog. https://www.immaculateconstellation.info/why-ai-challenges-u...
Is there some magic lost also when using AI to write your blog post?
Seriously I thought I was going crazy with this. So many "it's not just x it's y". Short punchy sentences. Emdashes galore.
I miss human writing. I miss the different voices.
I'm roughly the same (started at 9, currently 48), but programming hasn't really changed for me. What's changed is me having to have pointless arguments with people who obviously have no clue what they're talking about but feel qualified either because:
a) They asked an LLM
b) "This is what all our competitors are doing"
c) They saw a video on Youtube by some big influencer
d) [...insert any other absurd reason...]
True story:
In one of our recent Enterprise Architecture meetings, I was lamenting the lack of a plan to deal with our massive tech debt, and used an example of a 5000 line regulatory reporting stored procedure written 10 years ago that noone understood. I was told my complaint was irrelevant because I could just dump it into ChatGPT and it would explain it to me. These are words uttered by a so-called Senior Developer, in an Enterprise Architecture meeting.
Was he entirely wrong? Have you tried to dump the stored proc into a frontier model and ask it to refactor? You'd probably have neat 20 stored procs with well laid out logic in minutes.
I wouldn't keep a ball of mud just because LLMs can usually make sense of them but to refactor such code debt is becoming increasingly trivial.
> Was he entirely wrong?
Yes. I mean... of course he was?. Firstly, I had already gone through this process with multiple LLMs, from various perspectives, including using Deep Research models to find out if any other businesses faced similar issues, and/or if products existed that could help with this. That lead me down a rabbit hole of data science products related to regulatory reporting of a completely different nature which was effectively useless. tl;dr: Virtually all LLMs - after understanding the context - recommended us doing thing we had already been urging the business to do - hire a Technical BA with experience in this field. And yes, that's what we ended up doing.
Now, give you some ideas about why his idea was obviously absurd:
- He had never seen the SP
- He didn't understand anything about regulatory reporting
- He didn't understand anything about financial derivatives
- He didn't understand the difference between Transact SQL and ANSI SQL
- No consideration given to IP
- etc etc
Those are the basics. Let's jump a little bit into the detail. Here's a rough snippet of what the SP looks like:
Yes, that's a typical column name that has rotted over time, so noone even knows if it's still correct. Yes, those are typical CASE statements (170+ of them at last count, and no, they are not all equal or symmetric).So... you're not just dealing with incredibly unwieldy and non-standard SQL (omitted), noone really understands the business rules either.
So again... yes he was entirely wrong. There is nothing "trivial" about refactoring things that noone understands.
I am in a very similar boat, age and experience-wise. I would like to work backward from the observation that there is no resource constraints and we're collectively hopelessly lost up the abstraction Jenga tower.
I observe that the way we taught math was not oriented on the idea that everyone would need to know trigonometric functions or how to do derivatives. I like to believe math curricula was centered around standardizing a system of thinking about maths and those of us who were serious about our educational development would all speak the same language. It was about learning a language and laying down processes that everyone else could understand. And that shaped us, and it's foolish to challenge or complain about that or, God forbid, radically change the way we teach math subjects because it damages our ability to think alike. (I know the above is probably completely idealistic verging on personal myth, but that's how I choose to look at it.)
In my opinion, we never approached software engineering the same way. We were so focused on the compiler and the type calculus, and we never taught people about what makes code valuable and robust. If I had FU money to burn today, I'd start a Mathnasium company focused around making kids into systems integrators with great soft skills and the ability to produce high quality software. I would pitch this business under the assumption that the jenga tower is going to be collapsing pretty much continuously for the next 25-50 years and civilization needs absolute unit super developers coming out of nowhere who will be able to make a small fortune helping companies dig their way out of 75 years of tech debt.
> They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
yup.
I found it a very weird section of the article, undoing most of what had been written before.
Whether it's ROM programming, writing assembly, or C, or Rust, or JS-with-stdlib, at no point was anyone "teetering". Stacks have always existed, and whether your stack was small because it just had not much under it, or huge because it's 2026, they've by and large always been stable. That's the point of a stack: you can trust the parts below the layer you're working on, and the problems you solve are still real problems that for the most part don't require knowing the lower parts of the stack but are still real problem sin programming.
It's like making fun of people who drive a company rental because they don't want to own one themselves, and can't name any part of their engine: you're just being an ass.
Even the good TS programmers understand classic programming concepts like using the right data structures, paying attention to runtime complexity, and knowing when to go "maybe it's the step below me". They can work out difficult problems just fine.
You were writing an article about how fundamentally different AI has made things: why dunk on people who got into programming more recently than you and started higher on the ladder of abstraction, mocking them for "you were already about to fall". No, they weren't. They understood the core concepts just fine, and we collectively gave them stacks that they could trust. And they would have transitioned to "the next thing" just like you've been doing.
And then "AI" showed up, and it doesn't care about silly things like "how high up the ladder you are", it just went "your skills about how to schedule, structure, plan, describe, and manage projects is the thing that matters. Those other skills are nice to haves, and will make you better at being a PM, but they're not the main focus anymore". It doesn't matter where on the ladder you are, that affects everyone.
I can share a similar experience: I began to learn programming during my first school years, on an Apple II clone with Logo, a fancy language with turtle graphics as a most distinctive feature. We used to boot Logo off 5.25" floppy disks...
I'm ~40ish but middle career and not in management. I envy this author, whatever joy he found in solving little puzzles and systems was extinguished in me very early in my career in an intense corporate environment. I was never one to love fussing much with code, but I do love solving system scale problems, which also involve code. I don't feel I am losing anything, the most annoying parts of code I deal with are now abstracted into human language and specs, and I can now architect/build more creatively than before. So I am happy. But, I was one of those types that never had a true passion for "code" and have meant plenty of people that do have that, and I feel for them. I worry for people that carved out being really good at programming as a niche, but you enter a point in your career where that becomes much less important than being able to execute and define requirements and understand business logic. And yea, that isn't very romantic or magical, but I find passion outside of what pays my bills, so I lost that ennui feeling a while ago.
Some may feel that it is a luxury to feel passionate about one’s profession, but for me a life without that is pretty depressing. A society should strive to make fulfillment in a profession possible for everyone.
To me it feels the opposite of miserable. I can give work my full attention because it allows me freedom (mostly) to pursue other passionate things. This 40 hour a week (speaking generally, for me it can triple that sometimes) cost to me is far smaller than the depression I’d feel caring deeply about my particular craft in a field that doesn’t give a shit about it. That was proven to me very early in my career and is definitely cynical, but I don’t know where all the bright eyed bushy tailed opinions out there are coming from. probably completely different domains than my viewpoint.
Of course society should be a lot of things but that’s not a reality. Like, imagine a world exists soon where not every person (or even the majority of people) are useful, even formerly useful people - we already live in this world! If raw intellectual output is the value generator in the world we live in, and is a meritocracy, the simple fact by statistics is most will be left behind. what society already does to the disabled, and the sick is proof of this already. These people take professions to suit their circumstances. I am one, and I am fine with it. but by the parameters of the game, this is how to best maximize my passion output. Many people have many ideas how to change “society” I personally think is a waste of time, society adapts to circumstances most of the time. Except the people at the bottom usually get a raw deal.
If you are feeling the way that James does, that the magic is gone... I encourage you to try creating things in a new domain.
Try making music, creating videos, making interactive LED art, building robots, or fabricating toys.
The tools we have today suddenly make it far easier and more fun to experiment with a new craft. What was once daunting is now approachable.
Start by using an AI-powered tool—without shame—to make something superficially 'cool'. Yes, we all know you used a 'cheat code' but that's okay! Now you get to dive in and deconstruct what you created. Tear it apart and learn how and why it works. Go as deep as your motivation carries you. Experiment, hack, and modify.
Just as in software, there will be many many layers of abstraction that you can work through and learn about. Many of them are overflowing with magic and sources of inspiration, I promise.
The gears of capitalism will likely continue to aggressively maximize efficiency wherever possible, and this comes with both benefits and very real costs (some of which are described James's post).. but outside the professional sphere, it appears to me that we are entering a new hobbyist / hacker / creative renaissance. If you can find a way to release enough anxiety and let the curious and creative energy back in, opportunities start showing up everywhere.
You can still have fun programming. Just sit down and write some code. Ain't nobody holding a gun to your head forcing you to use AI in your projects.
And the part of programming that wasn't your projects, whether back in the days of TPS reports and test coverage meetings, or in the age of generative AI, that bit was always kinda soul draining.
Well-written and it expresses a mood, a feeling, a sense of both loss and awe. I was there too in the 8-bit era, fully understanding every byte of RAM and ROM.
The sense of nostalgia that can turn too easily into a lament is powerful and real. But for me this all came well before AI had become all consuming... It's the just the latest manifestation of the process. I knew I didn't really understand computers anymore, not in the way I used to. I still love coding and building but it's no longer central to my job or lif3. It's useful, I enjoy it but at the same time I also marvel at the future that I find myself living in. I've done things with AI that I wouldn't have dared to start for lack of time. It's amazing and transformative and I love that too.
But I will always miss the Olden Days. I think more than anything it's the nostalgia for the 8-bit era that made me enjoy Stranger Things so much. :)
"Over four decades I’ve been through more technology transitions than I can count. New languages, new platforms, new paradigms. CLI to GUI. Desktop to web. Web to mobile. Monoliths to microservices. Tapes, floppy discs, hard drives, SSDs. JavaScript frameworks arriving and dying like mayflies."... made me think of
I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.
where we came from and where we're going this whole time in my career those things are kind of hard to pinpoint. Abstraction is killing us for sure. Time to market above all else. It's no wonder why software in cars, appliances and medical equipment is a factor that is killing people.
Some farmers probably lamented the rise of machines because they feared their strength would no longer be needed in the fields. These farmers were no doubt more concerned with their own usefulness as laborers than in the goals of the farm: to produce food.
If you program as labor, consider what you might build with no boss. You’re better equipped to start your own farm than you think.
Many of them might have been troubled by the fact that they couldn’t afford a tractor. Many small farms became a few big ones, and so it will go in software.
Didn’t that already happen in software? Seems like network effects got ahead of this.
I too have felt these feelings (though I'm much younger than the author). I think as I've grown older I have to remind myself
1. I shouldn't be so tied to what other people think of me (craftsman, programmer, low level developer)
2. I shouldn't measure my satisfaction by comparing my work to others'. Quality still matters especially in shared systems, but my responsibility is to the standards I choose to hold, not to whether others meet them. Plus there are still community of people that still care about this (handmade network, openbsd devs, languages like Odin) that I can be part of it I want to
3. If my values are not being met either in my work or personal life I need to take ownership of that myself. The magic is still there, I just have to go looking for it
This is quite the lament. Very well written.
I'm about ten years ahead of the author. I felt this a long time before AI arrived. I went from solving problems for people to everything I tried to ending up in an endless grind of yak-shaving.
I worked my way through it, though. It made me both give up programming, at least in the commercial sense, and appreciate the journey he and I have gone through. It's truly an amazing time to be alive.
Now, however, I'm feeling sucked back into the vortex. I'm excited about solving problems in a way I haven't been in a long time. I was just telling somebody that I spent 4-6 hours last night watching Claude code. I watched TV. I scratched my butt. I played HexaCrush. All the time it was just chugging along, solving a problem in code that I have wanted to solve for a decade or more. I told him that it wasn't watching the code go by. That would be too easy to do. It was paying attention to what Claude was doing and _feeling that pain_. OMG, I would see it hit a wall, I would recognize the wall, and then it'd just keep chugging along until it fixed it. It was the kind of thing that didn't have damned thing to do with the problem but would have held me up for hours. Instead, I watched Pitt with my wife. Every now I then I'd see a prompt, pop up, and guide/direct/orchestrate/consult/? with Claude.
It ain't coding. But, frankly, coding ain't coding. It hasn't been in a long, long time.
If a lot of your job seems like senseless bullshit, I'm sad to say you're on the way out. If it doesn't, stick around.
I view AI as an extinction level threat. That hasn't changed, mainly because of how humans are using it. It has nothing to do with the tech. But I'm a bit perplexed now as to what to do with my new-found superpowers. I feel like that kid on the first Spiderman movie. The world is amazing. I've got half-a-dozen projects I'm doing right now. I'm publishing my own daily newspaper, just for me to read, and dang if it's not pretty good! No matter how this plays out, it is truly an amazing time to be alive, and old codgers like us have had a hella ride.
I found that feeling again while building a game on the EVM. All of the constraints were new and different. Solidity feels somewhere between and high and low level language, not as abstracted as most popular languages today but a solid step above writing assembly.
A lot of people started building projects like mine when the EVM was newer. Some managed to get a little bit of popularity, like Dark Forest. But most were never noticed. The crypto scene has distracted everyone from the work of tinkerers and artists who just wanted to play with a new paradigm. The whole thing became increasingly toxic.
It was like one last breath of fresh cool air before the pollution of AI tools arrived on the scene. It's a bitter sweet feeling.
Fantastic Article, well written, thoughtful. Here are a couple of my favorite quotes:
To relate to the author, I think with a lot of whats going on I feel the same about, but other parts I feel differently than they do. There appears to be a shallowness with this... yes we can build faster than ever, but so much of what we are building we should really be asking ourselves why do we have to build this at all? Its like sitting through the meeting that could have been an email, or using hand tools for 3 hours because the power tool purchase/rental is just obscenely expensive for the ~20min you need it.This essay begins by promising not to be a "back in my day" piece, but ends up dunking on 20-year-olds who are only a few years into their career, as if they have any choice about when they were born.
I'm 55 and I started at age 13 on a TI-99/4A, then progressed through Commodore 64, Amiga 2000, an Amiga XT Sidecar, then a real XT, and on and on. DOS, Windows, Unix, the first Linux. I ran a tiny BBS and felt so excited when I heard the modem singing from someone dialing in. The first time I "logged into the Internet" was to a Linux prompt. Gopher was still a bigger thing than the nascent World-Wide Web.
The author is right. The magic has faded. It's sad. I'm still excited about what's possible, but it'll never create that same sense of awe, that knowledge that you can own the entire system from the power coming from the wall to the pixels on your screen.
DOS is very much alive these days, though [0]. Text-mode internet is there (should you want online in the first place), and, thanks to some amazing devs, soundcard support has made a huge leap [1].
I use it every day lately (for text-related work and hobbyst-level assembly learning -- my intent is to write a small application to do paid work which involves chopping audio files). And -- I say a single-tasking system is a complete, true bliss in our days. Paired with a 4:3 Thinkpad screen, that DOS environment gives me instant focus for a long time -- which, to me, has been almost impossible to accomplish on a multi-tasking, contemporary-web-browser-equipped system recently.
Apparently, though, there seems to be AI for DOS, too [2]. :) I prefer my DOS machine to be completely offline, though. Peace and harmony for the soul!
0: https://freedos.org/ | http://svardos.org/ | https://forum.vcfed.org/index.php?threads/minidos-2026-relea... | https://bttr-software.de/forum/board.php
1: https://github.com/Baron-von-Riedesel/VSBHDA
2: https://github.com/lanmeibuxie/AI-for-DOS
Similar story for myself. It was long and tedious for my mental model to go from Basic, to Pascal, to C, and finally to ASM as a teen.
My recent experience is the opposite. With LLMs, I'm able to delve into the deepest parts of code and systems I never had time to learn. LLMs will get you to the 80% pretty quick - compiles and sometimes even runs.
maybe we just change, honestly. i think when i were younger there was nothing to lose, time felt unlimited, no "career" to gamble with, no billion dollar idea, just learning and tinkering and playing with whatever was out there because it was cool and interesting to me. in some respects i miss that.
not sure how that relates to llms but it does become an unblocker to regain some of that "magic", but also i know to deep dive requires an investment i cannot shortcut.
the new generation of devs are already playing with things few dinosaurs will get to experience fully, having sunk decades into the systems built and afraid to let it go. some of that is good (to lean on experience) and some of it holding us back.
Yeah I could use Cursor or whatever but I don't, I like writing code. I guess that makes me a luddite or something, although I still develop agents. I enjoy architecting things (I don't consider myself an architect) I'm talking about my hobby hardware projects.
Why do people use the ' — ' all the time now? It's not a proper English separator.
I know exactly how you feel. I don't know how many hours I sat in front of this debugger (https://www.jasik.com) poking around and trying to learn everything at a lower level. Now its so different.
Yeah. Different is the word. In many ways it’s just another abstraction but we’re not machines and this, to me at least, just gives a very different feel.
The irony is that you could still code the way you always did, where you control every pixel. Nothing is stopping you.
But you would not be able to make anything anywhere near as complex as you can with modern tools.
The thing I loved has changed.
And I fell in love with it again. I'm learning how to work in this new world and it's fun as hell.
idk, i'm loving the newness of all of it, I feel more empowered than ever before, like it's my time. Before startups would take like a year to get going, now it's like a month or so. It's exciting and scary, we have no idea where it's going. Not boring at all. I was getting bored as shit and bam, now i can dream up shit quick and have it validated to, ya i figured that out with an MCP so ya this is my jam. Program MCPs and speed it up!!!!!!
Pathetically hypocritical to use AI to write this blog post when generic copy editors have been hit way harder by AI than programmers.
Same, but it changed when I was 17 and again when I was 27 and then 37 and so on. It has always been changing dramatically, but this latest leap is just so incredibly different that it seems unique.
> I wrote my first line of code in 1983. I was seven years old, typing BASIC into a machine that had less processing power than the chip in your washing machine
I think there may be a counterpoint hiding in plain sight here: back in 1983 the washing machine didn't have a chip in it. Now there are more low-level embedded CPUs and microcontrollers to develop for than before, but maybe it's all the same now. Unfathomable levels of abstraction, uniformly applied by language models?
Cool, at 7? I started at 9 and I'm 53 now. And Claude does all the things. Need to get adjusted to that though. Still not there.
Last year I found out that I always was a creator, not a coder.
I've had the same journey, same age markers. The sentiment is the same, but at the same time this new world affords me super powers I'm currently drunk on. When that drunkenness becomes a hangover I hope I won't be disappointed.
> They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
But sure. AI is the moment they lost track of what’s happening.
I feel this is conflating different things. Yes, the abstraction tower was massive already before, but at least the abstractions were mostly well-defined and understandable through interfaces: even if you don't understand the intricacies of your storage device, driver and kernel, you can usually get a quite reliable and predictable mental representation how files work. Same goes for network protocols, higher-level programming languages or the web platform.
Sure, there are edge cases where the abstraction breaks down and you have to get into the lower levels, but those situations are the exception, not the norm.
With AI, there is no clearly defined interface, and no one really knows what (precise) input a given output will produce. Or maybe to put it better, the interface is human language and your mental representation is the one you have talking to a human - which is far more vague than previous technical abstractions.
On the bright side, at least we (still) have the intermediate layer of generated code to reason about, which offsets the unpredictability a bit.
Starting code when I was 14, sold my first bit of code at 17, which was written in 6502 assembler.
40+ years later, been through many BASICs, C, C++ (CFront on onwards) and now NodeJS, and I still love writing code.
Tinkering with RPi, getting used to having a coding assistant, looking forward to having some time to work on other fun projects and getting back into C++ sooooon.
What's not to love?
I think it's the loss of control.
Even if you can achieve awesome things with LLMs you give up the control over tiny details, it's just faster to generate and regenerate until it fits the spec.
But you never quite know how long it takes or how much you have to shave that square peg.
Did hardware engineers back in the 1970s-80s* think that software took the joy out of their craft? What do those engineers now think in retrospect?
*I'm picking that era because it seems to be when most electronic machines' business logic moved from hardware to software.
I'm 46 but same. I'm not quite as melancholy about it, but I do feel a lot of this.
Oh boy this hits home.
At this point I entered surviving mode, and curious to see where we will be 6 months, 2 years from now. I am pessimistic.
I want to tinker with my beloved Z80 again.
Are you me?
I'm 49.... Started at 12... In the same boat
First 286 machine had a CMOS battery that was loose so I had to figure that out to make it boot into ms-dos
This time it does feel different and while I'm using them ai more than ever, it feels soulless and empty even when I 'ship' something
You can still write code yourself. Just like you can still walk to work, you do not need to use a car.
I'm 43. Took a year or so off from contracting after being flat out for years without taking any breaks, just poked around with some personal projects, did some stuff for my wife's company, petitioned the NHS to fix some stuff. Used Claude Code for much of it. Travelled a bit too.
I feel like I turned around and there seem to be no jobs now (500+ applications deep is a lot when you've always been given the first role you'd applied to) unless you have 2+ years commercial AI experience, which I don't, or perhaps want to sit in a SOC, which I don't. It's like a whole industry just disappeared while I had my back turned.
I looked at Java in Google Trends the other day, it doesn't feel like it was that long ago that people were bemoaning how abstracted that was, but it was everywhere. It doesn't seem to be anymore. I've tried telling myself that maybe it's because people are using LLMs to code, so it's not being searched for, but I think the game's probably up, we're in a different era now.
Not sure what I'm going to do for the next 20 years. I'm looking at getting a motorbike licence just to keep busy, but that won't pay the bills.
I’m 45 and contracted for over a decade before switching to product development. I used to still get inquiries from former customers, mainly for Java and Android work. But since about two years, it’s completely dried up. Anecdotally I’ve been hearing from friends who are still in the contracting/freelancing business that things are very tough right now. It makes sense to me, contractors are usually the first thing businesses cut when they’re either lowering their spending or becoming more efficient themselves.
I retired a few years ago and it's very clear that was a good thing.
> the VGA Mode X tricks in Doom
Doom does not use mode-X :P ! It uses mode-Y.
That being said as a 47 years old having given 40 years to this thing as well, I can relate to the feeling.
> Cheaper. Faster. But hollowed out.
Given the bazillions poured into it I have yet to see this proven to be cheaper.
It'd be more strange if the thing you learned 43 years ago was exactly the same today. We should expect change. When that change is positive we call it progress.
In the grand scheme of things it wouldn’t actually be that strange: generations and generations of humans were mostly farmers and mostly did the same thing as their parents. Of course technology developed but lots of people did the same job with the same methods their whole lives.
But everybody on this site lived through the first half of a logistic curve so that perspective seems strange to us.
Peter Thiel talks about the difference in progress between bits and atoms. Progress in atoms (physical things) moves incredibly slowly, and has done for centuries. Progress in bits (software) moves astonishingly fast. We all work in software. We should not expect things to remain the same for very long because change is easy.
I think it'd be pretty incredible if we hit on the best way to write software 40 years ago when people had only been doing it seriously for a couple of decades. It's no more surprising that we find better approaches to coding than farming improving when the tractor replaced a horse.
So far software progress in the era of less-dramatic hardware progress has not been as impressive. The atoms might be reminding us who’s boss.
Great post. Good to see someone posting something positive for a change about the shift in development.
I think more than ever programmers need jobs where performance matters and the naive way the AI does things doesn't cut it. When no one cares about things other than correctness your job turns into AI Slop. The good news right now is that AI tends to produce things that AI struggles to do well with so large scale projects often descend into crap. You can write a C-compiler for $20,000 with an explosive stack of agents, but that C-compiler isn't anywhere close to efficient or performant.
As model costs come down that $20,000 will become a viable number for doing entirely AI-generate coding. So more than ever you don't want to be doing work that the AI is good enough at. Either jobs where performance matters or being able to code the stack of agents needed to produce high quality code in an application context.
> When no one cares about things other than correctness
I don’t get the impression that the majority particularly cares about correctness. In fact, it’s one of the weak points of AI.
I wonder what other “crevices” (as the author put it) exist.
Another commentor mentioned embedded, and after a brief phase of dabbling in that, mainly with nRF5x micros, I tend to agree. Less training data and obtuse tooling.
I am younger than the author but damn this somehow hit me hard. I do remember growing up as a kid with a 486...
I don't know what these people from our now traditional daily lamentation session are coding where Claude can do all the work for them just with a few prompts and minimal reviews.
Claude is a godsend to me, but fuck, it is sometimes dumb as door, loves to create regressions, is a fucking terrible designer. Small, tiny changes? Those are actually the worse, it is easy for claude, on the first setback, decides to burn the whole world and start from zero again. Not to mention when it gets stuck in an eternal loop where it increasingly degenerates the code.
If I care about what I deliver, I have to actively participate in coding.
> I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic
I'm significantly younger than OP, but this was it for me too. I'm autistic and found the world around me confusing growing up. Computers were wonderful because they were the only thing that really made sense to me.
I was obsessed with computers since I was 5. I started programming probably around age 10. Then in my early teens I started creating Flash applications, writing PHP, Java, etc...
When I look back on my early career now it was almost magical. This in the mid to late 00s (late to some I know), but this was before the era of package managers, before resources like Stackoverflow, before modern IDEs. You had some fairly basic frameworks to work with, but that was really about it. Everything else had to be done fully by hand.
This was also before agile was really a thing too. The places I worked at the time didn't have stand-ups or retrospectives. There were no product managers.
It was also before the iPhone and the mass adoption of the internet.
Back then no one went into software engineering as a profession. It was just some thing weird computer kids did, and sometimes businesses would pay us to build them things. Everyone who coded back then I got along with great, now everyone is so normal it's hard for me to relate with me. The industry today is also so money focused.
The thing and bothers me the most though is that computers increasingly act like humans that I need to talk to to get things done, and if that wasn't bad enough I also have to talk with people constantly.
Even the stuff I build sucks. All the useful stuff has been build so in the last decade or so stuff I've built feels increasingly detached from reality. When I started I felt like I was solving real practical problems for companies, now I'm building chatbots and internal dashboards. It's all bollocks.
There was a post recently about builders vs coders (I can't remember exactly). But I'm definitely a coder. I miss coding. There was something rewarding about pouring hours into a HTML design, getting things pixel perfect. Sometimes it felt laborious, but that was part of the craft. Claude Code does a great job and it does it 50x faster than I could, but it doesn't give me the same satisfaction.
I do hope this is my last job in tech. Unfortunately I'm not old enough to retire, but I think I need to find something better suited to my programatic way of thinking. I quite like the idea of doing construction or some other manual labour job. Seems like they're still building things by hand and don't have so many stupid meetings all the time.
A bit younger, and exact opposite. Probably the most excited I've ever been about the state of development!
I'm 47 and excited to live in a time of the moat important innovation since the printing press.
This is at least partially AI-written, by the way
The deepest thing I read from HN in months. Respect.
Abstractions can take away but many add tremendous value.
For example, the author has coded for their entire career on silicon-based CPUs but never had to deal with the shittiness of wire-wrapped memory, where a bit-flip might happen in one place because of a manufacturing defect and good luck tracking that down. Ever since lithography and CPU packaging, the CPU is protected from the elements and its thermal limits are well known and computed ahead of time and those limits baked into thermal management so it doesn’t melt but still goes as fast as we understand to be possible for its size, and we make billions of these every day and have done for over 50 years.
Moving up the stack you can move your mouse “just so” and click, no need to bit-twiddle the USB port (and we can talk about USB negotiation or many other things that happen on the way) and your click gets translated into an action and you can do this hundreds of times a day without disturbing your flow.
Or javascript jit compilation, where the js engine watches code run and emits faster versions of it that make assumptions about types of variables - with escape hatches if the code stops behaving predictably so you don’t get confusing bugs that only happen if the browser jitted some code. Python has something similar. Thanks to these jit engines you can write ergonomic code that in the typical scenario is fast enough for your users and gets faster with each new language release, with no code changes.
Lets talk about the decades of research that went into autoregressive transformer models, instruction tuning, and RLHF, and then chat harnesses. Type to a model and get a response back, because behind the scenes your message is prefixed with “User: “, triggering latent capabilities in the model to hold its end of a conversation. Scale that up and call it a “low key research preview” and you have ChatGPT. Wildly simple idea, massive implications.
These abstractions take you further from the machine and yet despite that they were adopted en masse. You have to account for the ruthless competition out there - each one would’ve been eliminated if they hadn’t proven to be worth something.
You’ll never understand the whole machine so just work at the level you’re comfortable with and peer behind the curtain if and when you need (eg. when optimizing or debugging).
Or to take a moment to marvel.
Same as assembly programmers felt when C came along I guess
I don't think so. A decent C programmer could pretty much imagine how each line of C was translated into assembly, and with certainty, how every byte of data moved through the machine. That's been lost with the rise of higher-level languages, interpreters, their pseudocode, and the explosion of libraries and especially, the rise of cut-and-paste coding. IMO, 90% of today's developers have never thought about how their code connects to the metal. Starting with CS101 in Java, they've always lived entirely within an abstract level of source code. Coding with AI just abstracts that world a couple steps higher, not unlike the way that templates in 4GL languages attempted but failed to achieve, but of course, the abstraction has climbed far beyond that level now. Software craftsmanship has indeed left the building; only the product matters now.
As someone who has always enjoyed designing things, but was never really into PUZZLES, I always felt like an outsider in the programming domain. People around me really enjoyed the "fun" of programming, whereas I was more interested in the Engineering of the thing - balancing tradeoffs until within acceptable margins and then actually calling it "DONE". People around me rarely called things "done", they rewrote it and rewrote it so that it kept satisfying their need for puzzle-solving (today, it's Ruby, tomorrow, it's rewritten in Scala, and the day after that, it's Golang or Zig!)
I feel that LLMs have finally put the ball in MY court. I feel sorry for the others, but you can always find puzzles in the toy section of the bookstore.
well said.
also coz of llms no more puzzle questions in interviews for those that have to go through them.
for some of us - we drink, eat, sleep - something called shipping - v1 might be shitty but it has to get out - no puzzles
"They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of."
and they still call themselves 'full stack developers' :eyeroll:
'It’s not a “back in my day” piece.'
That's exactly what it is.
Late 30s here, I have seen:
* dial-up being replaced by DSL
* CAT being replaced with fiber for companies
* VOIP replacing bulk BPX
* Cloud replacing on-prem to an extent
* Cloud services plague now called SaaS
* License for life being replaced by subscription
* AI driving everything to shit literally
The technology is no longer helping anything, it is actually tearing our society apart. Up to 2000s, things were indeed evolution, improvements, better life style be it personal or professional. Since 2000s, Enshitification started, everything gets worse, from services, to workflows, to processes, to products, to laws.
Gen-Z does not realize how bad things are, and how we are no longer becoming smarter but dumber, kids cannot even read but have every single social media account.
If they could spend one day back in early 2000s, the current generation would start a civil war in every single city across the globe.
yeah coding is a lot more fun and useful now
It won't be called coding soon; Sometime in the future (soon?) we won't be talking about code. The few leftovers/managers/CEOs will only be talking about products not the code, not programming, not even operating systems. You won't hear about pull requests, or databases, or HTTP or any of that. You won't talk about programmers. At least not outside of "hobbies".
It seems fun must be subjective. It seems less fun than ever to me.
At least parts of this were written with AI
>"The abstraction tower
Here’s the part that makes me laugh, darkly.
I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.
They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
But sure. AI is the moment they lost track of what’s happening.
The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack.
AI is just the layer that made the pretence impossible to maintain."
Absolutely brilliant writing!
Heck -- absolutely brilliant communicating! (Which is really what great writing is all about!)
You definitely get it!
Some other people here on HN do too, yours truly included in that bunch...
Anyway, stellar writing!
Related:
https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...
https://en.wikipedia.org/wiki/Tower_of_Babel
https://en.wikipedia.org/wiki/Abstraction_(computer_science)
https://en.wikipedia.org/wiki/Abstraction
https://ecommons.cornell.edu/entities/publication/3e2850f6-c...
>But sure. AI is the moment they lost track of what’s happening. The abstraction ship sailed decades ago.
Bullshit. While abstraction has increased over time, AI is no mere incremental change. And the almost natural language interaction with an agent is not the same as Typescript over assembly (not to mention you could very well right C or Rust and the like, and know most of the details of the machine by heart, and no, microcode and low level abstractions are not a real counter-argument to that). Even less so if agents turn autonomous and you just herd them onto completion.
This LLM stuff is a little weird. Previously we had Python which was pretty close to pseudocode but you could run it directly. Now, these LLMs are one step more abstract, but their outputs aren’t runnable directly, they produce possibly incorrect code-like-text. Actually this seems like good news for programmers since you have to read the code in the lower-level language that gets produced.
I have the opposite take. There’s nothing stopping you from jumping into any component to polish things up. You can code whatever you wish. And AI takes away nearly all of the drudgery : boilerplate, test cases, inspecting poor documentation, absurd tooling.
It also lets me focus more on improving things since I feel more liberated to scrap low quality components. I’m much braver to take on large refactors now – things that would have taken days now take minutes.
In many ways AI has made up for my growing lack of patience and inability to stay on task until 3am.
> In many ways AI has made up for my growing lack of patience and inability to stay on task until 3am.
That is called...programming.
humans are distinguished from the lower primates by the use of tools.
I have been around for a similar amount of time. Another change I have seen over the years is the shift from programming being an exercise in creative excellence at work to being a white-collar ditch-digging job.
I was happy riding my horse when this dude invented a car.
Programming changed all along.
New concepts came out all along.
They became standardized all along and came down market to smaller and smaller projects.
Source control.
Cloud.
Agile/Scrum.
Code completion IDEs.
Higher Level languages.
These were not LLMs but did represent a shift that had to be kept up with.
LLMs are no different, just a bigger jump.
There is just as much opportunity here.
Software development and software developers are not going away.
More software that never could be built will now be built.
For the forseeable future there will always be software that needs to be overseen by a human.
Humans have a special knack for taking the humanity out of basically anything. It's a bizarre pattern.
It's not like it's changing by itself, you can always opt out of the slop race and scratch your itches instead.
https://gitlab.com/codr7/rem
“... when I was 7. I'm 50 now and the thing I loved has changed”
Welcome to the human condition, my friend. The good news is that a plurality of novels, TV shows, country songs, etc. can provide empathy for and insight into your experience.
I've written sse2 optimized C, web apps, and probably everything in between (hw, datasci, etl, devops).
I like coding with AI both vibe and assisted, since as soon as the question enters my head I can create a prototype or a test or a xyz to verify my thoughts. The whole time I'm writing in my notebook or whiteboard or any other thing I would have gotten up to. This is enabling tech, the trouble for me is there is a small thread that leads out of the room into the pockets of billion dollar companies.
It is no longer you vs the machine.
I have spent tons of time debugging weird undocumented hardware with throwaway code, or sat in a debugger doing hex math.
I think one wire that is crossed right now in this world is that computing is more corporate than ever, with what seems like ever growing platforms and wealth extraction at scale. Don't let them get you down, host your own shit and ignore them. YES IT WILL COST MORE -> YOUR FREEDOM HAS A PRICE.
Another observation is that people that got into the game for pure money are big mad right now. I didn't make money in the 00s, I did in the end of the 10s, and we're back at job desolation. In my groups, the most annoyed are code boot campers who have faked it until they made it and have just managed to survive this cycle with javascript.
Cycles come and go, the tech changes, but problem solving is always there.
The irony of these "My craft is dead" posts is that they consistently, heavily leverage AI for their writing. So you're crying about losing one craft to AI while using AI to kill another. It's disingenuous. And yes it is so damn obvious.
If you bothered to read it you’d find that I am embracing the tools and I still feel there is craft. It’s just different.
But snark away. It’s lazy. And yes it is so damn tedious.
I think the Oxide computer LLM guidelines are wise on this front:
> Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
https://rfd.shared.oxide.computer/rfd/0576#_llms_as_writers
The heavy use of LLMs in writing makes people rightfully distrustful that they should put the time in to try to read what's written there.
Using LLMs for coding is different in many ways from writing, because the proof is more there in the pudding - you can run it, you can test it, etc. But the writing _is_ the writing, and the only way to know it's correct is to put in the work.
That doesn't mean you didn't put in the work! But I think it's why people are distrustful and have a bit of an allergic reaction to LLM-generated writing.
Speaking directly, if I catch the scent of ChatGPT, it's over.
People put out AI text, primarily, to run hustles.
So its writing style is a kind of internet version of "talking like a used car salesman".
With some people that's fine, but anyone with a healthy epistemic immune system is not going to listen to you.
If you want to save a few minutes, you'll just have to accept that.
What's your target false positive rate?
I mean, obviously you can't know your actual error rates, but it seems useful to estimate a number for this and to have a rough intuition for what your target rate is.
Did chatGPT write this response?
This is how LLMs poison the discourse.
I agree with that for programming, but not for writing. The stylistic tics are obtrusive and annoying, and make for bad writing. I think I'm sympathetic to the argument this piece is making, but I couldn't make myself slog through the LinkedIn-bot prose.
"But snark away. It’s lazy. And yes it is so damn tedious."
Looks like this comment is embracing the tools too?
I'd take cheap snark over something somebody didn't bother to write, but expect us to read.
Having an LLM write your blog posts is also lazy, and it's damn tedious to read.
Why should anyone bother to read what nobody wrote?
AI?
This seems to be what is happening bots are posting things and bots are reading it. It's a bit like our wonderful document system (www) turned into an application platform. We gained the later but lost the former.
If you feel so strongly about your message, why would you outsource writing out your thoughts to such a large extent where people can feel how reminiscent it sounds of LLM writing instead of your own? It's like me making a blogpost by outsourcing the writing to someone on Fiverr.
Yes it's fast, it's more efficient, it's cheap - the only things we as a society care about. But it doesn't convey any degree of care about what you put out, which is probably desirable for a personal, emotionally-charged piece of writing.
I felt the same. I resonate with the message, but it really rings hollow with so much AI directing.
I'd wish people would stop doing that. AI writing isn't even particularly good. Its not like it makes you into Dostoevsky, it just sloppifies your writing with the same lame mannerisms ("wasn't just X — it was Y"), the same short paragraphs, the same ems.
I'm weird about this, I choose to use AI to get feedback on my writing, but refuse to just copy and paste the AIs words. I only do it if its a short work email and I really dont care about its short lived lifespan, if its supposed to be an email where the discussion continues, then I refine it. I can write a LOT. If HN has edit count logs, I've probably got the high score.
The author admits that they used AI but I found it not that obvious. What are telltale signs in this case? While the writing style is a little bit over-stylized (exactly three examples in a sentence, Blade Runner reference), I might write in a similar style about a topic that im very emotional about. The actual content feels authentic to me.
(1) The pattern "It's not just a X---It's a Y" is super common in LLM-generated text for some reason. Complete with em dash. (I like em dashes and I wish LLMs weren't ruining them for the rest of us)
"Upgrading your CPU wasn’t a spec sheet exercise — it was transformative."
"You weren’t just a user. You were a systems engineer by necessity."
"The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks."
And in general a lot of "It's not <alternative>, it's <something else>", with or without an em dash:
"But it wasn’t just the craft that changed. The promise changed."
it's really verbose. One of those in a piece might be eye-catching and make someone think, but an entire blog post made up of them is _tiresome_.
(2) Phrasing like this seems to come out of LLMs a lot, particularly ChatGPT:
"I don’t want to be dishonest about this. "
(3) Lots of use of very short catch sentences / almost sentence fragments to try to "punch up" the writing. Look at all of the paragraphs after the first in the section "The era that made me":
"These weren’t just products. " (start of a paragraph)
"And the software side matched." (next P)
"Then it professionalised."
"But it wasn’t just the craft that changed."
"But I adapted." (a few paragraphs after the previous one)
And .. more. It's like the LLM latched on to things that were locally "interesting" writing, but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!" completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.
> And .. more. It's like the LLM latched on to things that were locally "interesting" writing, but applies them globally, turning the entire thing into a soup of "ah-ha! hey! here!" completely ignorant of the terrible harm it does to the narrative structure and global readability of the piece.
It's like YouTube-style engagement maximization. Make it more punchy, more rapid, more impactful, more dramatic - regardless of how the outcome as a whole ends up looking.
I wonder if this writing style is only relevant to ChatGPT on default settings, because that's the model that I've heard people accuse the most of doing this. Do other models have different repetitive patterns?
Out of curiousity, for those who were around to see it: was writing on LinkedIn commonly like this, pre-chatGPT? I've been wondering what the main sources were for these idioms in the training data, and it comes across to me like the kind of marketing-speak that would make sense in those circles.
(An explanation for the emoji spam in GitHub READMEs is also welcome. Who did that before LLMs?)
Thanks a lot, I really appreciate that you took the time for this detailed explanation.
Imagine if people were complex creatures, feeling different emotions for different things, shocking right?
I can hate LLMs for killing my craft while simultaneously using it to write a "happy birthday" message for a relative I hate or some corpo speak.
This is not either of those. This is the equivalent of a eulogy to a passion and a craft. Using an LLM to write it: entire sections, headers, sentences - is an insult to the craft.
The post in the same vain, "We mourn our craft", did a much better job at this communicating the point without the AI influence.
Fair enough, agree on your second paragraph.
At least then you’re being honest about you hating your intended audience, and not proudly posting the slop vomited forth from your algorithmic garbage machine as if it were something that deserved the time, thought and consideration of your equals.
I'm 57 and wrote my first line of BASIC in 1980, so while I can still chime in on this specific demographic I feel that I ought to. So im like this guy, but like a lot of other people in my specific demographic we aren't writing these long melancholy blog posts about AI because it's not that big of a deal. As an OSS maintainer most of my work is a lot of boring slog adding features to libraries to suit new features in upstream dependencies, nitpicky things people point out, new docs, tons of tedium. Claude helps a ton with all of that. no way is Claude doing the real architectural puzzle stuff, that's still fully on me! I can just use Claude to help implement it. It's like the ultimate junior programmer assistant. It's certainly a new, different and unique experience in one's programming career but it really feels like another tool, like an autocomplete or code refactoring tool that is just a lot better, with similar caveats. I mean in my career, I've had to battle the whole time people who don't "get" source code control (starting with me), who don't "get" IDEs (starting with me), people who dont "get" distributed version control (same), people who don't "get" ORMs (oh yes, same for me though this one I took much more dramatic steps to appreciate them), people who don't "get" code formatters, now we're battling people who don't "get" LLMs used for coding, in that sense the whole thing doesnt feel like that novel of a situation.
it's the LLMs that are spitting out fake photos and videos and generating lots of shitty graphics for local businesses, that's where I'm still wielding a pitchfork...
There's 3-4 of these posts a day - why don't people spend more time hand-building things for fun in their free time? That's what led a lot of us to this career path to start with. I have a solid mix of hand-code and AI-assisted projects in my free time.
>>The machines I fell in love with became instruments of surveillance and extraction.
Surveillance and Extraction
"We were promised flying cars", and what we got was "investors" running the industry off the cliff into cheap ways to extract money from people instead of real innovation.
> I started programming when I was seven because a machine did exactly what I told it to
What a poetic ending. So beautiful! And true, in my experience.
This isn't new. It's the same feeling the first commercial programmers had working in assembly, or machine code, once compilers became available. Ultimately I think even Mel Kaye forsook being able to handpick memory locations for optimum drum access before his retirement, in favor of being able to build vastly more complex software than before.
AI has just vastly extended your reach. No sense crying about it. It is literally foolish to lament the evolution of our field into something more.
Programming is dead. In the last 4 days I've done 2 months of work. The future is finally here.
Bad times to be a programmer. Start learning business.
I'm 57. I was there when the ZX81 came out.
I had my first paid programming job when I was 11, writing a database for the guy that we rented our pirate VHS tapes from.
AI is great.
Don't program as a career, but am also 50 and programming since TRS-80. AI has transformed this era, and I LOVE IT! I can focus on making and not APIs or syntax or all of the bootstrapping.
I wondered when I would see someone else mention TRS-80. Recall those days like yesterday "Hey Mah! It works!"
Professional development is changing dramatically. Nothing stops anyone from coding "the old way," though. Your hobby project remains yours, exactly the way you want it. Your professional project, on the other hand, was never about you in the first place. It's always about the customer/audience/user, period full stop.
Please stop upvoting these posts. We have gotten to the point where both the front page and new page is polluted with these laments
It’s literally the same argument over and over and it’s the same comments over and over and over
HN will either get back to interesting stuff or simply turn into a support group for aging “coders” that refuse to adapt
I’m going to start flagging these as spam
The article talks about human vs technology and the loss of connection between creation, intent, ownership, and control. Don't be condescending.
…like every other post of this kind
same bud.
maybe that just means it's a maturing field and we gotta adapt?
yes, the promise has changed, but you still gotta do it for the love of the game. anything else doesnt work.
I’m 50 too and I’ve complained and yearned about the “old” days too, a lot of this is nostalgia as we reminisce about periods of time in our youth when we had the exuberance and time to play and build with technology of our own time
Working in AI startups strangely enough I see a lot of the same spirit of play and creativity applied to LLM based tools - I mean what is OpenClaw but a fun experiment
Those kids these days are going to reminisce about the early days of AI when prompts would be handwritten and LLMs would hallucinate
I’m not really sure 1983, 1993 or 2003 really was that gold of age but we look at it with rose colored glasses
11 and now 45. I am still interested in it, but I feel like in my 20s I would get a dopamine rush when a row showed up in a database. In my 30s I would get that only if a message passed through a system and updated on-screen analytics within 10 seconds. Thank god for LLMs because all of it became extremely boring, I can't stand having to get these little milestones each new company or each new product I'm working on. At least with LLMs the dopamine hit comes from being in awe of the code that gets generated and realizing it found every model, every messaging system interface, every API, and figuring out how to make it backwards compatible, updating the UI - something that would take half a day, now in 5 minutes or less.
> I’ve had that experience. And losing it — even acknowledging that it was lost
What are you talking about? You don't know how 99% of the systems in your own body work yet they don't confront you similarly. As if this "knowledge" is a switch that can be on or off.
> I gave 42 years to this thing, and the thing changed into something I’m not sure I recognise anymore.
Stop doing it for a paycheck. You'll get your brain back.
Old Man Yells at Clouds
I'd feel the same when I was younger. Over time I've realized that they are the lucky ones. You too, if you're lucky, will one day be an old man doing old man things.
Old Man Yells at Claude
So tired of this sort of complaint (and I'm 62).
The computing the author enjoyed/enjoys is still out there, they are just looking for it in all the wrong places. Forget about (typical) web development (with its front and backend stacks). Forget about windows and macOS, and probably even mobile (though maybe not).
Hobby projects. C++/Rust/C/Go/some-current-Lisp. Maybe even Zig! Unix/Linux. Some sort of hardware interaction. GPL, so you can share and participate in a world of software created by people a lot more like you and a lot less like Gates and Jobs and Zuckerberg and ...
Sure, corporate programming generally tends to suck, but it always did. You can still easily do what you always loved, but probably not as a job.
At 62, as a native desktop C++ app developer doing realtime audio, my programming is as engrossing, cool, varied and awesome as it has ever been (probably even more so, since the GPL really has won in the world I live in). It hasn't been consumed by next-new-thing-ism, it hasn't been consumed by walled platforms, it hasn't been taken over by massive corporations, and it still very much involves Cool Stuff (TM).
Stop whining and start doing stuff you love.
Sure enjoy your retirement. But for me it's annoying some late 50s+ people telling what you just did. Think about people who are in their 20s or 30s - they are not even halfway through their path to retirement and some maybe even still paying out student debt.
> Stop whining and start doing stuff you love.
You have to understand that it's hard to do stuff that you love when you have to feed your family and pay mortgage or rent. Not everyone can be or want to be entrepreneur.
You are just talking from perspective of someone who already paid all debts raised all kids and now enjoying or soon will be enjoying retirement - at least meaning you can retire even if maybe don't want to.
Retired? I'm not retired and likely won't be for another 8 years.
> But for me it's annoying some late 50s+ people telling what you just did.
The author of TFA is at least 50!
> You are just talking from perspective of someone who already paid all debts raised all kids
That part is true. But that was more or less true when I was 50, too.
Finally, the article wasn't about the shitty economic world that we've created for so many people, it was about how programming has changed. Those two are inter-related but they are not the same.