I also appreciate the craft of coding. My suspicion is that eventually it just won't be a thing anymore, even though such a future is not my preference. Not that humans will be completely removed from coding, but writing every line yourself will just be incredibly uncommon and not industry standard. I'm not good at predicting, this is just where my gut is right now.
This is similar to the reaction folks had to autocomplete/IntelliSense when it came out. "What, you just press tab and get all the possible options? Psh, how about reading the docs like the rest of us did?" I think AI is a pretty big step function up in terms of the computer's capability to predict the code you want - but at the end of the day we'll still be writing code (I think. I hope).
Overreliance on IntelliSense does worsen your abilities, though, especially when dealing with complex packages and APIs. So does Stack Overflow. In my domain of data science and finance, it is VERY noticeable when someone doesn't actually understand how Pandas and Numpy work, for example, and how to write good vectorized solutions to their problems.
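A toy sketch of what that gap looks like in practice - the data and the `price` column are entirely made up, but the shape of the two solutions is the point:

```python
import numpy as np
import pandas as pd

# Hypothetical data: 100k prices in a single column.
df = pd.DataFrame({"price": np.random.default_rng(0).uniform(90, 110, 100_000)})

# The pattern that stands out in review: a row-by-row Python loop.
returns_loop = [
    df["price"].iloc[i] / df["price"].iloc[i - 1] - 1
    for i in range(1, len(df))
]

# The vectorized idiom: one shifted division over the whole column,
# orders of magnitude faster (the first element is NaN by construction).
returns_vec = df["price"] / df["price"].shift(1) - 1
```

Both produce the same numbers; the difference is whether the loop runs in interpreted Python or inside NumPy.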
You might be right that intellisense is to blame, but my suspicion is that some people are just worse at writing code and it isn't really about the available tools.
It doesn't help that AI generally creates very mediocre Pandas code (based on a lot of the training data showing mediocre practices).
Pandas is a very difficult API to use correctly, and a huge swath of the programs that the AIs have trained on were "programmed by accident", meaning people just typed stuff at it until they got the output they wanted.
To offset all the bad code, you would have to make a Pandas fine tune and/or ablate the bad Pandas from the weights.
Does anyone have good reference material on learning how to use pandas effectively and understanding the api as a whole?
Any time I've had to use pandas I am shocked at how convoluted and opaque the docs are, and I end up just hacking away till something works alright.
I felt the same - I had to relearn/look up everything every time I went back to a project, or wanted to do operations that are simple to describe in SQL but that I couldn't wrap my mind around in Pandas, e.g. using multi-indexed dataframes & aggregations properly. These days, I always jump to Polars instead of Pandas - a much more intuitive and consistent API. Tons of props to Pandas for all that they did (and continue to do) in the data space, but their API did not evolve very well IMO.
I've also been wanting to play with Ibis[1] recently, but Polars has been sufficient for me.
[1] https://ibis-project.org/
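For a flavor of the API difference, here is the same SQL-style GROUP BY in both libraries - a minimal sketch with made-up data and column names:

```python
import pandas as pd
import polars as pl

data = {
    "region": ["EU", "EU", "US", "US"],
    "units":  [10, 20, 30, 40],
}

# Pandas: named aggregation, plus reset_index() to undo the implicit index.
agg_pd = (
    pd.DataFrame(data)
      .groupby("region")
      .agg(total_units=("units", "sum"), avg_units=("units", "mean"))
      .reset_index()
)

# Polars: expressions in, columns out - no hidden index to manage.
agg_pl = (
    pl.DataFrame(data)
      .group_by("region")
      .agg(
          pl.col("units").sum().alias("total_units"),
          pl.col("units").mean().alias("avg_units"),
      )
)
```

(`group_by` is the spelling in recent Polars releases; older versions used `groupby`.)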
There’s Pandas for Everyone by Daniel Chen.
What helped me with Pandas was a (very) short stint with array programming, specifically uiua. It gave me a good understanding of the possible operations on arrays.
Is this a setup??? :)
The guy two comments up wrote a great book on using pandas effectively. It’s called Effective Pandas
I think Copilot (and all the other AI coding variants) is different from auto-complete. Auto-complete just gives you function names and variable names, but AI coding can write a whole block or even a whole file of code, if you so wish.
By the same principle, I also think the advancement in AI is different from other technological advancements. Even the invention of the computer is at most on par with AI, if AI manages to go far. People always reach for the train-horse analogy, but I think we will see a gloomy future in the next 5-10 years -- especially when the whole world is turning not to the left, but to the right.
Now that US and China and everyone else are competing on AI, that future might come earlier than I thought.
With the difference that intelli* is usually correct - i.e. it fetches all the possible methods from actual code, and you can see the related API docs. It's another view of the same data (more or less - of course there can be docs other than API docs).
With AI you've got no idea whether something is right; it's a probabilistic thing.
That’s why I don’t buy the “AI as just another abstraction layer” argument. AI is something different and we should use it differently.
I also wonder if people ever truly learned their IDE. Even Xcode, with all its warts, has nice features for completion, debugging, profiling, documentation, ...
It seems like all they know is VS Code, which is neither a good editor nor a good IDE.
From my experience, people are really wary of learning the tool they're using to code, because any editor similar to VSCode fulfills their needs.
However, as you said, mastering a proper IDE is a superpower and allows much more to be done in a single window in a shorter amount of time.
Also, since VSCode is everywhere, getting people off of it, or showing them more capable software, is a sad and moot effort, from my experience again. KDE's humble Kate is much better than VSCode on many fronts.
> Also, since VSCode is everywhere, getting people off of it, or showing them more capable software, is a sad and moot effort
The issue is that they're not learning VS Code. They install a lot of plugins, but they're not really learning them. I've seen people that still go through the explorer to open a file.
I think there's a fundamental difference.
Autocomplete and IntelliSense are tools first and foremost. AI is centralized into a handful of companies that you are forced to pay every month.
Autocomplete and intellisense don't care about your data. There's an inherent issue with data, privacy and ownership when it comes to AI.
If we can run useful models locally and make it generally available on consumer hardware... things would be different.
I do read the docs a lot less now though with IntelliSense, so if my craft were reading and understanding docs I'd probably be measurably disappointed by this future.
Aside: I can't stand popups when I'm coding. I always have to change it to something more manual or it just breaks up my flow. ^K to bring up a man page from vim is great--that's what I'm after.
There are dozens of us!
I never understood how people could maintain focus while their cursor is jumping around and things are flying across the screen. I can type 90+ wpm, I don't need help typing.
This sort of makes me want to learn Lisp. Maybe if every line packs a lot of punch and you need fewer of them, you can do things AI can't do yet. I think AI would struggle with Lisp since, in Lisp, you engineer a language for the task. AI tends to be good at languages it is trained on. For now!
If you're ever bored for a weekend I'd recommend reading through the Reasoned Schemer, or at least the final chapter and the appendices to see how a simple Kanren can be implemented. It's what got me to properly appreciate macros.
I think the best way to learn Lisp is to make a Lisp.
Two routes, I would recommend both.
https://github.com/kanaka/mal
https://t3x.org/
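To make "make a Lisp" concrete: the core of a calculator Lisp fits in a page of Python. A minimal sketch - no `define`, no `lambda`, no error handling - just tokenize, parse, and apply:

```python
import math
import operator as op

def tokenize(src):
    """Split an s-expression string into tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one expression, consuming tokens from the front of the list."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok  # anything that isn't a number is a symbol

# The entire "standard library" of this toy Lisp.
ENV = {"+": op.add, "-": op.sub, "*": op.mul, "/": op.truediv, "sqrt": math.sqrt}

def evaluate(x, env=ENV):
    """Symbols look up, numbers self-evaluate, lists apply head to tail."""
    if isinstance(x, str):
        return env[x]
    if isinstance(x, int):
        return x
    fn, *args = [evaluate(e, env) for e in x]
    return fn(*args)

print(evaluate(parse(tokenize("(* (+ 1 2) (sqrt 16))"))))  # -> 12.0
```

mal's value is in walking you through growing exactly this core into a real interpreter, special forms and macros included.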
This has been happening for a long time, though. No one has written every single line of a program since the 80s. A single line of a JS function call can boil down to hundreds or thousands of asm lines. The same thing happens here with English to asm.
Ok... Edit my comment to say "writing the bulk of your business logic yourself will be incredibly uncommon."
I love writing code. I recently tried refactoring some modules using AI agents. The amount of time it took me explaining requirements to AI, reviewing the code it generated and fixing the mistakes was not trivial. And, it didn't feel as joyful as figuring it out and coding it manually.
If AI writes the majority of code, then we will stop seeing the shortcomings of existing tools, libraries and frameworks, and eventually stop getting new frameworks and libraries.
Would there be Rails if DHH had vibe coded 25 years ago? Or would there be a library like delayed_job if Tobi had vibe coded back when he started Shopify?
It’s arguably not a thing now and hasn’t been since the mid 2010s at the latest.
I blame the proliferation of MBAs, but regardless of the cause, management killed software engineering as a true craft once they co-opted what agile development means, threw proper testing out the window, and wanted devs to become turnkey feature factories.
Very good article. I try to (respectfully) push my coworkers in this direction but it’s a bit of a lost cause.
I work mostly on small codebases and you can smell unchecked AI codegen from a mile away. Even small changes become giant refactors.
People who push this have never had to work in such codebases. God forbid your requirements change mid project. Rarely does the sum of separate prompts without full context result in something maintainable and modifiable. You’re cooked.
This article really hit home—especially this part from the conclusion:
> In a world pushing for “reflexive AI usage,” I’m advocating for something different: thoughtful, intentional collaboration with AI that preserves the essence of coding as a craft.
> ...
> Like Rocky, we sometimes need to step away from the comfortable, civilized environment and return to the old gym – the place where real growth happens through struggle, persistence, and focused practice.
> Because coding isn’t just about output. It’s about the journey of becoming better problem solvers, better thinkers, and better engineers. And some journeys can’t be outsourced, even to the most advanced AI.
But here’s the reality: those ideals feel increasingly out of reach. Business demands and short-term thinking rarely leave room for “intentional” or “thoughtful” work. For many of us, having time to grow as engineers is a luxury.
Worse, it’s often personal. I’ve had to carry the weight for friends in crisis, pretending two people were working just to help someone keep their job. It’s brutal—and sadly, not rare.
As AI gets more buzz, many stakeholders now think our work is overvalued. A quick AI PoC becomes “good enough” in their eyes, and we’re expected to polish it into something real—fast, cheap, and under pressure. Meanwhile, we’re constantly defending our craft against the next threat of being replaced by “cheaper” labor.
When I started out, we cared about clean code and craftsmanship. Now, I feel like I should be taking sales courses just to survive.
Today, it’s all about output. Ship faster or get replaced. Quality only matters when it’s too late—after the person who made the bad call has already cashed out.
I know this sounds pessimistic, but for many of us who aren’t in the top 1% of this industry, it’s just reality.
Thanks for the article, Christian. You’re not wrong—but I think you’re one of the few lucky enough to live that perspective. I wish you all the best, and hope you can keep enjoying that rare luxury. There will be a need for true craftsmen—especially when the rest of us have gone numb just trying to keep up.
For me it’s not complicated when to use AI. If the task is something where having fluency will make me significantly more productive and nimble in the future, then I’ll go the old gym way.
It’s like communicating in a foreign language- you can do it with AI, but if it’s an essential element of your job then you’re going to invest the time to learn it so it becomes part of you.
I think this fails to recognize how many more important problems there are in the world, and that the writing of code was not meant to be one of them, but only came into existence to solve them.
Of course, that does not have to be true now. You can certainly do this for personal satisfaction.
But the argument in this article is a bit confused. The step that lies behind "coding" is not of lesser difficulty - on the contrary. Instead of worrying about coding, we can instead worry about the bigger picture, and all the beautiful thinking, contemplating and deadlock it entails.
Only now, we are one step closer to solving a real problem.
> thinking, contemplating and deadlock
This is what I’d call ‘programming’. Which you’ll still be doing even if the AI is writing the code.
The question is whether you can become good/better at programming without writing code?
My anxiety about where this is all going is as follows. Using AI for all your coding is a wet dream for CEOs, because the goal is to fire all their engineering staff except for a handful of maintainers. However, it reminds me of trying to explain to your stakeholders why you need time to work on tech debt before it becomes a problem. There is no easily measured metric that says there is a problem, but engineers are grinding their teeth trying to work within a system that is slowly degrading and showing signs of it.
“Who cares, ship it, also we need this new feature next week. What do you mean it will take longer this time? Ridiculous, why didn’t you say something before?”
Likewise, the brainrot and lost knowledge, as well as the possible new tech debt that fewer engineers working in the codebase understand, will eventually cause issues down the line. The same pressures will ensue, causing stakeholders to ignore all the signs of degradation.
> “Who cares, ship it, also we need this new feature next week. What do you mean it will take longer this time? Ridiculous, why didn’t you say something before?”
That's the reason I tried to have most of my communication (and complaints) in written and auditable form.
This reminds me that I should do more coding challenges. Maybe nothing major, small data wrangling. Stuff like puzzles in the game Human Resource Machine. Advent of code light maybe. So refreshing and fun sometimes.
Really interesting and I agree with everything. With my team we always try to improve our programming skills without AI, even though we all recognize its great utility.
Today every type of problem and every type of solution seems to have to be solved with AI, when there are more creative, original and artisanal ways to solve them (even if, sometimes, they need more time and patience).
> The phrase “reflexive AI usage” is what triggered my strongest reaction. “Reflexive” suggests unthinking, automatic reliance. It implies delegating not just tasks but judgment itself.
Does it? When I trained as a schoolteacher, we were required to engage in 'reflexive practice', meaning at the end of the school day, we were expected to sit down and think about - reflect - on what had happened that day. I don't know how the Shopify CEO meant that phrase, but 'reflexive AI usage' has two conflicting meanings - it can be AI usage that is either actively or passively chosen - and we might need some better phrasing around this.
> I don't know how the Shopify CEO meant
I left Shopify a couple weeks ago and Tobi is very, very all-in on AI being an integral part of all jobs at Shopify.
Tobi said that how you use AI is now an official part of your review, and that for any new recs you need to show that the job can't be done by an AI. I left shortly after the memo, so I do not know if things have changed.
Shopify also brought in a very AI CTO a few months ago that internally has been... interesting to say the least.
Also, anecdotally, the quality of code at Shopify was declining rapidly (leadership's words, not mine). All sorts of code-reds and yellows were happening to try and wrangle quality. This isn't Blind, so no need for the gore and opinions, but we'll have to see how this shakes out for Shopify.
I thought CEOs were more like coaches who motivate and inspire, not dictators of how employees should execute, since engineers are expected to be far more capable than the CEO at their daily work (if not, the CEO needs to evaluate the hiring practices). Tying AI use to perf reviews and compensation is just more unnecessary process that incentivizes behaviors which may be counter-productive - "Oh, look, I am such a brilliant prompt-engineer."
So the memo seemed to baby-sit adult engineers. It goes without saying that engineers will use AI as they see fit, and the least a company could do is make Copilot subscriptions available for the staff if needed.
> we were required to engage in 'reflexive practice', meaning at the end of the school day, we were expected to sit down and think about - reflect - on what had happened that day.
That is _reflective_ practice (which involves reflection). Reflexive otoh comes from 'reflex', which does suggest unthinking automaticity.
No, reflexive and reflective are synonyms; they are alternative forms of adjectives derived from the Latin verb flecto, flectere, flexi, flexum (note that both English spellings are present in the principal parts).
They both have multiple meanings in English. The article was using reflexive this way: "characterized by habitual and unthinking behavior." https://www.merriam-webster.com/dictionary/reflexive
Is that fair to the word given its roots? No, but that is English for you. :)
As bad as Merriam-Webster is, you might notice that 'characterised by habitual and unthinking behaviour' is the fourth, i.e., least common, definition offered, not the first.
Merriam-Webster uses historical order, not how common the meanings are [0], which makes more sense to me - I'm not entirely sure I've ever heard the "reflective" meaning for "reflexive". The "unthinking" meaning is definitely more common.
[0] https://www.merriam-webster.com/help/explanatory-notes/dict-... (See "Order of Senses")
Somehow I knew you would keep digging. :) It's not marked as archaic, so any of the senses is valid. Context tells the reader which is being used.
> Somehow I knew you would keep digging. :)
I was the teacher who thought words were fun. Sadly, that doesn't seem to be acceptable any more. :(
> Context tells the reader which is being used.
That's the thing: if you read the CEO's post on its own, with both meanings in mind, it's not clear (at least to me) which is intended.
Words and civil debate seeking truth are fun!
The following behavior is not.
> No, reflexive and reflective are synonyms; ...
> I was the teacher who thought words were fun. Sadly, that doesn't seem to be acceptable any more. :(
Regardless of etymology, I believe the use of “reflexive” means something different in the article than “reflective.” The Shopify CEO isn’t describing insightful use of AI in coding. He is describing automatic, unthinking use of AI. At least, that was my understanding.
It's been a LONG time since my Latin, but doesn't the active vs. passive distinction capture what we're talking about in English quite well?
A reflexive action is taken passively, without thought.
A reflective action is taken actively, with careful thought.
> If people stop the occasional deadlock of grinding teeth, looking at a problem, crying, going for a walk, praying and screaming until suddenly it makes sense (and you learn something!), I’d call it severe regression, not progress.
People for whom development is not their job will absolutely want to get rid of it as much as possible, because it costs money. I really agree with the author: it does feel like a regression, and it's so easy to overlook what makes up most of the job when it looks like it can be fully automated. Once you don't have people who are used to doing what's quoted above, and there are 500 million lines of code and bugs, good luck asking a human to take a look. Maybe AI will be powerful enough to help with debugging, but it's a dangerous endeavor to build a critical business around that. If for any reason (political or otherwise) AI got more expensive, it could kill businesses (Twitter API?)
"Generating boilerplate code, Summarizing documentation, Understanding complex concepts, Debugging tricky error messages, Drafting unit tests, Formatting data"
Thing is, this is probably 99% of the programming work of a junior dev at a place where management thinks like that.
Yeah, anytime I am about to do some long multiplication, I start reaching for my calculator and stop, "no, you will go to the multiplication gym, and do this by hand, need to stay sharp"
I am not sure if you are serious, but I literally DO this, not just for multiplication, but for lots of automatable things.
Same here. I stopped writing in cursive in high school and completely lost the ability, and it was difficult to learn again as an adult when I wanted to. I don't want to lose basic skills like multiplication, and exercising them occasionally prevents that.
I started journaling in cursive after taking a 40-year break from it. I've gotten better, and am now faster at writing cursive than non.
Even losing programming skills to AI... I'll bet we can get them back.
Definitely easier to not lose them in the first place, though.
Love your surname dude (also your religion). AMDG
Yes, I do, for estimation. If I can calculate requests per minute in my head, I've saved time.
Also, in your analogy the calculator is the compiler :). AI would be someone telling you the numbers to use, and you just trust 'em.
I don't reach for calculators so often. When numbers are small or mental approximations will do, it's simply faster to do it in my head. And I'm not even good at mental math.
I actually do this and so do many of my peers...
This is an absolutely false equivalency. There’s no decision making, design consideration, architecture, real problem solving, etc. when doing long multiplication.
You still need to do the decision making, design, and architecture when using AI. AI is still more like an enthusiastic junior engineer. It will mindlessly start trying to solve a problem, copy in bad code, often make mistakes, etc. You're still responsible for the hard problems and finding the issues. You are more of a senior/lead engineer who is doing just as much thinking but not that much of the actual typing.
The question in my mind is whether you need to become less productive to keep your thinking skills sharp. Do we need to separate the work from the "gym"? We would have times when we use AI heavily to be as productive as possible, and other times when we don't use it at all, to keep us sharp.
Is this necessary, or are we being old-fashioned? I lean towards this being necessary, but if I had grown up with AI, I might look at not using it the way I look at writing a web app in assembly. Yes, I learned it in college, but there's no reason to keep using it.
There are more complicated math systems that computers have solved, just like chess and Go - systems that seemed impossible for a machine to beat, until eventually they did.
Coding will be exactly the same soon.
Code is formal language; there's nothing to be solved because it's already as precise as 2*2. The issue is not with the programming language, the issue is the domain where the problem lives and the human who translates the solution.
Let’s take text rendering. We already have words on paper and various ways to get them there. But doing the same with a computer is a difficult job because of all the parameters for drawing characters and laying them out to form words and lines. Once you find those parameters, you have to account for future changes, so you write your code in a way that minimizes that impact. And because someone else will probably do maintenance, you try to come up with good abstractions so that your solution becomes understandable.
If AI solves coding, it may as well write machine code directly or be embedded as firmware, because every programming language was made for humans.
> It won’t design your domain layer with the care and consideration that comes from deep experience and hard-won knowledge.
What if every time you had an Aha! moment, you blogged about it in detail? Many people do. AI ingests those blog posts. It uses what they say when writing new code, or assessing existing code. It does use hard-won knowledge; it just wasn't hard-won by the AI itself.
The problem is that someone else's aha might not apply to your situation. The AI can't reason and generalize like a human can, to apply lessons from someone else to your slightly different situation.
To me, knowledge is about knowing things, intelligence is about being able to apply your knowledge at the right context and for the right reasons.
The current crop of LLMs has a lot of knowledge, but severely lacks on the "intelligence" part. Sure, it can "guess" how to write a unit test consistent with your codebase (based on examples), but for the 1% of cases where you need to deviate from the rule, it's completely clueless about how to do it properly. Guessing is not intelligence, although it might appear masked as such.
Don't get me wrong, the "guessing" part is sometimes uncannily good, but it just can't replace real reasoning.
"It’s about maintaining the human element in a craft that’s increasingly automated."
I mean, what can anyone do, anyway? We've been on a "quest" toward the total automation of work for decades, and unfortunately these reflections are coming far too late.
Didn't anyone notice what was happening all these years?
Talking with a musician friend, he pointed out that today, studying, producing, and releasing music is almost volunteer work, because the vast majority of artists will likely see no return on their investment - especially with AI flooding the music platforms. So I really expect the same to happen to many other jobs.
There's a difference between having an opinion that a particular future is unavoidable, and choosing to live in that world before it even arrives.
Not a single book on the NYT bestseller list is written by AI.
>majority of artists will likely see no return on their investment
I wonder if music is the best example, because if I recall, it has always been like this for musicians. Never have I heard that in my time, or my parents' or grandparents' time, musician was a career you would get into for the money.
When I was young I got to meet a lot of the aging jazz musicians of the 1930s in Kansas City. It absolutely was a career here. Granted, that’s a distant memory for most people.
Going this route, what's the point of learning anything if everything is instantly accessible from an AI with working solutions? So no learning = no teaching, or teaching that feels useless. That's a weird and dangerous road. Everyone should own this technology for the situation to be balanced - not private companies or individual countries. Because we make ourselves kind of useless in the process, we lose leverage and value, and we are at the mercy of the powerful ones.
I like the article.
At the same time, as AI takes over the actual coding practice more and more, I find the situation with multiple programming languages a waste of resources.
If AI could generate binaries or WebAssembly directly, or even some "AI-specific bytecode", then we could skip the steps in the middle and save a ton of energy.
In this scenario are humans not making changes to existing code anymore?
So we just trust what AI says it does. What could go wrong, lol
Even delegating just the boring parts doesn't appeal to me.
Generating boilerplate code - getting frustrated about code is what drives new ideas and improvements, I don't want to lose that friction.
Summarizing documentation - Reading and making sense of written material is a skill.
Explaining complex concepts - I don't want explanations on a silver plate, I want to figure things out. Who knows what great ideas I'll run into on the way there.
Helping debug tricky error messages - Again, a skill I like to keep sharp.
Drafting unit tests - No one knows better than me what needs testing in my code; this sounds like the kind of unit tests no one wants to maintain.
Formatting data - Maybe, or maybe whip out Perl and refresh that skill instead.
Keep delegating everything to AI for a year and I suspect you'll be completely worthless as a developer without it...
It reminds me of the Hamming quote:
"I noticed the following facts about people who work with the door open or the door closed. I notice that if you have the door to your office closed, you get more work done today and tomorrow, and you are more productive than most. But 10 years later somehow you don't know quite know what problems are worth working on; all the hard work you do is sort of tangential in importance. He who works with the door open gets all kinds of interruptions, but he also occasionally gets clues as to what the world is and what might be important."
Each of those little interruptions is a clue about the wider state of the world (codebase / APIs etc). AI offers a shortcut, but it does not provide the same mental world-building.
That's a nice quote. And your comment reminds me of why I (and maybe some other people) prefer window managers over desktop environments. You go with the basics, and every time you notice some missing capability or inefficiency, you code it away. The end result is something that fits you like a glove and that you understand thoroughly. It's 100% your own.
But with a DE, you get maybe 80% of what you need, and the 20% you build with workarounds is constantly under threat. Why? Because you're effectively enclosed in a small space by the design decisions of the DE.
Exactly this, though for me a lot of boilerplate is actually a comfort zone that I often look forward to, the way an athlete might to a light jog. Earbuds in, forget about everything, crank it out.
(That said your point is valid — there is boilerplate that is tedious and the resulting pain will be motivation to improve things)
Reformatting data is the very last thing I’d trust an LLM to do. What if it picks numbers it likes better? Compiler won’t catch that.
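The deterministic alternative is usually just a few lines of stdlib anyway. A minimal sketch, with made-up input:

```python
import csv
import io
import json

# Hypothetical input: the kind of one-off reformat people now hand to an LLM.
raw = """id,amount
a1,19.99
b2,5.00
"""

# A script can't "pick numbers it likes better": every value in the output
# is copied verbatim from the input, and the same input always gives the
# same output.
rows = list(csv.DictReader(io.StringIO(raw)))
print(json.dumps(rows, indent=2))
```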
But then again, most IT jobs are the equivalent of flipping burgers at McDonald's; nobody's asking them to be Michelin-starred chefs.
There is a lot I hate about this statement.
First, the pervasive assumption that there is no skill involved in food preparation is wrong. While the floor may be higher in a kitchen operated by an executive chef, there is a noticeable difference between a badly-made Big Mac and a well-made one. Execution matters.
Next, at this point "IT" is so broad as to be almost meaningless. In this discussion, we're talking about programming.
Finally, you're holding up Michelin starred chefs as being inherently better than all other chefs. The Michelin star program is skewed towards one particular end result; to put it in technology terms, it's like grading your business solely on a narrow set of SLOs rather than a holistic understanding.
Hate it all you want, the vast majority of "programmers" aren't working on anything novel, meaningful or hard. For the vast majority of people it's just a job, it's not a hobby, it's not a passion, it's not something they dream about, it's just a thing that they have to do 8 hours a day to make money and go do stuff in the real world. They don't want to think about it on walks, they don't want to cry about it, they don't want to dream about it and solve problems in their fucking sleep
AI is liberating them because it automates 80% of their work, and there is nothing wrong with that. Most people work on projects that won't even exist in 10 years; let's stop pretending we're all working on Apollo-tier software. Coding isn't a craft, it's not an art, it's a job in which you spend the vast majority of your time fucking up your eyes and spine to piss code for companies treating you like cattle.
For every """code artisan""" you have a thousand people who'd be just as excited about working in a car factory or flipping burgers; it just so happens that tech working conditions are better.
There is nothing wrong with people wanting to just be able to afford food, shelter, and comfort.
However, if people are writing software that other people rely on, there has to be some expectation of quality. Software that controls a machine responsible for keeping someone alive, for instance, should function reliably.
Relying on AI to vibe-code such software is dangerous at best.
Well, I don't know about you but everything I have written for money has been new (novel), interesting to me, and of value for the organisations I worked for. I would not want it to be otherwise, and I never saw it as a Mac job.
In other words, for the vast majority of people: work (code) or starve to death.
I wish I could make six figures flipping burgers. Actually, why aren't most burger flippers applying for IT jobs?
I wish I were making 100k as a dev, but plot twist: outside of the US this is the minority.
I was comparing burger flippers to Michelin chefs, not to devs. The vast majority of devs are gluing tools together and working on basic CRUD stuff, which is the burger flipping of the tech world. It's just a job; people don't want to think about code in the shower, or on walks, or "cry" about tech problems as the author seems to romanticise. A job is here to provide money so you can live life, not the other way around. If I can automate my burger flipping to go to the gym or read a book instead, I'll gladly do it.
It helps to like your job though.
I do some of the, ahem, "romanticised" things you mentioned because even in CRUD stuff there are hard problems to solve - particularly problems introduced by other people.
We work to get paid, but you can't get around the fact that we spend so much of our lives at work that it is a part of life.