> if you’re doing this for your own learning: you will learn better without AI.
This is not the distinction I would want to tell newcomers. AI is extremely good for finding out what the most common practices are for all kinds of situations. That's a powerful learning tool. Besides, learning how to use a tool well (one that we can expect professionals to use) is part of learning.
Now most common practices and best practices are two different things. They are often the same, but not always. That's the major caveat for many fields, but if you can keep it in mind, you're going to do OK.
It's also extremely good at describing what code is doing, architecture, extrapolating why something is done a certain way etc. Invaluable for me for learning how unfamiliar code works in unfamiliar languages
It is a great learning tool for people who are willing to learn and put in the time and effort: ask good questions, double-check everything, read the documentation, and make sure you understand everything before you move on. It's a tremendous tool if used correctly. People who just hit tab or paste everything Claude generates will get worse. The benefit of "the old way" was that even the people who didn't want to put in the effort were making some improvement, if only through friction and time spent.
You can go to a restaurant to see new dishes (which isn't nothing) but it wont exactly teach you how to cook.
Unless you happen to meet the unendingly patient and helpful cook who is willing to explain the recipe in any depth one desires.
This is like telling people to learn how to draw by only looking at the masters' paintings in person instead of tracing and imitating from possibly stolen but otherwise cheap books at home.
I would say to at least just read what the AI does and ask it questions if you don't understand what it did. You can interactively learn software development from AI in a way that you cannot from a human simply because it won't run out of patience even if it will lie to you.
The results depend mostly on how you use it.
It bothers me that so many programmers I know, here and in real life, seem to never actually have cared about the craft of software development? Just about solving problems.
I like problem solving too. But I also like theory and craft, and in my naïveté I assumed most of us were like me. LLMs divorced craft-programming from tool-programming and now it seems like there were never any craft-programmers at all.
It feels like the group I was part of was just a mirage, a historical accident. Maybe craft-painters felt the same way about the camera.
The one engineer in my life who cared about the craft of software development also refused to have a color profile pic and also did his zooms in black and white and thought Ruby was the only good programming language.
That's like one person with terrible taste.
My point lol.
I think most devs, especially ones that call it a "craft," take themselves too seriously. We're glorified construction workers that get paid a lot.
There’s room for both types of people in any trade. Some photographers obsess over the equipment, some only care about the photos. Carpenters with tools. Musicians with instruments & gear. Every craft has people who care about the how and those who focus on the product.
I’ve always enjoyed the craft of software engineering, though even I admit the culture around it can be a bit overly contemplative.
Nevertheless, there is room for both personalities. Just hang out with likeminded people and ignore the rest.
Caring about craft in programming is more like a photographer caring about light and composition and creativity and taste than a photographer caring about equipment.
In some ways yes. Many “engineers” obsess over “idioms” and other trends to the detriment of performance, correctness and usability. So this analogy is a bit too charitable.
> Just hang out with likeminded people and ignore the rest.
Or find ways to integrate with the rest, challenging one another to facilitate growth.
While I appreciate your optimism, the cost of conversion is 1000x the cost of reaching & identifying the right people.
After a certain point, I think "craft" becomes a meaningless and self-flattering scapegoat, similar to how people use "taste" or "courage" as an excuse to make boneheaded decisions. Most software customers aren't buying software because it's tasteful or impressively crafted, but because it fills a gap in their workflow. People who obsess over polish often end up missing the forest for the trees.
Plus, code-based meritocracy flat-out doesn't exist outside the FOSS circle. Many of the people you know are clocking-in at a job using a tech stack from 2004, they aren't paid to recognize good craftsmanship. They show up, close some tickets, play Xbox during on-call and collect their paycheck on Friday.
The people who care might be self-selecting for their own failure. It's hard to make money in tech if your passion for craft is your strongest attribute.
Why would any actual software engineer be against slopware?
When it inevitably all comes crashing down because there was no actual software architecture or understanding of the code, someone will have to come in to make the actual product.
Hopefully by then we will have realistic expectations for the LLM, have skilled up, and we as a community treat them as just another feature in the IDE.
New job title: “vibe coding cleanup specialist”
Abbreviated as "VC Cleanup Specialist".
With the ambiguity in the meaning of "VC" being intentional.
Launch HN: Vibely - VC Cleaners For Your VC Slop (YC S25)
Personally, I'd rather make something good instead of cleaning up a mess.
But beyond that, I'm really not looking forward to trying to discover new good libraries, tools, and such in 5 years time. The signal to noise is surely dropping.
> someone will have to come in to make the actual product.
My experience has been more that they expect you to fix the broken mess, not rebuild it properly.
> keep the commons clean [from the second link]
A glance at r/python will show that almost every week there is a new PyPI package generated by AI, with dubious utility.
I did some quick research using bigquery-public-data.pypi.distribution_metadata: out of 844,719 packages, 126,527 have only one release, almost 15%.
While it is not unfathomable that a chunk of those really only needed one release and/or were manually written, the number is too high. And PyPI is struggling for resources.
I wonder how much crap there is on GitHub, and I think this is an even larger issue, with new versions of LLMs being trained on crap generated by older versions.
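For reference, here is a rough sketch (in Python, via the BigQuery client) of the kind of query behind those single-release numbers. The exact column names of the public PyPI table are an assumption here, so treat it as a starting point rather than the query I ran:

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()
    # Count packages that have exactly one distinct release version
    # (assumes the table exposes `name` and `version` columns).
    sql = """
        SELECT COUNTIF(releases = 1) AS single_release,
               COUNT(*)              AS total_packages
        FROM (
          SELECT name, COUNT(DISTINCT version) AS releases
          FROM `bigquery-public-data.pypi.distribution_metadata`
          GROUP BY name
        )
    """
    row = next(iter(client.query(sql).result()))
    print(f"{row.single_release}/{row.total_packages} packages "
          f"({row.single_release / row.total_packages:.1%}) have a single release")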
As a practitioner I also inherently believe in well-written software, but as a lifelong learner I know that things change and evolve. There is absolutely no reason why software today has to be written like software of yesterday.
There is no need to be so prescriptive about how software is made. In the end the best will win on the merits. The bad software will die under its own weight with no think pieces necessary.
On the other hand, code might be becoming more like clay than like LEGO bricks. The sculptor is not minding each granule.
We don't know yet if there's long-term merit in this new way of crafting software, and telling people not to try it both won't work and, honestly, looks like old people yelling at clouds.
> In the end the best will win on the merits.
The last six decades of commercial programming don't exactly bear this out...
The real lesson is that writing software is such a useful, high-leverage activity that even absolutely awful software can be immensely valuable. But that doesn't tell us that better software is useless, it just tells us it is not absolutely necessary.
> There is absolutely no reason why software today has to be written like software of yesterday.
I get what you're saying, but the irony is that AI tools have sort of frozen the state of the art of software development in time. There is now less incentive to innovate on language design, code style, patterns, etc., when it goes outside the range of what an LLM has been trained on and will produce.
> frozen the state of the art
Personally I am experimenting with a lot more data-driven, declarative, correct-by-construction work by default now.
AI handles the polyglot grunt work, which frees you to experiment above the language layer.
I have a dimensional analysis typing metacompiler that enforces physical unit coherence (length + time = compile error) across 25 languages. 23,000 lines of declarative test specs compile down to language-specific validation suites. The LLM shits out templates; it never touches the architecture.
We are still at very very early days.
Specs for my hobby physical types metacompiler tests:
https://gist.github.com/ctoth/c082981b2766e40ad7c8ad68261957...
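For readers unfamiliar with the idea, here is a minimal sketch of dimensional checking in plain Python. This is not the metacompiler above, just the concept it enforces: track dimension exponents on values, reject additions whose dimensions differ (so length + time fails), and let division combine exponents (so length / time yields a speed-like quantity).

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Quantity:
        value: float
        length: int = 0  # exponent of the length dimension
        time: int = 0    # exponent of the time dimension

        def __add__(self, other: "Quantity") -> "Quantity":
            # Addition only makes sense between identical dimensions.
            if (self.length, self.time) != (other.length, other.time):
                raise TypeError("dimension mismatch: cannot add these quantities")
            return Quantity(self.value + other.value, self.length, self.time)

        def __truediv__(self, other: "Quantity") -> "Quantity":
            # Division subtracts dimension exponents (m / s -> length^1 * time^-1).
            return Quantity(self.value / other.value,
                            self.length - other.length,
                            self.time - other.time)

    metres = Quantity(3.0, length=1)
    seconds = Quantity(2.0, time=1)
    speed = metres / seconds   # fine: a length/time quantity
    metres + seconds           # raises TypeError

A metacompiler pushes the same check to compile time and emits it for each target language; the sketch only shows the runtime shape of the rule.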
Citation needed. I see no reason at all why that's true any more than the screwdriver freezing the state of home design in time.
LLMs aren't like a screwdriver at all, the analogy doesn't work. I think I was clear. LLMs aren't useful outside the domain of what they were trained on. They are copycats. To really innovate on software design means going outside what has been done before, which an LLM won't help you do.
No, you weren't clear, nor are you correct: you shared FUD about something it seems you have not tried, because testing your claims with a recent agentic system would dispel them.
I've had great success teaching Claude Code to use DSLs I've created in my research. Trivially, it has never seen exactly these DSLs before -- yet it has correctly created complex programs using those DSLs, and indeed -- they work!
Have you had frontier agents work on programs in "esoteric" (unpopular) languages (pick: Zig, Haskell, Lisp, Elixir, etc)?
I don't see clarity, and I'm not sure if you've tried any of your claims for real.
Software engineers are desperate to have their work be like machining aircraft parts.
It’s a tool. No one cares about code quality because the person using your code isn’t affected by it. There are better and worse tools. No one cares whether a car is made with SnapOn tools or milled on HAAS machines. Only that it functions.
We know there is no long term merit to this idea just looking back at the last 40 years of coding.
> if you’re doing this for your own learning: you will learn better without AI.
I'm certain that's not true. AI is the single biggest gift we could possibly give to people who are learning to program - it's shaved that learning curve down to a point where you don't need to carve out six months of your life just to get to a point where you can build something small and useful that works.
AI only hurts learning if you let it. You can still use AI and learn effectively if you are thoughtful about the way you apply it.
100% rejecting AI as a learner programmer may feel like the right thing to do, but at this point it's similar to saying "I'm going to learn to program without ever Googling for anything at all".
(I do not yet know how to teach people to learn effectively with AI though. I think that's a very important missing piece of this whole puzzle.)
I'm a BIG fan of these three points though:
rewrite the parts you understand
learn the parts you don’t
make it so you can reason about every detail
If you are learning to program you should have a very low tolerance for pieces that you don't understand, especially since we now have a free 24/7 weird robot TA that we can ask questions of.
I think it's a pretty small generosity to implicitly extend what the author is saying to "you will learn better without generating your code". I don't know if that's what they meant, but AI is certainly a good tool for learning how things work and seeing examples (if you don't blindly trust everything it says and use other sources too).
That's fair. I also just noticed that the sentence before the bit I quoted is important:
> AI overuse hurts you:
> - if you’re doing this for your own learning: you will learn better without AI.
So they're calling out "AI overuse", and I agree with that - that's where the skill comes in of deciding how to use AI to help your learning in a way that doesn't damage that learning process.
I think the parallel is photobashing. I've seen art teachers debating how early a student should start photobashing. Everyone knows it's a widely adopted technique in the industry, but some consider it harmful for beginners.
Needless to say, there is no consensus. I err on the side of photobashing personally.
It cannot be understated how much of a boon AI-assisted programming has been for getting stuff up and running. Once you get past the initial hurdle of setting up an environment along with any boilerplate, you can actually start running code and iterating in order to figure out how something works.
Cognitive bandwidth is limited, and if you need to fully understand and get through 10 different errors before anything works, that's a massive barrier to entry. If you're going to be using those tools professionally then eventually you'll want to learn more about how they work, but frontloading a bunch of adjacent tooling knowledge is the quickest way to kill someone's interest.
The standard choice isn't usually between a high-quality project and slopware, it's between slopware or nothing at all.
> It cannot be understated
You mean it cannot be overstated?
You got ‘em!!
AI only hurts learning if you let it. You can still use AI and learn effectively if you are thoughtful about the way you apply it.
I think that's very important.
Never mind six months; with AI, "you" can "build" something small and useful that works in six minutes. But "you" almost certainly didn't learn anything, and I think it's quite questionable if "you" "built" something.
I have found AI to be a great tool for learning, but I see it -- me, personally -- as a very slippery slope into not learning at all. It is so easy, so trivial, to produce a (seemingly accurate) answer to just about any question whatsoever, no matter how mundane or obscure, that I can really barely engage my own thinking at all.
On one hand, with the goal of obtaining an answer to a question quickly, it's awesome.
On the other hand, I feel like I have learned almost nothing at all. I got precisely, pinpointed down, the exact answer to the question I asked. Going through more traditional means of learning -- looking things up in books, searching web sites, reading tutorials, etc. -- I end up with my answer, but I also end up with more context, and a deeper+broader understanding of the overall problem space.
Can I get that with AI? You bet. And probably even better, in some respects. But I have to deliberately choose to. It's way too easy to just grab the exact answer I wanted and be on my way.
I feel like that is both good and bad. I don't want to be too dismissive of the good, but I also feel like it would be unwise to ignore the bad.
Whoa hey though, isn't this just exactly like books? Didn't, like, Plato and all them Greek cats centuries ago say that writing things down would ruin our brains, and what I'm claiming here is 100% the same thing? I don't think so. I see it as a matter of scale. It's a similar effect -- you probably do lose something (whether if it's valuable or not is debatable) when you choose to rely on written words rather than memorize. But it's tiny. With our modern AI tools, there is potential to lose out on much more. You can -- you don't have to, but you can -- do way more coasting, mentally. You can pretty much coast nonstop now.
> Never mind six months; with AI, "you" can "build" something small and useful that works in six minutes. But "you" almost certainly didn't learn anything, and I think it's quite questionable if "you" "built" something.
I think you learned something critically important: that the thing you wanted to build is feasible to build.
A lot of ideas people have are not possible to build. You can't prove a negative but you CAN prove a positive: seeing a version of the thing you want to exist running in front of you is a big leap forward from pondering if it could be built.
That's a useful thing to learn.
The other day, at brunch, I had Claude Code on my phone add webcam support (with pinch-to-zoom) to my is-it-a-bird CLIP-in-your-browser app (https://tools.simonwillison.net/is-it-a-bird). I didn't even have to look at the code it wrote to learn that it's possible for Mobile Safari to render the webcam input in a box on the page (not full screen) and to have a rough pinch-to-zoom mechanism work - it's pixelated, not actual-camera-zoom, but for a CLIP app that's fine because the zoom is really just to try and exclude things from the image that aren't a potential bird.
(The prompts I used for this are quoted in the PR description: https://github.com/simonw/tools/pull/175)
> Can I get that with AI? You bet. And probably even better, in some respects. But I have to deliberately choose to. It's way too easy to just grab the exact answer I wanted and be on my way.
100% agree with that. You need a lot of self-discipline to learn effectively with AI. I'd argue you need self-discipline to learn via other means as well though.
A robot TA that gives the wrong answer 50% of the time isn't very helpful.
Right, but this is a TA that gives a wrong answer more like 10% of the time (or less).
I think it's possible that for learning a 90% accuracy rate is MORE helpful than 100%. If it gets things wrong 1/10th of the time it means you have to think critically about everything it tells you. That's a much better way to approach any source of information than blindly trusting it.
The key to learning is building your own robust mental model, from multiple sources of information. Treat the LLM as one of those sources, not the exclusive source, and you should be fine.
You need to choose another field if it takes you 6 months to hello world
Don't speak like this to people. Also, don't put words in people's mouths (they didn't say "hello world").
I deliberately didn't say "hello world", I said "build something small that works" - I'm editing my post now to add the words "and useful".
The author probably meant "coding without AI", not "learning without AI".
The author wrote an essay to accompany this site here: https://ficd.sh/blog/your-project-sucks/
This post mirrors my sentiment and the reasons I dislike these sorts of "projects" much more closely than the main site does, deserved to be the main submission, in retrospect.
We'll put that link in the top text, thanks.
If you are an experienced software developer, AI lets you fly and build things much faster.
If you are a new software developer, I don't see how you grow to develop taste and experience when everything is a <ENTER> away.
I think we are the last generation of engineers who give a fuck tbh.
I find these really are not only condescending but also really miss the mark and ironically come off as really uneducated in my opinion, and that really is the most infuriating type of condescension. What you call slopware today is becoming less and less sloppy every six months as new coding models drop. In 2 years the “unmaintainable mess” is going to be far better and far more maintainable than anything the engineers behind these snide websites will make. Do folks realize you can also use the same coding models to ask questions and reason about the “slop” that these code models are writing that somehow is able to do something I would never have been able to do before? I don’t really care if it’s 100% accurate, hit it with a hammer until everything makes sense. Yell at Claude and learn how to wrangle it to get what you want, that skill is an investment that’s going to pay you back far more than following the advice of these folks, that’s my opinion.
Like “you will learn better without AI” is just a bad short sighted opinion dressed up in condescension to appear wise and authoritative.
Learn your tools, learn the limitations, understand where this is going, do the things you want to do and then realize “hey my opinions don’t have to be condescendingly preached to other people as though they are facts”
My reflex is to call the website useless because the problem isn't usually software produced by individuals. My problem is the buggy messes that trillion dollar corporations produce.
Indeed, Adobe continues shipping their slop and, by all accounts from those who keep using it, continues to make it worse; this was all in motion well before modern AI tools. If anything, I bet AI tools are more likely to make things better. I read an anti-AI opinion recently where the author sensed that a small open source project was using AI (and was right; it was easy to see by noticing Claude in the commit history), but what tipped their sense was that there were too many "professionalisms" in the code base. Oh no, more code that has documentation and tests and benchmarks, the horror! I also just can't take these "it's stinking up the commons" posts all that seriously -- like, where have you been the last couple decades? The baseline seems as likely to improve as to degrade more than it already has. Even the spam of useless pull requests isn't new; the peak of that was probably around peak MOOC popularity some years ago, when classes would require submitting pull requests to some open source project as an assignment.
I mean, look at Microsoft. They're a tiny little 5 trillion dollar company and their own cloud storage software can't reliably extract zip files compressed with their own compression software on their own flagship operating system.
How dare some nobody in a third world country use AI resources to accelerate the development of some process that fixes an issue for them and occasionally ask you to buy them a coffee when a poor sad pathetic evil worthless hateful disgusting miserable useless 5 trillion dollar company that actively hates you does the same thing with worse results that makes your life more miserable while lining their pockets with every penny in the entire world?!
If man-made software were high quality, this problem would resolve itself, because “slopware” would be easily distinguishable.
The best way to resolve this is to write man-made software that’s good quality.
It’s just a tool
My “slopware” has brought in $200K a month.
As long as it works and people’s problems are solved, I don’t see any issue with it?
This seems directed at people sharing low-effort AI-generated open source projects.
If possible, could you please explain what your slopware does?
You don't seem to be responding to anything the site says.
What's your software?
"Stop people using their money and time to build free software on GitHub using non-consensus-approved AI tools! I'm literally crying and shaking right now. How can anyone be so cruel to the art of software engineering that I went to university for? Other people having fun building software and solving their problems? This can't be happening. They are destroying the planet!" - vegan "GNU plus Linux" anti-AI artisanal coder, 2025, colorized
Seems like this link is for you.
This is a shitpost straight out of 4chan, and it's the top comment after an hour. Do I need to be worried about HN?
And before anybody gets high-and-mighty, I know because I write these too. On 4chan.
Hacker Reddit is socially acceptable nu-/g/. And nu-/g/ is terrible!
10 points for the first person to admonish me with newsguidelines.html. As old as the hills... ;-)
This is a bit more complex than you make it sound, and I'm wondering if this is on purpose.
- Submission-based magazines who pay their writers, like Clarkesworld for example, are being flooded with LLM-generated submissions. These have been automated by people who hit every magazine multiple times in the hope that one makes it through and earns them a few bucks. This has made the reviewers' work absolute hell; their volume of work has multiplied by 10.
- The same is true with music: fake bands are being created and their music submitted to streaming platforms in the hope that it will generate some kind of passive income.
- Etsy shops are absolutely filled to the brim with bad AI slop; the whole platform has become barely usable.
The same thing is absolutely true with software. You make it sound like it's a few hobbyists using AI tools to solve problems while evil engineers gatekeep them because they want to be the only ones mastering the arcane arts, but that's overly simplistic and, frankly, intellectually dishonest. Even if you set aside the fact that at some point one of those vibe coders is going to bite off more than they can chew and expose their users, the vast majority of them are looking to make money easily, and that means flooding the market with subpar software in the hope of generating some income. Just browse a few of the vibe coder subreddits or Discord channels to see that this is mostly what is being discussed.
And I know I'm not the only one who has noticed that over-reliance on LLMs has some bad consequences for people, especially juniors. Yes, it can be used properly in a productive way, but a lot of people don't bother. Make fun of people who are worried about it all you want, but for all the good gen AI will bring, there will absolutely be an enshittification of just about everything. I won't even talk about the golden opportunity it is for scammers and malicious actors. I don't know if it'll be worth it, possibly? But there will be a price to pay.
It would help if the form included a mandatory checkbox stating, "As the author, I declare that I created the submitted work myself without the use of AI." The terms and conditions would state that authors would be banned forever if it was discovered that they had lied.
This attitude is of course very common and I just don’t understand it. Like when I read news from the other political party - it’s just confusing to me.
Are they scared for their jobs? Angry that their hard earned skills are getting devalued? Genuinely concerned about copyright violations in LLM training? Or just don’t want the world to change? (Kids get off my lawn!)
The site describes a specific subset of issues very tersely. Can't you simply disagree with those, rather than imply they are never stated?
Regardless, what's confusing about the list of problems you gave? Are you implying they are somehow un-real in all contexts? The reference to political news is also confusing.
I don't read "specific issues" so much as generalized complaints and implied insults. "Low effort" is not a specific issue - it's just derogatory. "Cut the clutter" is not useful advice. All of it sounds like "code better".
The list of proposed motivations I gave are hypotheticals. I don't understand which if any of these apply to the people who agree with the sentiment in "Stop Slopware". It sounds like maybe you do. Which of these resonate with you?
It feels like virtue signaling of the year.
Code meritocracy matters. There's a reason Linux is used by billions but slopware isn't.
If AI tools produced good code that was well-designed, nobody would fucking care. Maintainers of repos want good submissions that solve issues, they don't care if you wrote it on a notepad first or generated it in 5 minutes, if it fixes the thing, and is of good quality, it would be accepted.
AI submissions are getting hate because the code isn't good and isn't designed well and the person submitting it would also know that, if they understood what the LLM wrote. Hell, maybe they could even re-architect it into something worth submitting in the first place.
I use AI tools semi-frequently, to get answers, to riddle problems, all kinds of stuff. However if you just punch directions into ChatGPT and copy/paste the answer, you are not growing as a dev, you are not learning, and yeah your code is probably shit. Not sorry.
Edit: And I feel it's an under-observed point that AI-dependent devs constantly try and hide the fact that they are doing exactly that: copying from ChatGPT, because they also know it's shit and they will be judged. So just... STOP DOING IT.
Using Claude to submit PRs to huge open source projects is stupid, for sure.
But if I need a quick tool, like a secret Santa name picker, I’ll just have Claude build it, push it to a repo, link the repo on some PaaS and have a working, deployed app in 20 minutes. No ads, no accounts & no signing up to random websites. I can build it exactly like I want it and include fun Easter eggs for my family.
Building it myself would take 2-3 hours, and the code quality would be drastically better, but that just doesn’t matter.
People aren't complaining about that. What you do in the privacy of your own computer is only your problem. The issue is people pouring a whole "arduous" 2 hours into vibecoding a project, then advertising it and posting to communities everywhere as a revolutionary bullet-proof high-quality project asking for visibility and contributions.
the issue is false advertising.
You'll understand the first time you lose half an hour evaluating a library that has all the old signs of competent design, only to find even the trivial examples don't work, and you realize the project was generated and your time has been completely wasted.
In the new world, increasingly you’ll be better off writing your libraries from scratch than pulling in external dependencies. Less supply chain risk. Less bloat from features you don’t need. Sure complex mature libraries aren’t going away. But for many simple tasks the balance is shifting.
Most of these people are either:
Students
Activists
Hyper opinionated and “principled” engineers (that won’t touch AI)
University professors
Engineers working with Linux and Free software who hate AI.
They all secretly used ChatGPT when it launched and still do.
Nah, they hate Altman, and many are running DeepSeek @ home.
Even if they run DeepSeek at home, it is still AI.
Those who have principles in hating AI should never touch it or use it and swear by that.
Written by AI ;)
At least it feels a bit like it
You can't be serious.
> When you publish something under the banner of open–source, you implicitly enter a stewardship role. You’re not just shipping files, you’re making a contribution to a shared commons. That carries certain responsibilities: clarity about purpose, honesty about limitations, and a basic alignment with the community’s collaborative ethos.
(from the second link)
You're not just writing angry screeds, you are producing slop prose and asking us to spend our time reading it.
How is this not an implicit repudiation of your entire argument? Are you not hurting yourself by avoiding learning how to write better?
That assertion is also highly debatable. To me no part of releasing software under a free license implies a "stewardship role."
I'm going to give the author the benefit of the doubt here. Not all "you're not X, you're Y" was written by an LLM.
I can't tell what you're referring to. The quote reads well.