Today it's LLMs/chatbots; when I was younger it was Stack Overflow. Interestingly, when I was just starting out I remember being told to avoid copying & pasting snippets from SO in favor of learning to solve problems on my own, or at the very least to slow down and manually type out the snippets (which actually helped). I used to be against LLMs/chatbots (as someone who likes to write), but now I've gone all in: they're just such a great productivity booster for very common grunt work that doesn't require too much project-specific context. At the same time, while I do sometimes wonder whether this will cause my problem-solving skills to stagnate, I'm far more concerned about the broader ecosystem. LLMs perform best when the solution domain is well represented in their training data, and with hard training cutoffs I wonder whether tools, frameworks, and libraries will stagnate because people will favor older tools over newer ones for ease of use. Great for established authors, not so great for new ideas, and something too few people are talking about.
I don't personally use AI for code completion inside my editor, but I do use ChatGPT the way I used to use Google. Whenever I had a question in the early days, I just typed it into Google and scavenged through all the different links it gave me. Nowadays I type it into ChatGPT (or whatever other LLM is available to me at that moment) and see what it says.
I also try to keep using my brain to solve problems, but some of those seem to be beyond the comprehension of LLMs (wicked problems, etc.). For coding purposes, I treat ChatGPT as a rubber duck and love bouncing ideas off it, but I still end up refactoring whatever it gives me.
I don't use it for anything. If that means I'm "being left behind", then I welcome it. I'm quite happy to continue using my brain to solve problems and to understand the solutions.
I have every intention of continuing to avoid LLM usage in any capacity of my job. I actively seek out physical exercise, and likewise I have no intention of outsourcing my mental processes just to avoid thinking.
I suspect my capabilities will become rarer in the near future, given the dumbing-down caused by extensive LLM usage in every aspect of our trade.
I see this as dodging a sort of bullet.
You're probably leaving speed on the table, but these tools are not hard to pick up later if you have good engineering fundamentals. If your career is going well and you have no need to go faster, don't worry about it.
Do you use search engines and Stack Overflow?
As a starting point, LLMs are a much better alternative to those.
So perhaps start by just asking an LLM your coding questions instead of searching Google and Stack Overflow.
A code-completion agent is extremely far removed from the quality of the output of a frontier coding model. Try Claude or similar and ask it to generate code for specific, well-defined tedious tasks. You may be surprised!